Sponsored Research Projects

  1.  PI, "Multi-Kernel Correntropy Based Learning: Theory and Methods",National Natural Science Foundation of China (No. 61976175), 600,000 RMB, 2020.1-2023.12

  2. PI, "偏差补偿的稀疏互相关熵自适应滤波及其应用",陕西省自然科学基础研究计划重点项目(2019JZ-05),100,000RMB.

  3. PI, "Human-Machine Cooperative Interaction Method and Application for Dexterous and Complaisant Lower Limb Rehabilitation Robot", National Natural Science Foundation of China (No. 91648208), 3,000,000 RMB, 2017.1-2020.12

  4. PI, "Visual Cognition Encoding and Decoding for Advanced Brain Machine Interfaces", National Basic Research Program of China (973 Program) (No. 2015CB351703), 6,300,000 RMB, 2015.1-2019.12

  5. PI, "New methods and applications of adaptive filtering in reproducing kernel Hilbert space",National Natural Science Foundation of China (No. 61372152), 800,000 RMB, 2014.1-2017.12

  6. PI, " Research on survival information theoretic learning", Key Laboratory Young Academic Backbone Construction Project, 150,000 RMB, 2013.3-2014.3

  7. PI, "Nonlinear system parameter identification based on the error entropy criterion", National Natural Science Foundation of China (No. 60904054), 195,000 RMB, 2010.1-2012.12

  8. PI, "Nonlinear modeling and control of a six degree of freedom maglev precision stage", China Postdoctoral Science Foundation (No. 20080440384), 30,000 RMB, 2009.7-2010.7

 

Research Interests

My research interests span a wide spectrum of topics in signal processing, machine learning, brain-machine interfaces (BMI), and cognitive computation.

1. Information Theoretic Learning

The term “information theoretic learning” (ITL) was proposed by Jose C. Principe (http://www.cnel.ufl.edu). Generally speaking, ITL uses descriptors from information theory (entropy, mutual information, divergences, etc.), estimated directly from the data, as substitutes for the conventional second-order statistical descriptors of variance and covariance. Information theoretic quantities can capture higher-order statistics and offer potentially significant performance improvements in the adaptation of linear and nonlinear models, in both supervised and unsupervised machine learning applications.
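As a concrete illustration of the ITL idea, the Python sketch below replaces the mean-square-error cost with a correntropy-based one: correntropy is estimated with a Gaussian kernel, and a linear filter is adapted under the maximum correntropy criterion (MCC), whose kernel weighting suppresses large outlier errors. This is a minimal illustration only; the function names (gaussian_kernel, correntropy, mcc_lms), the step size, and the kernel bandwidth are assumed for the example and are not code from any of the projects listed above.

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel evaluated at the error e."""
    return np.exp(-e**2 / (2.0 * sigma**2))

def correntropy(x, y, sigma=1.0):
    """Sample estimate of correntropy V(X, Y) = E[k_sigma(X - Y)]."""
    return np.mean(gaussian_kernel(np.asarray(x) - np.asarray(y), sigma))

def mcc_lms(X, d, eta=0.1, sigma=1.0):
    """Stochastic-gradient adaptation of a linear filter w under MCC.
    X: (N, M) input regressors, d: (N,) desired responses."""
    N, M = X.shape
    w = np.zeros(M)
    for n in range(N):
        e = d[n] - w @ X[n]                       # prediction error
        # gradient ascent on k_sigma(e): error-weighted, outlier-robust update
        w += eta * gaussian_kernel(e, sigma) * e * X[n]
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([0.5, -1.0, 2.0])
    X = rng.standard_normal((2000, 3))
    noise = rng.standard_normal(2000) * 0.1
    noise[::50] += 20.0                           # occasional heavy-tailed outliers
    d = X @ w_true + noise
    print("MCC estimate of w:", mcc_lms(X, d, eta=0.5, sigma=2.0))
    print("correntropy(d, Xw):", correntropy(d, X @ w_true, sigma=2.0))
```

Under impulsive noise such as the outliers injected above, the MCC update stays close to the true weights, whereas a plain least-mean-square update would be pulled away by the large errors.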

2. Online Kernel Learning

During the last few years, enormous research effort has been dedicated to the development of kernel learning methods such as the support vector machine (SVM), the kernel regularization network, and kernel principal component analysis (KPCA). These methods show powerful classification and regression performance on complicated nonlinear problems by using Mercer kernels to map the original input space into a high-dimensional feature space and then performing linear learning in that feature space. However, they usually incur significant memory and computational burden because a large Gram matrix must be computed. Online kernel learning (OKL) provides more efficient alternatives that approximate the desired nonlinearity incrementally, usually with gradient descent techniques. Since the training data are presented to the learning system sequentially (one by one), OKL requires much less memory and computation.

The resource-allocating networks and the growing and pruning radial basis function (RBF) networks are two important OKL algorithms in the recent literature, and more recently, kernel adaptive filtering has become an emerging and promising subfield of online kernel learning. Kernel adaptive filtering algorithms are developed in reproducing kernel Hilbert spaces (RKHS): a linear adaptive structure in the RKHS yields a nonlinear filter in the input space, preserving the conceptual simplicity of classical linear adaptive filters (no local minima) while inheriting the rich expressiveness of kernel methods (universal approximation). These algorithms include the kernel least mean square (KLMS), kernel affine projection algorithms (KAPA), kernel recursive least squares (KRLS), and extended kernel recursive least squares (EX-KRLS), among others.
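To make the simplest member of this family concrete, the sketch below implements the basic KLMS recursion in Python: the filter is a growing sum of Gaussian kernels centered at past inputs, and each new sample contributes one center weighted by the step size times the a priori error. The class name, the Gaussian kernel choice, and the parameter values are illustrative assumptions; practical implementations add sparsification or quantization to bound the network size.

```python
import numpy as np

class KLMS:
    """Basic kernel least mean square filter with a Gaussian kernel."""

    def __init__(self, eta=0.2, sigma=1.0):
        self.eta, self.sigma = eta, sigma
        self.centers, self.coeffs = [], []        # stored inputs and their weights

    def _kernel(self, x, y):
        return np.exp(-np.sum((x - y) ** 2) / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._kernel(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)                   # a priori error
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.eta * e)          # new expansion coefficient
        return e

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f = KLMS(eta=0.5, sigma=0.5)
    errors = []
    for _ in range(500):
        x = rng.uniform(-1, 1, size=2)
        d = np.sin(3 * x[0]) * np.cos(2 * x[1])   # unknown nonlinear target
        errors.append(f.update(x, d) ** 2)
    print("mean squared error over last 100 samples:", np.mean(errors[-100:]))
```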

3. Brain Signal Analysis and Neural Decoding

1)  Quantifying dependence between cognitive processes

      The question of localizing and quantifying cognitive processes in the human brain is an old and difficult one. One fundamental process in the study of cognition is the extraction and selection of relevant features from the wealth of sensory data that the brain confronts during cognitive activities, which are then used to guide an observer's behavior. Our research focuses mainly on how sensory systems support adaptive behavior in complex, rapidly changing environments, exploring the temporal and spatial dynamics of the human brain during cognitive tasks. Specifically, we develop efficient measures of directionality and dependency between brain processes based on information theory and kernel methods. To be a useful tool for exploratory investigations in brain science, such a measure should not require an a priori definition of the type of interaction, so non-parametric methods are preferred, such as information-theoretic approaches and the generalized measure of association. We are also applying these signal processing approaches to clinical data, such as recordings of epileptic seizures. We hope to characterize the spatial-temporal dynamics across channels during the transition from the inter-ictal to the pre- and post-ictal states using adaptive algorithms, leading to an online clinical epilepsy alarm system.
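As a minimal example of the kind of non-parametric dependence measure referred to above, the Python sketch below estimates the mutual information between two recorded channels from a joint histogram. It is an assumption-laden illustration (plug-in histogram estimator, an ad hoc bin count, synthetic signals standing in for real channels), not the estimator used in any particular study.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X; Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                     # joint probability mass
    px = pxy.sum(axis=1, keepdims=True)           # marginal of X
    py = pxy.sum(axis=0, keepdims=True)           # marginal of Y
    nz = pxy > 0                                  # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ch1 = rng.standard_normal(5000)                        # stand-in for one channel
    ch2 = 0.8 * ch1 + 0.6 * rng.standard_normal(5000)      # dependent channel
    noise = rng.standard_normal(5000)                      # independent channel
    print("I(ch1; ch2):  ", mutual_information(ch1, ch2))
    print("I(ch1; noise):", mutual_information(ch1, noise))
```

The dependent pair yields a clearly positive estimate, while the independent pair stays near zero, which is the behavior one wants from a dependence measure that assumes nothing about the form of the interaction.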

2)  Mind reading with fMRI responses

     Mind reading has traditionally been the domain of mystics and science fiction writers. Now such imaginings are becoming reality through visual cognition decoding. In the laboratory, when volunteers looked at handwritten letters, a computer model was able to produce fuzzy images of the letters they were seeing, based only on the volunteers' brain activity. The basic idea is that different external visual stimuli elicit different brain activity patterns, so we can recover information about the stimuli, or even reconstruct them, by decoding brain activity patterns measured with a variety of technologies such as fMRI, EEG, MEG, and PET. Conversely, mind reading can also help us understand how information is encoded by neurons in the brain. There are thus two complementary operations in mind reading: encoding uses stimuli to predict brain activity, while decoding uses brain activity to predict information about the stimuli.
     Basically, there are two factors in a mind reading experiment. The first is choosing an appropriate method for recording brain activity. Implanted electrodes provide high-quality neural signals and have therefore been used with monkeys and other animals, but they cannot be used with healthy human participants. fMRI has recently become an important alternative because it is non-invasive and has high spatial resolution. fMRI does not measure brain activity directly; it measures the BOLD signal, which is evoked by brain activity and corrupted by other noise sources. Researchers commonly assume a linear relationship between BOLD amplitude and brain activity to simplify the subsequent modeling.
     The second, and most significant, factor is establishing an encoding or decoding model. The brain can be viewed as a system that nonlinearly maps stimuli into brain activity, so a central task of encoding is to discover the nonlinear mapping between input and activity. Encoding models generally consist of several distinct components. The first is the set of stimuli used in the experiment: some studies use particular objects such as faces or houses, others use man-made shapes, and natural scene images have also been used to study higher-level visual perception. The second component is a set of features that describes the abstract relationship between stimuli and responses; phase-invariant Gabor wavelets and labels that reflect different levels of the independent variable (e.g. faces versus houses) are two widely used kinds of features. The third component is one or more regions of interest (ROI) in the brain from which voxels are selected. The final component is an algorithm used to estimate the model from data.
     Think of the stimuli, features, and ROIs as existing in three separate abstract spaces. The experimental stimuli exist in an input space whose axes correspond to the stimulus dimensions. The activity of all the voxels within an ROI exists in an activity space whose axes correspond to the individual voxels. Interposed between the input space and the activity space is an abstract feature space: each axis corresponds to a single feature, and each stimulus is represented by one point in this space. The mapping between the input space and the feature space is nonlinear, while the mapping between the feature space and the activity space is linear. The feature space is called linearizing because the nonlinear mapping into feature space linearizes the relationship between the stimulus and the response. Linearizing encoding models have a simple interpretation and are relatively easy to estimate, and linearizing feature spaces are also helpful for thinking about decoding models. In these terms, the key difference between encoding and decoding models is the direction of the linear mapping between feature space and activity space: an encoding model projects the feature space onto the activity space, whereas a decoding model projects the activity space onto the feature space. Once the brain activity data have been acquired and the models have been established, the data are used to fit the model, and the fitted model is then used to interpret how the brain perceives visual stimuli or to identify or reconstruct new stimuli.
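The Python sketch below illustrates the linearizing-encoding idea described above under simplifying assumptions: a fixed nonlinear feature map (a random tanh projection standing in for Gabor wavelet features), a linear feature-to-voxel map fit by ridge regression, and identification of held-out stimuli by matching measured activity against predicted activity. All data are synthetic and all names, sizes, and parameter values are illustrative.

```python
import numpy as np

def ridge_fit(F, Y, lam=1.0):
    """Linear map W from features F (N, K) to voxel responses Y (N, V)."""
    K = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(K), F.T @ Y)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_train, n_test, n_pix, n_feat, n_vox = 400, 50, 64, 32, 20

    # Stimuli in the input space (flattened images, synthetic here).
    S = rng.standard_normal((n_train + n_test, n_pix))

    # Nonlinear mapping from input space to feature space
    # (a stand-in for phase-invariant Gabor wavelet features).
    P = rng.standard_normal((n_pix, n_feat))
    F = np.tanh(S @ P)

    # Simulated voxel activity: linear in the features, plus noise.
    W_true = rng.standard_normal((n_feat, n_vox))
    Y = F @ W_true + 0.1 * rng.standard_normal((n_train + n_test, n_vox))

    # Encoding: fit the feature-to-activity map on the training set.
    W_hat = ridge_fit(F[:n_train], Y[:n_train], lam=10.0)

    # Identification-style decoding: match each held-out activity pattern
    # to the candidate stimulus whose predicted activity is closest.
    pred = F[n_train:] @ W_hat
    correct = sum(
        int(np.argmin(np.sum((Y[n_train + i] - pred) ** 2, axis=1)) == i)
        for i in range(n_test)
    )
    print(f"identified {correct}/{n_test} held-out stimuli")
```

Because the encoding direction (features to activity) is the one that is fit, decoding here reduces to comparing measured activity with the encoding model's predictions, which mirrors the encoding/decoding distinction drawn in the text.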

3)  Brain Machine Interfaces


       A Brain Machine Interface (BMI), sometimes called a Brain Computer Interface (BCI), is a direct communication pathway between a brain and an external device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. A BMI can also be viewed as a man-made device that substitutes a sensory input to the brain, repairs functional communication between brain regions, or translates movement intentions. The reasons that motivate BMI studies are simple. On one hand, there is the need to help people with disabilities: for example, technologies that help blind people visualize external images, or that allow paralyzed people to operate external devices without physical movement. On the other hand, BMI promises to realize scenarios described in science fiction, such as decoding information stored in the human brain (i.e. memories) and displaying a person's thoughts or dreams on a screen.
