Codes (source programs)

[32] Wei Wei, Liyuan Yi, Qi Xie, Qian Zhao, Deyu Meng, Zongben Xu. Should We Encode Rain Streaks in Video as Deterministic or Stochastic? ICCV, 2017. [Matlab code]

[31] Qian Zhao, Deyu Meng, Zongben Xu, Wangmeng Zuo, Yan Yan. L1-Norm Low-Rank Matrix Factorization by Variational Bayesian Method. IEEE Transactions on Neural Networks and Learning Systems. 2015. [matlab code]

[30] Hongwei Yong, Deyu Meng, Wangmeng Zuo, Lei Zhang. Robust Online Matrix Factorization for Dynamic Background Subtraction, accepted in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. [arxiv version][code]

[29] Liang Lin, Keze Wang, Deyu Meng, Wangmeng Zuo, Lei Zhang. Active Self-Paced Learning for Cost-Effective and Progressive Face Identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. [arxiv version][code]

[28] Yang Chen, Xiangyong Cao, Qian Zhao, Deyu Meng, Zongben Xu. Denoising Hyperspectral Image with Non-i.i.d. Noise Structure. IEEE Transactions on Cybernetics. 2017. [arxiv version][Appendix][code]

[27] Kede Ma, Hui Li, Hongwei Yong, Zhou Wang, Deyu Meng, Lei Zhang. Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach. IEEE Trans. on Image Processing, 2017. [supplementary material][code]

[26] Xiangyong Cao, Lin Xu, Deyu Meng, Qian Zhao, Zongben Xu. Integration of 3-dimensional discrete wavelet transform and Markov random field for hyperspectral image classification. Neurocomputing 226, 90-100, 2017. [code] (This paper designs a new feature leading to the state-of-the-art performance on hyperspectral image classification.)

[25] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, Lei Zhang, Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, IEEE Trans. on Image Processing, 2017. [code]

[24] Shoou-I Yu, Deyu Meng, Wangmeng Zuo, Alexander G Hauptmann, The solution path algorithm for identity-aware multi-object tracking. CVPR 2016. [code and data]

[23] Chenqiang Gao, Yinhe Du, Jiang Liu, Jing Lv, Luyu Yang, Deyu Meng, Alexander G Hauptmann, InfAR dataset: Infrared action recognition at different times, Neurocomputing, 2016. [InfAR Dataset]

[22] Qi Xie, Qian Zhao, Deyu Meng, Zongben Xu, Shuhang Gu, Wangmeng Zuo and Lei Zhang. Multispectral images denoising by intrinsic tensor sparsity regularization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [supplementary material] [Matlab code]

[21] Wenfei Cao, Yao Wang, Jian Sun, Deyu Meng, Can Yang, Andrzej Cichocki, Zongben Xu. Total Variation Regularized Tensor RPCA for Background Subtraction from Compressive Measurements. IEEE Transactions on Image Processing, 2016. [Demo code]

[20] Xiangyong Cao, Yang Chen, Qian Zhao, Deyu Meng, Yao Wang, Dong Wang, Zongben Xu. Low-rank Matrix Factorization under General Mixture Noise Distributions. ICCV (oral), 2015. [supplementary material] [Matlab code] [arxiv version]

[19] Lu Jiang, Shoou-I Yu, Deyu Meng, Teruko Mitamura, Alexander Hauptmann. Bridging the Ultimate Semantic Gap: A Semantic Search Engine for Internet Videos. In ACM International Conference on Multimedia Retrieval (ICMR), 2015. [BibTex | supplementary materials | slides | project page] Best paper runner-up. [Featured in Pittsburgh Supercomputing Center]

[18] Yong Xu, Bo Huang, Yuyue Xu, Kai Cao, Chunlan Guo, Deyu Meng. Spatial and Temporal Image Fusion via Regularized Spatial Unmixing. IEEE Geoscience and Remote Sensing Letters, 12(6): 1362-1366, 2015. [Matlab code].

[17] Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, Alexander Hauptmann. Self-paced Curriculum Learning. AAAI, 2015. [Supplementary material], [slides], [code].

[16] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhen-Zhong Lan, Shiguang Shan, Alexander Hauptmann. Self-paced Learning with Diversity. NIPS, 2014. [Supplementary material], [code].

[15] Deyu Meng, Fernando De la Torre. Robust Matrix Factorization with Unknown Noise. ICCV, 2013. [Matlab code].

[14] Chenqiang Gao, Deyu Meng, Yi Yang, Yongtao Wang, Xiaofang Gao, Alexander G. Hauptmann. Infrared Patch-image Model for Small Target Detection in A Single Image. IEEE Transactions on Image Processing. 22(12): 4996-5009, 2013. [Matlab Code] (Our robust PCA method can effectively detect small targets from a single infrared image.)

[13] Haoquan Shen, Shoou-I Yu, Yi Yang, Deyu Meng, Alexander Hauptmann. Unsupervised Video Adaptation for Parsing Human Motion. ECCV, 2014. [Project Page] [Code and Dataset] [Demo Video]

[12] Qian Zhao, Deyu Meng, Zongben Xu, Wangmeng Zuo, Lei Zhang. Robust principal component analysis with complex noise. ICML, 2014. [Supplementary Material] [Matlab code] (Our robust PCA method can fit complex noise well beyond conventional Gaussian, Laplacian, or sparse noise, or any combination of them.)

Introduction: We propose a generative RPCA model under the Bayesian framework by modeling data noise as a mixture of Gaussians (MoG). The MoG is a universal approximator to continuous distributions, so our model is able to fit a wide range of noises such as Laplacian, Gaussian, and sparse noises, and any combinations of them. A variational Bayes algorithm is presented to infer the posterior of the proposed model. All involved parameters can be recursively updated in closed form. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and background subtraction.
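The MoG noise model at the heart of this method can be illustrated in a few lines. The sketch below fits a zero-mean two-component Gaussian mixture to a vector of residuals with plain EM in Python; the function name and settings are ours, and the released MATLAB code infers the full model by variational Bayes rather than this simplified EM.

```python
import math
import random

def fit_mog_em(residuals, k=2, iters=50):
    """Fit a zero-mean Gaussian mixture to 1-D residuals with EM.
    The components differ only in variance: a small-variance Gaussian
    captures dense noise, a large-variance one captures sparse outliers.
    Returns (mixing weights, variances)."""
    n = len(residuals)
    mean_sq = sum(r * r for r in residuals) / n
    var = [(j + 1) ** 2 * mean_sq for j in range(k)]   # spread-out init
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each residual.
        resp = []
        for r in residuals:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-r * r / (2 * var[j])) for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: closed-form weight and variance updates.
        for j in range(k):
            nj = sum(resp[i][j] for i in range(n))
            w[j] = nj / n
            var[j] = sum(resp[i][j] * residuals[i] ** 2
                         for i in range(n)) / max(nj, 1e-12)
    return w, var

random.seed(0)
# Dense small noise plus a few large outliers.
data = [random.gauss(0, 0.1) for _ in range(500)] + \
       [random.gauss(0, 3.0) for _ in range(25)]
weights, variances = fit_mog_em(data)
```

On such data, EM separates a small-variance component holding most of the mass from a large-variance component absorbing the outliers, which is the behavior the RPCA model exploits.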

 

[11] Ji Zhao, Deyu Meng. FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test, arXiv:1405.2664, 2014, [Matlab Code]. (Our proposed FastMMD method decreases the time complexity of MMD calculation from the conventional O(N^2 d) to O(Nd), and further to O(N log d) for spherically invariant kernels.)

Introduction: The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its applicability to large-scale problems. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. We transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components, based on Bochner's theorem and the Fourier transform. Taking advantage of sampling from the Fourier transform, FastMMD decreases the time complexity of MMD calculation from O(N^2 d) to O(Nd), where N and d are the size and dimension of the sample set, respectively. For kernels that are spherically invariant, the computation can be further accelerated to O(N log d) by using the Fastfood technique. The uniform convergence of our method has also been theoretically proved for both unbiased and biased estimates. We further provide a geometric explanation of our method, namely an ensemble of circular discrepancies, which offers insight into MMD and may inspire further metrics for the two-sample test.
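The random-feature idea behind FastMMD can be sketched directly. The fragment below approximates the Gaussian-kernel MMD between two 1-D samples with random cosine features in pure Python; names and parameters are ours, and the released MATLAB code implements the full method, including the Fastfood acceleration.

```python
import math
import random

def rff_mmd(xs, ys, bandwidth=1.0, n_features=256, seed=0):
    """Approximate the Gaussian-kernel MMD^2 between two 1-D samples.
    By Bochner's theorem the kernel is the expectation of cosine
    features, so each sample is mapped to sqrt(2/D)*cos(w*x + b) and
    MMD^2 is the squared distance between the two feature means:
    O(N*D) work instead of the O(N^2) pairwise evaluation."""
    rng = random.Random(seed)
    ws = [rng.gauss(0.0, 1.0 / bandwidth) for _ in range(n_features)]
    bs = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_features)]
    scale = math.sqrt(2.0 / n_features)

    def feature_mean(zs):
        return [scale * sum(math.cos(w * z + b) for z in zs) / len(zs)
                for w, b in zip(ws, bs)]

    mx, my = feature_mean(xs), feature_mean(ys)
    return sum((a - b) ** 2 for a, b in zip(mx, my))

random.seed(1)
same = [random.gauss(0, 1) for _ in range(300)]
same2 = [random.gauss(0, 1) for _ in range(300)]
shifted = [random.gauss(3, 1) for _ in range(300)]
mmd_null = rff_mmd(same, same2)    # near zero: same distribution
mmd_alt = rff_mmd(same, shifted)   # clearly positive: shifted mean
```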

 

[10] Yi Peng, Deyu Meng, Zongben Xu, Chenqiang Gao, Yi Yang, Biao Zhang. Decomposable Nonlocal Tensor Dictionary Learning for Multispectral Image Denoising. CVPR, 2014. [Supplementary Material] (We achieve state-of-the-art performance for multispectral image denoising.) [Matlab code].

Introduction: In this paper we propose an effective MSI denoising approach that jointly considers two intrinsic characteristics underlying an MSI: the nonlocal similarity over space and the global correlation across spectrum. Specifically, by explicitly considering the spatial self-similarity of an MSI, we construct a nonlocal tensor dictionary learning model with a group-block-sparsity constraint, which makes similar full-band patches (FBPs) share the same atoms from the spatial and spectral dictionaries. Furthermore, by exploiting the spectral correlation of an MSI and assuming over-redundancy of the dictionaries, the constrained nonlocal MSI dictionary learning model can be decomposed into a series of unconstrained low-rank tensor approximation problems, which can be readily solved by off-the-shelf higher-order statistics. Experimental results show that our method outperforms all state-of-the-art MSI denoising methods under comprehensive quantitative performance measures.

 

[9] Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang. A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding. ICCV, 2013. (A corrected solution for non-convex sparse coding via iterated thresholding.) [Supplementary material], [Matlab Code].

Introduction: In many sparse coding based image restoration and image classification problems, non-convex ℓp-norm minimization (0 ≤ p < 1) can often obtain better results than convex ℓ1-norm minimization. A number of algorithms, e.g., iteratively reweighted least squares (IRLS), the iterative thresholding method (ITM-ℓp), and look-up tables (LUT), have been proposed for non-convex ℓp-norm sparse coding, and analytic solutions have been suggested for some specific values of p. In this paper, by extending the popular soft-thresholding operator, we propose a generalized iterated shrinkage algorithm (GISA) for ℓp-norm non-convex sparse coding. Unlike the analytic solutions, the proposed GISA algorithm is easy to implement and can be adopted for solving non-convex sparse coding problems with arbitrary p values. Compared with LUT, GISA is more general and does not need to compute and store the look-up tables. Compared with IRLS and ITM-ℓp, GISA is theoretically more solid and can achieve more accurate solutions. Experiments on image restoration and sparse coding based face recognition are conducted to validate the performance of GISA.
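The core shrinkage operator can be sketched compactly. The Python fragment below solves the scalar problem min_x 0.5(x − y)^2 + λ|x|^p with the thresholding-plus-fixed-point scheme described above; it is an illustrative sketch following the paper's construction, with our own variable names (the released code is MATLAB).

```python
def gst(y, lam, p, iters=10):
    """Generalized shrinkage for min_x 0.5*(x - y)^2 + lam*|x|^p, 0 <= p < 1.
    Below a threshold tau the minimizer is exactly 0; above it, the
    nonzero stationary point is found by the fixed-point iteration
    x <- |y| - lam*p*x^(p-1), started at x = |y|."""
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(y) <= tau:
        return 0.0
    x = abs(y)
    for _ in range(iters):
        x = abs(y) - lam * p * x ** (p - 1)
    return x if y > 0 else -x
```

For p = 0 this reduces to hard thresholding at sqrt(2λ), and as p → 1 the threshold tends to λ as in soft thresholding, matching the two classical special cases the operator generalizes.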

 

[8] Lu Jiang, Wei Tong, Deyu Meng, Alexander G. Hauptmann. Towards Efficient Learning of Optimal Spatial Bag-of-Words Representations. ICMR, 2014 (Best paper runner-up). [Slides] Please download our code from the [JS Tiling webpage].

Introduction: Spatial Pyramid Matching (SPM) assumes that the spatial Bag-of-Words (BoW) representation is independent of data. However, evidence has shown that the assumption usually leads to a suboptimal representation. In this paper, we propose a novel method called Jensen-Shannon (JS) Tiling to learn the BoW representation from data directly at the BoW level. The proposed JS Tiling is especially appropriate for large-scale datasets as it is orders of magnitude faster than existing methods, but with comparable or even better classification precision. Specifically, JS Tiling systematically generates all possible spatial BoW representations called tilings, which are then evaluated using the computationally inexpensive metric based on the JS divergence. Experimental results on four benchmarks including two TRECVID12 datasets validate that JS Tiling outperforms the SPM and the state-of-the-art methods. The runtime comparison demonstrates that selecting BoW representations by JS Tiling is more than 1,000 times faster than running classifiers.
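The inexpensive metric at the core of JS Tiling is the Jensen-Shannon divergence between histograms. Below is a minimal Python sketch of that metric only; evaluating full tilings in the released code involves more machinery than this single function.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    (e.g. normalized spatial BoW histograms). Symmetric and bounded
    above by log(2), so scores are directly comparable."""
    def kl(a, b):
        return sum(ai * math.log(ai / bi)
                   for ai, bi in zip(a, b) if ai > 0)
    m = [(ai + bi) / 2 for ai, bi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Because the mixture distribution m has support wherever p or q does, the divergence is always finite, which is one reason it is preferred over plain KL for comparing sparse histograms.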

 

[7] Deyu Meng, Hengbin Cui, Zongben Xu, Kaili Jing. Following the Entire Solution Path of Sparse Principal Component Analysis by Coordinate-Pairwise Algorithm. Data & Knowledge Engineering, accepted, 2013. [PDF] [Matlab Code]

Introduction: In this paper we derive an algorithm to follow the entire solution path of the sparse principal component analysis (PCA) problem. The core idea is to iteratively identify the pair of variables along which the objective function of the sparse PCA model can be increased the most, and then incrementally update the coefficients of the two selected variables by a small step size. The new algorithm stands out for its ability to provide a computational shortcut to the entire spectrum of solutions of the sparse PCA problem, which is always beneficial in real applications. The proposed algorithm is simple and easy to implement.

 

[6] Yee Leung, Deyu Meng, Zongben Xu. Evaluation of a spatial relationship by the concept of intrinsic spatial distance. Geographical Analysis, 45(4): 380-400, 2013. [PDF] [Matlab code]

Introduction: We propose the concept of intrinsic spatial distance (ISD) for the study of a spatial relationship between any two points in space. The ISD is a distance measure that takes into account the separation of two points with respect to their physical and attribute closeness. We construct an algorithm to implement this concept as an ISD measurement. Based on the ISD concept, two points in space are related through a transitional path linking one to the other. As an ISD measurement decreases, the spatial relationship between two points becomes increasingly stronger. We argue theoretically and demonstrate empirically that the ISD concept is not predisposed in favor of the first law of geography, but directly considers variance of nearness in physical distance and attribute distance to derive the extent to which two points are spatially associated. Specifically, in single attribute cases, the information uncovered by an ISD measurement is more elaborate than that revealed by Moran’s I, local Moran’s I, and a semivariogram, giving a meticulous account of relatedness in both local and global contexts. The ISD concept is also sufficiently general to be used to study multiple attributes of relationships.

 

[5] Deyu Meng, Zongben Xu, Lei Zhang, Ji Zhao. A cyclic weighted median method for L1 low-rank matrix factorization with missing entries. AAAI, 2013. [PDF]  [Matlab code]

(A very simple but very efficient and effective L1 matrix factorization algorithm)

Introduction: In this work we propose a novel cyclic weighted median (CWM) method, which is intrinsically a coordinate descent algorithm, for L1-norm LRMF. The CWM method minimizes the objective by solving a sequence of scalar minimization sub-problems, each of which is convex and can be easily solved by the weighted median filter. Extensive experimental results validate that the CWM method outperforms state-of-the-art methods in terms of both accuracy and computational efficiency.
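Each scalar sub-problem mentioned above has a closed-form answer via the weighted median. The Python sketch below shows one coordinate update (variable names are ours; the released MATLAB code implements the full cyclic sweep over all coordinates of both factors).

```python
def weighted_median(values, weights):
    """Minimizer of sum_i w_i * |v_i - u| over u: sort the values and
    return the first one at which the cumulative weight reaches half
    of the total weight."""
    pairs = sorted(zip(values, weights))
    half, acc = sum(weights) / 2.0, 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

def cwm_scalar_step(x_row, v):
    """One scalar sub-problem of CWM for L1-norm LRMF: for a row x of
    the data matrix and the current factor v, the best coefficient u in
    min_u sum_j |x_j - u*v_j| is the weighted median of x_j / v_j with
    weights |v_j| (entries with v_j = 0 contribute nothing)."""
    vals = [xj / vj for xj, vj in zip(x_row, v) if vj != 0]
    wts = [abs(vj) for xj, vj in zip(x_row, v) if vj != 0]
    return weighted_median(vals, wts)
```

Because the median ignores the magnitude of extreme ratios, a single grossly corrupted entry does not pull the coefficient away, which is the source of the method's robustness.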

 

[4] Qian Zhao, Deyu Meng, Zongben Xu. A recursive divide-and-conquer approach for sparse principal component analysis. arXiv:1211.7219, 2013.[PDF] [Matlab code]

(Our method performs better than most of the state-of-the-art algorithms for sparse PCA)

Introduction: We propose a new method for sparse PCA based on the recursive divide-and-conquer methodology. The main idea is to separate the original sparse PCA problem into a series of much simpler sub-problems, each having a closed-form solution. By recursively solving these sub-problems in an analytical way, an efficient algorithm is constructed to solve the sparse PCA problem. The algorithm only involves simple computations and is thus easy to implement. The proposed method can also be very easily extended to other sparse PCA problems with certain constraints, such as the nonnegative sparse PCA problem. Furthermore, we have shown that the proposed algorithm converges to a stationary point of the problem, and its computational complexity is approximately linear in both data size and dimensionality.
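The sub-problem flavor can be illustrated with a rank-1 case. The sketch below alternates two closed-form updates for a sparse rank-1 approximation in pure Python; it is in the spirit of the divide-and-conquer decomposition, not the paper's exact recursion, and the formulation and names are ours.

```python
def soft(x, t):
    """Soft-thresholding: the closed-form solution of the scalar lasso."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def sparse_rank1(X, lam, iters=50):
    """Rank-1 sparse PCA by alternating two closed-form sub-problems:
        min_{u,v} ||X - u v^T||_F^2 + lam * ||v||_1,   ||u|| = 1.
    For fixed u, v_j = soft((X^T u)_j, lam/2); for fixed v, u = Xv/||Xv||.
    X is a list of rows."""
    m, n = len(X), len(X[0])
    u = [1.0] + [0.0] * (m - 1)
    v = [0.0] * n
    for _ in range(iters):
        v = [soft(sum(X[i][j] * u[i] for i in range(m)), lam / 2)
             for j in range(n)]
        xv = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm = sum(a * a for a in xv) ** 0.5
        if norm == 0:
            break
        u = [a / norm for a in xv]
    return u, v
```

Each update solves its sub-problem exactly, so every iteration decreases the objective, which is the property the recursive scheme relies on.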

 

[3] Deyu Meng, Qian Zhao, Zongben Xu. Improve robustness of sparse PCA by L1-norm maximization. Pattern Recognition. 2012, 45(1): 487-497. [PDF] [Matlab Code]

Introduction: A new sparse PCA method is proposed. Instead of maximizing the L2-norm variance of the input data as conventional sparse PCA methods do, the new method attempts to capture the maximal L1-norm variance of the data, which is intrinsically less sensitive to noise and outliers. A simple algorithm for the method is specifically designed; it is easy to implement and converges to a local optimum of the problem.
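The L1-norm variance objective can be illustrated with the classical fixed-point scheme for non-sparse L1-PCA, sketched below in pure Python. This is only the base objective the paper builds on; the paper's method additionally enforces sparsity on the projection vector, which this sketch omits.

```python
def pca_l1(X, iters=100):
    """First projection direction maximizing the L1-norm variance
    sum_i |w . x_i| subject to ||w|| = 1, via the fixed-point iteration
    w <- normalize(sum_i sign(w . x_i) * x_i)."""
    d = len(X[0])
    w = [1.0 / d ** 0.5] * d
    for _ in range(iters):
        s = [0.0] * d
        for x in X:
            sgn = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1.0
            for j in range(d):
                s[j] += sgn * x[j]
        norm = sum(a * a for a in s) ** 0.5
        if norm == 0:
            break
        w = [a / norm for a in s]
    return w

# Data concentrated along the first axis with small second-axis
# perturbations; the L1 direction should recover the first axis.
w = pca_l1([[3.0, 0.0], [-3.0, 0.0], [2.0, 0.1], [-2.0, -0.1], [2.5, 0.0]])
```

Because only the signs of the projections enter the update, a few large outliers contribute no more than any other point, in contrast to the squared contributions of L2-norm PCA.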

 

[2] Zongben Xu, Mingwei Dai, Deyu Meng. Fast and efficient strategies for model selection of Gaussian support vector machines. IEEE Transactions on Systems, Man and Cybernetics, Part B. 2009, 39(5): 1292-1307. [PDF] [Matlab Code]

(Our method can very effectively find a good σ for Gaussian kernel)

Introduction: This paper suggests how to select the kernel parameter (σ) and the penalty coefficient (C) of Gaussian support vector machines (SVMs). By viewing the model-parameter selection problem as a recognition problem in visual systems, a direct setting formula for the kernel parameter is derived by finding a visual scale at which the global and local structures of the given data set can be preserved in the feature space while the difference between the two structures is maximized. In addition, we propose a heuristic algorithm for selecting the penalty coefficient by identifying the classification extent of a training datum during the sequential minimal optimization (SMO) procedure, a well-developed and commonly used algorithm for SVM training.

 

[1] Deyu Meng, Yee Leung, Zongben Xu. Evaluating nonlinear dimensionality reduction based on its local and global quality assessments, Neurocomputing, 2011, 74(6): 941-948. [PDF] [Matlab code]

Introduction: We present a new quality assessment criterion for evaluating the performance of nonlinear dimensionality reduction (NLDR) methods. Unlike current criteria, which focus on the local-neighborhood-preserving performance of NLDR methods, the proposed criterion capitalizes on a new aspect: the global-structure-holding performance. By taking both properties into consideration, the intrinsic capability of an NLDR method can be more faithfully reflected, offering a more rational basis for selecting NLDR methods in real-life applications.
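For contrast with the criteria discussed above, here is a minimal Python sketch of the standard local neighborhood-preservation score, i.e. the kind of measure the proposed criterion argues should be complemented by a global-structure term. It is not the paper's criterion; names and details are ours.

```python
def knn_indices(points, i, k):
    """Indices of the k nearest neighbors of points[i] (squared
    Euclidean distance, ties broken by index)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
                   for j in range(len(points)) if j != i)
    return {j for _, j in dists[:k]}

def neighborhood_preservation(high, low, k=2):
    """Average fraction of each point's k nearest neighbors in the
    original space that are still among its k nearest neighbors in
    the embedding: 1.0 means perfectly preserved local structure."""
    n = len(high)
    return sum(len(knn_indices(high, i, k) & knn_indices(low, i, k)) / k
               for i in range(n)) / n
```

A perfect score here says nothing about whether far-apart clusters keep their relative arrangement, which is precisely the global aspect the proposed criterion adds.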