Kernel K-Means Sampling for Nystrom Approximation
April 14, 2018
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in this work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the upper bound of the matrix approximation error, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, i.e., the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results with both the Gaussian kernel and the polynomial kernel, on real-world datasets and image segmentation tasks, show the superiority of the proposed method over state-of-the-art methods.
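As an illustrative sketch only (not the authors' implementation), the snippet below shows the general idea of a Nyström approximation whose landmark points come from clustering: the paper clusters in kernel (feature) space, whereas for brevity this sketch uses ordinary k-means on the input data with a Gaussian kernel. The function name nystrom_kmeans and the parameters m and gamma are hypothetical choices for this example.

```python
# Minimal sketch: Nystrom approximation of a Gaussian kernel matrix with
# landmarks chosen by k-means. The paper's method uses kernel k-means centers;
# plain input-space k-means is used here only as a simplified stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, m, gamma=0.1, seed=0):
    """Approximate the n x n kernel matrix K by the rank-m Nystrom form
    K_hat = C W^+ C^T, where the m landmarks are k-means centers of X."""
    centers = KMeans(n_clusters=m, random_state=seed, n_init=10).fit(X).cluster_centers_
    C = rbf_kernel(X, centers, gamma=gamma)        # n x m cross-kernel block
    W = rbf_kernel(centers, centers, gamma=gamma)  # m x m landmark kernel block
    return C @ np.linalg.pinv(W) @ C.T             # n x n approximation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10))
    K = rbf_kernel(X, X, gamma=0.1)
    K_hat = nystrom_kmeans(X, m=50, gamma=0.1)
    # Frobenius-norm error: the quantity whose upper bound the paper relates
    # to the k-means error in kernel space.
    print("relative Frobenius error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```

In this setup, better-placed landmarks typically reduce the Frobenius error of K_hat, which is consistent with the paper's argument that clustering-based centers minimize the error's upper bound.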