Clustering and Ranking from Pairwise Comparisons over Hilbert Spaces – We consider kernel approximations for decision problems solved with the stochastic gradient method, and propose two simple formulations of the kernel method. Traditionally, this amounts to introducing a non-negative $k$-norm regularizer, that is, a kernel-induced penalty that is independent of the objective function. We prove a tight connection to the conventional gradient method, which is equivalent to the nonparametric gradient method, and we illustrate the connection on a Bayesian network of the same type.
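The abstract compresses the method into a sentence; as a rough illustration of what a kernel-norm-regularized stochastic gradient update can look like (a NORMA-style sketch, not the paper's actual algorithm), consider the following. The Gaussian kernel, step size, and function names are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    # RBF kernel: fixed in advance, independent of the objective function
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_sgd(X, y, lam=0.1, eta=0.1, gamma=1.0):
    """NORMA-style stochastic gradient descent in an RKHS (illustrative).

    Maintains f(x) = sum_i alpha_i * k(c_i, x). Each step shrinks all
    coefficients by (1 - eta * lam), the gradient of the norm penalty
    (lam / 2) * ||f||_H^2, then appends one new expansion term from the
    squared loss at the current sample.
    """
    alphas, centers = [], []
    for x_t, y_t in zip(X, y):
        pred = sum(a * gaussian_kernel(c, x_t, gamma)
                   for a, c in zip(alphas, centers))
        alphas = [(1.0 - eta * lam) * a for a in alphas]   # norm regularizer
        alphas.append(-eta * (pred - y_t))                 # loss gradient term
        centers.append(np.asarray(x_t))
    return alphas, centers

# toy usage on a 1-D regression problem
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 1))
y = np.sin(X[:, 0])
alphas, centers = kernel_sgd(X, y)
```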
Given a network of latent variables, we propose a non-local model that learns the model parameters from a single source random variable in the latent space, without learning the other latent variables themselves. We show that this method improves on the state of the art, outperforming both local models that learn the parameters from a latent random variable and non-local models that learn the parameters directly, and that the resulting model performs better on real-world datasets.
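The abstract leaves the mechanics implicit; one plausible reading is a hypernetwork-style setup in which a single source latent z generates the model parameters, so the remaining latent variables are never parameterized or learned. A minimal sketch under that assumption (the tanh generator and the toy objective are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_source(dim=8):
    # the single "source" random variable in the latent space
    return rng.standard_normal(dim)

def generate_params(z, W):
    # model parameters are a function of z alone; no per-variable
    # parameters for the other latents are ever introduced
    return np.tanh(W @ z)

# toy objective: pull the generated parameters toward a fixed target
target = rng.standard_normal(4)
W = 0.1 * rng.standard_normal((4, 8))
for _ in range(500):
    z = sample_source()
    theta = generate_params(z, W)
    # chain rule through tanh: dL/dW for L = 0.5 * ||theta - target||^2
    grad = np.outer((theta - target) * (1.0 - theta ** 2), z)
    W -= 0.05 * grad
```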
A unified and globally consistent approach to interpretive scaling
Sufficiency detection in high-dimension: from unsupervised learning to scale constrained k-means
On the Nature of Randomness in Belief Networks
Learning Gaussian Graphical Models by Inverting