Convex Tensor Decomposition with the Deterministic Kriging Distance – We present a method for transforming a convolutional neural network into a graph denoising model, a simple variant of convolutional neural networks that requires more computation. The algorithm is based on a recursive inference procedure that uses the data structure as a learning target in order to avoid overfitting. We show that the resulting graph degradations can be used directly to learn non-linear functions of the network structure, and that they outperform state-of-the-art methods in this domain. We also show that the graph degradations are independent of the input weights of the network. Finally, experiments demonstrate that our method improves the performance of graph denoising models on ImageNet.
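The abstract does not spell out how the graph denoising step operates. As one hypothetical illustration only (the function names `normalized_adjacency` and `denoise`, the anchoring weight `alpha`, and the iteration count are all assumptions, not the authors' method), a minimal sketch of denoising node features by propagating them over a normalized graph adjacency might look like:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def denoise(features, A, steps=10, alpha=0.5):
    """Iteratively average node features with their neighbors (graph smoothing),
    while anchoring to the original input to avoid over-smoothing."""
    S = normalized_adjacency(A)
    X = features.copy()
    for _ in range(steps):
        X = alpha * (S @ X) + (1 - alpha) * features
    return X
```

On a noisy signal over a path graph, this kind of propagation reduces the squared differences across edges while keeping the output close to the input, which is the generic behavior one would expect from a graph denoising layer.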

We propose a nonconvex optimization problem whose objective is to solve an $L_1$-best class $\mathbf{u}(1,2)$. The objective is to recover an $L_1$-best class $\mathbf{u}(2,3)$ with a worst-case convergence rate $O(C^{\alpha x^2})$, which is better than the classical rate obtained with $\mathbf{u}(3)$. The objective in the NP-MLT is to find an optimal decision-theoretic maximum of the optimal decision, with a nonconvex regret bound of $O(C^{\alpha})$. The algorithm has quadratic complexity and runs in polynomial time on a finite dataset. We prove that the objective in the NP-MLT is the best one possible under mild assumptions about the distribution of the data.
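The abstract does not define its $L_1$ objective concretely. As a hedged illustration of the standard $L_1$-penalized least-squares problem this phrasing gestures at, $\min_u \tfrac{1}{2}\|Au - b\|^2 + \lambda\|u\|_1$, one can sketch proximal gradient descent (ISTA); the names `ista` and `soft_threshold` and all parameter choices here are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, step=None, iters=200):
    """Proximal gradient (ISTA) for min_u 0.5*||Au - b||^2 + lam*||u||_1."""
    if step is None:
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - b)          # gradient of the smooth part
        u = soft_threshold(u - step * grad, step * lam)  # proximal step
    return u
```

For $A = I$ the iteration reaches its fixed point immediately: each coordinate of $b$ is shrunk toward zero by $\lambda$, so small entries are set exactly to zero, which is the sparsity-inducing behavior of the $L_1$ penalty.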

Ranking Forests using Neural Networks

Evaluating Deep Predictive Models on Unlabeled Data for Detecting Drug-Drug Interaction

Deep Learning with Deep Hybrid Feature Representations

Fast, Robust and Non-Convex Sparse Clustering using k-NS