
Zhao Kang
https://sites.google.com/site/zhaokanghomepage/
Papers by Zhao Kang
been widely recognized by academia and industry. However, the recommendation quality is still rather low. Recently, a linear sparse and low-rank representation of the user-item matrix has been applied to produce Top-N recommendations. This approach uses the nuclear norm as a convex relaxation of the rank function and has achieved better recommendation accuracy than state-of-the-art methods. In the past several years, solving rank minimization problems via nonconvex relaxations has received increasing attention, and empirical results demonstrate that nonconvex relaxations can approximate the original problems better than convex ones. In this paper, we propose a novel rank approximation with controllable approximation error to enhance the performance of Top-N recommendation systems. Experimental results on real data show that the proposed rank approximation improves Top-N recommendation accuracy substantially.

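The "linear sparse and low-rank representation of the user-item matrix" setup above can be illustrated with a minimal Top-N scoring sketch; the interaction matrix `X` and item-item weight matrix `W` below are invented toy values, not the paper's learned model:

```python
import numpy as np

# Toy user-item interaction matrix (3 users, 4 items); 1 = interacted.
X = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 0, 0]], dtype=float)

# Hypothetical item-item aggregation weights (in SLIM-style models this
# matrix would be learned under sparsity and low-rank constraints).
W = np.array([[0.0, 0.1, 0.6, 0.2],
              [0.1, 0.0, 0.5, 0.3],
              [0.6, 0.5, 0.0, 0.1],
              [0.2, 0.3, 0.1, 0.0]])

scores = X @ W                   # predicted affinity of each user to each item
scores[X > 0] = -np.inf          # never re-recommend already-seen items
top_n = np.argsort(-scores, axis=1)[:, :2]   # Top-2 unseen items per user
```

The point of the sketch is only the scoring step `X @ W`; the papers above differ in how `W` (or the completed matrix) is regularized toward low rank.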
both in industry and academia. However, the recommendation quality is far from satisfactory. In this paper, we propose a simple yet promising algorithm: we fill the user-item matrix based on a low-rank assumption while simultaneously keeping the original information. To do so, a nonconvex rank relaxation, rather than the nuclear norm, is adopted to provide a better rank approximation, and an efficient optimization strategy is designed. A comprehensive set of experiments on real datasets demonstrates that our method pushes the accuracy of Top-N recommendation to a new level.

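The fill-the-matrix-while-keeping-observed-entries idea can be sketched with a plain hard-impute loop (fixed-rank SVD truncation alternated with restoring the known entries). This is a generic stand-in that assumes the rank is known; the paper itself uses a nonconvex rank relaxation instead:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic rank-3 "true" matrix and a 60%-observed mask.
true = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
mask = rng.random(true.shape) < 0.6
X = np.where(mask, true, 0.0)          # unobserved entries start at zero

for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[3:] = 0.0                        # enforce the low-rank assumption
    X = (U * s) @ Vt
    X[mask] = true[mask]               # keep the original information

err = np.abs(X[~mask] - true[~mask]).mean()  # error on the missing entries
```

Note the two alternating steps mirror the abstract's description: a low-rank projection followed by restoring the observed data.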
norm as a convex surrogate of the rank operator. However, the nuclear norm simply adds all singular values together, so the rank may not be well approximated in practical problems. In this paper, we propose using a log-determinant (LogDet) function as a smooth and closer, though nonconvex, approximation to the rank for obtaining a low-rank representation in subspace clustering. An augmented Lagrange multipliers strategy is applied to iteratively optimize the LogDet-based nonconvex objective function on potentially large-scale data. Using the angular information of the principal directions of the resulting low-rank representation, an affinity graph matrix is constructed for spectral clustering. Experimental results on motion segmentation and face clustering data demonstrate that the proposed method often outperforms state-of-the-art subspace clustering algorithms.

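A small numerical check shows why a LogDet-style surrogate tracks the rank better than the nuclear norm when one singular value is large; the exact surrogate form `sum(log(1 + sigma_i**2))` used here is an assumption for illustration, not necessarily the paper's definition:

```python
import numpy as np

# Build a symmetric rank-2 matrix with one dominant singular value.
U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((6, 6)))
sigma = np.array([100.0, 1.0, 0.0, 0.0, 0.0, 0.0])
X = (U * sigma) @ U.T

s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-8))          # the true rank: 2
nuclear = s.sum()                     # ~101, dominated by the largest value
logdet = np.log1p(s ** 2).sum()       # grows only logarithmically, ~9.9
```

The nuclear norm overshoots the rank by two orders of magnitude here, while the LogDet value stays within one order of it, which is the intuition behind "a smooth and closer approximation to rank."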
completion and subspace clustering, require a matrix to be low-rank. To meet this requirement, most existing methods use the nuclear norm as a convex proxy of the rank function and minimize it. However, the nuclear norm simply adds all nonzero singular values together instead of treating them equally, as the rank function does, which may not give a good rank approximation when some singular values are very large. To reduce this undesirable ...

The nuclear norm is used to substitute for the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together, and the approximation error may depend heavily on the magnitudes of the singular values. This can restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function, and we apply it to the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.
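The saturation behavior of an arctangent surrogate can be seen on a toy set of singular values; the `2/pi` scaling below is an assumption chosen so that each large singular value contributes roughly 1, making the sum comparable to the rank:

```python
import numpy as np

s = np.array([50.0, 5.0, 0.0])        # singular values of a rank-2 matrix

# Nuclear norm: linear in the magnitudes, so the large value dominates.
nuclear = s.sum()                      # 55.0

# Arctangent surrogate: atan saturates toward pi/2 for large inputs,
# so (2/pi) * atan(sigma) approaches 1 per nonzero singular value.
arctan_surrogate = (2 / np.pi) * np.arctan(s).sum()   # ~1.86, close to rank 2

rank = int(np.sum(s > 0))
```

This is exactly the "tighter approximation" claim: the surrogate's value is nearly insensitive to whether a nonzero singular value is 5 or 50, mimicking the rank function's equal treatment of nonzero singular values.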