Monday, July 20, 2009

An Efficient Projection for L_{1, infty} Regularization


This optimization problem is a little different from the L1-penalized version, in that the variable to be optimized is a matrix, and the corresponding penalty is the so-called L_{1, \infty} norm,
\| X \|_{1, \infty} = \sum_{i = 1}^m \max_j | X_{i, j} |.
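For concreteness, a one-line NumPy evaluation of this norm on an arbitrary small nonnegative matrix (the values below are just made up for illustration):

import numpy as np

# an arbitrary nonnegative 2 x 3 example matrix
X = np.array([[1.0, 3.0, 2.0],
              [0.5, 0.5, 4.0]])

# L_{1,inf} norm: sum over rows of the per-row maximum absolute entry
l1_inf = np.abs(X).max(axis=1).sum()
print(l1_inf)  # 3.0 + 4.0 = 7.0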
The optimization builds on a previous ICML 07 paper, but now we are dealing with matrices instead of vectors. For example, the toy projection problem becomes its matrix version
\min_{B, \mu} \frac{1}{2} \sum_{i, j} (A_{i, j} - B_{i, j})^2,
subject to \forall i, j: B_{i, j} \leq \mu_i, \; \sum_i \mu_i = C, and B_{i, j} \geq 0, given a nonnegative matrix A.
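As a minimal sketch of this projection (not the linear-time algorithm from the paper): once the row thresholds \mu_i are known, the optimal B is simply B_{i,j} = \min(A_{i,j}, \mu_i), and the KKT conditions force every active row to satisfy \sum_j \max(A_{i,j} - \mu_i, 0) = \theta for a shared multiplier \theta, which can be found by bisection. The function names and the tolerance below are my own, and A is assumed nonnegative.

import numpy as np

def row_threshold(a_row, theta):
    # water-filling on one nonnegative row:
    # find mu >= 0 with sum_j max(a_row[j] - mu, 0) = theta
    if a_row.sum() <= theta:
        return 0.0
    s = np.sort(a_row)[::-1]                  # entries in decreasing order
    cumsum = np.cumsum(s)
    for k in range(len(s)):
        mu = (cumsum[k] - theta) / (k + 1)    # assume only the k+1 largest exceed mu
        if k + 1 == len(s) or mu >= s[k + 1]:
            return max(mu, 0.0)
    return 0.0

def project_l1_inf(A, C, tol=1e-9):
    # Euclidean projection of a nonnegative matrix A onto
    # {B >= 0 : sum_i max_j B_ij <= C}, by bisection on theta
    A = np.asarray(A, dtype=float)
    if A.max(axis=1).sum() <= C:              # already inside the ball
        return A.copy()
    lo, hi = 0.0, A.sum(axis=1).max()         # sum_i mu_i is nonincreasing in theta
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        mu = np.array([row_threshold(row, theta) for row in A])
        if mu.sum() > C:
            lo = theta                        # thresholds still sum above C
        else:
            hi = theta
    mu = np.array([row_threshold(row, 0.5 * (lo + hi)) for row in A])
    return np.minimum(A, mu[:, None])         # clip each row at its threshold

On the toy matrix X above, project_l1_inf(X, 5.0) returns a matrix whose row maxima sum to (approximately) 5, i.e. the constraint is tight whenever \| A \|_{1, \infty} > C.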

But somehow there seems to be nothing else new.

2 comments:

Anonymous said...

See also this paper from last year:
http://www.optimization-online.org/DB_FILE/2008/07/2056.pdf

Unknown said...

thanks for the link