# Good Initialization for Alternating Minimization

Relating this to my own work, I have used AM for two optimization problems so far. The first was in my paper “Sparse Kernel PCA for Outlier Detection”, where AM is used to obtain the approximate sparse eigenvectors. In that case we were lucky: using the actual eigenvectors as the initial solution worked pretty well (nothing genius about that!). It is the second problem where I am struggling to find a good initialization.

The second problem is non-linear blind compressed sensing. It is very similar to the dictionary learning problem, except that there is also a sensing matrix term (which is known, by the way) that is different for every training example and, most importantly, a non-linear transformation of the data. There are also positivity constraints on the dictionary as well as on the sparse codes, due to the domain of the non-linear function under consideration. On top of that, the sensing matrix is not the usual Gaussian or Bernoulli ($\pm 1$) matrix ):
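To make the second problem concrete, here is a rough sketch of the kind of AM loop I have in mind. Everything below is my illustrative assumption, not the actual formulation from any paper: I assume a model of the form $y_t \approx f(\Phi_t D x_t)$ with known per-example sensing matrices $\Phi_t$, a non-negative dictionary $D$, non-negative (and sparse) codes $x_t$, and $f = \log(1+u)$ standing in for the non-linearity. The alternation is a projected gradient step on $D$ followed by projected gradient steps on the codes (a sparsity penalty is omitted to keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from any paper):
# m measurements, n signal dim, k dictionary atoms, T training examples.
m, n, k, T = 20, 30, 5, 15

f = np.log1p                              # assumed non-linearity with non-negative domain
fprime = lambda u: 1.0 / (1.0 + u)        # its derivative

Phi = rng.random((T, m, n))               # known, example-specific sensing matrices
D_true = rng.random((n, k))               # non-negative ground-truth dictionary
X_true = rng.random((k, T)) * (rng.random((k, T)) < 0.3)  # sparse non-negative codes
Y = np.stack([f(Phi[t] @ D_true @ X_true[:, t]) for t in range(T)], axis=1)

def loss(D, X):
    return sum(np.sum((Y[:, t] - f(Phi[t] @ D @ X[:, t])) ** 2) for t in range(T))

def am(D, X, iters=100, step=1e-4):
    """Alternating projected-gradient steps on D and X (sparsity penalty omitted)."""
    for _ in range(iters):
        # D-step: gradient of the squared loss through f, then project onto D >= 0.
        gD = np.zeros_like(D)
        for t in range(T):
            u = Phi[t] @ D @ X[:, t]
            v = 2.0 * (f(u) - Y[:, t]) * fprime(u)    # chain rule through f
            gD += np.outer(Phi[t].T @ v, X[:, t])
        D = np.maximum(D - step * gD, 0.0)
        # X-step: same idea for each code, projected onto x_t >= 0.
        for t in range(T):
            u = Phi[t] @ D @ X[:, t]
            v = 2.0 * (f(u) - Y[:, t]) * fprime(u)
            X[:, t] = np.maximum(X[:, t] - step * (D.T @ (Phi[t].T @ v)), 0.0)
    return D, X

D0, X0 = rng.random((n, k)), rng.random((k, T))   # the naive random initialization
D1, X1 = am(D0.copy(), X0.copy())
print(loss(D0, X0), "->", loss(D1, X1))           # loss goes down, but to where?
```

The random draw of `D0, X0` in the last lines is exactly the weak point this post is about: the loop happily decreases the loss from any starting point, but which basin it lands in depends on that initialization.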