Blog posts


Good Initialization for Alternating Minimization

2 minute read


This short post is on the importance of proper initialization when using alternating minimization over two variables, where the objective function is convex in each variable individually but not jointly convex in both.
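As a quick illustration of the setting described above, here is a minimal sketch (a hypothetical toy example, not code from the post) of alternating minimization on a rank-1 matrix approximation problem, which is convex in each factor separately but not jointly, and where a bad initialization stalls the iteration:

```python
# Toy problem: rank-1 approximation of a 2x2 matrix A,
#   min_{u,v} sum_{i,j} (A[i][j] - u[i]*v[j])^2,
# convex in u for fixed v (and vice versa) but not jointly convex,
# so a poor initialization can stall alternating minimization.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def matvec(A, v):
    return [dot(row, v) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def alt_min(A, v0, iters=50):
    u, v = [0.0, 0.0], list(v0)
    for _ in range(iters):
        if dot(v, v) > 1e-12:
            # exact minimizer over u with v fixed
            u = [x / dot(v, v) for x in matvec(A, v)]
        if dot(u, u) > 1e-12:
            # exact minimizer over v with u fixed
            v = [x / dot(u, u) for x in matvec(transpose(A), u)]
    return u, v

def error(A, u, v):
    return sum((A[i][j] - u[i] * v[j]) ** 2
               for i in range(2) for j in range(2))

# Rank-1 matrix: outer product of (1, 2) and (3, 1).
A = [[3.0, 1.0], [6.0, 2.0]]

# Generic initialization recovers A exactly (error ~0).
print(error(A, *alt_min(A, [1.0, 1.0])))

# Bad initialization: v0 is orthogonal to (3, 1), so A v0 = 0 and the
# iterates never leave u = 0; the error stays at the full squared norm of A.
print(error(A, *alt_min(A, [1.0, -3.0])))
```

Each inner update is a closed-form convex least-squares solve, yet the orthogonal initialization is a stationary point of the alternating scheme with maximal error, which is exactly the failure mode that good initialization avoids.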

Recent Advances in Non-Convex Optimization for Deep Learning

8 minute read


This post summarizes recent advances in non-convex optimization for deep learning, discussing the optimality of local minima for several models, the issue of saddle points, and modifications to stochastic gradient descent that are robust to saddle points.

Theoretical Research in Deep Learning

6 minute read


This post contains some guidelines for doing theoretical research in deep learning (and machine learning in general), gathered from my own experience and from some highly experienced researchers, strictly for newbies!