The effectiveness of nonconvex optimization in two problems

Yuxin Chen, Princeton University
Fine Hall 224

Many information and data science applications involve computing maximum likelihood estimates (MLEs) over a high-dimensional space. Unfortunately, most MLEs are solutions to nonconvex problems, which are in general intractable. Fortunately, some of these problems are not as hard as they may seem and can be solved by optimizing the nonconvex objectives directly. This talk illustrates this observation through two problems: (1) solving a random quadratic system of equations, which arises in applications ranging from the century-old phase retrieval problem to various latent-variable models in machine learning; and (2) recovering a collection of discrete variables from noisy pairwise differences, which arises when one wishes to jointly align multiple images or to retrieve genome phases from paired sequencing reads. In both cases, the proposed solutions run in linear time while achieving a statistical accuracy that is nearly unimprovable.
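
To give a sense of what "optimizing the nonconvex objective directly" can look like for problem (1), the following is a minimal sketch, assuming a real-valued Gaussian measurement model y_i = (a_i^T x)^2 and using a spectral initialization followed by plain gradient descent on the nonconvex least-squares objective. It is written in the spirit of Wirtinger-flow-type methods as an illustration, not as the speaker's exact algorithm; all parameter choices below (dimensions, step size, iteration count) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Problem setup: recover x from y_i = (a_i^T x)^2 with random Gaussian a_i.
    n, m = 50, 500                      # signal dimension, number of equations
    x_true = rng.standard_normal(n)
    A = rng.standard_normal((m, n))     # measurement vectors a_i as rows
    y = (A @ x_true) ** 2               # quadratic measurements

    # Spectral initialization: leading eigenvector of (1/m) * sum_i y_i a_i a_i^T,
    # rescaled so its norm matches the scale implied by the measurements.
    Y = (A.T * y) @ A / m
    eigvals, eigvecs = np.linalg.eigh(Y)
    x = eigvecs[:, -1] * np.sqrt(y.mean())

    # Plain gradient descent on the nonconvex least-squares objective
    #   f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2.
    step = 0.1 / y.mean()               # illustrative step size choice
    for _ in range(500):
        z = A @ x
        grad = A.T @ ((z ** 2 - y) * z) / m
        x -= step * grad

    # The solution is only identifiable up to a global sign.
    err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
    print(f"relative error: {err / np.linalg.norm(x_true):.2e}")

Each iteration costs O(mn) time (a couple of matrix-vector products), which is linear in the size of the data, matching the flavor of the linear-time guarantee mentioned above.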