Best Paper for Young Researchers Prize and Session


This prize is sponsored by SAS.

The Best Paper Session will be held on Monday, July 25, from 4:40 to 6:00 pm in Baker Hall at the Zoellner Arts Center.

Finalists for the prize and session have been selected! (See the bottom of this page for information about the prize.)

The committee received approximately 40 high-quality nominations and submissions. While many were worthy, the committee has selected the following four as finalists.

We look forward to welcoming all ICCOPT/MOPTA 2022 participants for the prize session!

Finalist: Kabir Aladin Chandrasekher, Ashwin Pananjady, and Christos Thrampoulidis (co-finalists)

  • Paper: “Sharp global convergence guarantees for iterative nonconvex optimization with random data”
  • Abstract: Iterative algorithms are the workhorses of modern statistical learning, and are widely used to fit large-scale, complex models to random data. While the choice of an algorithm and its hyperparameters determines both the speed and fidelity of the learning pipeline, it is common for this choice to be made heuristically, either by expensive trial-and-error or by comparing rough bounds on convergence rates of various candidate algorithms. Motivated by this, we develop a principled framework that produces sharp, iterate-by-iterate characterizations of solution quality for algorithms run with sample-splitting on a wide range of nonconvex model-fitting problems with Gaussian data. I will present the general framework and highlight several concrete consequences for parameter estimation in some popular statistical models, covering both higher-order algorithms based on alternating updates and first-order algorithms based on subgradient descent. These corollaries reveal multiple nonstandard phenomena and facilitate rigorous comparisons between algorithms.
  • Bio: Kabir Chandrasekher is a PhD student in Electrical Engineering at Stanford University. Prior to joining Stanford, he obtained a BS from UC Berkeley in 2017. His research interests include high-dimensional statistics and optimization. He is a recipient of an NSF Graduate Research Fellowship as well as a Stanford Graduate Fellowship.
  • Bio: Ashwin Pananjady is an Assistant Professor at Georgia Tech, with a joint appointment between the Schools of Industrial and Systems Engineering and Electrical and Computer Engineering. His research interests lie broadly in statistics, optimization, and information theory, as well as their applications in data science, machine learning, and reinforcement learning. Pananjady received his Ph.D. from the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley, and a B.Tech. in Electrical Engineering from the Indian Institute of Technology Madras. His honors include an Adobe Data Science Faculty Research Award, the inaugural Lawrence D. Brown Award from the Institute of Mathematical Statistics, and the David J. Sakrison Memorial Prize for the best dissertation in EECS at UC Berkeley (2020).
  • Bio: Christos Thrampoulidis is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of British Columbia. Previously, he was an Assistant Professor at the University of California, Santa Barbara, and a Postdoctoral Researcher at MIT. He received an M.Sc. and a Ph.D. in Electrical Engineering from Caltech in 2012 and 2016, respectively, with a minor in Applied and Computational Mathematics. In 2011, he received a Diploma in ECE from the University of Patras, Greece. His research is on high-dimensional estimation, machine learning, and optimization. He has received the 2014 Qualcomm Innovation Fellowship, the Andreas Mentzelopoulos Scholarship, and the I. Milias Award from the University of Patras.

Finalist: Christopher Criscitiello

  • Paper: “Negative curvature obstructs acceleration for geodesically convex optimization, even with exact first-order oracles”
  • Abstract: Hamilton and Moitra (2021) showed that, in certain regimes, it is not possible to accelerate Riemannian gradient descent in the hyperbolic plane if we restrict ourselves to algorithms which make queries in a (large) bounded domain and which receive gradients and function values corrupted by a (small) amount of noise. We show that acceleration remains unachievable for any deterministic algorithm which receives exact gradient and function-value information (unbounded queries, no noise). Our results hold for the classes of strongly and nonstrongly geodesically convex functions, and for a large class of Hadamard manifolds including hyperbolic spaces and the symmetric space SL(n)/SO(n) of positive definite n×n matrices of determinant one. This cements a surprising gap between the complexity of convex optimization and geodesically convex optimization: for hyperbolic spaces, Riemannian gradient descent is optimal on the class of smooth and strongly geodesically convex functions, in the regime where the condition number scales with the radius of the optimization domain. The key idea for proving the lower bound consists of perturbing the hard functions of Hamilton and Moitra (2021) with sums of bump functions chosen by a resisting oracle.
  • Co-author: Nicolas Boumal
  • Bio: Christopher Criscitiello is a PhD student at EPFL working in the group of Nicolas Boumal. He is broadly interested in nonconvex optimization, with a focus on geodesic convexity, optimization on manifolds, low-rank optimization, complexity of optimization methods, and applications in statistics and machine learning. He received a bachelor’s degree in mathematics, magna cum laude, from Princeton University.

Finalist: X.Y. Han

  • Paper: “Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization”
  • Abstract: For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging: even when the objective is smooth at every iterate, the corresponding local models are unstable, and traditional remedies need unpredictably many cutting planes. We instead propose a multipoint generalization of the gradient descent iteration for local optimization. While designed with general objectives in mind, we are motivated by a “max-of-smooth” model that captures the subdifferential dimension at optimality. We prove linear convergence when the objective is itself max-of-smooth, and experiments suggest a more general phenomenon.
  • Co-author: Adrian S. Lewis
  • Bio: X.Y. Han is a PhD candidate in Cornell University’s School of Operations Research and Information Engineering (ORIE), advised by Prof. Adrian S. Lewis. He holds a BSE in Operations Research and Financial Engineering (ORFE) from Princeton University and an MS in Statistics from Stanford University. His thesis work at Cornell focuses on the design of fast first-order methods for nonsmooth optimization.

Finalist: Shanyin Tong

  • Paper: “Optimization under rare chance constraints”
  • Abstract: Chance constraints provide a principled framework to mitigate the risk of high-impact extreme events by modifying the controllable properties of a system. The low probability and rare occurrence of such events, however, impose severe sampling and computational requirements on classical solution methods that render them impractical. This work proposes a novel sampling-free method for solving rare chance constrained optimization problems affected by uncertainties that follow general Gaussian mixture distributions. By integrating modern developments in large deviation theory with tools from convex analysis and bilevel optimization, we propose tractable formulations that can be solved by off-the-shelf solvers. Our formulations enjoy several advantages compared to classical methods: their size and complexity are independent of event rarity, they do not require linearity or convexity assumptions on system constraints, and, under easily verifiable conditions, they serve as safe conservative approximations or asymptotically exact reformulations of the true problem. Computational experiments on linear, nonlinear, and PDE-constrained problems from applications in portfolio management, structural engineering, and fluid dynamics illustrate the broad applicability of our method and its advantages over classical sampling-based approaches in terms of both accuracy and efficiency.
  • Co-authors: Anirudh Subramanyam and Vishwas Rao
  • Bio: Shanyin Tong received her Ph.D. in mathematics under the supervision of Georg Stadler and Eric Vanden-Eijnden at New York University’s Courant Institute of Mathematical Sciences. She will join the Department of Applied Physics and Applied Mathematics at Columbia University as the Chu Assistant Professor this July. Her research focuses on applied and computational mathematics, particularly rare events, uncertainty quantification, PDE-constrained optimization, stochastic optimization, and inverse problems.

Nominations for the prize are no longer being accepted. For reference, the text below is the original invitation for nominations.

Nominations/submissions are invited for the Best Paper Prize for Young Researchers. The submitted papers should be in the area of continuous optimization and satisfy at least one of the following three criteria:

  • Passed the first round of a normal refereeing process in a journal;
  • Published in 2019 or later;
  • Certified by a thesis advisor or postdoctoral mentor as a well-polished paper that is ready for submission to a journal.

Papers may be single-authored or multi-authored, subject to the following conditions:

  • Each paper must have at least one author who was less than 30 years of age on January 1, 2017, or who had not earned a Ph.D. before that date.
  • Only authors satisfying at least one of these conditions (i.e., the age criterion or the Ph.D. criterion) may be considered for the prize.
  • In the case of joint authorship involving senior researchers (i.e., those who satisfy neither the age criterion nor the Ph.D. criterion), one senior author must certify the contribution of the nominated author(s) to the work.

A selection committee will decide questions concerning eligibility in exceptional cases. Selection will be based solely on the quality of the paper, including the originality of its results and its potential impact. The following items are required for a valid nomination/submission:

  1. The paper for consideration;
  2. A brief description of the contribution (limited to 2 pages);
  3. A statement on the status of the paper: published in a journal, accepted, under review, or not submitted. In the last case, a statement by the thesis advisor or postdoctoral mentor on the paper’s readiness for submission should be included;
  4. A certification of the qualifying author’s eligibility in terms of age or Ph.D. (by the qualifying author’s advisor or department chair);
  5. In case of joint authorship involving a senior researcher, a certification by the senior researcher about the qualifying author’s contribution to the work.

When and how to submit. The deadline for submission is May 1, 2022. Submissions should be sent electronically in PDF format to the Chair of the Selection Committee, Dr. Katya Scheinberg. The Selection Committee members are

  • Katya Scheinberg, Chair (Cornell University)
  • Miguel Anjos (University of Edinburgh)
  • Johannes Royset (Naval Postgraduate School)
  • Suvrit Sra (Massachusetts Institute of Technology)

Up to four papers will be selected as finalists of the competition. The finalists will be featured in a dedicated session at ICCOPT 2022, and the winner will be determined after this session. Finalists may also give another talk at the conference if they wish.

The winner of the prize will be announced at the conference dinner and presented with a USD 1,000 award. All finalists will receive free admission to the conference dinner and a certificate.