Semi-Plenary Speakers


Daniel Kuhn

Short Biography: Daniel Kuhn holds the Chair of Risk Analytics and Optimization at EPFL. Before joining EPFL, he was a faculty member at Imperial College London (2007-2013) and a postdoctoral researcher at Stanford University (2005-2006). He received a PhD in Economics from the University of St. Gallen in 2004 and an MSc in Theoretical Physics from ETH Zurich in 1999. His research interests revolve around optimization under uncertainty.

Talk Title: On the Interplay of Optimal Transport and Distributionally Robust Optimization

Talk Abstract: Optimal Transport (OT) seeks the most efficient way to morph one probability distribution into another, while Distributionally Robust Optimization (DRO) studies worst-case risk minimization problems under distributional ambiguity. It is well known that OT gives rise to a rich class of data-driven DRO models, in which the decision-maker plays a zero-sum game against nature, who can adversely reshape the empirical distribution of the uncertain problem parameters within a prescribed transportation budget. Even though generic OT problems are computationally hard, the Nash strategies of the decision-maker and nature in OT-based DRO problems can often be computed efficiently. In this talk we will uncover deep connections between robustification and regularization, and we will disclose striking properties of nature’s Nash strategy, which implicitly constructs an adversarial training dataset. We will also show that OT-based DRO offers a principled approach to dealing with distribution shifts and heterogeneous data sources, and we will highlight new applications of OT-based DRO in machine learning, statistics, risk management and control. Finally, we will argue that, while OT is useful for DRO, ideas from DRO can also help us solve challenging OT problems.
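
A point of reference for the setup sketched in the abstract (the notation here is ours, not necessarily that of the talk): in a canonical OT-based DRO problem, the decision-maker minimizes the worst-case expected loss over all distributions within a transportation budget \rho of the empirical distribution \widehat{P}_N,

\[
\min_{x \in X} \; \sup_{Q \,:\, W_c(Q, \widehat{P}_N) \le \rho} \; \mathbb{E}_{\xi \sim Q}\big[\ell(x,\xi)\big],
\qquad
W_c(Q, P) \;=\; \inf_{\pi \in \Pi(Q, P)} \mathbb{E}_{(\xi, \xi') \sim \pi}\big[c(\xi, \xi')\big],
\]

where \ell is the loss, c is the transportation cost inducing the OT distance W_c, and \Pi(Q, P) denotes the set of couplings of Q and P. The inner supremum is nature's problem: it adversely reshapes the empirical distribution within the budget \rho, and its optimal solution plays the role of the adversarial training dataset alluded to above.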


Guanghui (George) Lan

Short Biography: Guanghui (George) Lan is the A. Russell Chandler III Professor in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. Dr. Lan was on the faculty of the Department of Industrial and Systems Engineering at the University of Florida from 2009 to 2015, after earning his Ph.D. from the Georgia Institute of Technology in August 2009. His main research interests lie in optimization and machine learning. His academic honors include being a finalist for the Mathematical Optimization Society Tucker Prize (2012), first place in the INFORMS Junior Faculty Interest Group Paper Competition (2012), and the National Science Foundation CAREER Award (2013). Dr. Lan serves as an associate editor for Mathematical Programming, SIAM Journal on Optimization, and Computational Optimization and Applications. He is also an associate director of the Center for Machine Learning at Georgia Tech.

Talk Title: Policy Mirror Descent for Reinforcement Learning

Talk Abstract: Reinforcement Learning (RL) has attracted considerable interest from both industry and academia during the past few years. The study of RL algorithms with provable rates of convergence, however, is still in its infancy. In this talk, we discuss some recent progress that bridges stochastic nonlinear programming with RL. We pay special attention to online reinforcement learning, which aims to continuously improve system performance in situ as better and better policies are discovered and deployed. More specifically, we introduce a new and general class of policy mirror descent (PMD) methods and show that they achieve linear convergence in the deterministic case and optimal sampling complexity in the stochastic case for discounted Markov decision processes. We also show how the gradient information can be estimated efficiently online through a few recently proposed conditional temporal difference methods. Extensions of these algorithms to the average-reward and block-coordinate settings will also be discussed.
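
For orientation, a generic policy mirror descent update for a discounted Markov decision process can be sketched as follows (illustrative notation, not necessarily that used in the talk): given the current policy \pi_k and its state-action value function Q^{\pi_k}, the next policy solves, for every state s,

\[
\pi_{k+1}(\cdot \mid s) \;\in\; \arg\min_{p \in \Delta(\mathcal{A})}
\Big\{ \eta_k \,\big\langle Q^{\pi_k}(s,\cdot),\, p \big\rangle \;+\; D\big(p,\ \pi_k(\cdot \mid s)\big) \Big\},
\]

where \Delta(\mathcal{A}) is the probability simplex over actions, \eta_k is a stepsize, and D is a Bregman divergence (with the KL divergence, the update admits a softmax-type closed form). In the stochastic setting, Q^{\pi_k} is replaced by an online estimate, for instance one produced by the conditional temporal difference methods mentioned in the abstract.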


Angelia Nedich

Short Biography: Angelia Nedich holds a Ph.D. from Moscow State University, Moscow, Russia, in Computational Mathematics and Mathematical Physics (1994), and a Ph.D. from the Massachusetts Institute of Technology, Cambridge, USA, in Electrical Engineering and Computer Science (2002). She has worked as a senior engineer at BAE Systems North America, Advanced Information Technology Division, in Burlington, MA. Currently, she is a faculty member of the School of Electrical, Computer and Energy Engineering at Arizona State University in Tempe. Prior to joining Arizona State University, she was a Willard Scholar faculty member at the University of Illinois at Urbana-Champaign. She is a recipient (jointly with her co-authors) of the Best Paper Award at the Winter Simulation Conference 2013 and the Best Paper Award at the International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt) 2015. Her general research interest is in the dynamics and optimization of large-scale complex systems.

Talk Title: Random Methods for Large-Scale Constrained Optimization Problems

Talk Abstract: Optimization problems with a large number of constraints are emerging in many application domains, such as optimal control, reinforcement learning, statistical learning, and artificial intelligence in general. The challenges posed by the size of the problems in these applications have resulted in prolific research on optimization theory and algorithms. Many refinements and accelerations of various (mainly) first-order methods have been proposed and studied, the majority of which solve a penalized reformulation of the original problem in order to cope with the large number of constraints. While the main focus has been on such penalized variants, this talk offers an alternative approach. It will focus on a different viewpoint and discuss optimization methods that use randomization to deal with a large number of constraints. The performance and efficiency of such algorithms will be addressed, as well as the auxiliary theory that supports them.
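
To illustrate the randomized viewpoint (a generic sketch in our own notation, not necessarily one of the specific algorithms of the talk), consider minimizing f(x) over the intersection of many convex sets X_1, ..., X_m. A random projection-type method samples a single constraint per iteration instead of enforcing all of them at once:

\[
i_k \sim \mathrm{Uniform}\{1,\dots,m\}, \qquad
x_{k+1} \;=\; \Pi_{X_{i_k}}\!\big( x_k - \alpha_k \nabla f(x_k) \big),
\]

where \Pi_{X_i} denotes the Euclidean projection onto X_i and \alpha_k is a stepsize. Each iteration touches only one (cheap) constraint, which is what allows such methods to scale with the number of constraints m.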


Pablo Parrilo

Short Biography: Pablo A. Parrilo is the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and Computer Science at MIT, with a joint appointment in Mathematics. He is affiliated with the Laboratory for Information and Decision Systems (LIDS) and the Operations Research Center (ORC). Past appointments include Assistant Professor at the Automatic Control Laboratory of the Swiss Federal Institute of Technology (ETH Zurich), and Visiting Associate Professor at the California Institute of Technology. He received an Electronics Engineering undergraduate degree from the University of Buenos Aires, and a PhD in Control and Dynamical Systems from the California Institute of Technology. His research interests include mathematical optimization, machine learning, control and identification, robustness analysis and synthesis, and the development and application of computational tools based on convex optimization and algorithmic algebra to practically relevant engineering problems. Prof. Parrilo has received several distinctions, including the Donald P. Eckman Award of the American Automatic Control Council, the SIAM Activity Group on Control and Systems Theory (SIAG/CST) Prize, the IEEE Antonio Ruberti Young Researcher Prize, and the Farkas Prize of the INFORMS Optimization Society. He is an IEEE and SIAM Fellow.

Talk Title: Shortest Paths in Graphs of Convex Sets, and their Applications

Talk Abstract: Given a graph, the shortest-path problem requires finding a sequence of edges of minimum cost connecting a source vertex to a target vertex. In this talk we introduce a generalization of this classical problem in which the position of each vertex in the graph is a continuous decision variable, constrained to lie in a corresponding convex set, and the cost of an edge is a convex function of the positions of the vertices it connects. Problems of this form arise naturally in motion planning of autonomous vehicles, robot navigation, and even optimal control of hybrid dynamical systems. The price for such wide applicability is the complexity of this problem, which is easily seen to be NP-hard. We discuss this novel formulation along with different solution approaches, including a strong mixed-integer convex formulation based on perspective functions. This formulation has a very tight convex relaxation and makes it possible to efficiently find globally optimal paths in large graphs and in high-dimensional spaces.
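
In symbols, the problem described above can be written as follows (our notation): given a graph G = (V, E) with source s and target t, a convex set X_v for each vertex v, and a convex edge cost c_{uv},

\[
\min_{P,\ \{x_v\}} \;\; \sum_{(u,v) \in P} c_{uv}(x_u, x_v)
\quad \text{s.t.} \quad
P \ \text{is an } s\text{-}t \ \text{path in } G,
\qquad
x_v \in X_v \ \ \text{for all } v \in V,
\]

so the discrete choice of the path P is coupled with the continuous choice of the vertex positions x_v; this coupling is the source of both the modeling power and the NP-hardness mentioned in the abstract.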


Suvrit Sra

Short Biography: Suvrit Sra is an Associate Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT, a core faculty member of IDSS and LIDS at MIT, and a member of the MIT-ML and Statistics groups. He obtained his PhD in Computer Science from the University of Texas at Austin. Before moving to MIT, he was a Senior Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He held visiting positions at UC Berkeley (EECS) and Carnegie Mellon University (Machine Learning Department) during 2013-2014. His research lies at the intersection of machine learning and mathematics, spanning areas such as differential geometry, matrix analysis, convex analysis, probability theory, and optimization. He founded the Optimization for Machine Learning (OPT) series of workshops in 2008, which has since been held annually at the NeurIPS conference, and he co-edited a book of the same name (MIT Press, 2011). He is also the chief scientist of macro-eyes, a global AI+optimization+healthcare startup that he co-founded.

Talk Title: Two Surprises When Optimization Theory Meets Machine Learning Practice

Talk Abstract: It is well-known that there are large gaps between optimization theory and machine learning practice. However, there are two even more surprising gaps that have persisted at the fundamental level. The first one arises from ignoring the elephant in the room: non-differentiable non-convex optimization, e.g., when training a deep ReLU network. The second surprise is more disturbing: it uncovers a non-convergence phenomenon in the training of deep networks, and as a result it challenges existing convergence theory and training algorithms. Both these fundamental surprises open new directions of research, and I will talk about some of our theoretical progress on these, as well as potential research questions.


Akiko Takeda

Short Biography: Akiko Takeda received the Doctor of Science degree in information science from the Tokyo Institute of Technology, Japan, in 2001. She is currently a professor in the Department of Creative Informatics at The University of Tokyo and the leader of the Continuous Optimization Team at the RIKEN Center for Advanced Intelligence Project, Tokyo, Japan. Prior to that, she was a researcher at Toshiba Corporation, an assistant professor at Tokyo Institute of Technology, an associate professor at Keio University, an associate professor at the University of Tokyo, and a professor at the Institute of Statistical Mathematics. Her current focus is on the development of solution approaches for decision-making problems under uncertainty and for nonconvex optimization problems such as difference-of-convex optimization. Her work is motivated by optimization tasks with applications in operations research, machine learning, and control systems. She currently serves as an Associate Editor for SIAM Journal on Optimization.

Talk Title: Bilevel Optimization for Some Machine Learning Problems

Talk Abstract: Recently, bilevel optimization methods have been actively studied in machine learning (ML). Various ML models can be described as bilevel optimization problems, and new approaches that take advantage of the characteristics of these models have been proposed. One representative ML application of bilevel optimization is hyperparameter optimization: most ML models are equipped with parameters that need to be fixed in advance, and such parameters are often called hyperparameters. In this talk, we review some bilevel formulations and approaches developed for optimizing an ML model together with its hyperparameter values. The talk will then explore new bilevel formulations of hyperparameter optimization for more complicated ML models, which are themselves formulated as nonsmooth or bilevel optimization problems, and present new solution methodologies.
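
As a concrete instance of the hyperparameter-optimization example above (a standard bilevel formulation, not necessarily the exact one developed in the talk), let \lambda denote the hyperparameters and w the model parameters; the upper level minimizes the validation loss over \lambda, subject to w being trained on the training loss at the lower level:

\[
\min_{\lambda} \; L_{\mathrm{val}}\big(w^{\star}(\lambda)\big)
\quad \text{s.t.} \quad
w^{\star}(\lambda) \;\in\; \arg\min_{w} \; L_{\mathrm{train}}(w, \lambda).
\]

For example, \lambda could be a regularization weight with L_{\mathrm{train}}(w, \lambda) = \sum_i \ell(w; \xi_i) + \lambda \lVert w \rVert^2; nonsmooth regularizers or more complex lower-level models are examples of how the nonsmooth problems mentioned in the abstract can arise.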