Plenary Speakers


Ernesto Birgin

Short Biography: Ernesto G. Birgin received his degree in Computer Science from the University of Buenos Aires in 1995 and his PhD in Applied Mathematics from the State University of Campinas in 1998. He has been a full professor at the University of São Paulo since 2015, where he has worked since 1999. His areas of interest include numerical optimization and operations research. He has published over 100 articles in international journals and is the author, together with J. M. Martínez, of the book Practical Augmented Lagrangian Methods for Constrained Optimization, published by SIAM in 2014. He currently serves as associate editor of the journals Bulletin of Computational Applied Mathematics, CLEI Electronic Journal, Computational and Applied Mathematics, Computational Optimization and Applications, International Transactions in Operational Research, Journal of Global Optimization, Mathematics of Computation, Mathematical Programming Computation, Pesquisa Operacional, and Springer Nature Operations Research Forum.

Talk Title: Safeguarded augmented Lagrangian methods for nonconvex optimization: convergence, complexity and experiments

Talk Abstract: Safeguarded augmented Lagrangian methods are suitable tools for solving nonconvex nonlinear programming problems. Their convergence theory, based on weak assumptions, is well understood, even in the case of infeasible problems. Iteration and evaluation complexity results are also known. But it is their practical advantages that make them attractive for tackling real-world problems. On the one hand, implementations that exploit second-order information, some of which even converge to second-order stationary points, are available. On the other hand, it is the first-order matrix-free implementations that deal most efficiently with large-scale real-world problems. This talk will cover both theoretical and practical elements and applications of the augmented Lagrangian method Algencan. In particular, recent complexity results and a comprehensive numerical comparison will be reported.
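
For readers who would like to fix notation before the talk, here is a minimal sketch of one safeguarded augmented Lagrangian iteration in generic textbook form (assuming, for simplicity, only equality constraints and a simple set Ω; this is not Algencan's full algorithm):

    \min_{x \in \Omega} f(x) \ \text{s.t.}\ h(x) = 0,
    \qquad
    L_{\rho}(x, \bar\lambda) \;=\; f(x) + \frac{\rho}{2} \Bigl\| h(x) + \frac{\bar\lambda}{\rho} \Bigr\|^2,

    x^{k} \approx \operatorname*{arg\,min}_{x \in \Omega} L_{\rho_k}(x, \bar\lambda^{k}),
    \qquad
    \bar\lambda^{k+1} = P_{[\lambda_{\min},\, \lambda_{\max}]}\bigl(\bar\lambda^{k} + \rho_k\, h(x^{k})\bigr),

with the penalty parameter ρ_k increased whenever feasibility does not improve sufficiently. The projection P onto a fixed compact box is the "safeguard": it keeps the multiplier estimates bounded, which is what the weak-assumption convergence theory rests on.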


Asu Ozdaglar

Short Biography: Asuman Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively. She is the MathWorks Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where she is the head of the Department of Electrical Engineering and Computer Science (EECS) and the Deputy Dean of Academics in the Schwarzman College of Computing. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk, and dynamic control. She is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching Award, the NSF CAREER Award, and the 2008 Donald P. Eckman Award of the American Automatic Control Council. She is an IEEE Fellow and served on the Board of Governors of the IEEE Control Systems Society in 2010. She was an associate editor for IEEE Transactions on Automatic Control and the inaugural area co-editor for the area entitled “Games, Information and Networks” of the journal Operations Research. She is the co-author of the book Convex Analysis and Optimization (Athena Scientific, 2003).

Talk Title: Independent Learning Dynamics for Stochastic Games: Where Game Theory Meets Reinforcement Learning

Talk Abstract: Reinforcement learning (RL) has had tremendous successes in many artificial intelligence applications. Many of the forefront applications of RL involve multiple agents, e.g., playing chess and Go, autonomous driving, and robotics. Unfortunately, the classical RL framework is inappropriate for multi-agent learning, as it assumes that an agent’s environment is stationary and does not take into account the adaptive nature of opponent behavior. In this talk, I focus on stochastic games for multi-agent reinforcement learning in dynamic environments and develop independent learning dynamics for stochastic games: each agent is myopic and chooses best-response type actions to other agents’ strategies independently, meaning without any coordination with her opponents. There has been limited progress on developing convergent best-response type independent learning dynamics for stochastic games. I will present our recently proposed independent learning dynamics that guarantee convergence in stochastic games, including both zero-sum and identical-interest settings. Along the way, I will also reexamine some classical and recent results from the game theory and RL literatures to situate the conceptual contributions of our independent learning dynamics and the mathematical novelties of our analysis.
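
As a toy illustration of what "independent, best-response type" learning means, the Python sketch below runs smoothed fictitious play in a single-state zero-sum matrix game (matching pennies). This is far simpler than the stochastic-game dynamics of the talk, and all names and parameter values here are illustrative:

    import numpy as np

    # Matching pennies: payoff matrix for player 1 (player 2 receives -A).
    A = np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])

    rng = np.random.default_rng(0)
    counts = [np.ones(2), np.ones(2)]  # empirical action counts, with a unit prior
    tau = 0.1                          # smoothing temperature

    def smoothed_best_response(payoffs, tau):
        # Softmax over expected payoffs: a smoothed, myopic best response.
        z = payoffs / tau
        z = z - z.max()
        p = np.exp(z)
        return p / p.sum()

    for t in range(5000):
        # Each player responds to the opponent's empirical play, computed
        # from observed actions only -- no coordination, no shared state.
        emp1 = counts[0] / counts[0].sum()
        emp2 = counts[1] / counts[1].sum()
        p1 = smoothed_best_response(A @ emp2, tau)
        p2 = smoothed_best_response(-(A.T @ emp1), tau)
        counts[0][rng.choice(2, p=p1)] += 1
        counts[1][rng.choice(2, p=p2)] += 1

    # In this zero-sum game both empirical strategies should approach (0.5, 0.5).
    print(counts[0] / counts[0].sum(), counts[1] / counts[1].sum())

The key feature, shared with the dynamics of the talk, is that each agent updates using only its own observations of the opponent's actions; the challenge addressed in the talk is obtaining convergence guarantees of this kind in genuinely dynamic, multi-state environments.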


Defeng Sun

Short Biography: Defeng Sun is Chair Professor of Applied Optimization and Operations Research at the Hong Kong Polytechnic University and the President of the Hong Kong Mathematical Society. He publishes mainly in non-convex continuous optimization and machine learning. He received the Beale–Orchard-Hays Prize for excellence in computational mathematical programming from the Mathematical Optimization Society in 2018, and he is a Fellow of both SIAM and CSIAM.

Talk Title: Exploring the Sparsity of Large-scale Statistical Optimization Problems

Talk Abstract: It has been widely recognized that the structured sparsity of optimal solutions is an intrinsic property of large-scale optimization problems arising from modern applications in the big data era. In this talk, we shall first illustrate the structured sparsity of solutions via some popular machine learning models. In particular, we shall show that the solution of the convex clustering model can be highly structurally sparse even if the solution itself is fully dense. We shall then introduce a proximal point algorithm based on a dual semismooth Newton method (PPDNA) and explain why it can be much more efficient than first-order methods for solving a class of large-scale optimization problems arising from machine learning. The key point is to adaptively exploit the second-order sparsity of the solutions, in addition to the data sparsity, so that at each iteration the computational cost of the second-order method can be comparable to, or even lower than, that of first-order methods. Equipped with the PPDNA, we shall then introduce some adaptive sieving methodologies for generating solution paths of large-scale optimization problems with structured sparsity, which are of particular importance in applications. In the last part of the talk, we shall illustrate the high efficiency of our approach with extensive numerical results on several important models, including convex clustering, lasso, and exclusive lasso.
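
For orientation, the outer loop of a proximal point method is the standard update below, written in generic notation (not the exact PPDNA formulation; the talk's contribution lies in solving each subproblem through its dual with a semismooth Newton method):

    x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x} \Bigl\{ f(x) + \frac{1}{2\sigma_k} \|x - x^k\|^2 \Bigr\},

where each subproblem is strongly convex and is attacked via its Fenchel dual. The generalized Hessian arising there inherits the structured sparsity of the solution, which is why each semismooth Newton step can be performed at a cost comparable to, or lower than, a first-order iteration.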