Plenary Speakers


Ernesto Birgin

Short Biography: Ernesto G. Birgin graduated in Computer Science at the University of Buenos Aires in 1995 and received his PhD in Applied Mathematics at the State University of Campinas in 1998. He has been a full professor at the University of São Paulo since 2015 and has worked there since 1999. His areas of interest include numerical optimization and operations research. He has published over 100 articles in international journals and is the author, together with J. M. Martínez, of the book Practical Augmented Lagrangian Methods for Constrained Optimization, published by SIAM in 2014. He currently serves as associate editor of the journals Bulletin of Computational Applied Mathematics, CLEI Electronic Journal, Computational and Applied Mathematics, Computational Optimization and Applications, International Transactions in Operational Research, Journal of Global Optimization, Mathematics of Computation, Mathematical Programming Computation, Pesquisa Operacional, and Springer Nature Operations Research Forum.

Talk Title: Safeguarded augmented Lagrangian methods for nonconvex optimization: convergence, complexity and experiments

Talk Abstract: Safeguarded augmented Lagrangian methods are suitable tools for solving nonconvex nonlinear programming problems. Their convergence theory, based on weak assumptions, is well understood, even in the case of infeasible problems. Iteration and evaluation complexity results are also known. But it is their practical advantages that make them attractive for tackling real-world problems. On the one hand, implementations that exploit second-order information, some of which even guarantee convergence to second-order stationary points, are available. On the other hand, it is the first-order matrix-free implementations that can more efficiently deal with real-world large-scale problems. This talk will cover both theoretical and practical aspects and applications of the augmented Lagrangian method Algencan. In particular, recent complexity results and a comprehensive numerical comparison will be reported.
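
Background note: a minimal LaTeX sketch of the Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian and of a safeguarded multiplier update, in the spirit of methods such as Algencan; the safeguarding boxes and notation below are illustrative assumptions, not necessarily those of the talk. For the problem $\min f(x)$ subject to $h(x)=0$ and $g(x)\le 0$, with penalty parameter $\rho>0$ and multiplier estimates $\lambda$ and $\mu\ge 0$,
\[
L_\rho(x,\lambda,\mu) \;=\; f(x) + \frac{\rho}{2}\left[\,\Big\| h(x) + \frac{\lambda}{\rho} \Big\|^2 + \Big\| \max\Big(0,\; g(x) + \frac{\mu}{\rho}\Big) \Big\|^2 \right]
\]
is (approximately) minimized in $x$ at each outer iteration, after which the multipliers are updated and projected onto fixed bounded boxes,
\[
\lambda^{k+1} = P_{[\lambda_{\min},\,\lambda_{\max}]}\big(\lambda^k + \rho_k\, h(x^k)\big), \qquad
\mu^{k+1} = P_{[0,\,\mu_{\max}]}\big(\max\big(0,\; \mu^k + \rho_k\, g(x^k)\big)\big),
\]
where $P$ denotes componentwise projection onto the indicated box. This projection is the "safeguard" that keeps the multiplier sequence bounded and underlies the convergence theory mentioned above.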


Jean-Bernard Lasserre

Short Biography: J.B. Lasserre graduated from the “École Nationale Supérieure d’Informatique et de Mathématiques Appliquées” (ENSIMAG) in Grenoble, then received his PhD (1978) and “Doctorat d’État” (1984) degrees, both from Paul Sabatier University in Toulouse (France). He has been at LAAS-CNRS in Toulouse since 1980, where he is currently Directeur de Recherche émérite. He is also a member of IMT, the Institute of Mathematics of Toulouse, and holds the “Polynomial Optimization” chair at the ANITI Institute (one of the four recently created Artificial Intelligence Institutes in France). He was twice a one-year visitor (1978-79 and 1985-86) at the Electrical Engineering & Computer Science Department of the University of California at Berkeley, with fellowships from INRIA and NSF, respectively.

Talk Title: The Christoffel function, moments & sum-of-squares

Talk Abstract: In the first part of the talk we briefly introduce the Christoffel-Darboux kernel and its associated Christoffel function (CF), a well-known tool in approximation theory and orthogonal polynomials. We then describe how the CF can provide an efficient and simple-to-use tool for some data analysis problems (e.g. outlier detection). It can also be used to approximate discontinuous functions from the sole knowledge of their values on a sample, or to recover an unknown function f from moments of a measure µ supported on its graph. When f is a step function (e.g. a perfect classifier in data analysis), the CF associated with µ has a nice closed-form expression which also provides a simple, easy-to-interpret classifier. In the second part of the talk we show that the CF is intimately connected to optimization, moments and positive polynomials, through a certain one-to-one mapping, introduced by Nesterov, between the convex cone of positive polynomials and its dual cone of moments. We then show that this connection in turn provides a “disintegration” of the Christoffel function, in the spirit of the classical disintegration of a measure into a marginal and a conditional probability.
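
Background note: a standard LaTeX definition of the Christoffel-Darboux kernel and Christoffel function for a measure $\mu$ on $\mathbb{R}^d$ (normalizations may differ from those used in the talk). If $\{P_j\}_{j=1}^{s(n)}$ is an orthonormal basis of the space $\mathbb{R}[x]_n$ of polynomials of degree at most $n$ in $L^2(\mu)$, with $s(n)=\binom{n+d}{d}$, then
\[
K_n^{\mu}(x,y) \;=\; \sum_{j=1}^{s(n)} P_j(x)\,P_j(y),
\qquad
\Lambda_n^{\mu}(x) \;=\; \frac{1}{K_n^{\mu}(x,x)} \;=\; \min\Big\{ \int p^2\,d\mu \;:\; p \in \mathbb{R}[x]_n,\ p(x)=1 \Big\}.
\]
Points at which $\Lambda_n^{\mu}$ is small are "far" from the support of $\mu$, which is what makes the CF a natural score for outlier detection.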


Asu Ozdaglar

Short Biography: Asuman Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively. She is the MathWorks Professor of Electrical Engineering and Computer Science in the Electrical Engineering and Computer Science (EECS) Department at the Massachusetts Institute of Technology. She is the department head of EECS and the Deputy Dean of Academics in the Schwarzman College of Computing. Her research expertise includes optimization theory, with emphasis on nonlinear programming and convex analysis; game theory, with applications in communication, social, and economic networks; distributed optimization and control; and network analysis, with special emphasis on contagious processes, systemic risk, and dynamic control. She is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching award, the NSF CAREER award, and the 2008 Donald P. Eckman Award of the American Automatic Control Council. She is an IEEE Fellow and served on the Board of Governors of the IEEE Control Systems Society in 2010. She was an associate editor for the IEEE Transactions on Automatic Control and the inaugural area co-editor for the “Games, Information and Networks” area of the journal Operations Research. She is the co-author of the book Convex Analysis and Optimization (Athena Scientific, 2003).

Talk Title: Optimal and Differentially Private Data Acquisition from Strategic Users

Talk Abstract: The data of billions of people around the world are used every day for improving search algorithms, recommendations on online platforms, personalized advertising, and the design of new drugs, services and products. With rapid advances in machine learning (ML) algorithms and further growth in data collection, these practices will only become more widespread in the years to come. A common concern with many of these data-intensive applications centers on privacy: as a user’s data is harnessed, more and more information about her behavior and preferences is uncovered and potentially utilized by platforms and advertisers. A popular solution to the tension between privacy costs and the benefits of data is to use methods such as differential privacy to limit the extent to which an individual’s data is uncovered and exploited. Although these methods are already used by many tech companies, a key practical question remains: how do we decide how much privacy heterogeneous, strategic users will obtain?

This talk is an attempt to answer this key question and to study the impact of data market architecture on the design of mechanisms for purchasing data from privacy-sensitive strategic users. We consider a platform interested in estimating an underlying parameter using data collected from users. We formulate this question as a Bayesian-optimal mechanism design problem, in which an individual can share her (verifiable) data in exchange for a monetary reward or services, but at the same time has a (private) heterogeneous privacy sensitivity that represents her cost per unit of privacy loss. We consider two popular differential privacy settings for providing privacy guarantees to the users: central and local. In both cases, we establish minimax lower bounds for the estimation error and derive (near) optimal estimators for given heterogeneous privacy loss levels of the users. Building on this characterization, we pose the mechanism design problem as the optimal selection of an estimator and payments that elicit truthful reporting of the users’ privacy sensitivities, under a regularity condition on the distribution of privacy sensitivities. Our mechanism in the central setting can be implemented in log-linear time in the number of users, and, in the local setting, it admits a polynomial-time approximation scheme (PTAS).
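
Background note: the standard LaTeX formulation of ε-differential privacy in the two settings mentioned above (the per-user heterogeneous levels $\varepsilon_i$ are illustrative notation, not necessarily that of the talk). In the central setting, a mechanism $M$ applied to the dataset $D=(x_1,\dots,x_n)$ is $\varepsilon_i$-differentially private with respect to user $i$ if, for every dataset $D'$ differing from $D$ only in $x_i$ and every measurable set $S$,
\[
\Pr\big[M(D) \in S\big] \;\le\; e^{\varepsilon_i}\,\Pr\big[M(D') \in S\big].
\]
In the local setting, each user $i$ first passes her datum through her own randomizer $Q_i$ satisfying the same inequality for any two inputs $x_i, x_i'$, and the platform only observes the privatized reports. Smaller $\varepsilon_i$ means stronger privacy and, typically, a noisier contribution to the estimator, which is the source of the tension between privacy and estimation accuracy.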


Defeng Sun

Short Biography: Defeng Sun is currently Chair Professor of Applied Optimization and Operations Research at the Hong Kong Polytechnic University and the President of the Hong Kong Mathematical Society. He mainly publishes in non-convex continuous optimization and machine learning. He received the Beale-Orchard-Hays Prize for excellence in computational mathematical programming from the Mathematical Optimization Society (MOS) in 2018. He is a Fellow of SIAM and CSIAM.

Talk Title: Exploring the Sparsity of Large-scale Statistical Optimization Problems

Talk Abstract: It has been widely recognized that the structured sparsity of optimal solutions is an intrinsic property of large-scale optimization problems arising from modern applications in the big data era. In this talk, we shall first illustrate the structured sparsity of the solutions via some popular machine learning models. In particular, we shall show that the solution of the convex clustering model can be highly structurally sparse even if the solution itself is fully dense. We shall then introduce a dual semismooth Newton based proximal point algorithm (PPDNA) and explain why it can be much more efficient than first-order methods for solving a class of large-scale optimization problems arising from machine learning. The key point is to adaptively make use of the second-order sparsity of the solutions, in addition to the data sparsity, so that at each iteration the computational cost of the second-order method can be comparable to, or even lower than, that of first-order methods. Equipped with the PPDNA, we shall then introduce some adaptive sieving methodologies for generating solution paths of large-scale optimization problems with structured sparsity, which are of particular importance in applications. In the last part of the talk, we shall illustrate the high efficiency of our approach with extensive numerical results on several important models, including convex clustering, lasso, and exclusive lasso.
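
Background note: two of the models mentioned above, written in their usual textbook LaTeX form (the weights, norms and regularization parameters below are common conventions and only illustrative assumptions):
\[
\text{(lasso)} \qquad \min_{x \in \mathbb{R}^p} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda \|x\|_1,
\]
\[
\text{(convex clustering)} \qquad \min_{x_1,\dots,x_n \in \mathbb{R}^d} \; \tfrac{1}{2}\sum_{i=1}^{n} \|x_i - a_i\|_2^2 + \lambda \sum_{i<j} w_{ij}\,\|x_i - x_j\|_2,
\]
where $a_1,\dots,a_n$ are the data points and $w_{ij}\ge 0$ are given weights. The "structured sparsity" of the convex clustering solution refers to the many pairs with $x_i = x_j$ (points assigned to the same cluster), even though each $x_i$ itself is typically dense; exploiting this kind of structure is what the second-order sparsity referred to above relies on.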