The summer school will take place July 23–24 at the scenic Mountaintop Campus of Lehigh University. To participate in the summer school, select the corresponding option during the registration process. We are excited for what is sure to be an amazing program! A detailed schedule is provided below, followed by the two summer school themes and the invited expert lecturers. See the Summer School Transportation page for bus details.
Detailed Schedule
Saturday (July 23)
08:10 : Bus leaves for Mountaintop Campus
08:30 – 09:00 : Registration and Breakfast
09:00 – 10:30 : Mathematical Foundations of Robust and Distributionally Robust Optimization (Dr. Kuhn)
10:30 – 11:00 : Coffee Break
11:00 – 12:30 : Multi-Stage Robust Optimization (Dr. Wiesemann)
12:30 – 14:00 : Lunch (Wood Dining Room)
14:00 – 15:30 : Regularization via Optimal Transport (Dr. Shafieezadeh-Abadeh)
15:30 – 16:00 : Coffee Break
16:00 – 17:30 : Statistics of Wasserstein Distributionally Robust Optimization (Dr. Blanchet)
17:30 – 20:00 : BBQ Dinner
20:00 : Bus leaves for Lower Campus
Sunday (July 24)
08:10 : Bus leaves for Mountaintop Campus
08:30 – 09:00 : Registration and Breakfast
09:00 – 10:30 : Environment Setup and Basics of the Julia Programming Language
10:30 – 11:00 : Coffee Break
11:00 – 12:30 : Introduction to JuMP
12:30 – 14:00 : Lunch (Wood Dining Room)
14:00 – 15:30 : Modeling Nonlinear and Conic Optimization Problems with JuMP
15:30 – 16:00 : Coffee Break
16:00 – 17:30 : How to Write Your Own Conic Solver
17:45 : Bus leaves for Lower Campus
Distributionally Robust Optimization
Abstract:
Mathematical optimization problems traditionally model uncertainty via probability distributions. However, observable statistical data can often be explained by many strikingly different distributions. This “uncertainty about the uncertainty” poses a major challenge for optimization problems with uncertain parameters: estimation errors in the parameters’ distribution are amplified through the optimization process and lead to biased (overly optimistic) optimization results as well as post-decision disappointment in out-of-sample tests.
This workshop provides an overview of the emerging field of distributionally robust optimization (DRO), which studies optimization models whose solutions are optimized against all distributions consistent with the given prior information. Recent findings have shown that many DRO models can be solved in polynomial time even when the corresponding stochastic models are intractable. DRO models also offer a more realistic account of uncertainty and mitigate the post-decision disappointment characteristic of stochastic models.
The workshop addresses four main topics. First, we develop a rigorous and general theory of static DRO using the language of convex analysis. Next, we investigate how the theory developed for static DRO extends to dynamic problems, where the values of the uncertain problem parameters are revealed sequentially over time, and where future decisions can depend on the parameter values that have already been observed. Subsequently, we address DRO problems that hedge against all distributions in a neighborhood of some nominal distribution with respect to the Wasserstein distance and establish connections to regularization. Finally, we develop statistical methods for certifying good out-of-sample performance in Wasserstein DRO while optimally recovering the regularization parameter choices advocated in high-dimensional statistics.
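As a rough sketch (the notation here is ours, not necessarily the lecturers'), the Wasserstein DRO model described above, with decision x, empirical distribution P̂_N, Wasserstein distance W, radius ε, and loss ℓ, can be written as:

```latex
\min_{x \in X} \; \sup_{Q \,:\, W(Q, \hat{P}_N) \le \varepsilon} \; \mathbb{E}_{Q}\!\left[\ell(x, \xi)\right]
```

The inner supremum ranges over all distributions within distance ε of the nominal (empirical) distribution, so the decision is hedged against every distribution consistent with the data; as ε shrinks to zero, the model reduces to the ordinary sample-average problem.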
Speaker: Jose Blanchet (Stanford University)
Short Biography: Jose Blanchet is a faculty member in the Management Science and Engineering Department at Stanford University, where he earned his Ph.D. in 2004. Prior to joining the Stanford faculty, Jose was a professor in the IEOR and Statistics Departments at Columbia University (2008–2017), and before that he was a faculty member in the Statistics Department at Harvard University (2004–2008). Jose is a recipient of the 2009 Best Publication Award given by the INFORMS Applied Probability Society and of the 2010 Erlang Prize. He also received a PECASE award from the NSF in 2010. His research interests lie in applied probability and Monte Carlo methods. He serves on the editorial boards of ALEA, Advances in Applied Probability, Extremes, Insurance: Mathematics and Economics, Journal of Applied Probability, Mathematics of Operations Research, and Stochastic Systems.

Speaker: Daniel Kuhn (École Polytechnique Fédérale de Lausanne)
Short Biography: Daniel Kuhn holds the Chair of Risk Analytics and Optimization at EPFL. Before joining EPFL, he was a faculty member at Imperial College London (2007–2013) and a postdoctoral researcher at Stanford University (2005–2006). He received a PhD in Economics from the University of St. Gallen in 2004 and an MSc in Theoretical Physics from ETH Zurich in 1999. His research interests revolve around optimization under uncertainty. His personal website is at https://www.epfl.ch/labs/rao/.
Speaker: Soroosh Shafieezadeh-Abadeh (Carnegie Mellon University)
Short Biography: Soroosh Shafieezadeh-Abadeh is currently a postdoctoral researcher at the Tepper School of Business at Carnegie Mellon University, working with Professor Fatma Kılınç-Karzan. Before joining CMU, he held a postdoctoral position at the Automatic Control Laboratory at ETH Zurich, working with John Lygeros and Florian Dörfler. He received his doctoral degree in Management of Technology from École Polytechnique Fédérale de Lausanne in 2020, where he worked with Professor Daniel Kuhn and Professor Peyman Mohajerin Esfahani. His doctoral dissertation, entitled Wasserstein Distributionally Robust Learning, was awarded an EPFL Thesis Distinction in 2020. He received a Bachelor's degree and a Master's degree in Electrical Engineering (major in Automatic Control) from the University of Tehran in 2011 and 2014, respectively. His current research interests are focused on optimization under uncertainty, the design of large-scale algorithms for solving stochastic and distributionally robust optimization problems, and the development of statistical tools for data-driven decision-making problems. His research is inspired by applications in machine learning, control, economics, and finance.

Speaker: Wolfram Wiesemann (Imperial College London)
Short Biography: Wolfram Wiesemann is Professor of Analytics and Operations as well as Fellow of the KPMG Centre for Advanced Business Analytics at Imperial College Business School, London. Before joining the faculty of Imperial College Business School in 2013, he was a post-doctoral researcher at Imperial College London (2010–2011) and an Imperial College Research Fellow (2011–2012). He was a visiting researcher at the Institute of Statistics and Mathematics at Vienna University of Economics and Business, Austria, in 2010, the Computer-Aided Systems Laboratory at Princeton University, USA, in 2011, and the Industrial Engineering and Operations Research Department at Columbia University, USA, in 2012. Wolfram currently serves on the editorial boards of Computational Management Science, Computational Optimization and Applications, Manufacturing & Service Operations Management, Operations Research, Operations Research Letters, and SIAM Journal on Optimization. Wolfram's research interests revolve around the methodological aspects of decision-making under uncertainty, as well as applications in logistics, operations management, and energy.
Mathematical Optimization in Julia with JuMP
Abstract:
In this workshop, we will cover the basics of programming in Julia with a focus on mathematical optimization. You will learn how to write simple programs in Julia and to solve mathematical optimization problems in JuMP. The tutorial will cover integer linear programs, nonlinear programs and conic programs, including how to develop new solvers. Additional topics include data manipulation, visualization and analysis.
Julia is a high-level, high-performance dynamic language for technical computing. It is dynamically typed, so it feels like a scripting language and has good support for interactive use. At the same time, Julia programs compile to efficient native code for multiple platforms via LLVM, so its performance is comparable to that of C or Fortran.
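As a small illustration of this combination (a sketch of ours, not part of the tutorial materials), the same Julia function can be called with different argument types, and the compiler generates a specialized native-code method for each:

```julia
# A plain Julia function: no type annotations needed, yet the compiler
# specializes it to efficient native code for each argument type it sees.
sumsq(xs) = sum(x -> x^2, xs)

println(sumsq([1, 2, 3]))      # Int method: 1 + 4 + 9 = 14
println(sumsq([1.0, 2.5]))     # separate Float64 specialization: 7.25
```

The scripting-language feel (no declarations, immediate interactive use) and the compiled-language speed come from this just-in-time specialization via LLVM.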
JuMP is a modeling language and software ecosystem for mathematical optimization in the Julia programming language. JuMP makes it easy to formulate and solve linear programming, semidefinite programming, integer programming, convex optimization, constrained nonlinear optimization, and related classes of optimization problems.
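To give a flavor of the JuMP syntax covered in the tutorial, here is a minimal linear program (a hedged sketch of ours; it assumes the open-source HiGHS solver, while the tutorial itself may use different solvers):

```julia
# Minimal JuMP example: maximize 3x + y subject to x + 2y <= 4, x, y >= 0.
# HiGHS is one open-source LP/MIP solver usable with JuMP.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)                      # suppress solver log output

@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + 2y <= 4)
@objective(model, Max, 3x + y)

optimize!(model)
println("x = ", value(x), ", y = ", value(y),
        ", objective = ", objective_value(model))
```

The optimum puts all weight on x (x = 4, y = 0, objective 12); the same `@variable`/`@constraint`/`@objective` macros extend naturally to the integer, nonlinear, and conic problems listed in the abstract.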
Materials for the tutorial are hosted at https://github.com/blegat/ICCOPT_SummerSchool_2022. Participants should bring their own laptop and follow the Julia setup instructions prior to the tutorial.
Speaker: Benoît Legat (Massachusetts Institute of Technology)
Short Biography: Benoît Legat is a postdoctoral researcher at MIT with Prof. Pablo Parrilo in the Laboratory for Information and Decision Systems (LIDS). He received his Ph.D. degree in applied mathematics from UCLouvain in Belgium. His research interests include polynomial optimization, semidefinite programming, and invariant set computation.

Speaker: Miles Lubin (Hudson River Trading)
Short Biography: Miles Lubin is an optimization researcher at Hudson River Trading in New York, USA. He received his PhD in Operations Research from MIT in 2017. He leads the JuMP project, an algebraic modeling language in Julia, in the role of “benevolent dictator for life”.