Events
Wasserstrom Lecture Series

Statistical Learning with Sparsity

Trevor Hastie, Stanford University

November 1, 2016

Abstract: In a statistical world faced with an explosion of data, regularization has become an important ingredient. In many problems, we have many more variables than observations, and the lasso penalty and its hybrids have become increasingly useful. This talk presents a general framework for fitting large-scale regularization paths for a variety of problems. We describe the approach, and demonstrate it via examples using our R package GLMNET. We then outline a series of related problems using extensions of these ideas.
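The regularization path the abstract refers to can be illustrated with a minimal sketch: coordinate descent for the lasso, solved on a decreasing grid of penalties. This is a toy NumPy reimplementation of the idea (not the GLMNET package itself); the data, grid size, and tolerances are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n       # per-coordinate curvature
    r = y - X @ b                           # running residual
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation for coordinate j
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - new_bj)  # keep residual in sync
            b[j] = new_bj
    return b

# Fit the whole path on a geometric grid from lam_max (all-zero fit) down
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]            # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(100)
lam_max = np.max(np.abs(X.T @ y)) / len(y)  # smallest lam with b = 0
lams = lam_max * np.logspace(0, -3, 30)
path = np.array([lasso_cd(X, y, lam) for lam in lams])
print(path.shape)   # (30, 20): one coefficient vector per lambda
```

At `lam_max` every coefficient is exactly zero by the KKT conditions; as the penalty shrinks, variables enter the model one by one, which is the path structure GLMNET exploits with warm starts.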

Bio: Trevor Hastie received his university education from Rhodes University, South Africa (BS), the University of Cape Town (MS), and Stanford University (Ph.D. in Statistics, 1984). His first employment was with the South African Medical Research Council in 1977, during which time he earned his MS from UCT. In 1979 he spent a year interning at the London School of Hygiene and Tropical Medicine, the Johnson Space Center in Houston, Texas, and the Biomath department at Oxford University. He joined the Ph.D. program at Stanford University in 1980. After graduating from Stanford in 1984, he returned to South Africa for a year with his earlier employer, the SA Medical Research Council. He returned to the USA in March 1986 and joined the statistics and data analysis research group at what was then AT&T Bell Laboratories in Murray Hill, New Jersey. After eight years at Bell Labs, he returned to Stanford University in 1994 as Professor in Statistics and Biostatistics. In 2013 he was named the John A. Overdeck Professor of Mathematical Sciences. His main research contributions have been in applied statistics; he has published over 180 articles and has co-written four books in this area: "Generalized Additive Models", "The Elements of Statistical Learning", "An Introduction to Statistical Learning, with Applications in R", and "Statistical Learning with Sparsity". He has also made contributions in statistical computing, co-editing (with J. Chambers) a large software library of modeling tools in the S language ("Statistical Models in S", Wadsworth, 1992), which forms the foundation for much of the statistical modeling in R. His current research focuses on applied statistical modeling and prediction problems in biology and genomics, medicine, and industry.

Watch the video

Incremental Proximal and Augmented Lagrangian Methods for Convex Optimization: A Survey

Dimitri Bertsekas, Massachusetts Institute of Technology

April 12, 2016

Abstract: Incremental methods deal effectively with an optimization problem of great importance in machine learning, signal processing, and large-scale and distributed optimization: the minimization of the sum of a large number of convex functions. We survey these methods and we propose incremental aggregated and nonaggregated versions of the proximal algorithm. Under cost function differentiability and strong convexity assumptions, we show linear convergence for a sufficiently small constant stepsize. This result also applies to distributed asynchronous variants of the method, involving bounded interprocessor communication delays.
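The incremental proximal iteration surveyed here can be sketched for the simplest instructive case: minimizing a sum of quadratic components, cycling through one component's proximal step at a time. This is a toy illustration, not the algorithm from the talk; the least-squares components, the constant stepsize `alpha`, and the cyclic order are illustrative assumptions (each quadratic's prox happens to have a closed form).

```python
import numpy as np

def incremental_proximal(A, b, alpha=0.5, n_passes=100):
    """Minimize sum_i f_i(x) with f_i(x) = (1/2)(a_i^T x - b_i)^2 by
    cycling through components: x <- prox_{alpha * f_i}(x).
    For these quadratics the prox has the closed form
        prox(v) = v - alpha * (a^T v - b) / (1 + alpha * ||a||^2) * a."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_passes):
        for i in range(n):
            a, bi = A[i], b[i]
            x = x - alpha * (a @ x - bi) / (1.0 + alpha * a @ a) * a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_star = rng.standard_normal(5)
b = A @ x_star                  # consistent system: the minimizer is x_star
x = incremental_proximal(A, b)
print(np.max(np.abs(x - x_star)))
```

Unlike an incremental *gradient* step, each proximal step is a well-posed minimization and is nonexpansive, which is the stability advantage the abstract alludes to; with a constant stepsize and strong convexity the iterates converge linearly.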

We then consider dual versions of incremental proximal algorithms, which are incremental augmented Lagrangian methods for separable equality-constrained optimization problems. Contrary to the standard augmented Lagrangian method, these methods admit decomposition in the minimization of the augmented Lagrangian, and update the multipliers far more frequently. Our incremental aggregated augmented Lagrangian methods bear similarity to several known decomposition algorithms, including the alternating direction method of multipliers (ADMM) and more recent variations. We compare these methods in terms of their properties, and highlight their potential advantages and limitations.
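The decomposition idea behind ADMM and the augmented Lagrangian variants mentioned above can be shown on the smallest nontrivial example: consensus over separable quadratics, where each local minimization decouples and the multipliers are updated every round. A minimal sketch under illustrative assumptions (quadratic local costs, penalty parameter `rho = 1`); it is not any specific method from the talk.

```python
import numpy as np

def consensus_admm(c, rho=1.0, n_iter=100):
    """ADMM for min_x sum_i (1/2)||x - c_i||^2, written in consensus form
    min sum_i f_i(x_i) subject to x_i = z. The optimum is the mean of c_i."""
    n, d = c.shape
    x = np.zeros((n, d))        # local variables, one per component
    u = np.zeros((n, d))        # scaled multipliers (dual variables)
    z = np.zeros(d)             # consensus variable
    for _ in range(n_iter):
        # x-update decouples across i: prox of each f_i at (z - u_i)
        x = (c + rho * (z - u)) / (1.0 + rho)
        # z-update: averaging enforces the coupling constraint
        z = (x + u).mean(axis=0)
        # multiplier update, once per iteration (not once per outer solve)
        u = u + x - z
    return z

c = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]])
print(consensus_admm(c))    # converges to the row mean [3., 2.]
```

The key contrast with the standard augmented Lagrangian method is visible in the loop: the augmented Lagrangian is never minimized jointly over all blocks, and the multipliers `u` are refreshed at every iteration.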

Bio: Dimitri P. Bertsekas completed his undergraduate studies in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering at George Washington University, Washington, DC, in 1969, and his Ph.D. in system science at the Massachusetts Institute of Technology in 1971.
Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Dept., Stanford University (1971-1974) and the Electrical Engineering Dept. of the University of Illinois, Urbana (1974-1979). Since 1979 he has been teaching at the Electrical Engineering and Computer Science Department of the Massachusetts Institute of Technology (M.I.T.), where he is currently McAfee Professor of Engineering. His research spans several fields, including optimization, control, large-scale computation, and data communication networks, and is closely tied to his teaching and book authoring activities. He has written numerous research papers, and sixteen books and research monographs, several of which are used as textbooks in MIT classes.

Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming" (co-authored with John Tsitsiklis), the 2000 Greek National Award for Operations Research, the 2001 ACC John R. Ragazzini Education Award, the 2009 INFORMS Expository Writing Award, the 2014 ACC Richard E. Bellman Control Heritage Award for "contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control," the 2014 Khachiyan Prize for Life-Time Accomplishments in Optimization, and the SIAM/MOS 2015 George B. Dantzig Prize. In 2001, he was elected to the United States National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."

Dr. Bertsekas' recent books are "Introduction to Probability: 2nd Edition" (2008), "Convex Optimization Theory" (2009), "Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming" (2012), "Abstract Dynamic Programming" (2013), and "Convex Optimization Algorithms" (2015), all published by Athena Scientific.

Watch the video

Financial Engineering

Paul Glasserman, Columbia Business School

April 7, 2015

Abstract: Financial engineering has traditionally addressed problems of portfolio selection, derivatives valuation, and risk measurement. This talk will provide an overview of more recent financial engineering problems that arise in the design and monitoring of the financial system. Several problems in this domain can be viewed as instances of stabilizing or destabilizing feedback. Some problems result from a combination of the two: actions that are stabilizing for individual agents can become destabilizing when agents interact. Other problems draw on traditional tools of the field. I will discuss specific modeling problems in the design of capital requirements, measuring counterparty risk, margin requirements for derivatives, and the effects of interconnections between financial institutions, drawing on joint work with several other researchers.

Bio: Paul Glasserman is the Jack R. Anderson Professor of Business at Columbia Business School, where he serves as research director of the Program for Financial Studies. In 2011-2012, he was on leave from Columbia, working full-time at the Office of Financial Research in the U.S. Treasury Department, where he currently serves as a part-time consultant. His work with the OFR has included research on stress testing, financial networks, contingent capital, and counterparty risk. Paul’s research recognitions include the INFORMS Lanchester Prize, the Erlang Prize in Applied Probability, and the I-Sim Outstanding Simulation Publication Award; he is also a past recipient of Risk magazine’s Quant of the Year award. Paul served as senior vice dean of Columbia Business School in 2004-2008 and was interim director of its Sanford C. Bernstein Center for Leadership and Ethics in 2005-2007.

Routing Optimization Under Uncertainty

Patrick Jaillet, Ph.D., Massachusetts Institute of Technology

April 29, 2014

Abstract: We consider various network routing problems under travel time uncertainty where deadlines are imposed at a subset of nodes. Corresponding nominal deterministic problems include variations of classical shortest path problems and capacitated multi-vehicle routing problems. After providing motivating examples, we will introduce several new mathematical frameworks for addressing a priori and adaptive versions for these problems, under varying degree of uncertainty. We will show how some of these problems can be solved in a computationally tractable way. We will then compare their solutions to those of other stochastic and robust optimization approaches.

Joint work with Yossiri Adulyasak, Arthur Flajolet, Jin Qi, and Melvyn Sim.

Bio: Patrick Jaillet is the Dugald C. Jackson Professor in the Department of Electrical Engineering and Computer Science and a member of the Laboratory for Information and Decision Systems at MIT. He is also one of the two Directors of the MIT Operations Research Center. He received a Diplôme d'Ingénieur in France, and then an SM in Transportation and a PhD in Operations Research from MIT. His current research interests include on-line and data-driven optimization. Dr. Jaillet was a Fulbright Scholar in 1990 and has received several awards, including most recently the Glover-Klingman Prize. He is a Fellow of INFORMS and a member of SIAM.

Watch the video

A Flexible Point Process Model for Describing Arrivals to a Service Facility

Peter W. Glynn, Ph.D., Stanford University

April 16, 2013

Abstract: In many applied settings, one needs a description of incoming traffic to the system. In this talk, we argue that the Palm-Khintchine superposition theorem dictates that the process should typically look "locally Poisson". However, there are usually obvious time-of-day effects that should be reflected in the model. Furthermore, in many data sets, it appears that medium-scale burstiness is also present. In this talk, we consider a Poisson process that is driven by a mean-reverting process as a flexible vehicle for modeling such traffic. We argue that this model is tractable computationally, is parsimonious, has physically interpretable parameters, and can flexibly model different behaviors at different scales. We discuss estimation methodology and hypothesis tests that are relevant to this model, and illustrate the ideas with call center data. This work is joint with Jeff Hong and Xiaowei Zhang.
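The "locally Poisson, bursty at medium scales" behavior described above can be reproduced with a small simulation: a Poisson process whose intensity is driven by a mean-reverting process. A minimal sketch under illustrative assumptions; the log-intensity is taken to be an Ornstein-Uhlenbeck process (one concrete choice of mean-reverting driver, not necessarily the one in the talk), and all parameter values are made up for the demo.

```python
import numpy as np

def simulate_cox_ou(T=200.0, dt=0.01, base_rate=5.0,
                    theta=1.0, sigma=0.6, seed=0):
    """Simulate a Cox (doubly stochastic Poisson) process with intensity
    lambda(t) = base_rate * exp(X_t), where X_t is mean-reverting OU:
        dX_t = -theta * (X_t - m) dt + sigma dW_t,
    with m chosen so that E[exp(X_t)] = 1 in stationarity."""
    rng = np.random.default_rng(seed)
    m = -sigma**2 / (4.0 * theta)   # stationary var is sigma^2/(2*theta)
    arrivals, x, t = [], m, 0.0
    for _ in range(int(T / dt)):
        lam = base_rate * np.exp(x)
        # fine time grid: at most one event per step since lam * dt << 1
        if rng.random() < lam * dt:
            arrivals.append(t)
        x += -theta * (x - m) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return np.array(arrivals)

arr = simulate_cox_ou()
rate_hat = len(arr) / 200.0
print(rate_hat)                     # close to base_rate = 5 on average
# Burstiness check: index of dispersion of unit-window counts; a plain
# Poisson process would give a value near 1, the Cox process gives more
counts, _ = np.histogram(arr, bins=np.arange(0, 201))
print(counts.var() / counts.mean())
```

On any short window the process looks Poisson with the current rate, while the slow mean-reverting driver produces the medium-scale overdispersion the abstract describes; time-of-day effects could be added by making `base_rate` a deterministic function of `t`.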

Bio: Peter W. Glynn is the current Chair of the Department of Management Science and Engineering at Stanford University. He received his Ph.D. in Operations Research from Stanford University in 1982. He then joined the faculty of the University of Wisconsin at Madison, where he held a joint appointment between the Industrial Engineering Department and Mathematics Research Center, and courtesy appointments in Computer Science and Mathematics. In 1987, he returned to Stanford, where he joined the Department of Operations Research. He is now the Thomas Ford Professor of Engineering in the Department of Management Science and Engineering, and also holds a courtesy appointment in the Department of Electrical Engineering. From 1999 to 2005, he served as Deputy Chair of the Department of Management Science and Engineering, and was Director of Stanford's Institute for Computational and Mathematical Engineering from 2006 until 2010. He is a Fellow of INFORMS and a Fellow of the Institute of Mathematical Statistics, has been co-winner of Best Publication Awards from the INFORMS Simulation Society in 1993 and 2008, was a co-winner of the Best (Biannual) Publication Award from the INFORMS Applied Probability Society in 2009, and was the co-winner of the John von Neumann Theory Prize from INFORMS in 2010. In 2012, he was elected to the National Academy of Engineering. His research interests lie in simulation, computational probability, queueing theory, statistical inference for stochastic processes, and stochastic modeling.

Watch the video

Operations Research and Public Health: A Little Help Can Go a Long Way

Margaret Brandeau, Ph.D., Stanford University

May 1, 2012

Abstract: How should the Centers for Disease Control and Prevention revise national immunization recommendations so that gaps in vaccination coverage will be filled in a cost-effective manner? What is the most cost-effective way to use limited HIV prevention and treatment resources? To what extent should local communities stockpile antibiotics for response to a potential bioterror attack? This talk will describe examples from past and ongoing model-based analyses of public health policy questions. We also provide perspectives on key elements of a successful policy analysis and discuss ways in which such analysis can influence policy.

Bio: Margaret Brandeau's research focuses on the development of applied mathematical and economic models to support health policy decisions. Her recent work has focused on HIV prevention and treatment programs, programs to control the spread of hepatitis B virus, and preparedness plans for bioterror response. She is a Fellow of the Institute for Operations Research and the Management Sciences (INFORMS), and has received the President's Award from INFORMS (recognizing important contributions to the welfare of society), the Pierskalla Prize from INFORMS (for research excellence in health care management science), a Presidential Young Investigator Award from the National Science Foundation, and the Eugene L. Grant Teaching Award from Stanford, among other awards. Professor Brandeau earned a BS in Mathematics and an MS in Operations Research from MIT, and a PhD in Engineering-Economic Systems from Stanford University.

Watch the video