Events
-
Mar22
EVENT DETAILS
Winter Degrees Conferred
TIME Friday, March 22, 2024
CONTACT Office of the Registrar nu-registrar@northwestern.edu
CALENDAR University Academic Calendar
-
Mar25
EVENT DETAILS
Spring Break Ends
TIME Monday, March 25, 2024
CONTACT Office of the Registrar nu-registrar@northwestern.edu
CALENDAR University Academic Calendar
-
Apr9
EVENT DETAILS
Abstract: Many reinforcement/machine learning problems involve loss minimization, min-max optimization and fixed-point equations, all of which can be cast under the framework of Variational Inequalities (VIs). Stochastic methods like SGD, SEG and TD/Q Learning are prevalent, and their constant stepsize versions have gained popularity due to effectiveness and robustness. Viewing the iterates of these algorithms as a Markov chain, we study their fine-grained probabilistic behavior. In particular, we establish finite-time geometric convergence of the iterates distribution, and relate the ergodicity properties of the Markov chain to the characteristics of the VI, algorithm and data.
Using coupling techniques and the basic adjoint relationship, we characterize the limit distribution and how its bias depends on the stepsize. For smooth problems, exemplified by TD learning and smooth min-max optimization, the bias is proportional to the stepsize. For nonsmooth problems, exemplified by Q-learning and generalized linear models with nonsmooth link functions (e.g., ReLU), the bias behaves drastically differently and scales with the square root of the stepsize.
This precise probabilistic characterization allows for variance reduction via tail-averaging and bias reduction via Richardson-Romberg extrapolation. The combination of constant stepsize, averaging and extrapolation provides a favorable balance between fast mixing and low long-run error, and we demonstrate its effectiveness in statistical inference compared to traditional diminishing stepsize schemes.
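To make the tail-averaging and Richardson-Romberg ideas concrete, here is a minimal illustrative sketch (not the speaker's code; the toy problem and all parameter choices are ours): constant-stepsize SGD on the scalar problem min over theta of E[e^theta - X*theta] with X ~ Uniform(0.2, 1.8), whose minimizer is theta* = log E[X] = 0. At stationarity E[e^theta] = E[X], so by Jensen's inequality the stationary mean sits strictly below theta*, i.e., the chain has an O(alpha) bias. Running the chain at stepsizes alpha and 2*alpha and extrapolating cancels the leading bias term.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def tail_avg_sgd(alpha, n_steps=300_000, burn_in=50_000):
    """Constant-stepsize SGD for min E[e^theta - X*theta], X ~ U(0.2, 1.8).

    The minimizer is theta* = log E[X] = 0.  The iterates form a Markov
    chain whose stationary mean is below 0 by an O(alpha) bias; averaging
    the tail of the trajectory reduces variance but not this bias.
    """
    xs = rng.uniform(0.2, 1.8, size=n_steps)
    theta, total = 0.0, 0.0
    for k in range(n_steps):
        theta -= alpha * (math.exp(theta) - xs[k])  # stochastic gradient step
        if k >= burn_in:
            total += theta
    return total / (n_steps - burn_in)              # tail average

alpha = 0.1
avg_a = tail_avg_sgd(alpha)       # biased by roughly  c * alpha
avg_2a = tail_avg_sgd(2 * alpha)  # biased by roughly 2c * alpha
rr = 2 * avg_a - avg_2a           # Richardson-Romberg: leading bias cancels

print(f"avg(alpha)={avg_a:+.4f}  avg(2*alpha)={avg_2a:+.4f}  RR={rr:+.4f}")
```

The extrapolated estimate `rr` is typically an order of magnitude closer to theta* = 0 than either raw tail average, at the cost of running two chains; this is the "fast mixing plus low long-run error" trade-off the abstract describes.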
Bio: Qiaomin Xie is an assistant professor in the Department of Industrial and Systems Engineering at the University of Wisconsin-Madison. Her research interests lie in the fields of reinforcement learning, applied probability, game theory, and stochastic networks, with applications to computer and communication networks. She was previously a visiting assistant professor at the School of Operations Research and Information Engineering at Cornell University (2019-2021). Prior to that, she was a postdoctoral researcher with LIDS at MIT. Qiaomin received her Ph.D. in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign in 2016. She received her B.S. in Electronic Engineering from Tsinghua University. She is a recipient of the NSF CAREER Award, the JPMorgan Faculty Research Award, the Google Systems Research Award, and the UIUC CSL PhD Thesis Award.
TIME Tuesday, April 9, 2024 at 11:00 AM - 12:00 PM
LOCATION A230, Technological Institute
CONTACT Kendall Minta kendall.minta@gmail.com
CALENDAR Department of Industrial Engineering and Management Sciences (IEMS)
-
May17
EVENT DETAILS
New Developments, Industrial Applications, and Opportunities in Generative AI
Join us for an exciting event exploring the latest advancements in Generative AI! This in-person gathering will take place at the Hyatt Centric Chicago Magnificent Mile in Chicago, IL. Discover how Generative AI is transforming a wide range of industries and the possibilities it offers. Whether you're a tech enthusiast, researcher, or industry professional, this event is a must-attend. Network with like-minded individuals, engage in thought-provoking discussions, and gain valuable insights from expert speakers. Don't miss this opportunity to stay ahead of the curve in the world of AI!
TIME Friday, May 17, 2024 at 9:00 AM - 5:00 PM
LOCATION 633 N St Clair Street
CONTACT Master of Science in Machine Learning and Data Science Program mlds@northwestern.edu EMAIL
CALENDAR Master of Science in Machine Learning and Data Science (MLDS)
-
Jun10
EVENT DETAILS
McCormick School of Engineering PhD Hooding and Master’s Degree Recognition Ceremony
TIME Monday, June 10, 2024 at 9:00 AM - 11:00 AM
LOCATION Welsh-Ryan Arena
CONTACT Amy Pokrass amy.pokrass@northwestern.edu
CALENDAR McCormick School of Engineering and Applied Science
-
Jun10
TIME Monday, June 10, 2024 at 2:00 PM - 4:00 PM
LOCATION Welsh-Ryan Arena
CONTACT Amy Pokrass amy.pokrass@northwestern.edu
CALENDAR McCormick School of Engineering and Applied Science