Strong Northwestern CS Presence at ICLR 2026, ICRA 2026, and NeurIPS 2025
Faculty and students contribute to prestigious international conferences in AI and robotics research
Northwestern Engineering researchers are making significant contributions to AI and robotics research by sharing new work at international conferences this academic year, including the International Conference on Learning Representations (ICLR 2026), the IEEE International Conference on Robotics and Automation (ICRA 2026), and the Conference on Neural Information Processing Systems (NeurIPS 2025).
Such conferences serve as key bridges between academia and industry, explained Samir Khuller, Peter and Adrienne Barris Chair of Computer Science at the McCormick School of Engineering. He noted that access to both computing resources and large datasets increasingly makes collaboration with industry essential for modern research.
“Companies have the ability to provide access to hundreds of GPUs, and GPUs are extremely expensive,” Khuller said. “Most universities do not have the ability to provide that compute infrastructure for students for large-scale projects.”
Additionally, Khuller said, students presenting their work often gain opportunities to connect with companies—which might later lead to career opportunities.
ICLR 2026
In work spanning spatial reasoning in foundation models, embodied cognition, and methods for aligning large language models, Professor Manling Li and computer science graduate students in her Machine Learning and Language (MLL) Lab led the research teams of seven papers accepted by ICLR 2026.
Scheduled for April in Rio de Janeiro, ICLR 2026 focuses on cutting-edge research in deep learning across artificial intelligence, statistics, and data science, as well as application areas including machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
In addition to contributions from the MLL Lab, several Northwestern teams will be represented at ICLR 2026 with accepted papers, including:
- “Homeostatic Adaptation of Optimal Population Codes under Metabolic Stress” — Yi-Chun Hung, a PhD student in computer science at Northwestern Engineering; Emma Alexander, assistant professor of computer science at Northwestern Engineering; Gregory Schwartz, Derrick T. Vail Professor of Ophthalmology at Northwestern University Feinberg School of Medicine; and Emily Cooper (University of California, Berkeley)
- “Explaining and Improving Information Complementarities in Multi-Agent Decision-making” — Ziyang Guo, a PhD student in computer science; Jason Hartline, professor of computer science; Jessica Hullman, Ginni Rometty Professor of Computer Science at Northwestern Engineering; and Yifan Wu (PhD ’25), a postdoctoral researcher at Microsoft Research
- “Belief-Based Offline Reinforcement Learning for Delay-Robust Policy Optimization” — Xiangyu Shi and Sinong (Simon) Zhan, PhD students in electrical and computer engineering; Philip Wang (BS/MS ’25); Frank Yang, a master’s degree student in computer science; Qi Zhu, professor of electrical and computer engineering at Northwestern Engineering; and Chao Huang and Qingyuan Wu (University of Southampton)
IEEE ICRA 2026
With two accepted papers, incoming Assistant Professor of Computer Science Ruohan Zhang is among the Northwestern researchers contributing to IEEE ICRA 2026, the premier conference on robotics and automation. The conference will run in June in Vienna.
“My research focuses on building robotics systems (both software and hardware) to solve real-world tasks,” said Zhang, who also co-authored several papers appearing at ICLR 2026. “I believe one important approach in robotics nowadays is to enable robots to understand the physical world through so-called ‘world models.’”
In one of Zhang’s projects, “IMPASTO: Integrating Model-Based Planning with Learned Dynamics Models for Robotic Oil Painting Reproduction,” a robot learns to reproduce oil paintings by experimenting with brushstrokes and gradually building a predictive model of how paint behaves on a canvas.
Zhang explained that the robot can use the model to infer the actions needed to recreate a painting.
“The robot learns the world model by applying random brushstrokes to the canvas and gathers data to gradually understand the consequences of its own actions,” Zhang said.
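The explore-then-model loop Zhang describes can be sketched in miniature. This is a hedged toy, not IMPASTO itself: the 1-D "canvas," the `apply_stroke` simulator, and the single-number world model are all illustrative stand-ins for the real robot, paint physics, and learned dynamics model.

```python
import random

random.seed(0)

# Toy 1-D "canvas": one paint value per cell, saturating at 1.0.
def apply_stroke(canvas, cell, pressure):
    """Stand-in for the robot + paint physics: a brushstroke deposits
    paint at one cell, and paint values saturate at 1.0."""
    nxt = list(canvas)
    nxt[cell] = min(1.0, nxt[cell] + pressure)
    return nxt

# 1) Explore: apply random brushstrokes, recording (state, action, next state).
canvas = [0.0] * 16
dataset = []
for _ in range(100):
    cell = random.randrange(16)
    pressure = random.uniform(0.1, 0.4)
    nxt = apply_stroke(canvas, cell, pressure)
    dataset.append((canvas, (cell, pressure), nxt))
    canvas = nxt

# 2) Fit a crude world model from the exploration data: average paint
#    deposited per unit of pressure (saturation pulls this below 1).
gain = sum((n[c] - s[c]) / p for s, (c, p), n in dataset) / len(dataset)

# 3) Plan with the model: invert it to choose the pressure predicted
#    to add a desired amount of paint.
def plan_pressure(target_delta):
    return target_delta / gain
```

The structure mirrors the quote: random actions generate experience, experience fits a predictive model of the action's consequences, and the model is then inverted to select actions for a target outcome.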
NeurIPS 2025
In December, Northwestern CS Theory Group members contributed advances at NeurIPS 2025, including work on clustering algorithms and tensor decomposition methods.
NeurIPS is an interdisciplinary conference that brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields.
Building on research years in the making, a team including Professor Konstantin Makarychev, third-year PhD student in computer science Ilias Papanikolaou, and Liren Shan (PhD ’23) demonstrated ways to make machine learning models easier for people to understand in their paper “Dynamic Algorithm for Explainable k-medians Clustering under Lp Norm.”
Makarychev said the team began working on the project roughly six years ago, investigating approaches to approximate sophisticated machine learning systems using simpler, more transparent models such as decision trees.
“The aim of this project is to make AI algorithms and models more understandable, explainable, and interpretable,” Makarychev said. “This can help ensure that these systems are trustworthy and fair, while also giving people clearer insight into how algorithmic decisions are made.”
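The idea of approximating a clustering with a transparent model can be illustrated with a deliberately tiny sketch. This is a hypothetical depth-1 example (two clusters, one axis-aligned threshold), not the paper's algorithm or its Lp-norm guarantees; the midpoint cut and sample points are invented for illustration.

```python
# Two cluster centers in the plane and a few points to assign.
centers = [(1.0, 1.0), (5.0, 4.0)]
points = [(0.8, 1.2), (1.3, 0.7), (4.9, 4.2), (5.4, 3.8)]

# Explainable rule: a single axis-aligned threshold separating the centers.
axis = 0
cut = (centers[0][axis] + centers[1][axis]) / 2  # midpoint along chosen axis

def explainable_assign(p):
    """One threshold answers 'why this cluster?': because p[axis] <= cut."""
    return 0 if p[axis] <= cut else 1

def nearest_center(p):
    """Unrestricted (opaque) assignment, for comparison."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
    return dists.index(min(dists))

labels_tree = [explainable_assign(p) for p in points]
labels_opt = [nearest_center(p) for p in points]
```

The research question behind such constructions is how much clustering cost this kind of threshold-tree restriction gives up relative to unrestricted assignment; in this toy case the two rules happen to agree.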
Another Northwestern Computer Science team, including Professor Aravindan Vijayaraghavan and PhD students Dionysis Arvanitakis and Vaidehi Srinivas, presented the NeurIPS 2025 paper “Guarantees for Alternating Least Squares in Overparameterized Tensor Decompositions.”
According to Srinivas, nonconvex optimization is a key building block of recent advances in machine learning and AI.
“Understanding why nonconvex optimization methods work so well in so many different settings is one of the big questions in the theory of machine learning,” Srinivas said.
Traditionally, computer scientists analyze algorithms that use randomness by arguing that randomness works in the algorithm’s favor to ensure desirable and predictable behavior, explained Arvanitakis. Instead, the team relied on an emerging tool known as anti-concentration, which focuses on the opposite phenomenon.
“The key idea is that for the algorithm to fail, certain quantities must align in a very particular way—almost as if an adversary had carefully arranged them to make our life difficult,” Arvanitakis said. “Randomness then comes to the rescue: when the relevant objects are random, such pathological alignments occur only with very small probability.”
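A textbook instance of the phenomenon Arvanitakis describes, offered only as an illustration and not as the paper's specific bound: a standard Gaussian random variable is unlikely to land in any small interval, so "pathological alignments" of random quantities with any fixed bad configuration have small probability.

```latex
% Anti-concentration for a standard Gaussian Z ~ N(0, 1): since the
% density is at most 1/sqrt(2*pi), Z lands in any interval of width
% 2*epsilon around a fixed point t with probability at most
\Pr\bigl[\,|Z - t| \le \varepsilon\,\bigr]
  \;\le\; \sqrt{\tfrac{2}{\pi}}\,\varepsilon
  \qquad \text{for every } t \in \mathbb{R},\ \varepsilon > 0.
```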