Qi Zhu

Professor of Electrical and Computer Engineering and (by courtesy) Computer Science

Contact

2145 Sheridan Road
Tech Room L454
Evanston, IL 60208-3109

Email Qi Zhu

Website

Qi Zhu's Homepage

IDEAS - Design Automation of Intelligent Systems Lab


Departments

Electrical and Computer Engineering




Education

Ph.D. Electrical Engineering & Computer Science, University of California, Berkeley

B.E. Computer Science, Tsinghua University, Beijing, China


Research Interests

My recent research spans the algorithm, software, and hardware layers for building safe, secure, and robust foundation model-driven AI systems, with efforts in formal safety verification and robustness certification, safety and security analysis of foundation model-driven embodied agents, safety-assured learning and system design, multimodal time-series learning, continual learning, and federated learning. My earlier work centered on design automation for cyber-physical systems (CPS), cyber-physical security, energy-efficient CPS, and system-on-chip design.


Selected Publications

Safe, Robust, and Secure Learning for Embodied AI Systems:

  • Enhancing Inverse Reinforcement Learning through Encoding Dynamic Information in Reward Shaping, L4DC, 2026.
  • Belief-Based Offline Reinforcement Learning for Delay-Robust Policy Optimization, ICLR, 2026.
  • Directly Forecasting Belief for Reinforcement Learning with Delays, ICML, 2025.
  • Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems, ICLR, 2025.
  • On Large Language Model Continual Unlearning, ICLR, 2025.
  • Empowering Autonomous Driving with Large Language Models: A Safety Perspective, arXiv preprint, https://arxiv.org/abs/2312.00812.
  • Variational Delayed Policy Optimization, NeurIPS, 2024. (Spotlight)
  • Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays, ICML, 2024.
  • Case Study: Runtime Safety Verification of Neural Network Controlled System, RV, 2024.
  • State-wise Safe Reinforcement Learning with Pixel Observations, L4DC, 2024.
  • REGLO: Provable Neural Network Repair for Global Robustness Properties, AAAI, 2024.
  • POLAR-Express: Efficient and Precise Formal Reachability Analysis of Neural-Network Controlled Systems, TCAD, 2023.
  • Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments, ICML, 2023.
  • Joint Differentiable Optimization and Verification for Certified Reinforcement Learning, ICCPS, 2023.
  • Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding, DATE, 2022. (Best Paper Award)

Machine Learning under Data Challenges:

  • TimeSliver: Symbolic-Linear Decomposition for Explainable Time Series Classification, ICLR, 2026.
  • MAESTRO: Adaptive Sparse Attention and Robust Learning for Multimodal Dynamic Time Series, NeurIPS, 2025. (Spotlight)
  • Phase-driven Domain Generalizable Learning for Nonstationary Time Series, TMLR, 2025.
  • Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval, NeurIPS, 2024.
  • Missingness-resilient Video-enhanced Multimodal Disfluency Detection, Interspeech, 2024.
  • DACR: Distribution-augmented Contrastive Reconstruction for Time-series Anomaly Detection, ICASSP, 2024.
  • Deja vu: Continual Model Generalization for Unseen Domains, ICLR, 2023.
  • Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization, ICLR, 2022. (Oral)
  • Addressing Class Imbalance in Federated Learning, AAAI, 2021.

Learning Applications in Various Domains (Advanced Manufacturing, Wearable Computing, Autonomous Driving, Robotics, etc.):

  • STEP-LLM: Generating CAD STEP Models from Natural Language with Large Language Models, DATE, 2026.
  • Wearable Network for Multilevel Physical Fatigue Prediction in Manufacturing Workers, PNAS Nexus, 2024.
  • Attrition-aware Adaptation for Multi-agent Patrolling, RAL, 2024.
  • Graph Neural Network-Based Multi-Agent Reinforcement Learning for Resilient Distributed Coordination of Multi-Robot Systems, IROS, 2024.
  • Semi-supervised Semantics-guided Adversarial Training for Robust Trajectory Prediction, ICCV, 2023.
  • Learning Representation for Anomaly Detection of Vehicle Trajectories, IROS, 2023.
  • Safety-driven Interactive Planning for Neural Network-based Lane Changing, ASP-DAC, 2023. (Best Paper Candidate)
  • Accelerate Online Reinforcement Learning for Building HVAC Control with Heterogeneous Expert Guidances, BuildSys, 2022.