Han Liu

Orrington Lunt Professor of Computer Science

Professor of Statistics

Contact

2233 Tech Drive
Mudd Room 3119
Evanston, IL 60208-3109

Email Han Liu

Website

MAGICS Lab


Departments

Computer Science


Education

Ph.D. Machine Learning and Statistics, Carnegie Mellon University, Pittsburgh, PA


Biography

Han Liu directs the MAGICS (Modern Artificial General Intelligible and Computer Systems) lab at Northwestern University. He has served as director of the deep reinforcement learning center at Tencent AI Lab and was previously a professor at Princeton University and Johns Hopkins University. He received a joint Ph.D. in Machine Learning and Statistics from the Machine Learning Department at Carnegie Mellon University, advised by John Lafferty and Larry Wasserman. His research lies at the intersection of artificial intelligence and computer systems, deploying statistical machine learning methods on edge and cloud platforms to achieve analytical advantages. He has received numerous research awards, including the Alfred P. Sloan Fellowship in Mathematics, the IMS Tweedie New Researcher Award, the ASA Noether Young Scholar Award, the NSF CAREER Award, the Howard B. Wentz Award, and the Umesh Gavaskar Memorial Best Dissertation Award. He serves as an associate editor for the Journal of the American Statistical Association, the Electronic Journal of Statistics, Technometrics, and the Journal of Portfolio Management.

Research Interests

My primary research interest is modern artificial intelligence, which exploits computation and data as a lens to explore machine intelligence. I approach this through the twin windows of statistical machine learning and computer systems. Statistical machine learning provides a unified framework that combines uncertainty and logical structure to model complex, real-world phenomena, while computer systems implement the learning algorithms with strong performance guarantees. Together they provide a powerful tool for exploring complex interactions among large numbers of variables, with important applications in the modern sciences. Success in this research has the potential to revolutionize the foundations of modern data science and push the frontier of artificial intelligence.


Over the past years, my research group has aimed to contribute to data science and machine learning at a foundational level. For example, we have been working on nonparametric graphical models, which integrate the power of probabilistic graphical models with sparse high-dimensional methods. This research won the PECASE award (the Presidential Early Career Award for Scientists and Engineers), the IMS Tweedie Award (awarded annually by the Institute of Mathematical Statistics to one statistician for excellent early-career contributions to statistical theory and methodology), and the ASA Noether Award (awarded annually by the American Statistical Association to one researcher for excellent early-career contributions to nonparametric statistics). We also developed a model-based statistical optimization theory that provides provable guarantees for nonconvex learning problems. Our research along this line won the Best Paper Prize in Continuous Optimization at the 5th International Conference on Continuous Optimization (awarded every three years to the best paper on continuous optimization). More recently, we developed a complete theory of post-regularization inference and divide-and-conquer inference for Big Data, which serves as the basis of my NSF CAREER award. We have also pushed the frontier of a new field named combinatorial inference for graphical models, which aims to develop a new uncertainty-assessment theory for statistical models with non-Euclidean parameters (e.g., graphs and partitions). My earlier research along this direction won the Alfred P. Sloan Fellowship in Mathematics. Our research has also won the Best Overall Paper Award Honorable Mention at the 26th International Conference on Machine Learning, the Notable Paper Award at the 16th International Conference on AI and Statistics, and a Best Paper Award from the journal National Science Review.


Selected Publications

  • Lu, Junwei; Han, Fang; Liu, Han, Robust Scatter Matrix Estimation for High Dimensional Distributions with Heavy Tails, IEEE Transactions on Information Theory 67(8):5283-5304 (2021).
  • Ma, Cong; Lu, Junwei; Liu, Han, Inter-Subject Analysis, Journal of the American Statistical Association 116(534):746-755 (2021).
  • Lin, Kevin Z.; Liu, Han; Roeder, Kathryn, Covariance-Based Sample Selection for Heterogeneous Data, Journal of the American Statistical Association 116(533):54-67 (2020).
  • Lu, Junwei; Kolar, Mladen; Liu, Han, Kernel Meets Sieve, Journal of the American Statistical Association 115(532):2084-2099 (2020).
  • Rosenblum, Michael; Fang, Ethan X.; Liu, Han, Optimal, two-stage, adaptive enrichment designs for randomized trials, using sparse linear programming, Journal of the Royal Statistical Society. Series B: Statistical Methodology 82(3):749-772 (2020).
  • Ge, Jason; Li, Xingguo; Jiang, Haoming; Liu, Han; Zhang, Tong; Wang, Mengdi; Zhao, Tuo, Picasso, Journal of Machine Learning Research 20 (2019).
  • Tan, Kean Ming; Lu, Junwei; Zhang, Tong; Liu, Han, Layer-wise learning strategy for nonparametric tensor product smoothing spline regression and graphical models, Journal of Machine Learning Research 20 (2019).
  • Neykov, Matey; Liu, Han, Property testing in high-dimensional Ising models, Annals of Statistics 47(5):2472-2503 (2019).
  • Wang, Qing; Xiong, Jiechao; Han, Lei; Sun, Peng; Liu, Han; Zhang, Tong, Exponentially weighted imitation learning for batched historical data, Advances in Neural Information Processing Systems 31:6288-6297 (2018).
  • Eisenach, Carson; Liu, Han, Efficient, certifiably optimal clustering with applications to latent variable graphical models, Mathematical Programming 176(1-2):137-173 (2019).
  • Yang, Zhuoran; Ning, Yang; Liu, Han, On semiparametric exponential family graphical models, Journal of Machine Learning Research 19:1-59 (2018).
  • Neykov, Matey; Lu, Junwei; Liu, Han, Combinatorial inference for graphical models, Annals of Statistics 47(2):795-827 (2019).
  • Zhou, Wen Xin; Bose, Koushiki; Fan, Jianqing; Liu, Han, A new perspective on robust M-estimation, Annals of Statistics 46(5):1904-1931 (2018).
  • Li, Xingguo; Lu, Junwei; Arora, Raman; Haupt, Jarvis; Liu, Han; Wang, Zhaoran; Zhao, Tuo, Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex Matrix Factorization, IEEE Transactions on Information Theory 65(6):3489-3514 (2019).
  • Tan, Kean Ming; Wang, Zhaoran; Zhang, Tong; Liu, Han; Cook, R. Dennis, A convex formulation for high-dimensional sparse sliced inverse regression, Biometrika 105(4):769-782 (2018).