EVENT DETAILS
Title:
Computational/Statistical Gaps for Learning Neural Networks
Abstract:
It has been known for decades that a polynomial-size training sample suffices for learning neural networks. Most theoretical results, however, indicate that these learning tasks are computationally intractable. Where does the truth lie? In this talk we consider one of the simplest and most well-studied settings for learning, where the marginal distribution on inputs is Gaussian, and show unconditionally that gradient descent cannot learn even one-layer neural networks. We then point to a potential way forward and sketch the first fixed-parameter tractable algorithm for learning deep ReLU networks: its running time is polynomial in the ambient dimension and exponential only in the network's parameters.
Biography:
Adam Klivans is a professor of computer science at UT-Austin. He is the director of the new Machine Learning Laboratory (MLL). His research interests are in provably efficient algorithms for core tasks in machine learning.
TIME Monday, November 9, 2020, 12:30 PM - 1:30 PM
CONTACT Pam Villalovoz pmv@northwestern.edu
CALENDAR Department of Computer Science