Inaugural Deep Learning Symposium Looks to Future of Artificial Intelligence

Industry partners, researchers, and students discussed future of field

Photos: Diego Klabjan addresses guests at the symposium; Ari Kaplan; Patrick Boueri; MSIA students Nora Xu (left) and Yiwei Zhang present at the symposium.

As artificial intelligence (AI) and machine learning become integral across industries, many are looking to deep learning, which uses multi-layered artificial neural networks to learn features directly from data, to gain new insights.

At its inaugural symposium on November 14, Northwestern Engineering’s Center for Deep Learning, a community of deep learning-focused data scientists who conduct research and collaborate with industry, convened members of academia along with current and potential corporate partners to discuss the state of the technology and the best path forward.

While the Center for Deep Learning pursues new efforts in this area, including a new model-serving project, members of industry spoke about how their companies are using deep learning to innovate.

“It’s a really interesting time to be in this room,” said Ari Kaplan, director of industry marketing at DataRobot, which uses AI to help businesses solve problems and become more efficient. Deep learning is now “a huge movement, but there are still a lot of growth opportunities,” he added.

Kaplan, who created the Chicago Cubs analytics department, said a growing number of major companies are interested in deep learning solutions. Recently, DataRobot used AI to help one company save $300 million annually by reducing waste; a team of just four data scientists built the solution in less than five months.

“For the next three to five years, you’ll see the demand grow,” he said. “Companies are trying to ask questions deeper and faster than they can get people to answer them.”

Creating a model serving system

Deep learning is so powerful because it merges two processes, extracting features from data (also known as feature engineering) and selecting and tuning a model for those features, into a single one, said Diego Klabjan, director of the Center for Deep Learning. Developing and training deep learning models is challenging, but an even more daunting task is running a model in production, known as model serving.

That’s why the center is developing the Deep Learning Model Serving (DELOS) system, which focuses on monitoring and retraining. DELOS also assesses the confidence of its predictions while requiring less day-to-day human intervention. Klabjan said the system would enable a transition from “a human in the loop” of existing solutions to “a human on the loop.” The center is testing the system with partner companies now and hopes to deploy it fully next year.
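That shift from “in the loop” to “on the loop” can be pictured as a serving wrapper that scores the confidence of each prediction and escalates to a person only when that score sags. Below is a minimal sketch of the pattern in Python; the MonitoredModelServer class, the top-probability confidence score, and the thresholds are illustrative assumptions, not details of DELOS.

    import numpy as np

    class MonitoredModelServer:
        """Serve a classifier while tracking its own prediction confidence."""

        def __init__(self, model, confidence_floor=0.7, window=500):
            self.model = model                 # any callable: features -> class probabilities
            self.confidence_floor = confidence_floor
            self.window = window               # rolling window used for the drift check
            self.recent = []

        def predict(self, x):
            probs = self.model(x)
            confidence = float(np.max(probs))  # top-class probability as a simple confidence score
            self.recent = (self.recent + [confidence])[-self.window:]
            return {
                "label": int(np.argmax(probs)),
                "confidence": confidence,
                # low-confidence predictions get routed to a human "on the loop"
                "needs_review": confidence < self.confidence_floor,
            }

        def should_retrain(self):
            # Crude drift signal: retrain once average confidence over a
            # full window sags below the floor.
            return (len(self.recent) == self.window
                    and float(np.mean(self.recent)) < self.confidence_floor)

    # Toy stand-in for a trained network: a softmax over random logits.
    rng = np.random.default_rng(0)

    def toy_model(x):
        logits = rng.normal(size=3)
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    server = MonitoredModelServer(toy_model)
    print(server.predict(np.zeros(4)))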

“We are going to consider the project a success if it is adopted by many companies, as well as the research community,” said Klabjan, professor of industrial engineering and management sciences and director of the Master of Science in Analytics program at the McCormick School of Engineering.

Detecting anomalies in industry

Deep learning provides the basis for much of the anomaly detection done by Uptake, whose industrial analytics platform helps companies increase productivity, security, safety, and reliability, said Patrick Boueri, senior manager of data science at Uptake. For example, by training an anomaly detection model on data from an array of solar panels, his team got four days of lead time before one of the array’s inverters failed. That let the array’s engineers plan the maintenance without any downtime on the array.

“We firmly believe that this would not have been possible without applying a deep learning approach,” he said.
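A common deep learning recipe for this kind of monitoring is to train an autoencoder on healthy sensor readings only, then treat high reconstruction error as an anomaly signal. The sketch below shows the scoring side of that recipe; the toy autoencoder, the threshold, and the persistence rule are illustrative assumptions, not Uptake’s actual system.

    import numpy as np

    def reconstruction_error(autoencoder, reading):
        """Anomaly score: how poorly the model reproduces a sensor reading."""
        return float(np.mean((autoencoder(reading) - reading) ** 2))

    def scan_stream(autoencoder, readings, threshold=0.001, persistence=3):
        """Alert once the score stays high for `persistence` steps, so a
        single noisy reading does not page the maintenance crew."""
        streak = 0
        for t, reading in enumerate(readings):
            if reconstruction_error(autoencoder, reading) > threshold:
                streak += 1
                if streak >= persistence:
                    return t  # index of the first reading that deserves an alert
            else:
                streak = 0
        return None

    # Toy stand-in: an "autoencoder" that reproduces healthy readings almost
    # perfectly (identity with slight shrinkage), so drifting inputs score higher.
    toy_autoencoder = lambda x: 0.95 * x

    healthy = [np.full(4, 0.1)] * 10
    drifting = [np.full(4, 0.1 + 0.2 * k) for k in range(10)]  # inverter degrading
    print(scan_stream(toy_autoencoder, healthy + drifting))    # alerts partway into the drift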

A new approach for machine learning processes

But for machine learning to work, it needs to be clear which processes are happening at which stages, said Pavel Dournov, a senior staff engineering manager at Google. Often, between the input of a dataset and the output of a model, several processes built with different code libraries are chained together and become tightly coupled. As a result, it is unclear which code was used, how it was deployed, and who used it, he said.

Google is working to solve this with metadata storage, which keeps track of code and tasks. Each step in the machine learning process is isolated and versioned. “It helps orchestrate complex workflows and create and share components,” he said.
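The core idea can be sketched as a small lineage store: every pipeline step records the code version it ran, the artifacts it consumed, and the artifacts it produced, so “which code produced this model?” stays answerable even when steps come from different libraries. The MetadataStore toy below is an illustrative sketch of that pattern, not Google’s actual metadata API.

    import hashlib
    import json
    import time

    class MetadataStore:
        def __init__(self):
            self.executions = []

        def record(self, step, code_version, inputs, outputs):
            self.executions.append({
                "step": step,
                "code_version": code_version,  # e.g. a git commit or library version
                "inputs": inputs,              # ids of artifacts consumed
                "outputs": outputs,            # ids of artifacts produced
                "timestamp": time.time(),
            })

        def lineage(self, artifact):
            """Walk backward from an artifact to every execution behind it."""
            steps = [e for e in self.executions if artifact in e["outputs"]]
            for e in list(steps):
                for parent in e["inputs"]:
                    steps = self.lineage(parent) + steps
            return steps

    def artifact_id(payload):
        """Content-address artifacts so identical data gets the same id."""
        blob = json.dumps(payload, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    store = MetadataStore()
    raw = artifact_id({"dataset": "clicks", "day": "2019-11-14"})
    features = artifact_id({"from": raw, "step": "featurize"})
    model = artifact_id({"from": features, "step": "train"})
    store.record("featurize", "featurelib==1.2", [raw], [features])
    store.record("train", "trainer@abc123", [features], [model])
    print([e["step"] for e in store.lineage(model)])  # ['featurize', 'train']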

Deep learning for citations, fashion

Professors and industry members aren’t the only ones using deep learning to innovate. At the symposium, two student groups presented projects developed with deep learning. One, called Checkmate, is an automatic citation verification system that can link information cited in reports directly to the source material, ensuring accuracy and streamlining the process.
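One way to picture the matching step such a system needs: score the cited claim against each sentence of the source and flag weak matches for review. In the sketch below, bag-of-words cosine similarity stands in for the learned sentence representations a deep learning system like Checkmate would presumably use; the function names and threshold are illustrative.

    import math
    import re
    from collections import Counter

    def vectorize(text):
        # Crude tokenizer + term counts; a deep model would emit an embedding here.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def verify_citation(claim, source_sentences, min_similarity=0.5):
        claim_vec = vectorize(claim)
        scored = [(cosine(claim_vec, vectorize(s)), s) for s in source_sentences]
        best_score, best_sentence = max(scored)
        return {
            "supported": best_score >= min_similarity,  # weak matches get flagged
            "matched_sentence": best_sentence,
            "score": round(best_score, 2),
        }

    source = ["Revenue grew 12 percent in 2018.",
              "Headcount was flat year over year."]
    print(verify_citation("revenue grew 12 percent in 2018", source))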

Another group used deep learning to develop a new way to shop: by searching for cheaper alternatives to high-fashion outfits. Master of Science in Analytics students Nora Xu and Yiwei Zhang said they understood the impact that fashion bloggers have on what women want to buy, but many consumers don’t have the budget to purchase the high-priced items those bloggers model.

So they trained a deep learning model to take a blogger image, remove the background so just the outfit remained, then detect the color and shape of the outfit in order to search the Internet for similar, cheaper alternatives.

“Expensive fashion can be replaced with affordable choices with deep learning,” Xu said.
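The retrieval step the students describe can be pictured in two stages: summarize the segmented outfit as a feature vector, then rank affordable catalog items by how close their descriptors are. In the sketch below, a coarse color histogram stands in for learned color-and-shape features, and the mask, catalog, and price filter are illustrative assumptions rather than the students’ actual pipeline.

    import numpy as np

    def color_descriptor(image, mask, bins=4):
        """Histogram the RGB values of the masked (outfit) pixels."""
        pixels = image[mask]                           # (n_pixels, 3) outfit colors
        hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
        return hist.ravel() / max(pixels.shape[0], 1)  # normalize to proportions

    def find_alternatives(query_desc, catalog, max_price, top_k=3):
        """Rank affordable catalog items by descriptor distance to the query."""
        affordable = [item for item in catalog if item["price"] <= max_price]
        affordable.sort(key=lambda item: np.linalg.norm(item["descriptor"] - query_desc))
        return affordable[:top_k]

    rng = np.random.default_rng(1)
    image = rng.integers(0, 256, size=(64, 64, 3))     # stand-in for a blogger photo
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True                          # pretend segmentation found the outfit here

    query = color_descriptor(image, mask)
    catalog = [{"name": f"item{i}", "price": p,
                "descriptor": query + rng.normal(0, 0.01, size=query.shape)}
               for i, p in enumerate([35, 420, 60])]
    print([item["name"] for item in find_alternatives(query, catalog, max_price=100)])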

Other speakers at the symposium included:

  • Jean Utke, technical director at Allstate
  • Plamen Petrov, vice president of artificial intelligence and chief data officer at Anthem
  • Emilio Lapiello, associate director of data science at The Boston Consulting Group