New Textbook Prepares Students to Tackle Today’s Scientific Computing Problems

Introduction to parallel programming crucial for solving increasingly complex mathematical models

The complexity and detail of mathematical models are rapidly increasing in response to the greater availability of quantitative data generated by experiments. Solving the resulting equations requires increasingly sophisticated computing power. The most efficient solutions are often achieved by programming multiple computers to solve independent equations simultaneously, a technique known as parallel computing.
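As a rough illustration of that idea (a sketch prepared for this article, not an example drawn from the book), the following C program uses the Message Passing Interface (MPI), one of the standards the textbook covers, to split a set of independent calculations across several processes and then combine their partial results. The function solve_one and the task count N are placeholders standing in for a real problem.

    /* Illustrative sketch: each MPI process (rank) handles its own share of
       independent tasks, then the partial results are combined on rank 0.
       solve_one() and N are placeholders, not code from the textbook. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000                    /* total number of independent tasks */

    static double solve_one(int i)    /* stand-in for one independent solve */
    {
        return (double)i * i;
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        double local = 0.0;
        for (int i = rank; i < N; i += size)   /* each rank takes every size-th task */
            local += solve_one(i);

        double total = 0.0;                    /* combine the partial results */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("combined result: %g\n", total);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper such as mpicc and launched with mpirun, every process runs the same program but works on a different slice of the loop.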

David Chopp

For those interested in gaining a foundation in multiple methods of parallel programming for solving differential equations, a new textbook from Northwestern Engineering’s David Chopp can help. Titled Introduction to High Performance Scientific Computing (SIAM, 2019), the book is designed for advanced undergraduates and beginning graduate students with little programming experience.

The book is based on a class Chopp has taught since 2014. He found that available textbooks covered individual topics in depth but none offered a bigger picture or connected the topics to solving differential equations.

“I wanted to put together enough information to get someone started,” said Chopp, professor and chair of the Department of Engineering Sciences and Applied Mathematics. “There are different types of hardware out there and different ways parallelism can be done. Not every technique is good for every problem. I tried to give readers a chance to compare the methods so they could choose the best one for their application.”

Chopp’s research interests in numerical methods led him to learn parallel computing himself. Numerical methods are mathematical tools used to solve complex mathematical problems, such as differential equations. In his 23 years at Northwestern, Chopp has focused on solving moving boundary problems found in disciplines from microbiology to fracture mechanics, work that has involved extensive programming and algorithm development. Solving those problems efficiently eventually made parallel programming a necessity.

“I put this book together in part because we needed our students to learn it, so I tasked myself with learning it first,” he said. “Initially, there was almost no parallel computing going on; now our graduate students are using it in many projects. It has really transformed the computing environment here.”

One of the book’s unique features is that it is written in six stand-alone parts, allowing professors and students to choose the ones most pertinent to their computing environment. Chopp himself focuses on Parts I, III, and IV in his class, which cover programming in C, the Message Passing Interface (MPI), and CUDA, respectively. Students already familiar with programming in C are encouraged to read Part II, on OpenMP, instead. And if a school doesn’t have access to the NVIDIA GPUs needed to run CUDA, the class can study Part V, on OpenCL, instead. The sixth part introduces students to numerical methods and provides practice problems to be solved using parallel programming.
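For contrast (again, a sketch for this article rather than an excerpt from the book), the shared-memory style covered in the OpenMP part spreads the iterations of a loop across the cores of a single machine rather than across separate processes:

    /* Illustrative sketch: OpenMP divides the iterations of a loop of
       independent tasks among the threads of one machine; the reduction
       clause safely combines the per-thread partial sums. */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += (double)i * i;      /* stand-in for one independent task */

        printf("combined result: %g (max threads: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }

Built with the compiler’s OpenMP flag (for example, -fopenmp with GCC), the iterations are divided among the available threads automatically, which is part of what makes the shared-memory approach an approachable starting point.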

As Chopp wrote in the book’s preface, “In my experience, there’s no substitute for learning programming but to build something with it.”

Chopp, named the Charles Deering McCormick Professor of Teaching Excellence in 2008, will begin using his new textbook when he teaches his class, ES_APPM 444: High Performance Scientific Computing, in spring 2020.