Using Biosignals to Control a Robotic Hand

Master of Science in Robotics (MSR) student Robert Schloen talks about his independent project and what lessons he learned from the experience.

Robert Schloen has a background in biomedical engineering, and he's always been interested in brain-computer interfaces and how biosignals can be used to interact with devices. He has also wanted to develop more experience with machine learning, specifically deep learning.

When he enrolled in Northwestern Engineering's Master of Science in Robotics (MSR) program, he knew he wanted to use biosignals and machine learning to control a robotic hand. This past winter quarter, Schloen got that opportunity as part of an independent project.  

He recently took time to talk about the experience and what lessons he will be able to apply to his work life after graduating from Northwestern. 

How do you describe your independent project?

My project involves recording the electrical activity of the muscles in the forearm from the skin (surface electromyography, or EMG) and training a convolutional neural network (CNN) to produce a model that can be used to predict movements. The CNN is trained by passing a large number of EMG signals and their corresponding gestures through the network. The network extracts meaningful features, or patterns, from the EMG signals and learns the relationship between those patterns and the corresponding gesture.
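Preparing training data for a network like this typically means slicing the continuous EMG recording into fixed-length, overlapping windows, each labelled with a gesture. Here is a minimal sketch of that step; the window length, step size, and channel count are illustrative values, not the ones used in the project:

```python
import numpy as np

def window_emg(emg, labels, win=200, step=100):
    """Slice a multi-channel EMG recording into fixed-length windows.

    emg    : (n_samples, n_channels) array of raw surface EMG
    labels : (n_samples,) integer gesture label per sample
    Each window is labelled with the gesture occupying most of it.
    """
    windows, ys = [], []
    for start in range(0, len(emg) - win + 1, step):
        windows.append(emg[start:start + win])
        # majority vote decides the window's gesture label
        ys.append(np.bincount(labels[start:start + win]).argmax())
    return np.stack(windows), np.array(ys)

# toy recording: 2 channels, gesture 0 for 500 samples, then gesture 1
emg = np.random.randn(1000, 2)
labels = np.array([0] * 500 + [1] * 500)
X, y = window_emg(emg, labels)
print(X.shape)  # (9, 200, 2) -- windows ready to feed a CNN
```

Each `(win, n_channels)` window then becomes one training example, with the CNN learning to map the window to its gesture label.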

Since training a CNN model requires a large amount of data, I first trained the model on a large set of EMG data from an online database, then retrained the model on a smaller amount of EMG data recorded from myself. This method is called transfer learning, and it allowed me to get an accurate model from a relatively small amount of data from my own muscles. 
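The core of transfer learning is that the pretrained network body is kept fixed while only a small classification head is retrained on the new person's data. The sketch below illustrates that idea with a stand-in: random weights play the role of the pretrained feature extractor, synthetic arrays play the role of self-recorded EMG, and the "head" is a simple logistic-regression layer; none of this reflects the actual network from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the CNN body pretrained on a large public EMG database.
# These weights stay frozen for the rest of the script.
W_feat = rng.standard_normal((200 * 2, 16))  # flattened window -> 16 features

def features(X):
    """Frozen feature extractor (plays the role of the pretrained CNN body)."""
    return np.tanh(X.reshape(len(X), -1) @ W_feat)

# Small "personal" dataset: 40 synthetic windows of 200 samples x 2 channels.
X_small = rng.standard_normal((40, 200, 2))
F = features(X_small)
y_small = (F[:, 0] > 0).astype(int)  # synthetic binary gesture labels

# Retrain only the head (logistic regression by gradient descent),
# leaving W_feat untouched -- this is the "transfer" step.
w = np.zeros(16)
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w)))       # predicted probabilities
    w -= 0.1 * F.T @ (p - y_small) / len(F)  # gradient step on the head only

acc = ((F @ w > 0).astype(int) == y_small).mean()
print(f"head accuracy on personal data: {acc:.2f}")
```

Because only the small head is fitted, far less personal data is needed than training the whole network from scratch would require.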

Once trained, the CNN model can be used to predict which gesture the EMG signal corresponds to. The CNN model can be trained to predict a number of different gestures, such as making a fist, curling one finger, or giving a thumbs up. These predicted gestures from your movements can then be used to control a robotic hand for applications like a robotic prosthesis or robots you want to control at a distance (teleoperation).    
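Once the model outputs a gesture class, driving the robotic hand can be as simple as looking up a set of joint targets for that class. The mapping below is purely hypothetical (gesture names, the 0-to-1 "fraction closed" convention, and the five-finger layout are all assumptions for illustration):

```python
# Hypothetical mapping from predicted gesture class to per-finger joint
# targets, expressed as fraction closed (0.0 = open, 1.0 = fully flexed)
# for a five-fingered hand: [thumb, index, middle, ring, pinky].
GESTURES = {
    0: ("rest",      [0.0, 0.0, 0.0, 0.0, 0.0]),
    1: ("fist",      [1.0, 1.0, 1.0, 1.0, 1.0]),
    2: ("thumbs_up", [0.0, 1.0, 1.0, 1.0, 1.0]),
}

def gesture_to_command(class_id):
    """Turn a CNN's predicted class index into a hand command."""
    name, joints = GESTURES[class_id]
    return {"gesture": name, "joint_targets": joints}

print(gesture_to_command(1))  # fist: all fingers fully flexed
```

The same command dictionary could feed either a prosthetic hand's controller or a teleoperated robot, which is why the prediction and actuation stages are kept separate.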

What was the biggest challenge you faced?

The biggest challenge for this project was improving the accuracy of the CNN model. There are many factors that affect the accuracy of the trained model, such as the amount and quality of EMG data, any processing done to the data, and the structure and parameters of the CNN. It was challenging to optimize all of these factors and avoid or find any mistakes that would reduce the accuracy of the model. 

What are the most important lessons you've learned from this project?

My initial plan had been a bit ambitious given the time frame for this project, so I had to learn how to prioritize and organize the various parts of the project, completing my main goals before spending too much time on less necessary parts or on improving parts that were already sufficiently complete. Since this was an individual project, with no deadlines set by a group, boss, or professor, I also learned to keep myself motivated and make steady progress by setting my own weekly goals.

How do you think you'll be able to incorporate the lessons you learned from the project into your life after graduation?

Whether it's for my job or for my own enjoyment, I will be working on many different projects with different levels of guidance and deadlines. By keeping in mind the lessons I learned from this project, I will be able to set my own goals to keep myself motivated, and to avoid wasting time by recognizing and prioritizing the most important parts of each project.

What do you hope to do after MSR?

I plan to get an industry job in medical or assistive robotics, or one related to brain-computer interfaces, machine learning, or haptics.

Is there anything else you'd like to add?

For more information about this project, you can find my portfolio post, along with a link to its GitHub repository, here: