Growing up in Delaware, Donald Williamson had two loves.
One was electronics. Give him a gadget, whether a handheld electronic device or a video game system, and Williamson was in heaven. He wanted to know how to use it and how it worked.
His other love was helping people, and he has managed to combine his two passions in the hope of improving lives.
Williamson, an assistant professor in computer science at the School of Informatics and Computing, is developing algorithms that he hopes will help the hearing impaired better handle situations in which noise pollution not only makes it difficult to hear but also puts undue stress on users of hearing aids.
“People who wear hearing aids suffer from listening fatigue,” says Williamson, who earned his undergraduate degree in electrical engineering from the University of Delaware, master’s degrees from Drexel and Ohio State, and a Ph.D. in computer science and engineering from Ohio State. “When they’re wearing a hearing aid, they get tired of listening and extracting valuable information. The quality of what they hear also isn’t very good, and hearing aids don’t really function well in noisy environments, such as a crowded restaurant with many conversations and music playing.
“Part of my research is to address that and try to remove some of the background noise using machine learning techniques so that people with hearing impairments can hear in those environments and reduce their listener fatigue.”
Williamson’s personal interest in music and audio processing helped him recognize that speech processing shares many of the same techniques as music processing. Through the development and refinement of computer algorithms to process speech, Williamson found an avenue to help others.
“If I’m given a noisy signal with speech in it, I want to isolate the speech and still make it sound natural to the listener,” Williamson says. “You don’t want it to sound robotic. You want it to be pleasant to listen to. There are signal processing and machine learning techniques to build the algorithms, then I use human listeners to rate and evaluate the algorithm to see how well it’s performing.”
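The article does not describe Williamson's actual algorithms, but one classical signal-processing baseline for the problem he describes, isolating speech from a noisy recording, is spectral subtraction: estimate the noise's magnitude spectrum, subtract it from each frame of the noisy signal, and resynthesize using the original phase. The sketch below is a generic, illustrative version (the frame length and the noise-estimate interface are assumptions for this example, not his method):

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, frame_len=256):
    """Crude spectral subtraction (illustrative only).

    Subtracts the magnitude spectrum of a noise-only excerpt from each
    frame of the noisy signal, flooring magnitudes at zero, and rebuilds
    the waveform with the noisy signal's phase.
    """
    out = np.zeros_like(noisy)
    # Magnitude spectrum of a noise-only segment serves as the noise model.
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:frame_len]))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        frame = noisy[start:start + frame_len]
        spec = np.fft.rfft(frame)
        # Reduce each bin's magnitude; never go below zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)
        out[start:start + frame_len] = np.fft.irfft(mag * np.exp(1j * phase),
                                                    n=frame_len)
    return out
```

Real systems, including the learning-based ones Williamson describes, replace the fixed subtraction with a mask predicted by a trained model, precisely because crude subtraction tends to leave "robotic"-sounding artifacts of the kind he mentions.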
Williamson’s research is easy to explain because nearly everyone has struggled to hear in a noisy environment.
“You can talk to anybody about this problem,” Williamson says. “We’ve all been in a room where it has been very noisy—whether we have hearing impairments or not—and it’s hard for us to hear. That helps with reaching people. We also know people who have dealt with hearing impairment, so it’s very relatable.”
Williamson also hopes his research will have applications in consumer-based products. Improving the ability to separate speech from background noise could improve human-computer interaction in any device that features speech recognition, and his work could have a future in music.
“You can apply a lot of speech processing principles to music,” Williamson says. “You can conceivably extract notes or isolate a particular instrument in a piece of music. It’s based on pitch information. Each note plays at its own pitch, so it can be identified, but it’s a very challenging problem. Still, there are a lot of similarities there.”
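The pitch-based identification Williamson alludes to can be illustrated with a textbook technique: estimating a note's fundamental frequency from the peak of the signal's autocorrelation. This is a minimal, generic sketch, not Williamson's work, and the frequency search range is an assumption chosen for this example:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency of a monophonic signal
    from the autocorrelation peak within a plausible lag range."""
    signal = signal - np.mean(signal)
    # Autocorrelation at non-negative lags 0..N-1.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    # Convert the frequency search range into a lag range.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag
```

As Williamson notes, the hard part is polyphony: when several instruments sound at once, their periodicities overlap and a single autocorrelation peak no longer identifies one note, which is why music source separation remains a challenging problem.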
Then again, the challenge is part of the appeal for Williamson.
“I prefer a challenging problem,” Williamson says. “My work can have a big impact on society in many ways, whether it is with hearing impairment or whether it is consumer-based. It’s a far-reaching area that could bring a lot of benefits to a lot of people.”