Ken Sutton is the co-founder of Yobe, a voice technology company that was inspired by an autistic child. Sutton, who prefers to call himself an "unconventional" tech founder, has a background in finance rather than technology.
In an interview with Shoppe Black, he said founding Yobe started "from a sequence of events." He recalled working in a studio with his friend James on frequency manipulation. James, an engineer, has an autistic son who struggled to listen to music inside a car, a problem James wanted to solve.
"And so what it came down to was how his autistic brain perceives frequencies, which made it difficult and uncomfortable for him to listen to the echoes and reverberations that you would have in a closed environment like a car," he said.
Sutton and James did some research and found that the issue came down to the way James' son perceived frequencies, and that frequency manipulation could address it. They went to the studio, with its millions of dollars of equipment, and started bending frequencies. The duo began with music and, along the way, found a process that James' son responded to.
Sutton and James created sophisticated AI data processing algorithms that enhanced music in real time. After their discovery, they hired a lawyer and patented their creation, and Yobe was founded.
“Just because you built something doesn’t mean the market cares,” Sutton said. “What we found out was the music market had bigger problems than fidelity. So, we pivoted into voice. Our artificial intelligence style and our ability to track different types of biometrics like voice were uniquely suited to solve what we call the cocktail party problem, which is the actual scientific term of the signal-to-noise issues you have sometimes when talking to a device, and it’s noisy.”
Sutton and his partner realized that voice technologies for everyday use were still very limited, even as more and more people grew accustomed to using Siri and Alexa.
“What we’re finding that’s weaving itself into the conversation is this thing we call the human standard, which is, we know how a device is supposed to respond to us because we talk to people all the time,” said Sutton, who served his country as a U.S. Army Ranger. “We don’t get upset when our devices don’t work when it’s crazy noisy and real loud. We get upset when they don’t respond the way a human would when a human was in the same environment.”
Yobe is purpose-built to identify and decode human voices in live crowds and noisy environments. "Modeled on human hearing, Yobe's signal processing techniques substantially increase SNRs (signal-to-noise ratios) in noisy environments, which enables the ability to decipher emotion, intent, mood, and other biological markers for an added layer of meaning," Yobe says on its website.
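For context on the metric Yobe's site cites, signal-to-noise ratio is conventionally expressed in decibels as 10·log₁₀(P_signal/P_noise). The sketch below is only a generic illustration of that formula, not a representation of Yobe's proprietary processing:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# A voice signal 100x more powerful than the background noise
# corresponds to a 20 dB SNR; equal power corresponds to 0 dB.
print(snr_db(100.0, 1.0))  # → 20.0
print(snr_db(1.0, 1.0))    # → 0.0
```

Raising this ratio in a noisy room (the "cocktail party problem" Sutton mentions) is what makes a voice easier for downstream systems to recognize and analyze.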
In 2018, Yobe raised $1.8 million in seed funding with a chunk of the investment coming from Clique Capital Partners, a $100 million fund created specifically to fund innovative voice technology. Before that, Yobe had received $790,000 in the form of a National Science Foundation SBIR grant in 2016.