By Emma Elisse
Voice recognition technology has languished at the “almost-there” stage for decades. Now that millions of consumers are speaking to their devices and expecting to be understood with reasonable accuracy, the field seems poised to deliver on its long-awaited promise. As investment and interest pour into the industry, 2016 may become the year it finally achieves maturity.
Rather than just recognizing a few key words or phrases, as many automated telephone systems do, today's machines are developing the ability to parse more complex voice input.
Google is using its massive archive of voice data to improve its voice search performance, while rival Apple is refining its algorithms so its Siri software can get a better handle on what users are asking it to do. A new company called VocalZoom is developing technology to filter out background noise by pairing a traditional microphone with an optical sensor that keys in on the speaker's facial vibrations, yielding a more accurate representation of what speakers are actually saying.
Voice recognition is also part of a broader shift toward using biometric information – voices, fingerprints, retinal scans and more – to uniquely identify customers. Barclays, a major British bank, has already deployed a voice recognition system for clients who conduct financial transactions by telephone and over the Internet. Smartphone payment apps are another area where voice recognition may play a vital role in keeping end users' information secure.
The smart home is another arena where voice control is taking hold, and Amazon is at the forefront. Its Echo product allows users to control multiple appliances by speaking to the company's Alexa virtual assistant: thermostats, door locks, lights and more can be adjusted simply by issuing voice commands. Apple offers a competing solution in the form of Siri and its HomeKit interface, and Honeywell sells its own line of voice-activated thermostats. Google's response, Google Home, is expected to debut later this year. Nevertheless, Amazon appears to be well out in front of other firms in terms of ease of use and compatibility with existing merchandise from third-party manufacturers.
Automakers are also eyeing the potential of voice recognition. Honda VR is already on the market, enabling drivers to perform navigation functions, adjust climate settings and control the audio system with preset voice commands. Say “display 3-D map” or “air conditioner on,” and the vehicle performs the necessary task.
One possible problem is highlighted in studies conducted under the auspices of the AAA Foundation for Traffic Safety: researchers found that drivers remained distracted from the road for up to 27 seconds after talking to their cars or smart devices. That won't be an issue for the “intelligent car” project from Samsung, BMW, Panasonic and Nuance, a speech-recognition system designed for self-driving cars: driver distraction won't matter because the vehicle can operate itself unassisted.
Allowing human beings to converse directly with their high-tech servants eliminates a layer of misinterpretation and potential frustration. According to a report by Markets and Markets, the voice and speech recognition market is expected to grow from $4.17 billion in 2015 to $11.96 billion by 2022.
Image credit: Flickr/Michael Dorausch
Emma Elisse is a freelance writer and blogger from the Midwest. After going to college in Florida she relocated to Chicago, where she primarily writes on topics at the intersection of digital technology and the environment. She lives with two roommates and one rabbit.