Speech synthesis is the computer-generated simulation of human speech. It is used to translate written information into aural information where that is more convenient, especially in mobile applications such as voice-enabled e-mail and unified messaging. It is also used to assist the vision-impaired so that, for example, the contents of a display screen can be automatically read aloud to a blind user. Speech synthesis is the counterpart of speech or voice recognition. The earliest speech synthesis effort dates to 1779, when Professor Christian Kratzenstein, working in Russia, created an apparatus based on the human vocal tract to demonstrate the physiological differences involved in producing five long vowel sounds. The first fully functional voice synthesizer, Homer Dudley's VODER (Voice Operating Demonstrator), was shown at the 1939 World's Fair. The VODER was based on Bell Laboratories' vocoder (voice coder) research of the mid-1930s.
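The principle behind those early devices, from Kratzenstein's vowel apparatus to the VODER, was shaping a pitched sound source with vocal-tract-like resonances ("formants"). The sketch below illustrates that idea in miniature, using only the Python standard library. The formant frequencies are rough, illustrative values for an /a/-like vowel, not calibrated measurements, and the whole routine is a toy demonstration rather than any real synthesizer's algorithm.

```python
import math
import struct
import wave

# Illustrative settings (assumed values, not from any real system):
RATE = 16000          # samples per second
F0 = 120              # fundamental (pitch) frequency in Hz
# (center frequency in Hz, relative amplitude) for an /a/-like vowel
FORMANTS = [(700, 1.0), (1200, 0.5), (2600, 0.25)]

def vowel(duration=0.5):
    """Build a vowel-like tone: harmonics of F0 weighted by formant peaks."""
    samples = []
    n_harmonics = RATE // (2 * F0)  # stay below the Nyquist frequency
    for n in range(int(RATE * duration)):
        t = n / RATE
        s = 0.0
        for k in range(1, n_harmonics):
            f = k * F0
            # Weight each harmonic by its closeness to the formant centers.
            w = sum(a / (1 + ((f - fc) / 150) ** 2) for fc, a in FORMANTS)
            s += w * math.sin(2 * math.pi * f * t)
        samples.append(s)
    peak = max(abs(x) for x in samples)
    return [x / peak for x in samples]  # normalize to [-1, 1]

# Write the result as a 16-bit mono WAV file.
with wave.open("vowel_a.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(RATE)
    frames = b"".join(struct.pack("<h", int(x * 32767)) for x in vowel())
    wf.writeframes(frames)
```

Playing the resulting file yields a buzzy, vowel-like tone: recognizably vocal in character, but far from natural speech, which hints at why full text-to-speech systems are so much more elaborate.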

Speech prosthesis is computer-generated speech for people with physical disabilities that make it difficult to speak intelligibly. Much of the research in this area integrates both text and speech generation, since the disabilities that create problems with speech frequently make text entry difficult as well. Given the speed and fluidity of human conversation, the challenge for speech prosthesis is to work around these difficulties. The main research goal is a prosthetic system that resembles natural speech as closely as possible while requiring the least possible input from the user. Speech prosthesis systems also make it possible for visually impaired people to use computers.

Multimodal speech synthesis (sometimes referred to as audio-visual speech synthesis) adds an animated face synchronized to complement the synthesized speech. The same difficulties that underlie an individual's speech impairment often hinder their ability to communicate through facial expressions. Although synthesized speech is increasingly lifelike, it may be quite some time before it approaches the nuance of natural speech. Multimodal systems therefore provide a means of adding non-verbal cues to speech (such as head-shaking, smiling, and winking) to make the user's meaning as clear as possible.

This was last updated in September 2005
Posted by: Margaret Rouse

Related Terms

Definitions

  • virtual assistant

    - A virtual assistant is an electronic audio or audio/video avatar-centered program that uses artificial intelligence to perform tasks for the user. Virtual assistants have existed in concept for yea... (WhatIs.com)

  • cache thrash

    - Cache thrash is caused by an ongoing computer activity that fails to progress due to excessive use of resources or conflicts in the caching system. (SearchSoftwareQuality.com)

  • geo-fencing (geofencing)

    - Geo-fencing is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries. A geofence is a virtual barr... (WhatIs.com)

Glossaries

  • Software applications

    - Terms related to software applications, including definitions about software programs for vertical industries and words and phrases about software development, use and management.

  • Internet applications

    - This WhatIs.com glossary contains terms related to Internet applications, including definitions about Software as a Service (SaaS) delivery models and words and phrases about web sites, e-commerce ...
