SAPI (Speech Application Programming Interface) is an application programming interface (API) provided with the Microsoft Windows operating system that allows programmers to write programs offering text-to-speech and speech recognition capabilities. Interfaces are provided for the C, C++, and Visual Basic programming languages. Built on Microsoft's COM (Component Object Model) architecture, SAPI is the most widely used speech API today. Microsoft plans to embed speech technology based on SAPI directly into its operating system.
SAPI has seven main components:
- Voice Command: Voice Command is a high-level interface that provides command-and-control speech recognition for applications. It allows a developer to create a Voice Command menu containing voice commands, such as "new file" or "send mail to firstname.lastname@example.org," that a user speaks into a microphone or other audio input device. The user can then control the computer without needing a keyboard or mouse.
- Voice Dictation: Voice Dictation allows the user to dictate into any application that supports speech recognition. An invisible or virtual edit box receives the dictated text and displays it in an application window. Voice Dictation supports text formatting such as capitalization, translation of spoken punctuation words into punctuation symbols, built-in glossary entries, and correction of the last word spoken or of a selected word. Applications that use Voice Dictation classify speech into topics, each with its own language style; topics include e-mail, formal writing, and programming. Voice Dictation stores the information for each topic on the user's hard drive.
- Voice Text: Voice Text converts text into speech that is played over computer speakers or sent over a telephone line. The synthesized speech offers several modes, each with a different voice.
- Voice Telephony: Voice Telephony provides telephony controls that are analogous to Windows controls (buttons, list boxes, sliders, and other objects manipulated with a mouse or keyboard). Telephony controls are software components that recognize spoken responses such as "yes" or "no," a phone number, a date, or a time, creating a dialogue between the user and the computer. For example, a user calls a vendor to order an item and answers several questions by speaking into the telephone receiver; the telephony controls recognize these responses and send them to the application that processes them. Telephony controls also handle error conditions (common with spoken numbers or when the caller does not respond) and variations of answers such as "January 4th" or "tomorrow."
- Direct Speech Recognition: This is a low-level interface similar to Voice Command, except that Direct Speech Recognition communicates directly with the speech engine. This gives the application more control and better performance.
- Direct Text To Speech: This is a low-level interface similar to Voice Text that likewise communicates directly with the speech engine.
- Audio Objects: An Audio Object tells the speech engine where to receive audio input or send audio output.
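The command-and-control idea behind Voice Command can be made concrete with a grammar that lists the phrases an application will accept. The components above describe the SAPI 4 design; in the later SAPI 5 release the same idea is expressed as an XML grammar loaded by the recognizer. The fragment below is a minimal, hypothetical example (the rule name and phrases are invented for illustration):

```xml
<!-- Hypothetical SAPI 5 command-and-control grammar.
     LANGID 409 = U.S. English; TOPLEVEL="ACTIVE" makes the rule
     immediately recognizable. -->
<GRAMMAR LANGID="409">
  <RULE NAME="FileCommands" TOPLEVEL="ACTIVE">
    <L>  <!-- a list of alternative phrases -->
      <P>new file</P>
      <P>open file</P>
      <P>send mail</P>
    </L>
  </RULE>
</GRAMMAR>
```

When the user speaks one of the listed phrases, the engine reports which rule and phrase matched, and the application acts on it; that is the "control the computer without a keyboard or mouse" scenario described above.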
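To give a sense of what "speaking directly to the speech engine" looks like in code, the following Windows-only C++ sketch uses the COM-based ISpVoice interface from the later SAPI 5 release, the successor to the Direct Text To Speech interface described above. It is an illustrative sketch, not the article's exact SAPI 4 API, and assumes a Windows SDK that provides sapi.h:

```cpp
// Minimal sketch: direct text-to-speech through SAPI's COM interface
// (SAPI 5's ISpVoice). Windows-only; requires the Windows SDK.
#include <windows.h>
#include <sapi.h>   // ISpVoice, CLSID_SpVoice

int main(void)
{
    // COM must be initialized before any SAPI object is created.
    if (FAILED(::CoInitialize(NULL)))
        return 1;

    ISpVoice *pVoice = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                    IID_ISpVoice, (void **)&pVoice);
    if (SUCCEEDED(hr))
    {
        // Speak() blocks until playback finishes unless SPF_ASYNC is set.
        pVoice->Speak(L"Hello, world", SPF_DEFAULT, NULL);
        pVoice->Release();
    }

    ::CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}
```

Because the application holds the engine interface itself rather than going through a shared high-level service, it controls timing, voice selection, and audio routing directly, which is the trade-off the low-level interfaces exist for.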
The future of speech technology will include products that let you surf the Internet by voice or ask your television what is showing tonight. Software developers are building applications that understand context: for example, if you tell your computer to print a certain document, the application will know whether to print it on your local printer or a network printer. Speech technology is important for medical professionals, law enforcement personnel, and people with physical disabilities, as well as for many business and home users.