It's easy to add speech recognition capabilities to an application
built with an object-oriented framework, with minimal disruption
to your existing code. To illustrate the process, this article shows
one way to add basic speech recognition capabilities to an
application built with PowerPlant, Metrowerks' popular C++-based
application framework. You can use the same strategy with other
application frameworks as well.
Speech recognition capabilities, such as those provided by Apple's Speech Recognition
Manager, promise to revolutionize the way people use computers. The reason for this
is simple: it's often a lot easier to say what you want done than to actually do it, even in
the "user-friendly" environment provided by the Macintosh graphical user interface.
So the time you spend making your application speakable is time very well spent.
Happily, if you've built your application with a framework such as PowerPlant or
MacApp, you can add basic speech recognition capabilities quickly and easily.
To show how to add speech recognition to an application built with a framework, we'll
modify the PowerPlant DocDemo sample provided with the CodeWarrior 8 release to
add speech support for the File menu commands. Of course, there's nothing special
about DocDemo: you should be able to drop the code we provide into any PowerPlant
application. Moreover, although this code is specific to PowerPlant, you should be able
to use similar techniques with other application frameworks as well.
Before reading this article, you should be familiar with the basic operations of the
Speech Recognition Manager and with the PowerPlant application framework. For an
overview of the Speech Recognition Manager, see the article "The Speech Recognition
Manager Revealed" in this issue of develop. As mentioned in that article, you'll find
everything you need to use the Speech Recognition Manager -- including detailed
documentation (written by yours truly) -- on this issue's CD and on Apple's speech
technology Web site. For basic information about PowerPlant, see The PowerPlant
Book or other Metrowerks documentation.
We want to add speech support for the File menu commands in the DocDemo
application. This isn't the highest or best use of speech recognition capabilities (see
"Speakable Menus?"), but it makes a simple example for us to focus on. In a nutshell,
we'll define a custom C++ class and create a single instance of that class to handle all
the required speech recognition processing (such as installing a language model and
responding to recognition results sent to it via Apple events). Here are the steps we'll
follow:

1. Modify the main application code to create (and later delete) a single instance of a custom speech recognition class, CDocSpeech.
2. Define the CDocSpeech class, along with the 'STR#' resources that hold the names of our language models and the command phrases we want to listen for.
3. In the class constructor, open a recognition system, create a recognizer, install Apple event handlers, build and install the language models, and start listening.
4. In the speech-detected Apple event handler, enable or disable parts of the language model to match the application's current state.
5. In the recognition-done Apple event handler, map the recognized utterance to the corresponding PowerPlant command.
6. In the class destructor, shut down speech recognition.

The following sections explain these steps in detail, though not strictly in this order.
All the code provided here is also included on this issue's CD.
______________________________
While it's fairly easy to make your application's menus speakable, this isn't
necessarily the best use of speech recognition technology and it's definitely not
what Apple's speech engineers would like to see you focus your attention on.
Most File and Edit menu commands are just too short to be easily distinguished
by the recognizer ("quit" sounds a lot like "cut," for example).
In addition, since menus can't be seen without pulling them down, novice users
probably won't know which menu commands are available until they click in
the menu bar; at that point, they may as well just use the menu.
However, there is some value in knowing how to make menus speakable. For
one thing, the techniques used in this article can easily be extended to handle
more complex utterances that have nothing to do with menus. Also, there is
real value in making tool palettes -- which are really just graphical menus
that happen to float on the desktop -- speakable; for an example, see the demo
program PlacMac on this issue's CD.
So the moral is: make your menus speakable if you think there is value for the
user, but don't just make your menus speakable. Do something creative and
compelling with speech recognition.
______________________________
All the speech recognition processing for our PowerPlant-based application will be
handled by a single custom object of type CDocSpeech. The main application code needs
only to create (and later delete) that custom object. We'll start by adding
these lines of code to the beginning of the main application source code file,
CDocDemoApp.cp:
#include "CDocSpeech.h" extern CDocSpeech *gDocSpeechObj; Boolean gHasSpeechRecog;
The external reference is to an instance of the CDocSpeech class, and the Boolean global
variable indicates whether the Speech Recognition Manager is available in the current
operating environment. To set that variable and create our custom object, we add the
code in Listing 1 to the constructor CDocDemoApp::CDocDemoApp.
______________________________
Listing 1. Creating a custom speech recognition object
// Determine whether the Speech Recognition Manager is available;
// if it's available, create a custom speech recognition object.
long theVersion;
OSErr theErr;
gHasSpeechRecog = false;
theErr = ::Gestalt(gestaltSpeechRecognitionVersion, &theVersion);
// Version must be at least 1.5.0 to support API used here.
if (!theErr && (theVersion >= 0x00000150)) {
   gHasSpeechRecog = true;
   gDocSpeechObj = new CDocSpeech();
}
We'll also need to delete gDocSpeechObj when our application quits. We do this by
adding the following code to the destructor CDocDemoApp::~CDocDemoApp:
// Shut down speech recognition, if it's running.
if (gHasSpeechRecog)
   delete gDocSpeechObj;
______________________________
Those are all the modifications we need to make to our existing source code! The rest of
the speech processing is handled by the custom speech recognition object created by
our main application code.
The header file CDocSpeech.h, shown in Listing 2, defines a number of constants
specifying the 'STR#' resources (and indices within those resources) that contain the
names of the language models we want to create and the actual words or phrases we
want to listen for. We'll use these constants later, when we create the various language
models.
______________________________
Listing 2. Specifying 'STR#' resources and declaring CDocSpeech
#include "SpeechRecognition.h"
// Language model names
const ResIDT rSTR_LMNames = 400; // ID of STR# resource
const short kStr_GApplLM = 1; // Indices within resource
const short kStr_GUnivLM = 2;
const short kStr_GDocuLM = 3;
const short kStr_UFileLM = 4;
const short kStr_DFileLM = 5;
// Universal file command phrases
const ResIDT kSTR_UFileCmds = 500; // ID of STR# resource
const short kStr_New = 1; // Indices within resource
const short kStr_Open = 2;
const short kStr_PageSetup = 3;
const short kStr_Quit = 4;
// Document file command phrases
const ResIDT kSTR_DFileCmds = 501; // ID of STR# resource
const short kStr_Close = 1; // Indices within resource
const short kStr_Save = 2;
const short kStr_SaveAs = 3;
const short kStr_Revert = 4;
const short kStr_Print = 5;
const short kStr_PrintOne = 6;
// Apple menu command phrases
const ResIDT kSTR_UApplCmds = 503; // ID of STR# resource
const short kStr_About = 1; // Indices within resource
#define kEnableObj true
#define kDisableObj false
class CDocSpeech {
public:
            CDocSpeech();
   virtual  ~CDocSpeech();
   static pascal OSErr  HandleSpeechDetectedAppleEvent (AppleEvent
                           *theAEevt, AppleEvent *reply, long refcon);
   static pascal OSErr  HandleRecognitionDoneAppleEvent (AppleEvent
                           *theAEevt, AppleEvent *reply, long refcon);
private:
   OSErr    MakeLanguageModels (void);
};
______________________________
CDocSpeech.h also contains the declaration of the custom CDocSpeech class. CDocSpeech
is extremely simple: it contains a constructor, a destructor, and two Apple event
handlers. It also declares a private method, MakeLanguageModels, which creates the
language models used by DocDemo. MakeLanguageModels is called by the constructor
when an instance of the CDocSpeech class is created.
All the remaining code is found in the file CDocSpeech.cp. Listing 3 shows the
beginning of that file, which declares all the global variables and function prototypes.
______________________________
Listing 3. Declaring global variables and function prototypes
#include "CDocSpeech.h" // Global variables SRRecognitionSystem gSystem; SRRecognizer gRecognizer; SRLanguageModel gGApplLM, gGDocuLM; SRPhrase gRevert; CDocSpeech *gDocSpeechObj = nil; // Function prototypes void SetLanguageObjectState (SRLanguageObject inObj, Boolean isEnabled);
______________________________
The constructor method, shown in Listing 4, performs all the necessary startup
associated with speech recognition. Much of this code should already be familiar to you
from the article "The Speech Recognition Manager Revealed."
______________________________
Listing 4. Starting up speech recognition
CDocSpeech::CDocSpeech()
{
OSErr theErr = noErr;
// Open a recognition system.
theErr = ::SROpenRecognitionSystem
(&gSystem, kSRDefaultRecognitionSystemID);
// Set recognition system properties to user-selected feedback and
// listening modes.
if (!theErr) {
short theModes = kSRHasFeedbackHasListenModes;
theErr = ::SRSetProperty(gSystem, kSRFeedbackAndListeningModes,
&theModes, sizeof(theModes));
}
// Create a recognizer with default speech source.
if (!theErr)
theErr = ::SRNewRecognizer(gSystem, &gRecognizer,
kSRDefaultSpeechSource);
// Set recognizer properties. We want to receive notifications
// when recognition begins and ends.
if (!theErr) {
unsigned long theParam =
kSRNotifyRecognitionBeginning | kSRNotifyRecognitionDone;
theErr = ::SRSetProperty(gRecognizer, kSRNotificationParam,
&theParam, sizeof(theParam));
}
// Install Apple event handlers.
if (!theErr) {
theErr = ::AEInstallEventHandler(kAESpeechSuite, kAESpeechDetected,
   NewAEEventHandlerProc(HandleSpeechDetectedAppleEvent), 0, false);
if (!theErr)
   theErr = ::AEInstallEventHandler(kAESpeechSuite, kAESpeechDone,
      NewAEEventHandlerProc(HandleRecognitionDoneAppleEvent), 0, false);
}
// Make our language models.
if (!theErr)
theErr = MakeLanguageModels();
// Install initial language model and release our reference to it.
if (!theErr) {
theErr = ::SRSetLanguageModel(gRecognizer, gGApplLM);
::SRReleaseObject(gGApplLM);
}
// Have the recognizer start processing sound.
if (!theErr)
theErr = ::SRStartListening(gRecognizer);
}
______________________________
Now we just need to write the MakeLanguageModels function called by the CDocSpeech
constructor, and the two Apple event handlers.
Probably the most time-consuming part of adding speech recognition to an application
is defining the language models that describe the words and phrases you want to listen
for. The process is straightforward, but it requires careful attention to the various
states your application can be in. This is because you want the active language model to
include only utterances that make sense at any given time. For instance, if no document
window is open, it makes no sense to listen for the Close or Save command. Similarly,
if a document isn't dirty (that is, if it hasn't changed since it was most recently
saved), you probably don't want the user to be able to execute the Revert command.
This should remind you, of course, of the context-specific menu enabling and disabling
that's a standard part of any good Macintosh application. For our demonstration
application, we'll handle context sensitivity by creating a number of embedded
language models that we'll enable or disable according to context.
The commands in the File menu fall into two main categories: those that can be issued
at any time (such as New or Open) and those that apply to a specific document (such as
Save or Close). Accordingly, we'll construct two language models, one for each type of
command. Let's call the first variety universal file commands and the second variety
document file commands. In addition, we want to make the About DocDemo command
utterable. Here's a Backus-Naur Form (BNF) diagram of our top-level language
model:
<Menu Commands> =
<Universal Commands> | <Document Commands>;
<Universal Commands> =
<Universal File Commands> | About DocDemo;
<Universal File Commands> = New | Open | Page Setup | Quit;
<Document Commands> = <Document File Commands>;
<Document File Commands> =
Close | Save | Save As | Revert | Print | Print One;
As you can see, the top-level language model Menu Commands consists of two embedded
language models, one for commands that can be issued at any time and one for
commands that require a document window to be open. Each of these embedded language
models contains other language objects. The Universal Commands language model
contains the phrase "About DocDemo" and the language model that contains the
universal file commands. The Document Commands language model contains only the
language model that contains the document file commands; you would
add other document-specific models here (for instance, document-specific editing
commands). In all, we'll create five language models. (Note that the Page Setup
command is in the universal file commands language model; that's because DocDemo
allows you to choose that command even if no document window is open.)
Listing 5 shows the code defining the MakeLanguageModels function (error checking
has been removed for the sake of readability). Apple provides a utility,
SRLanguageModeler, that you can use to build and test language models described with
BNF diagrams like that shown above. SRLanguageModeler can also save those language
models into resources or files, from which your application can load the models at run
time. Here, however, we build the language models on the fly to demonstrate the Speech
Recognition Manager routines for doing so.
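If you do take the precompiled route, loading a saved model at startup is a one-call affair. Here's a minimal sketch, not part of DocDemo, that assumes the compiled language model was saved by SRLanguageModeler into a resource of our application; the 'LMDL' resource type and ID 128 are placeholders:

// A sketch only: load a language model previously saved into a resource
// (the 'LMDL' type and ID 128 are placeholders) and make it the
// recognizer's active language model.
Handle            theHandle;
SRLanguageModel   theTopLM = nil;
OSErr             theErr = noErr;

theHandle = ::GetResource('LMDL', 128);
if (theHandle != nil) {
   theErr = ::SRNewLanguageObjectFromHandle(gSystem, &theTopLM, theHandle);
   ::ReleaseResource(theHandle);
   if (!theErr) {
      theErr = ::SRSetLanguageModel(gRecognizer, theTopLM);
      ::SRReleaseObject(theTopLM);  // The recognizer keeps its own reference.
   }
}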
______________________________
Listing 5. Creating the language models
OSErr CDocSpeech::MakeLanguageModels (void)
{
OSErr theErr = noErr;
Str255 theStr;
SRLanguageModel myGUnivLM, myUFileLM, myDFileLM;
// Make the language models (which are initially empty).
::GetIndString(theStr, rSTR_LMNames, kStr_GApplLM);
::SRNewLanguageModel(gSystem, &gGApplLM, &theStr[1], theStr[0]);
::GetIndString(theStr, rSTR_LMNames, kStr_GUnivLM);
::SRNewLanguageModel(gSystem, &myGUnivLM, &theStr[1], theStr[0]);
::GetIndString(theStr, rSTR_LMNames, kStr_UFileLM);
::SRNewLanguageModel(gSystem, &myUFileLM, &theStr[1], theStr[0]);
::GetIndString(theStr, rSTR_LMNames, kStr_GDocuLM);
::SRNewLanguageModel(gSystem, &gGDocuLM, &theStr[1], theStr[0]);
::GetIndString(theStr, rSTR_LMNames, kStr_DFileLM);
::SRNewLanguageModel(gSystem, &myDFileLM, &theStr[1], theStr[0]);
// Make any other language objects we'll need.
::GetIndString(theStr, kSTR_DFileCmds, kStr_Revert);
::SRNewPhrase(gSystem, &gRevert, &theStr[1], theStr[0]);
// ****<Universal File Commands>****
::GetIndString(theStr, kSTR_UFileCmds, kStr_New);
::SRAddText(myUFileLM, &theStr[1], theStr[0], cmd_New);
::GetIndString(theStr, kSTR_UFileCmds, kStr_Open);
::SRAddText(myUFileLM, &theStr[1], theStr[0], cmd_Open);
::GetIndString(theStr, kSTR_UFileCmds, kStr_PageSetup);
::SRAddText(myUFileLM, &theStr[1], theStr[0], cmd_PageSetup);
::GetIndString(theStr, kSTR_UFileCmds, kStr_Quit);
::SRAddText(myUFileLM, &theStr[1], theStr[0], cmd_Quit);
// ****<Document File Commands>****
::GetIndString(theStr, kSTR_DFileCmds, kStr_Close);
::SRAddText(myDFileLM, &theStr[1], theStr[0], cmd_Close);
::GetIndString(theStr, kSTR_DFileCmds, kStr_Save);
::SRAddText(myDFileLM, &theStr[1], theStr[0], cmd_Save);
::GetIndString(theStr, kSTR_DFileCmds, kStr_SaveAs);
::SRAddText(myDFileLM, &theStr[1], theStr[0], cmd_SaveAs);
unsigned long theRefCon = cmd_Revert;
::SRSetProperty(gRevert, kSRRefCon, &theRefCon,
sizeof(theRefCon));
::SRAddLanguageObject(myDFileLM, gRevert);
::GetIndString(theStr, kSTR_DFileCmds, kStr_Print);
::SRAddText(myDFileLM, &theStr[1], theStr[0], cmd_Print);
::GetIndString(theStr, kSTR_DFileCmds, kStr_PrintOne);
::SRAddText(myDFileLM, &theStr[1], theStr[0], cmd_PrintOne);
// ****<Document Commands>****
::SRAddLanguageObject(gGDocuLM, myDFileLM);
// ****<Universal Commands>****
::SRAddLanguageObject(myGUnivLM, myUFileLM);
::GetIndString(theStr, kSTR_UApplCmds, kStr_About);
::SRAddText(myGUnivLM, &theStr[1], theStr[0], cmd_About);
// ****<Menu Commands>****
::SRAddLanguageObject(gGApplLM, myGUnivLM);
::SRAddLanguageObject(gGApplLM, gGDocuLM);
// Release any embedded language models we won't need later.
::SRReleaseObject(myDFileLM);
::SRReleaseObject(myUFileLM);
::SRReleaseObject(myGUnivLM);
return theErr;
}
______________________________
MakeLanguageModels begins by calling SRNewLanguageModel five times to create the
five new, empty language models. (As indicated earlier, the names of the language
models are read from the application's resource fork.) Then MakeLanguageModels
creates a language object for the single word revert, as follows:
::GetIndString(theStr, kSTR_DFileCmds, kStr_Revert);
::SRNewPhrase(gSystem, &gRevert, &theStr[1], theStr[0]);
We treat the Revert command specially because we want to listen for it only when an
open document has a file associated with it (and, of course, when the document is
dirty). Even when the Document Commands language model is active, the Revert
command might need to be disabled.
Next, MakeLanguageModels builds the two language models Universal File Commands
and Document File Commands. In both cases, it simply adds the relevant words or
phrases, read from resources, to the language model, like this:
::GetIndString(theStr, kSTR_UFileCmds, kStr_New);
::SRAddText(myUFileLM, &theStr[1], theStr[0], cmd_New);
SRAddText sets the reference constant property of the specified language object to the
value passed in its fourth parameter. In this example, the reference constant for the
New command is set to the value cmd_New, which is a constant defined by PowerPlant.
As you'll see later, we'll use that value to get PowerPlant to react appropriately to the
user's utterances. If you don't use SRAddText, you need to explicitly set an object's
reference constant property, as is done for the Revert command:
unsigned long theRefCon = cmd_Revert;
::SRSetProperty(gRevert, kSRRefCon, &theRefCon, sizeof(theRefCon));
::SRAddLanguageObject(myDFileLM, gRevert);
Once the two main language models have been created, the hierarchy displayed in the
BNF diagram is established by a series of calls to SRAddLanguageObject.
When a user begins speaking, your application is notified via a speech-detected
Apple event. In general, your speech-detected event handler should determine what
state your application is in and set the active language model accordingly. As we've
mentioned, we'll use this opportunity to enable or disable embedded language models
(or even single words) to limit the recognizable utterances to those that make sense at
the time. Listing 6 shows our speech-detected Apple event handler.
______________________________
Listing 6. Handling speech-detected Apple events
pascal OSErr CDocSpeech::HandleSpeechDetectedAppleEvent
(AppleEvent *theAEevt, AppleEvent *reply, long refcon)
{
#pragma unused(reply, refcon)
long actualSize;
DescType actualType;
OSErr theErr = 0, recStatus = 0;
SRRecognizer theRec = nil;
LWindow *theWindow;
// Get status and recognizer.
theErr = ::AEGetParamPtr(theAEevt, keySRSpeechStatus,
typeShortInteger, &actualType, (Ptr)&recStatus,
sizeof(recStatus), &actualSize);
if (!theErr && !recStatus)
theErr = ::AEGetParamPtr(theAEevt, keySRRecognizer,
typeSRRecognizer, &actualType, (Ptr)&theRec,
sizeof(theRec), &actualSize);
if (theErr || !theRec)
   return theErr;
// Figure out what state we're in; then enable or disable the
// appropriate language models.
theWindow = UDesktop::FetchTopRegular(); // Look for a doc window.
if (theWindow != nil) { // There is a doc window.
SetLanguageObjectState(gGDocuLM, kEnableObj);
// Turn off "Revert" if there's no file or it isn't dirty.
Boolean isEnabled, outUsesMark;
Char16 outMark;
Str255 outName;
LCommander::GetTarget()->FindCommandStatus
(cmd_Revert, isEnabled, outUsesMark, outMark, outName);
SetLanguageObjectState(gRevert, isEnabled);
} else // There is no doc window.
SetLanguageObjectState(gGDocuLM, kDisableObj);
// Now tell the recognizer to continue.
theErr = ::SRContinueRecognition(theRec);
return theErr;
}
______________________________
The event handler, HandleSpeechDetectedAppleEvent, calls the PowerPlant utility
function UDesktop::FetchTopRegular to get the top document window. If there's an open
document window, HandleSpeechDetectedAppleEvent calls the application-defined
function SetLanguageObjectState to enable the Document Commands language model.
Otherwise, if no document window is open, the event handler calls
SetLanguageObjectState to disable that language model. Listing 7 shows the simple
function SetLanguageObjectState.
______________________________
Listing 7. Enabling or disabling a language object
void SetLanguageObjectState (SRLanguageObject inObj,
Boolean isEnabled)
{
Boolean theState = isEnabled;
::SRSetProperty(inObj, kSREnabled, &theState, sizeof(theState));
}
______________________________
Notice that if a document window is open, we need to determine whether to enable the
Revert command. HandleSpeechDetectedAppleEvent cleverly determines this by calling
FindCommandStatus on the current command target, just as PowerPlant itself does when
deciding whether to enable the Revert menu item.
Instead of disabling the Revert command when it isn't relevant, we could just let the
recognizer keep listening for it but ignore it when the frontmost document, if any,
isn't dirty or has no file. This alternate strategy has some advantages. In particular, if
the user says "revert" but we aren't listening for that command, the recognizer might
think the user has uttered some other command (like "quit" or "print"). These
misfires are much less likely to occur if the recognizer is listening for "revert" in
addition to the other document file commands.
If you think that a user is apt to utter a particular command at an inappropriate time,
it's probably better to ignore it than to disable it. On the other hand, we don't want to
make the active language model too big, and one way to keep its size manageable is to
enable or disable parts of it according to context. That's the strategy we've adopted for
this article. Our sample application doesn't listen for the Revert command unless it's
appropriate, to illustrate how to enable and disable language objects.
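If you'd rather take the ignore-it approach for Revert, a minimal sketch, not part of DocDemo, is to leave the gRevert phrase permanently enabled and instead guard the command in the recognition-done handler, asking PowerPlant whether Revert is currently available before obeying it:

// A sketch of the alternate strategy: always listen for "revert," but
// drop the command here when PowerPlant reports that it's unavailable
// (no file on disk, or the document isn't dirty).
Boolean   isEnabled = false, outUsesMark = false;
Char16    outMark;
Str255    outName;

LCommander::GetTarget()->FindCommandStatus(cmd_Revert, isEnabled,
   outUsesMark, outMark, outName);
if (isEnabled)
   LCommander::GetTarget()->ObeyCommand(cmd_Revert, nil);
// Otherwise simply ignore the utterance.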
So far, we've defined our language models and set up the mechanism by which relevant
parts of the language models are enabled or disabled according to context.
All that remains is to do the right thing when the recognizer recognizes an utterance.
Our application is informed of successful recognitions via recognition-done Apple
events. Listing 8 shows the DocDemo recognition-done event handler.
______________________________
Listing 8. Handling recognition-done Apple events
pascal OSErr CDocSpeech::HandleRecognitionDoneAppleEvent
(AppleEvent *theAEevt, AppleEvent *reply, long refcon)
{
#pragma unused(reply, refcon)
long actualSize;
DescType actualType;
OSErr theErr = 0, recStatus = 0;
SRRecognitionResult recResult = nil;
Size theLen;
SRPath thePath;
SRSpeechObject theItem;
long theRefCon; // Reference constant of item
// Get status.
theErr = ::AEGetParamPtr(theAEevt, keySRSpeechStatus,
typeShortInteger, &actualType, (Ptr)&recStatus,
sizeof(recStatus), &actualSize);
// Get result.
if (!theErr && !recStatus)
theErr = ::AEGetParamPtr(theAEevt, keySRSpeechResult,
typeSRSpeechResult, &actualType, (Ptr)&recResult,
sizeof(recResult), &actualSize);
// Get command from result by reading the reference constant
// of the relevant object.
if (!theErr && !recStatus) {
theLen = sizeof(thePath);
::SRGetProperty(recResult, kSRPathFormat, &thePath, &theLen);
theErr = ::SRGetIndexedItem(thePath, &theItem, 0);
if (!theErr) {
theLen = sizeof(theRefCon);
::SRGetProperty(theItem, kSRRefCon, &theRefCon, &theLen);
::SRReleaseObject(theItem);
}
// Release recognition result, since we're done with it.
::SRReleaseObject(recResult);
::SRReleaseObject(thePath);
}
// Send the reference constant up the chain of command.
if (!theErr && !recStatus)
   LCommander::GetTarget()->ObeyCommand((MessageT)theRefCon, nil);
return theErr;
}
______________________________
The interesting thing in this event handler is how utterly simple the important code is:
all it does is extract the reference constant value of the recognized utterance and send
that value up the PowerPlant chain of command. For example, if the recognized
utterance is the word new, the reference constant is the value cmd_New, which is sent
to a commander. In this case, the DocDemo application creates a new document. In
effect, the CDocSpeech object does its work by calling code already in the DocDemo
application.
As you've seen, it's easy to add basic speech recognition for File menu commands to a
PowerPlant application, largely because our custom speech object can simply issue
the same commands that would be issued in response to a menu choice. You should now
be able to add speech support for Edit menu commands and for any other menu
commands supported by your application. Only one method remains to discuss, the
destructor for the CDocSpeech class. The destructor simply stops recognizing
utterances and closes down the recognition system opened by the constructor, as shown
in Listing 9.
______________________________
Listing 9. Shutting down speech recognition
CDocSpeech::~CDocSpeech()
{
::SRStopListening(gRecognizer);
::SRReleaseObject(gRecognizer);
::SRReleaseObject(gGDocuLM);
::SRReleaseObject(gRevert);
::SRCloseRecognitionSystem(gSystem);
}
______________________________
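If you go on to support the Edit menu commands mentioned above, the pattern is exactly the same. Here's a minimal sketch of what you might add to MakeLanguageModels; the 'STR#' resource ID 502, the string indices, and the language model name index are assumptions, not part of DocDemo:

// A sketch of one possible extension: an embedded language model of
// document editing commands, added to <Document Commands>.
SRLanguageModel   myDEditLM;
Str255            theStr;

::GetIndString(theStr, rSTR_LMNames, 6);   // hypothetical name index
::SRNewLanguageModel(gSystem, &myDEditLM, &theStr[1], theStr[0]);
::GetIndString(theStr, 502, 1);            // "cut"
::SRAddText(myDEditLM, &theStr[1], theStr[0], cmd_Cut);
::GetIndString(theStr, 502, 2);            // "copy"
::SRAddText(myDEditLM, &theStr[1], theStr[0], cmd_Copy);
::GetIndString(theStr, 502, 3);            // "paste"
::SRAddText(myDEditLM, &theStr[1], theStr[0], cmd_Paste);
::SRAddLanguageObject(gGDocuLM, myDEditLM);
::SRReleaseObject(myDEditLM);

Keep in mind that one-syllable editing commands suffer from the confusability problem described in "Speakable Menus?", so test such phrases carefully with the recognizer.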
'Nuff said.
TIM MONROE (monroe@apple.com) is a technical writer for Apple's Developer
Relations group. He's written more Inside Macintosh books and chapters than he cares
to remember and is currently working with the QuickDraw 3D and QuickTime VR
teams, as well as the speech recognition team, to bring the excitement of interactive
media to Macintosh applications everywhere. He's rumored to have an office in
Cupertino but prefers to spend his time in his converted garage in Oakland living the
quiet life of a telecommuting "cybermonk." That way, he's never too far from his wife,
his kids, or his model train layout.
Thanks to our technical reviewers Mike Dilts, Guillermo Ortiz, Matt Pallakoff, Arlo
Reeves, and Brent Schorsch.