Copyright © 2010 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
This document specifies usage scenarios, goals and requirements for incorporating speech technologies into HTML. Speech technologies include both speech recognition and related technologies as well as speech synthesis and related technologies.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This is the 16 December 2010 Internal Working Draft of the Use cases and Requirements for the HTML Speech Incubator. This document is produced from work by the W3C HTML Speech Incubator Group.
Publication as a Working Draft does not imply endorsement by the W3C Membership. This is an internal draft document and may not even end up being officially published. It may also be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
The mission of the HTML Speech Incubator Group, part of the Incubator Activity, is to determine the feasibility of integrating speech technology in HTML5 in a way that leverages the capabilities of both speech and HTML (e.g., the DOM) to provide a high-quality, browser-independent speech/multimodal experience while avoiding unnecessary standards fragmentation or overlap. This document represents the efforts of the HTML Speech Incubator Group to collect and review use cases and requirements. These use cases and requirements were collected on the group's public email alias, collated into one large list, and then refactored and structured into this document. They may still need to be refined and prioritized, and while they will be the framework through which the group judges future change requests and proposals, not every use case or requirement will necessarily be handled by the proposals presented in the Incubator Group's final report.
Speech technologies can be used to improve existing HTML applications by allowing for richer user experiences and enabling more natural modes of interaction. The use cases listed here are ones raised by Incubator Group members and may not be an exhaustive list. The use cases must still be prioritized, and not every aspect of every use case will necessarily be supported by the end proposal of the Incubator Group. Rather, this set of use cases is intended to illustrate an interesting cross-section of use cases that some Incubator Group members deem important. The use cases are organized according to whether they are primarily speech recognition only, primarily speech synthesis only, or integrated with both input and output.
The following use cases all depend primarily on speech recognition. Sometimes the presentation of the recognition result could be either visual or rendered with synthesized speech, but the primary purpose of each use case is speech recognition.
The user can speak a query and get a result.
A Speech Command and Control Shell that allows multiple commands, many of which may take arguments, such as "call <number>", "call <person>", "calculate <math expression>", "play <song>", or "search for <query>".
This use case involves collecting multiple domain-specific inputs sequentially, where the later inputs depend on the results of the earlier inputs. For instance, changing which cities are in a grammar of cities in response to the user saying which state they are located in.
This use case is to collect free form spoken input from the user. This might be particularly relevant to an email system, for instance. When dictating an email, the user will continue to utter sentences until they're done composing their email. The application will provide continuous feedback to the user by displaying words within a brief period of the user uttering them. The application continues listening and updating the screen until the user is done. Sophisticated applications will also listen for command words used to add formatting, perform edits, or correct errors.
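For illustration only, the following JavaScript sketch shows one way a dictation page might consume continuous, incrementally delivered recognition results. The interface names (SpeechRecognition, interimResults, onresult) and the element identifiers are assumptions made for this sketch; this document does not define an API.

    // Illustrative sketch only: the interface and attribute names are assumed.
    const recognition = new SpeechRecognition();
    recognition.continuous = true;      // keep listening until the user is done
    recognition.interimResults = true;  // surface words shortly after they are spoken

    recognition.onresult = (event) => {
      let finalText = '';
      let interimText = '';
      for (let i = 0; i < event.results.length; i++) {
        const transcript = event.results[i][0].transcript;
        if (event.results[i].isFinal) {
          finalText += transcript;
        } else {
          interimText += transcript;
        }
      }
      // Update the message body as the user dictates; a full application would
      // also watch for command words ("new paragraph", "delete that", ...).
      document.getElementById('emailBody').value = finalText;
      document.getElementById('interimFeedback').textContent = interimText;
    };

    recognition.start();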
Many web applications incorporate a collection of input fields, generally expressed as forms, with some text boxes to type into and lists to select from, with a "submit" button at the bottom. For example, "find a flight from New York to San Francisco on Monday morning returning Friday afternoon" might fill in a web form with two input elements for origin (place & date), two for destination (place & time), one for mode of transport (flight/bus/train), and a command (find) for the "submit" button. The results of the recognition would end up filling all of these multiple input elements with just one user utterance. This application is valuable because the user just has to initiate speech recognition once to complete the entire screen.
Some speech applications are oriented around determining the user's intent before gathering any specific input, and hence their first interaction may have no visible input fields whatsoever, or may accept speech input that is far less constrained than the fields on the screen. For example, the user may simply be presented with the text "how may I help you?" (maybe with some speech synthesis or an earcon), and then utter their request, which the application analyzes in order to route the user to an appropriate part of the application. This isn't simply selection from a menu, because the list of options may be huge, and the number of ways each option could be expressed by the user is also huge. In any case, the speech UI (grammar) is very different from whatever input elements may or may not be displayed on the screen. In fact, there may not even be any visible non-speech input elements displayed on the page.
Some sophisticated applications will re-use the same utterance in two or more recognition turns in what appears to the user as one turn. For example, an application may ask "how may I help you?", to which the user responds "find me a round trip from New York to San Francisco on Monday morning, returning Friday afternoon". An initial recognition against a broad language model may be sufficient to understand that the user wants the "flight search" portion of the app. Rather than make the user repeat themselves, the application will simply re-use the existing utterance for the flight-search recognition.
Automatic detection of speech/non-speech boundaries is needed for a number of valuable user experiences such as "push once to talk" or "hands-free dialog". In push-once-to-talk, the user manually interacts with the app to indicate that the app should start listening. For example, they raise the device to their ear, press a button on the keypad, or touch a part of the screen. When they're done talking, the app automatically performs the speech recognition without the user needing to touch the device again. In hands-free dialog, the user can start and stop talking without any manual input to indicate when the application should be listening. The application and/or browser needs to automatically detect when the user has started talking so it can initiate speech recognition. This is particularly useful in-car, for 10-foot usage (e.g. living room), or for people with disabilities.
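For illustration only, a minimal sketch of the "push once to talk" pattern, assuming a hypothetical SpeechRecognition interface with automatic end-of-speech detection; the names and the element identifier are not defined by this document.

    // Illustrative sketch only: the interface and event names are assumed.
    const recognition = new SpeechRecognition();

    // "Push once to talk": the user taps a button once to start listening; the
    // recognizer's own endpoint detection decides when the utterance is over.
    document.getElementById('talkButton').onclick = () => recognition.start();

    recognition.onspeechend = () => {
      recognition.stop();  // end of speech detected; no second tap required
    };
    recognition.onresult = (event) => {
      console.log('heard:', event.results[0][0].transcript);
    };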
The following use cases all depend primarily on speech synthesis. Sometimes the input to the synthesis might be ambiguous and could be the result of a spoken utterance, but the primary purpose of each use case is speech synthesis.
The application may wish to visually highlight the word or phrase that it is synthesizing. Alternatively, the visual application may wish to coordinate the synthesis with animations of an avatar speaking or with appropriately timed slide transitions, and thus needs to know where in the reading of the synthesized text the application currently is. In addition, the application may wish to know where in a piece of synthesized text an interruption occurred, and can use the temporal feedback to determine this.
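For illustration only, a sketch of word-level highlighting driven by boundary events during synthesis; the SpeechSynthesisUtterance and onboundary names are assumptions made for this sketch, not definitions made by this document.

    // Illustrative sketch only: the interface and event names are assumed.
    const utterance = new SpeechSynthesisUtterance('Welcome to the quarterly report.');

    // One boundary event per word lets the page highlight the word being spoken,
    // drive an avatar, or record where an interruption occurred.
    utterance.onboundary = (event) => {
      if (event.name === 'word') {
        console.log('speaking the word starting at character', event.charIndex);
      }
    };
    speechSynthesis.speak(utterance);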
When loaded, the web page may wish to speak a simple phrase of synthesized text such as "hello world".
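For illustration only, a minimal sketch of this use case, again assuming hypothetical speechSynthesis and SpeechSynthesisUtterance names.

    // Illustrative sketch only: the interface names are assumed.
    window.addEventListener('load', () => {
      speechSynthesis.speak(new SpeechSynthesisUtterance('hello world'));
    });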
The following use cases all depend on both speech recognition and speech synthesis. These richer multimodal use cases may lean more heavily on one modality, but having both input and output is fundamental to the use case.
The application can act as a translator between two individuals fluent in different languages. The application can listen to one speaker and understand the utterances in one language, can translate the spoken phrases into a different language, and can then speak the translation to the other individual.
The application reads out subjects and contents of email and also listens for commands, for instance, "archive", "reply: ok, let's meet at 2 pm", "forward to bob", "read message". Some commands may relate to VCR like controls of the message being read back, for instance, "pause", "skip forwards", "skip back", or "faster". Some of those controls may include controls related to parts of speech, such as, "repeat last sentence" or "next paragraph".
This use case covers dialogs that collect multiple pieces of information in either one turn or sequential turns, in response to frequently synthesized prompts. Such dialogs might be around ordering a pizza or booking a flight route, complete with the system repeating back the choices the user said. This dialog system may well be represented by a VXML form or application that allows for control of the dialog. The VXML dialog may be fetched using XMLHttpRequest.
The ability to mix and integrate input from multiple modalities such as by saying "I want to go from here to there" while tapping two points on a touch screen map.
A direction service that speaks turn-by-turn directions. It accepts hands-free spoken instructions like "navigate to <address>", "navigate to <business listing>", or "reroute using <road name>". Input from the location of the user may help the service know when to play the next direction. It is possible that the user is not able to see any output, so the service needs to regularly synthesize phrases like "turn left on <road> in <distance>".
The use cases motivate a number of requirements for integrating speech into HTML. Some Incubator Group members initially felt that each of the requirements described below is essential to the language; however, the group must still evaluate, reword, and prioritize these requirements. Each requirement should include a short description and should be motivated by one or more use cases from the previous section (not all use cases may be listed). For convenience, the requirements are organized around different high-level themes.
The following requirements are around features that HTML web authors require to build speech applications.
The HTML web author must have control over specifying both the speech recognition technology used and the speech parameters that go to the recognizer. In particular, this means that it must be possible to do recognition on a networked speech recognizer, and it should also mean that any user agent can work with any vendor's speech services, provided the specified open protocols are used. Also, the web application author must be able to specify any recognizer parameters or hints.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U5 Domain Specific Grammars Filling Multiple Input Fields, U7 Rerecognition, U11 Speech Translation.
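For illustration only, a sketch of how an author-specified networked recognizer and recognizer parameters might be expressed in script; the serviceURI, maxAlternatives, and lang attribute names are hypothetical and are not defined by this document.

    // Illustrative sketch only: the interface and attribute names are hypothetical.
    const recognition = new SpeechRecognition();
    recognition.serviceURI = 'https://speech.example.com/recognize';  // author-chosen service
    recognition.maxAlternatives = 5;  // an author-specified recognizer parameter
    recognition.lang = 'en-US';       // an author-specified hint
    recognition.start();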
An application may need to switch between grammar-based recognition and free-form recognition. For example, for simple dates, yes/no answers, quantities, etc., grammar-based recognition might work fine, but for filling in a comments section, a free-form recognizer might be needed.
Relevant Use Cases Include: U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog
An application wants the results of matching a particular grammar or speech turn to fill a particular input field.
Relevant Use Cases Include: U3 Domain Specific Grammars Contingent on Earlier Inputs, U5 Domain Specific Grammars Filling Multiple Input Fields, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
When speech recognition occurs, the web application must be notified.
Relevant Use Cases Include: U8 Voice Activity Detection, U9 Temporal Structure of Synthesis to Provide Visual Feedback, U13 Dialog Systems, U14 Multimodal Interaction.
When a recognition is attempted and an error occurs, the utterance doesn't produce a recognition match, or the system doesn't detect speech for a sufficiently long time (i.e., no input), the web application must be notified.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U5 Domain Specific Grammars Filling Multiple Input Fields, U7 Rerecognition, U8 Voice Activity Detection, U13 Dialog Systems, U14 Multimodal Interaction.
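For illustration only, a sketch of error, no-match, and no-input notification, assuming hypothetical onerror and onnomatch event names not defined by this document.

    // Illustrative sketch only: the interface and event names are assumed.
    const recognition = new SpeechRecognition();
    recognition.onerror = (event) => {
      // e.g. audio capture failure, network failure, or no speech detected
      console.warn('recognition error:', event.error);
    };
    recognition.onnomatch = () => {
      // an utterance was heard but did not match any active grammar
      console.log('no match; prompt the user to rephrase');
    };
    recognition.start();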
Because speech recognition is by its nature imperfect and probabilistic, a set of additional metadata is frequently generated, including an n-best list of alternate suggestions, the confidences of recognition results, and the semantic structure represented by recognition results. All of this data must be provided to the web application.
Relevant Use Cases Include: U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U7 Rerecognition, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
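For illustration only, a sketch of consuming an n-best list with confidences; the results, transcript, and confidence names are assumptions made for this sketch.

    // Illustrative sketch only: the interface and attribute names are assumed.
    const recognition = new SpeechRecognition();
    recognition.maxAlternatives = 5;
    recognition.onresult = (event) => {
      const result = event.results[0];
      for (let i = 0; i < result.length; i++) {
        // Each alternative carries its own transcript and confidence score,
        // letting the page build an n-best correction list for the user.
        console.log(result[i].transcript, result[i].confidence);
      }
    };
    recognition.start();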
The HTML author must be able to specify grammars of their choosing and must not be restricted to only those grammars natively installed in the user agent.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U7 Rerecognition, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
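For illustration only, a sketch of attaching author-supplied grammars to a recognition request; the SpeechGrammarList interface and its addFromURI/addFromString methods are assumptions, and the inline SRGS ABNF fragment is given purely as an example.

    // Illustrative sketch only: the interface and method names are assumed.
    const recognition = new SpeechRecognition();
    const grammars = new SpeechGrammarList();
    // An author-supplied SRGS grammar hosted with the application ...
    grammars.addFromURI('https://app.example.com/grammars/cities.grxml', 1.0);
    // ... or a small inline grammar (SRGS ABNF form), given a lower weight.
    grammars.addFromString('#ABNF 1.0;\nlanguage en-US;\nroot $cmd;\n$cmd = call | search;\n', 0.5);
    recognition.grammars = grammars;
    recognition.start();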
The HTML author must be able to specify the recognition language to be used for any given spoken interaction. This must be the case even if the language is different from that used in the content of the rest of the web page. It also may mean that input elements for multiple different spoken languages are present in the same web page.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U11 Speech Translation.
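For illustration only, a sketch of two inputs on the same page recognized in different languages, assuming a hypothetical lang attribute on the recognition object and assumed element identifiers.

    // Illustrative sketch only: the interface and attribute names are assumed.
    // Two inputs on the same page, each recognized in a different language,
    // independently of the language of the surrounding page content.
    const frenchReco = new SpeechRecognition();
    frenchReco.lang = 'fr-FR';
    frenchReco.onresult = (e) => {
      document.getElementById('frenchField').value = e.results[0][0].transcript;
    };

    const japaneseReco = new SpeechRecognition();
    japaneseReco.lang = 'ja-JP';
    japaneseReco.onresult = (e) => {
      document.getElementById('japaneseField').value = e.results[0][0].transcript;
    };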
It is necessary that the web application author receive temporal and structural feedback about the synthesis of the text.
Relevant Use Cases Include: U9 Temporal Structure of Synthesis to Provide Visual Feedback, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
When rendering synthesized speech, HTML application authors need to be able to take advantage of features such as gender, language, pronunciations, etc.
Relevant Use Cases Include: U9 Temporal Structure of Synthesis to Provide Visual Feedback, U10 Hello World Use Case, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems.
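For illustration only, a sketch of selecting a voice and setting prosody for a synthesized prompt; the speechSynthesis, getVoices, and SpeechSynthesisUtterance names are assumptions, not definitions made by this document.

    // Illustrative sketch only: the interface and attribute names are assumed.
    const utterance = new SpeechSynthesisUtterance('Bonjour tout le monde');
    utterance.lang = 'fr-FR';  // language of the prompt, independent of the page
    // Pick an installed voice by language (a name or gender filter would work too).
    const voice = speechSynthesis.getVoices().find((v) => v.lang === 'fr-FR');
    if (voice) utterance.voice = voice;
    utterance.rate = 1.1;      // example prosody parameters
    utterance.pitch = 0.9;
    speechSynthesis.speak(utterance);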
The author may have multiple inputs that must be integrated to provide a quality user experience. For instance, the application might combine information from geolocation (to understand "here"), speech recognition, and touch to provide driving directions in response to the spoken phrase "Get me directions to this place" while tapping on a map. Since new modalities are continually becoming available, it would be difficult for the user agent to provide integration on a case-by-case basis, so it must be easy for the web application author to provide the integration.
Relevant Use Cases Include: U9 Temporal Structure of Synthesis to Provide Visual Feedback, U14 Multimodal Interaction, U15 Speech Driving Directions.
A typical approach for open dialog is to provide a statistical language model (or SLM) and use that to anticipate likely user dialog.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U6 Speech UI present when no visible UI need be present, U7 Rerecognition, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems.
Multimodal speech recognition apps are typically accompanied by a GUI experience to (i) provide a means to invoke SR; and (ii) indicate progress of recognition through various states (listening to the user speak; waiting for the recognition result; displaying errors; displaying alternates; etc). Polished applications generally have their own GUI design for the speech experience. This will usually include a clickable graphic to invoke speech recognition, and graphics to indicate the progress of the recognition through various states.
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U6 Speech UI present when no visible UI need be present, U7 Rerecognition, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
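For illustration only, a sketch of driving a custom listening indicator from recognition state events; the event names and the element identifier are assumptions made for this sketch.

    // Illustrative sketch only: the interface and event names are assumed.
    const recognition = new SpeechRecognition();
    const micIcon = document.getElementById('micIcon');  // the app's own graphic
    recognition.onaudiostart  = () => { micIcon.className = 'listening'; };
    recognition.onspeechstart = () => { micIcon.className = 'hearing-speech'; };
    recognition.onspeechend   = () => { micIcon.className = 'processing'; };
    recognition.onresult      = () => { micIcon.className = 'idle'; };
    recognition.onerror       = () => { micIcon.className = 'error'; };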
The ability to stop output (text-to-speech or media) in response to events (the user starting to speak, a recognition occurring, other events, selections, or browser interactions, etc.) so that the web application user experience is acceptable and the web application doesn't appear confused or deaf to user input. Barge-in aids the usability of an application by allowing the user to provide spoken input even while the application is playing media/TTS. However, applications that both speak (or play media) and listen at the same time can potentially interfere with their own speech recognition. In telephony, this is less of a problem due to the design of the handset and built-in echo-cancelling technology. However, with the broad variety of HTML-capable devices, situations that involve an open mic and open speaker will potentially be more common. To help developers cope with this, it may be useful either to specify a minimum barge-in capability that all browsers should meet, or to make it easier for developers to discover when barge-in may be an issue and allow appropriate parameter settings to help mitigate the situation.
Relevant Use Cases Include: U8 Voice Activity Detection, U9 Temporal Structure of Synthesis to Provide Visual Feedback, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction.
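For illustration only, a sketch of a simple barge-in behavior in which prompt playback is stopped as soon as the user starts speaking; the interface names and the element identifier are assumptions made for this sketch.

    // Illustrative sketch only: the interface and event names are assumed.
    const recognition = new SpeechRecognition();
    recognition.onspeechstart = () => {
      // Stop any prompt that is still playing so the application neither talks
      // over the user nor hears (and mis-recognizes) its own output.
      speechSynthesis.cancel();
      const player = document.getElementById('promptAudio');  // the app's prompt audio
      if (player) player.pause();
    };
    recognition.start();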
The following requirements do not provide any additional functionality to an HTML web author, but instead make the task of authoring a speech-enabled HTML page much easier or turn the authored application into a much higher quality application.
Running a speech service can be difficult, and a default speech interface/service is needed in the user agent so that a web application author can use speech resources without needing to run their own speech service.
Relevant Use Cases Include: U2 Speech Command Interface, U3 Domain Specific Grammars Contingent on Earlier Inputs, U5 Domain Specific Grammars Filling Multiple Input Fields.
Running a speech service can provide fine grained customization of the application for the web application author.
Relevant Use Cases Include: U1 Voice Web Search, U2 Speech Command Interface, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U6 Speech UI present when no visible UI need be present, U7 Rerecognition, U8 Voice Activity Detection, U9 Temporal Structure of Synthesis to Provide Visual Feedback, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
The time between the user completing their utterance and an application providing a response needs to fall below an acceptable threshold to be usable. For example, "find a flight from New York to San Francisco on Monday morning returning Friday afternoon" takes about 6 seconds to say, but the user still expects a response within a couple of seconds (generally somewhere between 500 and 3000 milliseconds, depending on the specific application and audience). In the case of applications/browsers that invoke speech recognition over a network, the platform needs to support (i) using a codec that can be transmitted in real-time on the modest bandwidth of many cell networks and (ii) transmitting the user's utterance in real-time (e.g. in 100ms packets) rather than collecting the full utterance before transmitting any of it. For applications where the utterances are non-trivial and the grammars can be recognized in real-time or better, real-time streaming can all but eliminate user-perceived latency.
Relevant Use Cases Include: U1 Voice Web Search, U2 Speech Command Interface, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U6 Speech UI present when no visible UI need be present, U7 Rerecognition, U8 Voice Activity Detection, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
For longer stretches of spoken output, it may be necessary to stream the synthesis without knowing the full rendering of the TTS/SSML or the Content-Length of the rendered audio format. To enable high quality applications, user agents should support streaming of synthesis results without needing a Content-Length header that gives the correct full length of the synthesized audio. For example, consider a TTS processor that can process one sentence at a time and is asked to read an email consisting of three paragraphs.
Relevant Use Cases Include: U9 Temporal Structure of Synthesis to Provide Visual Feedback, U10 Hello World Use Case, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
End-user extensions should be accessible either from the desktop or from the cloud.
Relevant Use Cases Include:
It should be possible to specify a target TTS engine not only via the "URI" attribute, but via a more generic "source" attribute, which can point to a local TTS engine as well. To achieve this, it would be useful to think about the extensibility and flexibility of the framework, so that it is easy for third parties to provide high quality TTS engines.
Relevant Use Cases Include: U9 Temporal Structure of Synthesis to Provide Visual Feedback, U10 Hello World Use Case, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
Any public interfaces for creating extensions should be "speakable". A user should never need to touch the keyboard in order to expand a grammar, reference data, or add functionality.
Relevant Use Cases Include:
A developer creating a (multimodal) interface combining speech input with graphical output needs the ability to provide a consistent user experience not just for graphical elements but also for voice. In addition, high quality speech applications often involve a lot of tuning of recognition parameters and grammars to work with different recognition technologies. A web author may wish for her application to only need to tune the speech recognition with one technology stack, and not have to tune and special-case different grammars and parameters for different user agents. There is already enough browser detection in the web developer world to deal with accidental incompatibility and legacy implementations, without speech requiring it by design in order to achieve quality recognition. This is one reason to allow author-specified networked speech services.
Relevant Use Cases Include: U1 Voice Web Search, U2 Speech Command Interface, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U6 Speech UI present when no visible UI need be present, U7 Rerecognition, U8 Voice Activity Detection, U9 Temporal Structure of Synthesis to Provide Visual Feedback, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
If the user can't speak, can't speak the language well enough to be recognized, finds that the speech recognizer just doesn't work well for them, or is in an environment where speaking would be inappropriate, they should be able to interact with the web application in some other way.
Relevant Use Cases Include: U14 Multimodal Interaction.
There should be a way to speech-enable every aspect of a web application that you would do with a mouse, a touchscreen, or by typing.
Relevant Use Cases Include: U1 Voice Web Search, U2 Speech Command Interface, U4 Continuous Recognition of Open Dialog, U8 Voice Activity Detection, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
If recognizers support new capabilities like language detection or gender detection, it should be easy to add the results of those new capabilities to the speech recognition result, without requiring a new version of the standard.
Relevant Use Cases Include: U13 Dialog Systems, U14 Multimodal Interaction.
Multimodal speech recognition apps are typically accompanied by a GUI experience to (i) provide a means to invoke SR; and (ii) indicate progress of recognition through various states (listening to the user speak; waiting for the recognition result; displaying errors; displaying alternates; etc). Many applications, at least in their initial development, and in some cases the finished product, will not implement their own GUI for controlling speech recognition. These applications will rely on the browser to implement a default control to begin speech recognition, such as a GUI button on the screen or a physical button on the device, keyboard or microphone. They will also rely on a default GUI to indicate the state of recognition (listening, waiting, error, etc).
Relevant Use Cases Include: U1 Voice Web Search, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U6 Speech UI present when no visible UI need be present, U13 Dialog Systems, U14 Multimodal Interaction.
No developer likes to be locked into a particular vendor's implementation. In some cases this will be unavoidable due to differentiation in capabilities between vendors. But general concepts like grammars, TTS and media composition, and recognition results should use standard formats (e.g. SRGS, SSML, SMIL, EMMA).
Relevant Use Cases Include: U1 Voice Web Search, U2 Speech Command Interface, U3 Domain Specific Grammars Contingent on Earlier Inputs, U4 Continuous Recognition of Open Dialog, U5 Domain Specific Grammars Filling Multiple Input Fields, U7 Rerecognition, U9 Temporal Structure of Synthesis to Provide Visual Feedback, U11 Speech Translation, U12 Speech Enabled Email Client, U13 Dialog Systems, U14 Multimodal Interaction, U15 Speech Driving Directions.
These requirements have to do with security, privacy, and user expectations. They often don't have specific use cases, and mitigations should perhaps be explored through appropriate user agent permissions that allow some of these actions on certain trusted sites while forbidding them on others.
Some users may be concerned that their audio could be recorded and then controlled by the web application author, so user agents must prevent this.
Relevant Use Cases Include:
Some users may be concerned that their audio could be recognized without their being aware of it, so user agents must ensure that recognition only occurs in response to explicit end user actions.
Relevant Use Cases Include:
For reasons of privacy, the user should not be forced to store anything about their speech recognition environment on the cloud.
Relevant Use Cases Include:
Selection of the speech engine should be a user-setting in the browser, not a Web developer setting. The security bar is much higher for an audio recording solution that can be pointed at an arbitrary destination.
Relevant Use Cases Include:
Many users are sensitive about who or what is listening to them, and will not tolerate an application that listens to the user without the user's knowledge. A browser needs to provide clear indication to the user either whenever it will listen to the user or whenever it is using a microphone to listen to the user.
Relevant Use Cases Include:
Some users will want to explicitly grant permission for the user agent, or an application, to listen to them. Whether this is a setting that is global, applies to a subset of applications/domains, etc, depends somewhat on the security & privacy expectations of the user agent's customers.
Relevant Use Cases Include:
The user also needs to be able to trust and verify that their utterance is processed by the application that's on the screen (or its backend servers), or at least by a service the user trusts.
Relevant Use Cases Include:
This section covers the group's consensus on a clearer expansion of the requirements covered in section 3. A fine-grained pass and a prioritization are still needed. Including a requirement in this space does not necessarily mean everyone in the group agrees that it is an important requirement that MUST be addressed.
This is the evolution of requirement 29 and requirement 33.
This is one part of the evolution of requirement 27.
This is one part of the evolution of requirement 27.
This is one part of the evolution of requirement 27.
This is one part of the evolution of requirement 27.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is part of the expansion of requirements 1, 15, 16, 22, and 31. It is expected that user agents should not refuse in the common case.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is part of the expansion of requirements 1, 15, 16, 22, and 31.
This is a part of the expansion of requirement 3.
This is a part of the expansion of requirement 3.
This is a part of the expansion of requirement 3.
This is a part of the expansion of requirement 33.
This is a part of the expansion of requirement 33. Here the word abort should mean "as soon as you can, stop capturing, stop processing for recognition, and stop processing any recognition results".
This is a part of the expansion of requirement 33.
This is a part of the expansion of requirement 33 and requirement 29.
This is a part of the expansion of requirement 33 and requirement 29.
This is a part of the expansion of requirement 4.
This is a part of the expansion of requirement 4.
This is a part of the expansion of requirement 4.
This is a part of the expansion of requirement 4. These results may be partial results and may occur several times.
This is a part of the expansion of requirement 17.
This is a part of the expansion of requirement 17.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 18.
This is a part of the expansion of requirement 7.
The intent is that this requirement covers errors like bad format of grammars but also covers no input and no matches. This is a part of the expansion of requirement 5.
This is a part of the expansion of requirement 26.
This is a part of the expansion of requirement 28.
This is a part of the expansion of requirement 8.
This is a part of the expansion of requirement 8.
This is a part of the expansion of requirement 14.
This is a part of the expansion of requirement 25.
This is a part of the expansion of requirement 24.
This is a part of the expansion of requirement 24.
I.e., free-form recognition. This is a part of the expansion of requirement 2.
This is a part of the expansion of requirement 2.
This is a part of the expansion of requirement 20.
This is a part of the expansion of requirement 23.
This is a part of the expansion of requirement 12.
This is a part of the expansion of requirement 32.
This is a part of the expansion of requirement 11.
This is a part of the expansion of requirement 9.
This is a part of the expansion of requirement 9.
This is a part of the expansion of requirement 9.
Other security and privacy requirements include the following FPR: 1, 10, 16, 17, 18, and 20. This is a part of the expansion of requirement 13.