Steele et al., 2006 - Google Patents
Speech detection of stakeholders' non-functional requirements (Steele et al., 2006)
- Document ID
- 1991845209498118067
- Authors
- Steele A
- Arnold J
- Cleland-Huang J
- Publication year
- 2006
- Publication venue
- 2006 First International Workshop on Multimedia Requirements Engineering (MERE'06-RE'06 Workshop)
Snippet
This paper describes an automatic speech recognition technique for capturing the non-functional requirements spoken by stakeholders at open meetings and interviews during the requirements elicitation process. As statements related to system qualities such as security …
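The snippet describes capturing non-functional requirements (NFRs) from speech recognized at stakeholder meetings, and the word-spotting classification code below (G10L2015/088) hints at a keyword-based flavour of detection. As a rough illustration of that general idea only, and not the authors' classifier, the sketch below runs a naive keyword-spotting pass over transcribed utterances; the quality-attribute categories and keyword lists are assumptions made for this example.

```python
# Illustrative sketch only: a naive keyword-spotting pass over ASR transcript
# utterances to flag candidate non-functional requirement (NFR) statements.
# The quality-attribute keyword lists are assumptions for the example,
# not the categories or method used by Steele et al.

NFR_KEYWORDS = {
    "security": {"secure", "security", "encrypt", "authentication", "password"},
    "performance": {"fast", "performance", "latency", "response time", "throughput"},
    "usability": {"usable", "usability", "easy to use", "intuitive", "accessible"},
}


def flag_nfr_utterances(utterances):
    """Return (utterance, matched_categories) pairs for utterances that
    mention any quality-attribute keyword (simple substring matching)."""
    flagged = []
    for text in utterances:
        lowered = text.lower()
        categories = [
            category
            for category, keywords in NFR_KEYWORDS.items()
            if any(keyword in lowered for keyword in keywords)
        ]
        if categories:
            flagged.append((text, categories))
    return flagged


if __name__ == "__main__":
    # Example lines as they might come out of a speech recognizer.
    transcript = [
        "the response time must stay under two seconds",
        "we will meet again next tuesday",
        "all patient records must be encrypted",
    ]
    for utterance, categories in flag_nfr_utterances(transcript):
        print(f"{categories}: {utterance}")
```

Substring matching like this is deliberately crude (it will, for instance, match "fast" inside "breakfast"); it is only meant to show where speech-to-text output would feed into NFR detection.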
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/183—Speech classification or search using natural language modelling using context dependencies, e.g. language models
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
- G10L15/265—Speech recognisers specially adapted for particular applications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06Q—DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation, e.g. computer aided management of electronic mail or groupware; Time management, e.g. calendars, reminders, meetings or time accounting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
Similar Documents
| Publication | Title |
|---|---|
| US11037553B2 (en) | Learning-type interactive device |
| US10419613B2 (en) | Communication session assessment |
| US8914294B2 (en) | System and method of providing an automated data-collection in spoken dialog systems |
| Russo et al. | Dialogue systems and conversational agents for patients with dementia: The human–robot interaction |
| US9298811B2 (en) | Automated confirmation and disambiguation modules in voice applications |
| AU2019219717B2 (en) | System and method for analyzing partial utterances |
| EP1901283A2 (en) | Automatic generation of statistical laguage models for interactive voice response applacation |
| US7412383B1 (en) | Reducing time for annotating speech data to develop a dialog application |
| CN104903954A (en) | Speaker verification and identification using artificial neural network-based sub-phonetic unit discrimination |
| Shriberg | Learning when to listen: Detecting system-addressed speech in human-human-computer dialog |
| Burkhardt et al. | Detecting anger in automated voice portal dialogs |
| Levitan et al. | Combining Acoustic-Prosodic, Lexical, and Phonotactic Features for Automatic Deception Detection |
| Tsai et al. | A study of multimodal addressee detection in human-human-computer interaction |
| CN113593523B (en) | Speech detection method and device based on artificial intelligence and electronic equipment |
| US12243517B1 (en) | Utterance endpointing in task-oriented conversational systems |
| KR20170090127A (en) | Apparatus for comprehending speech |
| Schmitt et al. | Towards adaptive spoken dialog systems |
| Wagner et al. | Applying cooperative machine learning to speed up the annotation of social signals in large multi-modal corpora |
| Steele et al. | Speech detection of stakeholders' non-functional requirements |
| Braunger et al. | A comparative analysis of crowdsourced natural language corpora for spoken dialog systems |
| Lai | Application of the artificial intelligence algorithm in the automatic segmentation of Mandarin dialect accent |
| Desai et al. | Virtual assistant for enhancing english speaking skills |
| CN119168059B (en) | Simulation court implementation method, device, equipment and medium based on intelligent agent |
| Scheffler et al. | Speecheval–evaluating spoken dialog systems by user simulation |
| Khan et al. | Robust Feature Extraction Techniques in Speech Recognition: A Comparative Analysis |