
WO2007017361A1 - Method and speech dialogue system for determining at least one transaction for operating a background application - Google Patents

Method and speech dialogue system for determining at least one transaction for operating a background application Download PDF

Info

Publication number
WO2007017361A1
WO2007017361A1 (PCT/EP2006/064501)
Authority
WO
WIPO (PCT)
Prior art keywords
transaction
user
context information
transactions
parameter
Prior art date
Application number
PCT/EP2006/064501
Other languages
German (de)
English (en)
Inventor
Dongyi Song
Hans-Ulrich Block
Rudolf Caspari
Jürgen Totzke
Original Assignee
Siemens Aktiengesellschaft
Priority date
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft filed Critical Siemens Aktiengesellschaft
Publication of WO2007017361A1

Links

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/1822: Parsing for meaning understanding
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/1815: Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning

Definitions

  • The invention relates to a method and an associated speech dialogue system for determining at least one transaction for operating a background application by evaluating action information contained in a user utterance. In such a speech dialogue system, each background application is associated with a finite set of selectable transactions, and at least one user utterance is recorded via an input unit provided in the speech dialogue system and stored in the speech dialogue system.
  • Speech dialogue systems for database access, which permit information access and control of communication applications via voice communication, are known as interfaces to many computer-aided applications.
  • Applications or background applications, e.g. consumer-electronics devices, telephone information systems (train, flight, cinema, etc.) or computer-aided transaction systems (home banking, electronic ordering of goods, etc.), are increasingly being accessed via such speech dialogue systems acting as user interfaces.
  • Such speech dialogue systems may be implemented in hardware, software or a combination thereof.
  • Dialogue progression to achieve application-specific dialogue goals is controlled via such a speech dialogue system, which manages the interactions between a dialogue management unit and the individual user.
  • the dialogue management unit has an input unit and output unit.
  • A user-generated utterance, for example in the form of a voice signal, is detected by the input unit and stored in the dialogue management unit.
  • The input unit may, for example, be followed by a speech recognition unit, via which the action information contained in a user utterance is determined.
  • the output unit can be designed as a speech synthesis unit.
  • action information is obtained from the speech signal, for example in the form of individual words or word strings, which are evaluated by comparison with key words or grammars loaded in a parser.
  • A transaction associated with one or more key terms is then started in order to serve the background application.
  • A dialogue history with the respective user is created via the dialogue management unit.
  • In order for a user to be able to place his requests in spoken natural language, the use of speech recognition modules or units is required.
  • Such speech recognition modules are known, for example, from DE 197 19 381 C1 and DE 199 56 747 C1.
  • A background application is considered to be a finite set of transactions (T1, T2, ..., Tx), where each transaction is associated with a finite set (which may also be empty) of transaction parameters (P1, P2, ..., Px).
  • the transaction parameters are known to the speech dialogue system.
  • a grammar is provided which serves to capture the transaction parameter in the dialog.
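  • The transaction model just described (a finite set of transactions, each with a possibly empty set of parameters and a grammar for capturing them) can be sketched as follows. This is a minimal illustrative Python sketch; the class, field and transaction names are assumptions chosen to mirror the wording above, not taken from the description.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class TransactionParameter:
    """One transaction parameter (P1 ... Px) to be captured in the dialogue."""
    name: str
    grammar: set[str]           # key terms that may carry a value for this parameter
    value: str | None = None    # filled once the user has supplied it

@dataclass
class Transaction:
    """One transaction (T1 ... Tx) of a background application."""
    name: str
    grammar: set[str]                      # key terms that trigger the transaction
    parameters: list[TransactionParameter] = field(default_factory=list)

# A background application is then simply a finite set of such transactions.
unified_messaging = [
    Transaction("query_new_messages", {"new", "messages", "listen"},
                [TransactionParameter("name", {"from"})]),
    Transaction("send_message", {"send", "message"},
                [TransactionParameter("receiver", {"to"}),
                 TransactionParameter("text", {"message"})]),
]
```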
  • DE 101 10 977 C1 discloses a method and an arrangement for providing help information for a user of a speech dialogue system for operating a background application, in which predetermined help information is determined and output to the user depending on the current dialog status. The user is thus provided with context-sensitive help information by the speech dialogue system, depending on his respective dialog status.
  • Unified messaging systems are also known, by means of which various kinds of messages, for example e-mails, SMS and the classic telephone, fax and answering-machine functions, are stored on a communication platform, managed and linked via different types of access, so that via such a "unified messaging system" a fax, an e-mail or an SMS message can, for example, be sent to a mobile phone. Answering-machine messages can be listened to as usual, or texts and e-mails can be read out using the text-to-speech (TTS) procedure.
  • The object of the present invention is thus to provide a method for determining at least one transaction for operating a background application and an associated speech dialogue system, in which a clear identification of the background application desired by the user becomes possible quickly and in a user-friendly manner, in particular already on the basis of reduced action information.
  • the object is achieved by a method according to claim 1 and a speech dialogue system according to claim 16.
  • The essential idea of the method according to the invention for determining at least one transaction for operating a background application by evaluating at least one item of action information contained in a user utterance in a speech dialogue system, in which each background application is assigned a finite set of selectable transactions and in which at least one user utterance is recorded via an input unit provided in the speech dialogue system and stored in the speech dialogue system, is that at least one matching item of context information is assigned to each of the transactions that stand in a meaning context in the speech dialogue system, and that the context information of at least the last executed transaction is stored in the speech dialogue system.
  • In the determination of the current transaction, those transactions whose associated context information corresponds with the stored context information are advantageously given preference.
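  • A minimal sketch of this preferential selection, assuming the transactions and their context information are available as plain Python values (the function and variable names are hypothetical):

```python
def candidate_transactions(all_transactions, context_of, last_context):
    """Return the transactions to examine first: those whose assigned context
    information KI matches the stored context information of the last executed
    transaction; fall back to the full set if nothing matches."""
    preferred = [t for t in all_transactions if context_of(t) == last_context]
    return preferred or list(all_transactions)
```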
  • FIG. 1 shows a schematic block diagram of a speech dialogue system by way of example
  • FIG. 2 shows by way of example the individual components of a transaction in a further block diagram
  • FIG. 3 shows by way of example the individual steps of the method according to the invention in a flowchart.
  • Fig. 1 shows, in a schematic block diagram, an exemplary speech dialogue system 1 via which a user B can operate at least one background application HA.
  • The background application HA is operated here, by way of example, on the basis of individual transactions T1 to Tx which are selected and stored in the speech dialogue system 1 and which are assigned to one or more functions of the background application HA.
  • the speech dialogue system 1 illustrated in FIG. 1 comprises an input unit 2, a dialog management unit 3 and an output unit 4, the dialog management unit 3 being in each case in communication with the input unit 2, the output unit 4 and the background application HA.
  • A memory unit 5 for storing the transactions T1 to Tx for operating the background application HA and other parameters is provided.
  • The dialogue management unit 3 also has a speech recognition unit 6, a speech synthesis unit 7, a parser unit 8 and a control unit 9, which are connected to one another, for example, via a data bus system DBS.
  • The background application HA, which in an alternative embodiment can be connected to the dialogue management unit 3 via an interface module (not shown in FIG. 1), is likewise connected to the data bus system DBS.
  • the input unit 2 of the speech dialogue system 1 is connected to the speech recognition unit 6 and the output unit 4 to the speech synthesis unit 7.
  • A user utterance BE is generated which has action information AI. A user utterance BE can have one or more items of action information AI, which can be arranged within the user utterance BE in different orders.
  • A user utterance BE of the user B is preferably detected as a voice signal and stored in the dialogue management unit 3.
  • The user utterance BE is digitized or supplied in digital form to the storage unit 5, and the action information AI contained in the user utterance BE is determined via the speech recognition unit 6 in a manner known per se and stored in the storage unit 5.
  • Via the output unit 4, the user B can be provided with output prompts or information prompts AP, for example in the form of a synthesized speech signal generated by the speech synthesis unit 7.
  • In this way, the user B is informed, for example, about the current status of the background application HA to be operated or about the actions carried out by the system on the basis of the last user utterance BE, or the delivery of at least one further user utterance BE by the user B is initiated.
  • a clarification dialog which alternates between output prompt AP and user statements BE can be carried out.
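  • Such an alternation of output prompts AP and user utterances BE can be sketched as a simple loop. The callables passed in (prompt_for, listen, parse_value) are hypothetical stand-ins for the output unit, input unit and parser unit:

```python
def clarification_dialog(missing_parameters, prompt_for, listen, parse_value):
    """Alternate output prompts (AP) and user utterances (BE) until every
    still-missing transaction parameter has received a value."""
    values = {}
    for parameter in missing_parameters:
        while parameter not in values:
            print(prompt_for(parameter))      # output prompt AP via the output unit
            utterance = listen()              # user utterance BE via the input unit
            value = parse_value(parameter, utterance)
            if value is not None:
                values[parameter] = value
    return values
```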
  • FIG. 2 is a schematic representation of the individual components of such a transaction T1 to Tx, which are provided in the dialogue management unit 3 for operating the different functions of the background application HA.
  • A background application HA is assigned a finite set of such transactions T1-Tx, whose selection is triggered via a respective grammar routine GR assigned to the respective transaction T1-Tx.
  • one or more such grammars G1-Gx are assigned to the transaction T1, which are loaded into the parser unit 8 for determining the transactions T1 to Tx belonging to the stored action information AI.
  • In the parser unit 8, the individual key terms of the grammars G1-Gx characterizing the respective transaction T1 are compared with the action information AI.
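  • The comparison of grammar key terms with the action information AI can be sketched as a simple keyword overlap; this is an assumption about one possible scoring, not the concrete parser described here:

```python
def matching_transactions(action_information, grammars):
    """grammars maps each transaction name to the set of key terms of its
    grammar; return the transaction names ranked by number of matching terms."""
    words = {w.lower() for w in action_information}
    scored = [(len(words & {k.lower() for k in terms}), name)
              for name, terms in grammars.items()]
    return [name for hits, name in sorted(scored, reverse=True) if hits > 0]

# e.g. matching_transactions(["new", "messages", "listening"],
#                            {"query_new_messages": {"new", "messages", "listen"},
#                             "send_message": {"send", "message"}})
# -> ["query_new_messages"]
```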
  • the grammars G1-Gx can be replaced or extended, for example, by contextual grammars G1 * - Gx * which, taking into account the
  • The first transaction T1 shown by way of example in FIG. 2 has a precondition routine VBR, which is executed in the control unit 9 before the further execution of a transaction T1-Tx in order to verify the existence of preconditions such as, for example, the successful execution of another transaction T2.
  • The precondition routine VBR has one or more precondition parameters VBP and associated parameter prompts PP.
  • The precondition routine VBR executed in the control unit 9 checks, for example, the activation or deactivation of a further transaction T2 and, depending on the result, selects the associated parameter prompt PP and outputs it to the user B via the speech synthesis unit 7 and the output unit 4.
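  • A hedged sketch of such a precondition routine, pairing each required precondition with the parameter prompt PP that is output if it is not met (the names are illustrative assumptions):

```python
def check_preconditions(preconditions, executed_transactions, emit_prompt):
    """Precondition routine (VBR) sketch: each entry pairs a transaction that
    must already have been executed with the parameter prompt (PP) issued
    otherwise."""
    for required_transaction, parameter_prompt in preconditions:
        if required_transaction not in executed_transactions:
            emit_prompt(parameter_prompt)   # e.g. via speech synthesis and output unit
            return False
    return True

# e.g. check_preconditions([("user_identification", "Please log in first.")],
#                          executed_transactions=set(), emit_prompt=print)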
  • The mutually matching context information KI is assigned to the transactions T1-Tx in the speech dialogue system 1; the transactions T2, T3 standing in a meaning context are thus combined into a respective transaction group TG1-TGx with matching context information KI.
  • a second and third transaction T2, T3 are combined to form a first transaction group TG1.
  • This context-related linking of different transactions T1-Tx is evaluated for the current determination of a transaction T1 to Tx, thereby significantly reducing the time and computational effort for carrying out the determination.
  • The context information KIletzt of at least the last transaction Tletzt executed in the dialogue management unit 3 is stored and taken into account in the current determination of a transaction T1-Tx, in such a way that preferably those transactions T2, T3 whose associated context information KI matches the stored context information KIletzt are considered in the determination.
  • For the determination of the current transaction T1-Tx, first of all that transaction group TG1-TGx is considered to which the last executed transaction Tletzt belongs.
  • The transactions T1-Tx of the determined transaction group TG1-TGx are thus particularly advantageously prioritized in the selection process, so that those transactions T1-Tx having the largest "hit probability" are evaluated first.
  • For evaluating the context information KI, each transaction T1-Tx is assigned a context information routine KIR, which checks the conformity of the context information KI assigned to the transaction T1 with the context information KIletzt of the last transaction Tletzt executed in the dialogue management unit 3 and stored in the storage unit 5. If the last stored context information KIletzt and the context information KI assigned to the considered first transaction T1 match, the considered first transaction T1 is preferably taken into account in the determination of the current transaction, i.e. the action information AI obtained from the user utterance BE is evaluated via the grammar routine GR of the considered transaction T1 executed in the parser unit 8.
  • The context information KI may, for example, be of the parameter type "string" and contain a term representing the meaning context between the transactions T1 to Tx.
  • Each transaction T1-Tx is assigned a transaction parameter routine TPR which, in addition to the determined transaction parameters P1-Px, has transaction parameter prompts TPP, a parameter grammar PGR and value determination information WEI.
  • The transaction T1 is specified in terms of content by its transaction parameters P1-Px, whose values are determined via the transaction parameter routine TPR.
  • The values of the transaction parameters P1-Px are determined from the action information AI via the parameter grammar PGR executed in the parser unit 8.
  • The transaction parameters P1-Px still to be determined for the execution of the transaction T1-Tx are queried in the context of a clarification dialogue by issuing the respectively assigned transaction parameter prompt TPP to the user B and evaluating the action information AI contained in his further user utterance BE.
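  • The slot filling performed by the transaction parameter routine TPR can be sketched as follows, with the parameter grammars represented as extraction callables; this is a simplified assumption, not the concrete routine:

```python
def fill_transaction_parameters(action_information, parameter_grammars,
                                parameter_prompts, ask_user):
    """Transaction parameter routine (TPR) sketch: first extract parameter
    values from the action information via the parameter grammar PGR, then
    query the remaining parameters with their transaction parameter prompts
    TPP in a clarification dialogue."""
    values = {}
    for name, extract in parameter_grammars.items():
        value = extract(action_information)            # parameter grammar PGR
        if value is not None:
            values[name] = value
    for name in parameter_grammars:
        while name not in values:
            answer = ask_user(parameter_prompts[name])  # prompt TPP, utterance BE
            value = parameter_grammars[name](answer)
            if value is not None:
                values[name] = value
    return values
```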
  • Via the value determination information WEI assigned in each case to the individual transaction parameters P1-Px, it is determined in which manner the determination of the transaction parameters P1-Px should or can take place. For example, to determine the transaction parameters P1-Px in addition to the
  • In addition, a constraint routine CR is provided per transaction T1-Tx, which includes trigger parameters TR, logical conditions LB and action instruction prompts AA.
  • The modified parameter value of a transaction parameter P1-Px is thus checked for validity via the constraint routine CR with the aid of the predetermined trigger parameters TR and the logical condition LB, and the user B is notified of a deviation via the action instruction prompts AA.
  • For example, it is predetermined as a logical condition LB to form the sum of two transaction parameters P1, P2 and to check whether this sum exceeds, for example, an upper limit Pmax.
  • If the upper limit Pmax is exceeded, the associated action instruction prompt AA of the constraint routine CR is issued.
  • a predetermined system function can also be initiated.
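  • The constraint example given above (sum of two transaction parameters against an upper limit Pmax) could look roughly like this; the parameter names and the limit are assumed for illustration:

```python
def constraint_check(values, p1="P1", p2="P2", p_max=100.0, emit=print):
    """Constraint routine (CR) sketch: evaluate the logical condition LB
    (sum of two transaction parameters against an upper limit) and issue
    the action instruction prompt AA on violation."""
    total = values[p1] + values[p2]
    if total > p_max:
        emit(f"The sum of {p1} and {p2} ({total}) exceeds the upper limit {p_max}.")
        return False
    return True

# e.g. constraint_check({"P1": 70, "P2": 50})  -> prompt issued, returns False
```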
  • Via a post-condition routine NBR, it is checked whether all post-conditions NB necessary for the execution of the respective transaction T1-Tx, for example at least the required transaction parameters P1-Px, are now present, i.e. whether the slots of the frame-modelled transaction T1 are filled, so that the function of the background application HA associated with the selected transaction T1 can now be started.
  • Via the post-condition routine NBR, those post-conditions NB are checked which must be fulfilled at the end of the dialogue in order to ensure a transfer of all information about the transaction T1 that is necessary for operating the background application HA.
  • Via the post-condition routine NBR, post-conditions NB of differing complexity can be individually defined and checked for different transactions T1-Tx.
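  • A minimal sketch of such a post-condition check on the frame slots of a transaction (the dictionary layout is an assumption):

```python
def postconditions_fulfilled(transaction_values, required_parameters):
    """Post-condition routine (NBR) sketch: the function of the background
    application may only be started once every required slot of the
    frame-modelled transaction holds a value."""
    missing = [p for p in required_parameters if transaction_values.get(p) is None]
    return len(missing) == 0, missing

# e.g. postconditions_fulfilled({"receiver": "Mr. Maier", "text": None},
#                               ["receiver", "text"])  -> (False, ["text"])
```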
  • A transaction T1-Tx also has a system action routine SAR, with the aid of which the output of specific action prompts AP or system actions SA is implemented.
  • the system action routine SAR comprises, for example, system trigger parameters TRP and precondition information VBI as well as predetermined system actions SA.
  • If a system trigger parameter TRP changes, the initiation of the predetermined system actions SA is verified via the precondition information VBI and, if appropriate, the assigned system actions SA are started.
  • the user B is thus informed about a change in the dialog status or the status of the background application HA.
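  • A sketch of the trigger mechanism of the system action routine, under the assumption that trigger parameters, precondition information and system actions are passed in as plain values and callables:

```python
def on_trigger_change(changed_parameter, trigger_parameters,
                      precondition_ok, system_actions):
    """System action routine (SAR) sketch: when a system trigger parameter
    (TRP) changes, the precondition information (VBI) is evaluated and, if
    it holds, the assigned system actions (SA) are started."""
    if changed_parameter in trigger_parameters and precondition_ok():
        for action in system_actions:
            action()

# e.g. on_trigger_change("new_message_count", {"new_message_count"},
#                        precondition_ok=lambda: True,
#                        system_actions=[lambda: print("You have new messages.")])
```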
  • FIG. 3 shows by way of example the method according to the invention for determining at least one transaction T1-Tx in the speech dialogue system 1 in a flowchart.
  • Via a selection routine AR executed in the control unit 9, the five method steps S1-S5 shown in FIG. 3 are executed.
  • First, the matching context information KI is assigned to those transactions T1-Tx that stand in a meaning context (first step S1).
  • The context information KI associated therewith is stored as the last assigned context information KIletzt in the speech dialogue system 1, namely in the memory unit 5 of the dialogue management unit 3 (second step S2).
  • The previously determined transactions T2, T3 are preferably used for determining the current transaction T1-Tx from the finite set of transactions T1-Tx.
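  • Putting the steps together, the selection routine AR can be sketched as follows; again an illustrative simplification with assumed data structures, not the concrete implementation:

```python
def select_current_transaction(action_information, transactions, grammars,
                               context_of, last_context):
    """Selection routine (AR) sketch: first examine the transactions whose
    context information KI matches the stored KI of the last executed
    transaction, match their grammars against the action information AI,
    and fall back to the full transaction set if nothing fits."""
    words = {w.lower() for w in action_information}
    preferred = [t for t in transactions if context_of(t) == last_context]
    for candidates in (preferred, list(transactions)):
        for transaction in candidates:
            if words & {k.lower() for k in grammars[transaction]}:
                return transaction
    return None
```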
  • a "unified messaging system" is considered by way of example, which as a rule consists of a telephone connection between the user B and the speech dialogue system 1, ie the input unit 2 is designed as a microphone unit of a telephone and the output unit 4 as a loudspeaker unit thereof
  • The following functions can be named as basic functions of such a unified messaging system:
  • The action information AI obtained from the user utterance BE is compared with the grammars G1-Gx, or their key terms, required for executing the first transaction T1, and the associated first transaction T1 is started if they match.
  • The user B enters his telephone number and the associated password into the speech dialogue system 1 as a user utterance BE.
  • The user utterance BE is detected via the input unit 2 and, by means of the speech recognition unit 6, the telephone number and the password contained in the user utterance BE are determined and stored as action information AI in the memory unit 5.
  • Via the post-condition routine NBR, the presence of the information required for executing the function of the background application HA assigned to the first transaction T1 "user identification" is checked.
  • an action prompt AP indicating the status of the identification process is output to the user B via the output unit 4.
  • a context information KI associated with the first transaction Tl is optionally stored in the memory unit 5.
  • the user B enters the following user statement BE in the speech dialogue system 1: "I want to listen to my new messages.”
  • the user utterance BE is recorded via the input unit 2 and the action information AI included in the user utterance BE using the speech recognition unit 6 determined and stored in the memory unit 5.
  • Possible action information AI can here be the terms “new”, “messages” and “listening”.
  • The context information routine KIR first compares the context information KI assigned to each transaction T1-Tx with the last stored context information KIletzt and preferably uses the matching transactions T2, T3 to determine the current transaction T2.
  • For this purpose, the last stored context information KIletzt can be loaded and, after querying the context information KI of the individual transactions T1-Tx, those transactions T2, T3 with matching context information KI can be provided for the determination of the current transaction T2, T3.
  • The respective grammar routines GR of the transactions considered to stand in the meaning context, i.e. of the transactions T2, T3 having the matching context information KI, are started; the agreement of the action information AI obtained from the user utterance BE with the grammars G1-Gx required for executing the second or third transaction T2, T3, or their key terms, is checked and, if they match, the determined transaction T2 is started.
  • the second transaction T2 is "query new
  • Unified Messaging System message query "assigned, which in turn executes the second transaction T2 in the memory storage unit 5 of the dialogue management unit 3 is stored as last assigned context information Klo t z t .
  • the function ("polling new messages") of the background application HA "Unified Messaging System" assigned to the second transaction T2 is subsequently called, and the associated information prompt AP together with the messages to the user B via the output unit 4 issued.
  • the dialogue structure on which the second transaction T2 is based can thus be illustrated as an example as follows:
  • The submitted user utterance BE "I would like to listen to the new messages from Mr. Maier" is in turn detected via the input unit 2, and the speech recognition unit 6 determines the action information AI contained in the user utterance BE, which is stored in the memory unit 5.
  • In the dialogue management unit 3, "unified messaging system: message query" is stored as the last stored context information KIletzt after the execution of the second transaction T2.
  • The respective grammar routine GR of the third transaction T3 is started, the agreement of the action information AI obtained from the user utterance BE with the grammars G1-Gx required for executing the third transaction T3, or their key terms, is checked and, if they match, the third transaction T3 is executed further.
  • transaction parameter Pl "name" the name of the person ("Mr. Maier ") whose new messages are to be output to user B.
  • the context information KI" Unified Messaging System message query "in the memory unit 5 as the last present context information loo t z t stored.
  • For sending a message, the fourth transaction T4 is provided.
  • The grammar routine GR of the fourth transaction T4 is started, the agreement of the action information AI obtained from the user utterance BE with the grammars G1-Gx required for executing the fourth transaction T4, or their key terms, is checked and, on the basis of their agreement, the fourth transaction T4 is executed further.
  • In order to send a message to the given recipient "Mr. Maier", it is first necessary for the user B to enter the message into the speech dialogue system 1.
  • the value of the fourth transaction parameter P4 "receiver", namely the receiver of the recorded message, is determined by means of the transaction parameter routine TPR.
  • transmission parameters regarding the recorded message can also be queried by user B as further transaction parameters P6-Px ("Special Settings"), for example the confidentiality, urgency, encryption, etc., by means of a further clarification dialog.
  • the context information KI "Unified Messaging System: Messaging” is also stored in the memory unit 5 as last-available context information Klo t z t via the context information routine KIR.
  • Subsequently, the associated function ("send message") is called within the unified messaging system, and the associated information prompt AP is generated and output via the output unit 4.
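  • The context information used in this walked-through example can be collected in a small registry. The transaction identifiers are hypothetical, and the context string for the first transaction is an assumption, since the text above does not name it:

```python
CONTEXT_OF = {
    "user_identification": "unified messaging system: login",        # assumed, not named above
    "query_new_messages":  "unified messaging system: message query",
    "query_messages_from": "unified messaging system: message query",
    "send_message":        "unified messaging system: messaging",
}

last_context = CONTEXT_OF["query_new_messages"]   # stored KIletzt after T2
preferred = [t for t, ki in CONTEXT_OF.items() if ki == last_context]
# -> ["query_new_messages", "query_messages_from"]: these are examined first,
#    so "I would like to listen to the new messages from Mr. Maier" resolves
#    to the third transaction without searching the whole transaction set.
```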
  • BE1: 123456
  • System (AP2): Phone number 123456, please enter your password.
  • BE2: 333666
  • System (AP3): You are logged into the system. What can I do for you?
  • KIR context information routine
  • LB logical condition
  • NBA post-condition actions
  • NBP post-condition parameters
  • NBR post-condition routine

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)

Abstract

By evaluating at least one item of action information contained in a user utterance, in a speech dialogue system for determining at least one current transaction, at least one matching item of context information is assigned to each transaction standing in a meaning context in the speech dialogue system. The context information of at least the last executed transaction is stored in the speech dialogue system and, in the determination of the current transaction, preference is given to those transactions whose assigned context information corresponds to the stored context information.
PCT/EP2006/064501 2005-08-09 2006-07-21 Procede et systeme de dialogue vocal pour determiner au moins une transaction pour servir une application d'arriere-plan WO2007017361A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102005037621.5 2005-08-09
DE102005037621A DE102005037621A1 (de) 2005-08-09 2005-08-09 Verfahren und Sprachdialogsystem zur Ermittlung zumindest einer Transaktion zur Bedienung einer Hintergrundapplikation

Publications (1)

Publication Number Publication Date
WO2007017361A1 (fr) 2007-02-15

Family

ID=37179086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2006/064501 WO2007017361A1 (fr) 2005-08-09 2006-07-21 Procede et systeme de dialogue vocal pour determiner au moins une transaction pour servir une application d'arriere-plan

Country Status (2)

Country Link
DE (1) DE102005037621A1 (fr)
WO (1) WO2007017361A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9661138B2 (en) 2015-01-14 2017-05-23 Unify Gmbh & Co. Kg System and method for automatic intention evaluation and communication routing

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008025532B4 (de) * 2008-05-28 2014-01-09 Audi Ag Kommunikationssystem und Verfahren zum Durchführen einer Kommunikation zwischen einem Nutzer und einer Kommunikationseinrichtung

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1033701A2 (fr) * 1999-03-01 2000-09-06 Matsushita Electric Industrial Co., Ltd. Procédé et dispositif de sélection d'un canal de télévision basé sur la compréhension de la parole
WO2001078065A1 (fr) * 2000-04-06 2001-10-18 One Voice Technologies, Inc. Traitement de langage naturel et de dialogue
EP1335352A1 (fr) * 2002-02-11 2003-08-13 Sony International (Europe) GmbH Serveur de gestion de dialogue et procédé de gestion de dialogue
WO2006037219A1 (fr) * 2004-10-05 2006-04-13 Inago Corporation Systeme et procedes permettant d'ameliorer l'exactitude de la reconnaissance vocale

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9661138B2 (en) 2015-01-14 2017-05-23 Unify Gmbh & Co. Kg System and method for automatic intention evaluation and communication routing
US9883039B2 (en) 2015-01-14 2018-01-30 Unify Gmbh & Co. Kg System and method for automatic intention evaluation and communication routing
US10154141B2 (en) 2015-01-14 2018-12-11 Unify Gmbh & Co. Kg System and method for automatic intention evaluation and communication routing

Also Published As

Publication number Publication date
DE102005037621A1 (de) 2007-02-22

Similar Documents

Publication Publication Date Title
EP1964110B1 (fr) Procédé de commande d'au moins une première et une deuxième application d'arrière-plan par l'intermédiaire d'un système de dialogue vocal universel
DE69839068T2 (de) System und Verfahren zur automatischen Verarbeitung von Anruf und Datenübertragung
DE60222093T2 (de) Verfahren, modul, vorrichtung und server zur spracherkennung
DE60033733T2 (de) Datenbankabfragesystem basierend auf Spracherkennung
DE102009045187B4 (de) System und Verfahren zum Kommunizieren mit Telefonagenten in einem automatisierten Call Center
US7668716B2 (en) Incorporation of external knowledge in multimodal dialog systems
EP3108476B1 (fr) Procédé de détection d'au moins deux informations à détecter, comportant un contenu informationnel à combiner, à l'aide d'un moyen de dialogue vocal, moyen de dialogue vocal et véhicule automobile
EP0852051A1 (fr) Procede de commande automatique d'au moins un appareil par des commandes vocales ou par dialogue vocal en temps reel et dispositif pour la mise en oeuvre de ce procede
EP0925578A1 (fr) Systeme et procede de traitement de la parole
DE19933524A1 (de) Verfahren zur Eingabe von Daten in ein System
EP1956814A1 (fr) Procédé numérique et agencement d'authentification d'un utilisateur d'un réseau de télécommunication ou de données
DE102019217751B4 (de) Verfahren zum Betreiben eines Sprachdialogsystems und Sprachdialogsystem
DE60128372T2 (de) Verfahren und system zur verbesserung der genauigkeit in einem spracherkennungssystem
EP1590797A1 (fr) Systeme de communication, dispositif d'envoi de communications et dispositif de detection de messages textuels entaches d'erreurs
DE10110977C1 (de) Bereitstellen von Hilfe-Informationen in einem Sprachdialogsystem
DE102005060072A1 (de) Verwaltung von mehrsprachigen Nametags für eingebettete Spracherkennung
EP1282897B1 (fr) Procede pour produire une banque de donnees vocales pour un lexique cible pour l'apprentissage d'un systeme de reconnaissance vocale
DE69636731T2 (de) System und Verfahren zur Aufnahme von Namen in einer Spracherkennungsdatenbank
EP3058565B1 (fr) Procédé de commande vocale ainsi que produit-programme d'ordinateur pour exécuter le procédé
WO2007017361A1 (fr) Procede et systeme de dialogue vocal pour determiner au moins une transaction pour servir une application d'arriere-plan
EP1251680A1 (fr) Service d'annuaire à commande vocale pour connection a un Réseau de Données
EP1340169A2 (fr) Procede et dispositif de fourniture automatique de renseignements au moyen d'un moteur de recherche
DE60125597T2 (de) Vorrichtung für die Dienstleistungsvermittlung
EP3576084B1 (fr) Conception du dialogue efficace
WO2018015041A1 (fr) Procédé pour configurer un dispositif d'actionnement à commande vocale, dispositif d'actionnement à commande vocale et véhicule automobile

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 06764241; Country of ref document: EP; Kind code of ref document: A1)