WO2008018136A1 - Speaker recognition device, speaker recognition method, etc. - Google Patents
Speaker recognition device, speaker recognition method, etc.
- Publication number
- WO2008018136A1 (PCT/JP2006/315839; JP2006315839W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature data
- speaker
- voice
- collating
- input
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/24—Speech recognition using non-acoustical features
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L17/04—Training, enrolment or model building
Description
- Speaker recognition device, speaker recognition method, etc.
- The present application relates to the technical field of speaker recognition apparatuses, speaker recognition methods, and the like.
- Patent Document 1 discloses a speaker recognition device that can update registered speech at an appropriate timing and can ensure safety at the time of the update.
- The speaker recognition device in Patent Document 1 uses a speech verification unit to compare the speech information obtained by an input unit 1 with the speaker identification information stored in a speech data storage unit 2. If the comparison determines that the speaker is the registered person, an update necessity determination unit 7 determines whether to update the speaker identification information; when an update is needed, the speaker identification information is updated using the voice information from the input unit 1 and stored again in the speech data storage unit 2.
- Patent Document 1: JP 2001-265385 A
- The present application aims to provide a speaker recognition device, a speaker recognition method, and the like that can further improve the performance of speaker recognition by eliminating such inconveniences.
- The invention according to claim 1 comprises: voice input means for inputting a voice uttered by a speaker; voice feature data extracting means for extracting voice feature data indicating features of the input voice; voice feature data storage means for storing reference voice feature data serving as a reference for collating the voice feature data; voice feature data collating means for comparing and collating the extracted voice feature data with the reference voice feature data stored in the voice feature data storage means; speaker feature input means for inputting, from the speaker, speaker feature data indicating speaker features other than the voice; speaker feature data storage means for storing reference speaker feature data serving as a collation reference for the speaker feature data; speaker feature data collating means for comparing and collating the input speaker feature data with the reference speaker feature data when the collation result by the voice feature data collating means is not correct; and update means for updating, when the collation result by the speaker feature data collating means is correct, the reference voice feature data stored in the voice feature data storage means corresponding to the voice feature data, using the voice feature data extracted by the voice feature data extracting means.
- The invention according to claim 4 comprises: a voice input step of inputting a voice uttered by a speaker; a voice feature data extraction step of extracting voice feature data indicating features of the input voice; a voice feature data collation step of comparing and collating the extracted voice feature data with stored reference voice feature data; a speaker feature input step of inputting, from the speaker, speaker feature data indicating speaker features other than the voice; a speaker feature data storage step of storing reference speaker feature data serving as a collation reference for the speaker feature data; a speaker feature data collation step of comparing and collating the input speaker feature data with the reference speaker feature data when the collation result in the voice feature data collation step is not correct; and an update step of updating, when the collation result in the speaker feature data collation step is correct, the stored reference voice feature data corresponding to the voice feature data, using the extracted voice feature data.
- The invention of the speaker recognition program according to claim 5 causes a computer to function as: voice input means for inputting a voice uttered by a speaker; voice feature data extracting means for extracting voice feature data indicating features of the input voice; voice feature data collating means for comparing and collating the extracted voice feature data with reference voice feature data, stored in voice feature data storage means, that serves as a reference for collating the voice feature data; speaker feature input means for inputting, from the speaker, speaker feature data indicating speaker features other than the voice; speaker feature data collating means for comparing and collating, when the collation result by the voice feature data collating means is not correct, the input speaker feature data with reference speaker feature data stored in speaker feature data storage means as a collation reference for the speaker feature data; and update means for updating, when the collation result by the speaker feature data collating means is correct, the stored reference voice feature data corresponding to the voice feature data, using the voice feature data extracted by the voice feature data extracting means.
- The invention of the recording medium according to claim 6 is characterized in that the speaker recognition program according to claim 5 is recorded thereon so as to be readable by a computer.
- FIG. 1 is a diagram showing a schematic configuration example of a speaker recognition device S according to the present embodiment.
- FIG. 2 is a flowchart showing speaker recognition processing in a processing unit P in the speaker recognition device S according to the present embodiment.
- FIG. 3 is a flowchart showing speaker recognition processing in a processing unit P in the speaker recognition device S according to the present embodiment.
- FIG. 4 is a diagram showing a schematic configuration example of the speaker recognition device S when the speaker feature DB 6 is also updated.
- Such a speaker recognition device S is applied to, for example, a car navigation device, a disc …
- FIG. 1 is a diagram illustrating a schematic configuration example of the speaker recognition device S according to the present embodiment.
- The speaker recognition device S includes: a voice input unit 1 as voice input means; a voice feature data extraction unit 2 as voice feature data extraction means; a voice (speaker sound) feature DB (database) 3 as voice feature data storage means; a voice (acoustic) feature data matching unit 4 as voice feature data matching means; a speaker feature input unit 5 as speaker feature input means; a speaker (individual) feature DB 6 as speaker feature data storage means; a speaker (individual) feature data collation unit 7 as speaker feature data collation means; and a voice feature DB update unit 8 as update means.
- The voice feature data extraction unit 2, the voice feature data matching unit 4, the speaker feature data collation unit 7, and the voice feature DB update unit 8 are realized by a processing unit P that includes a CPU having a calculation function, a working RAM, and a ROM storing various data and programs; the CPU executes a predetermined program (the speaker recognition program of the present application) to make these units function.
- The voice feature DB 3 and the speaker feature DB 6 are constructed in a storage unit M such as a hard disk drive.
- The voice input unit 1 is a voice input device, such as a microphone, for inputting the voice uttered by a speaker. The type of the voice input unit 1 is not limited as long as it can input voice.
- The voice feature data extraction unit 2 calculates (extracts) acoustic parameters (an example of voice feature data indicating features of the voice) from the voice input by the voice input unit 1. Any acoustic parameters that can express the acoustic characteristics, such as MFCC (Mel Frequency Cepstrum Coefficients) or LPC cepstrum, may be used.
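- As a rough illustration of this extraction step (not part of the patent), the following sketch computes MFCC frames from an utterance; the use of librosa, the 16 kHz sample rate, and the 13-coefficient setting are illustrative assumptions.

```python
import librosa

def extract_acoustic_parameters(wav_path, n_mfcc=13):
    """Extract MFCC frames (one row per frame) from an utterance."""
    y, sr = librosa.load(wav_path, sr=16000)           # mono, 16 kHz (assumed)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                      # shape: (n_frames, n_mfcc)
```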
- The voice feature DB 3 stores and registers reference acoustic data (an example of reference voice feature data) that indicates the acoustic features of each of a plurality of registered speakers and serves as a reference for collating the acoustic parameters. For example, a GMM (Gaussian Mixture Model) generated from each speaker's acoustic parameters may be used; any type of model will do as long as it can represent the speaker's acoustic features.
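- Enrollment along these lines might look like the minimal sketch below, assuming scikit-learn's GaussianMixture as the GMM implementation and the extract_acoustic_parameters helper above; the 16-component, diagonal-covariance setup is an illustrative choice, not prescribed by the patent.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

voice_feature_db = {}  # speaker name -> fitted GMM ("reference acoustic data")

def enroll_speaker(name, wav_paths, n_components=16):
    """Fit one GMM per registered speaker from enrollment utterances."""
    frames = np.vstack([extract_acoustic_parameters(p) for p in wav_paths])
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    voice_feature_db[name] = gmm.fit(frames)           # fit() returns the fitted model
```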
- The voice feature data matching unit 4 compares and collates the acoustic parameters calculated (extracted) by the voice feature data extraction unit 2 with the reference acoustic data stored in the voice feature DB 3, and outputs the collation result (recognition result or authentication result), for example, to a display unit D. Specifically, the voice feature data matching unit 4 checks which of the reference acoustic data stored in the voice feature DB 3 the extracted acoustic parameters are closest to, and outputs the matching result. More specifically, the extracted acoustic parameters are applied to the reference acoustic data (for example, a GMM) of each registered speaker to obtain a likelihood, and information (for example, a name) on the registered speaker corresponding to the reference acoustic data that outputs the maximum likelihood is output to the display unit D as the matching result. By looking at the matching result displayed on the display unit D, the user who is the speaker can see whether the result is correct.
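- The maximum-likelihood selection described here could then be sketched as follows, continuing the illustrative helpers above (GaussianMixture.score returns the average log-likelihood per frame):

```python
def identify_speaker(wav_path):
    """Collate an utterance against every registered speaker's GMM."""
    frames = extract_acoustic_parameters(wav_path)
    scores = {name: gmm.score(frames) for name, gmm in voice_feature_db.items()}
    best = max(scores, key=scores.get)   # registered speaker with maximum likelihood
    return best, scores
```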
- The speaker feature input unit 5 is a personal feature input device for inputting personal feature information, that is, speaker feature data indicating a unique feature of the speaker (individual) other than the voice. If the personal feature information is fingerprint data, the personal feature input device is, for example, a fingerprint sensor; if it is a password, the device is, for example, a keyboard or a touch panel. Various other known kinds of personal feature information, such as an iris, can also be applied.
- The speaker feature DB 6 stores and registers, for each of a plurality of registered speakers, reference personal feature information that serves as a reference for collating the personal feature information input by the speaker feature input unit 5.
- The speaker feature data collation unit 7 compares and collates the personal feature information input by the speaker feature input unit 5 with the reference personal feature information stored in the speaker feature DB 6, and determines whether the collation result (recognition result or authentication result) is correct (for example, whether an entered password matches one registered in the speaker feature DB 6). If it determines from the collation result that the answer is correct, the speaker feature data collation unit 7 identifies, from the speaker feature DB 6, the registered speaker who is the subject of the correct answer (for example, the registered speaker whose password matched).
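- For the password variant mentioned here, the personal-feature collation might be sketched as below; the speaker_feature_db dictionary and the constant-time hmac.compare_digest comparison are illustrative assumptions rather than details from the patent.

```python
import hmac

speaker_feature_db = {}  # speaker name -> reference personal feature (e.g. a password)

def verify_personal_feature(entered):
    """Return the name of the registered speaker whose reference matches, else None."""
    for name, reference in speaker_feature_db.items():
        if hmac.compare_digest(entered.encode(), reference.encode()):
            return name
    return None
```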
- Using the acoustic parameters extracted by the voice feature data extraction unit 2, the voice feature DB update unit 8 updates the reference acoustic data of the identified registered speaker stored in the voice feature DB 3. For this update, for example, MAP (maximum a posteriori) estimation is used.
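- One common form of MAP estimation for GMMs adapts the component means toward the new data under a relevance factor; the sketch below follows that recipe with the illustrative scikit-learn setup above (the relevance factor of 16 is a conventional choice, not taken from the patent).

```python
import numpy as np

def map_update(name, wav_path, relevance=16.0):
    """MAP-adapt the identified speaker's GMM means using the new utterance."""
    gmm = voice_feature_db[name]
    X = extract_acoustic_parameters(wav_path)
    resp = gmm.predict_proba(X)                        # responsibilities, shape (n, K)
    n_k = resp.sum(axis=0)                             # soft frame count per component
    x_bar = resp.T @ X / np.maximum(n_k, 1e-10)[:, None]   # per-component data mean
    alpha = (n_k / (n_k + relevance))[:, None]         # data-vs-prior mixing weight
    gmm.means_ = alpha * x_bar + (1.0 - alpha) * gmm.means_
```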
- FIG. 2 is a flowchart showing speaker recognition processing in the processing unit P in the speaker recognition device S according to the present embodiment.
- In step S1, when a voice uttered by the speaker is input through the voice input unit 1, acoustic parameters are calculated (extracted) from the input voice by the voice feature data extraction unit 2.
- In step S2, the calculated (extracted) acoustic parameters and the reference acoustic data stored in the voice feature DB 3 are compared and collated by the voice feature data matching unit 4, and the collation result (recognition result) is displayed on the display unit D.
- In this collation, a similarity (a distance or a likelihood) is obtained for each registered speaker. Since the input voice's similarity often comes close to that of the corresponding registered speaker, it is also possible to determine the correct or incorrect answer automatically by applying a threshold to each speaker's similarity (distance or likelihood).
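- Such an automatic threshold judgment might be sketched as follows; the threshold value is an illustrative assumption that would in practice be tuned on held-out data.

```python
def auto_judge(scores, threshold=-45.0):
    """Accept the best-scoring speaker only if the likelihood clears a threshold."""
    best = max(scores, key=scores.get)
    return best, scores[best] >= threshold
```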
- As the collation result, information (for example, a name) on the corresponding registered speaker is displayed. The voice feature data matching unit 4 in FIG. 1 then determines whether the collation result is correct.
- If the collation result is correct, an input indicating the correct answer is made via the voice input unit 1, the speaker feature input unit 5, or the like; the processing unit P, recognizing this input, determines that the collation result is correct (step S3: YES), and the processing ends. If the collation result is incorrect, an input indicating the incorrect answer is made via the voice input unit 1, the speaker feature input unit 5, or the like; the processing unit P, recognizing this input, determines that the collation result is incorrect (step S3: NO), and the processing proceeds to step S4.
- The determination of a correct/incorrect answer may also be a function incorporated in the voice feature data matching unit 4 of FIG. 1. In that case, the input to this function is the similarity of each registered speaker computed by the voice feature data matching unit 4, and the output is a message to be displayed on the display unit: if the result is determined to be correct, the recognition result is shown, and if it is determined to be incorrect, a message such as "please enter personal feature information into the speaker feature input unit 5" prompts the speaker. Incidentally, if threshold judgment is used, the recognition result is output regardless of whether the answer is correct or incorrect.
- Alternatively, the voice feature data matching unit 4 in FIG. 1 may itself determine whether the matching result is correct. For example, the processing unit P may be configured so that, after it outputs the collation result, the result is treated as correct (or incorrect) when no input indicating an incorrect answer (or correct answer) is made within a predetermined time (for example, 10 seconds), and treated as incorrect (or correct) when such an input is made within that period.
- In step S4, the speaker is prompted to input personal feature information, and in response, personal feature information is input from the speaker through the speaker feature input unit 5. The input personal feature information and the reference personal feature information stored in the speaker feature DB 6 are then compared and collated by the speaker feature data collation unit 7, which determines whether the collation result is correct (step S5). If the collation result is incorrect (for example, no reference personal feature information matching the input personal feature information, within a predetermined margin, is stored in the speaker feature DB 6) (step S5: NO), the processing ends.
- If the collation result is correct (step S5: YES), the registered speaker who is the subject of the correct answer is identified, and the voice feature DB 3 is updated (step S6): using the acoustic parameters extracted by the voice feature data extraction unit 2, the reference acoustic data of that registered speaker stored in the voice feature DB 3 corresponding to the acoustic parameters is updated.
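- Putting steps S1 through S6 together, the overall flow might read as in the sketch below, which reuses the illustrative helpers defined earlier (identify_speaker, auto_judge, verify_personal_feature, map_update).

```python
def recognize_and_update(wav_path):
    # S1-S2: extract acoustic parameters and collate against the voice feature DB
    best, scores = identify_speaker(wav_path)
    name, is_correct = auto_judge(scores)          # S3: correct/incorrect judgment
    if is_correct:
        return name                                # recognition succeeded
    # S4: prompt for a personal feature other than the voice (here: a password)
    entered = input("Please enter personal feature information: ")
    name = verify_personal_feature(entered)        # S5: collate against speaker feature DB
    if name is not None:                           # S5: YES, registered speaker identified
        map_update(name, wav_path)                 # S6: update the reference acoustic data
    return name
```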
- As described above, in the present embodiment, the voice uttered by the speaker is input, acoustic parameters indicating the features of the input voice are extracted, and the extracted acoustic parameters are compared and collated with the reference acoustic data stored in the voice feature DB 3. If the collation result is not correct, personal feature information indicating speaker features other than the voice is input from the speaker and compared with the reference personal feature information stored in the speaker feature DB 6. If that collation result is correct, the reference acoustic data stored in the voice feature DB 3 corresponding to the acoustic parameters is updated using the extracted acoustic parameters. As a result, the speaker can be identified from information other than the voice, and the voice feature DB 3 can be updated even for a voice pattern that tends to cause errors in speaker recognition, thereby improving recognition performance.
- Since an utterance that produced an incorrect collation result is still considered to carry sufficient characteristics of the speaker, such incorrect-answer utterances are also actively used for updating. As a result, speaker recognition can be used with confidence even when voice quality has changed due to poor physical condition or aging.
- In the embodiment described above, the voice feature DB update unit 8 updates the reference acoustic data of the identified registered speaker stored in the voice feature DB 3, using the acoustic parameters extracted by the voice feature data extraction unit 2, only when the collation result by the voice feature data matching unit 4 is incorrect and the collation result by the speaker feature data collation unit 7 is correct. Alternatively, the voice feature DB update unit 8 may also perform this update when the collation result by the voice feature data matching unit 4 is correct. This allows more updates than the method shown in FIG. 2 and can be expected to further improve the accuracy of speaker recognition.
- It is further desirable to also update the speaker feature DB 6 in order to improve the accuracy of the authentication result. Note that, depending on the correct/incorrect determination by the speaker feature data collation unit 7, the handling differs from the case of comparing the acoustic parameters against the reference acoustic data; cases where the speaker feature data collation unit 7 infers the correct/incorrect answer by threshold judgment of the similarity of the speaker corresponding to the collation result are also included.
- FIG. 4 is a diagram showing a schematic configuration example of the speaker recognition device S when the update process is performed on the speaker feature DB6.
- the same components as those in FIG. 1 are denoted by the same reference numerals, and redundant description is omitted.
- In FIG. 4, a speaker feature DB update unit 9 is added. When the speaker feature data collation unit 7 determines that the answer is correct (that is, when the speaker's similarity exceeds the threshold), the speaker feature DB update unit 9 updates the reference personal feature information stored in the speaker feature DB 6 corresponding to the personal feature information, using the personal feature information input by the speaker feature input unit 5. Thereby, the accuracy of the authentication result can be improved.
- The present invention is not limited to the above-described embodiment. The embodiment is an exemplification, and any device that has substantially the same configuration as the technical idea described in the claims of the present invention and exhibits the same functions and effects is included in the technical scope of the present invention.
Abstract
A speaker recognition device, a speaker recognition method, a speaker recognition processing program, and the like are provided for further improving the performance of speaker recognition. A voice uttered by a speaker is input. An acoustic parameter indicating a feature of the input voice is extracted. The extracted acoustic parameter is compared and collated with reference acoustic data stored in a voice feature database (3). If the collation result shows a mismatch, the speaker inputs personal feature information representing a feature of the speaker other than the voice. The input personal feature information is compared and collated with reference personal feature information stored in a speaker feature database (6). If the collation result shows a match, the reference acoustic data stored in the voice feature database (3) corresponding to the extracted acoustic parameter is updated using that acoustic parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2006/315839 WO2008018136A1 (fr) | 2006-08-10 | 2006-08-10 | dispositif de reconnaissance d'un individu en fonction de sa voix, procédé de reconnaissance d'un individu en fonction de sa voix, etc. |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2006/315839 WO2008018136A1 (fr) | 2006-08-10 | 2006-08-10 | dispositif de reconnaissance d'un individu en fonction de sa voix, procédé de reconnaissance d'un individu en fonction de sa voix, etc. |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008018136A1 true WO2008018136A1 (fr) | 2008-02-14 |
Family
ID=39032681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/315839 WO2008018136A1 (fr) | 2006-08-10 | 2006-08-10 | dispositif de reconnaissance d'un individu en fonction de sa voix, procédé de reconnaissance d'un individu en fonction de sa voix, etc. |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2008018136A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023079815A1 (fr) * | 2021-11-08 | 2023-05-11 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Procédé de traitement d'informations, dispositif de traitement d'informations, et programme de traitement d'informations |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS626300A (ja) * | 1985-07-02 | 1987-01-13 | 松下電器産業株式会社 | 話者照合装置 |
JP2001503156A (ja) * | 1996-10-15 | 2001-03-06 | スイスコム アーゲー | 話者確認法 |
JP2002221990A (ja) * | 2001-01-25 | 2002-08-09 | Matsushita Electric Ind Co Ltd | 個人認証装置 |
JP3529049B2 (ja) * | 2002-03-06 | 2004-05-24 | ソニー株式会社 | 学習装置及び学習方法並びにロボット装置 |
JP3727927B2 (ja) * | 2003-02-10 | 2005-12-21 | 株式会社東芝 | 話者照合装置 |
- 2006-08-10: PCT/JP2006/315839 filed, published as WO2008018136A1 (fr); status: active (Application Filing)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111566729B (zh) | 用于远场和近场声音辅助应用的利用超短语音分段进行的说话者标识 | |
US11735191B2 (en) | Speaker recognition with assessment of audio frame contribution | |
JP4213716B2 (ja) | 音声認証システム | |
JP4588069B2 (ja) | 操作者認識装置、操作者認識方法、および、操作者認識プログラム | |
EP3740949B1 (fr) | Authentification d'un utilisateur | |
US20180040325A1 (en) | Speaker recognition | |
US9646613B2 (en) | Methods and systems for splitting a digital signal | |
CN104462912B (zh) | 改进的生物密码安全 | |
JP4897040B2 (ja) | 音響モデル登録装置、話者認識装置、音響モデル登録方法及び音響モデル登録処理プログラム | |
US11081115B2 (en) | Speaker recognition | |
JPH1173195A (ja) | 話者の申し出識別を認証する方法 | |
WO2018088534A1 (fr) | Dispositif électronique, procédé de commande pour dispositif électronique et programme de commande pour dispositif électronique | |
CN117378006A (zh) | 混合多语种的文本相关和文本无关说话者确认 | |
CN113241059B (zh) | 语音唤醒方法、装置、设备及存储介质 | |
WO2007111169A1 (fr) | Dispositif d'enregistrement de modèle de locuteur, procédé et programme informatique dans un système d'identification du locuteur | |
JP3849841B2 (ja) | 話者認識装置 | |
KR102098956B1 (ko) | 음성인식장치 및 음성인식방법 | |
WO2008018136A1 (fr) | dispositif de reconnaissance d'un individu en fonction de sa voix, procédé de reconnaissance d'un individu en fonction de sa voix, etc. | |
JP3818063B2 (ja) | 個人認証装置 | |
JP2001265387A (ja) | 話者照合装置及び方法 | |
JP3919314B2 (ja) | 話者認識装置及びその方法 | |
EP4506838A1 (fr) | Procédés et systèmes d'authentification d'utilisateurs | |
US20250046317A1 (en) | Methods and systems for authenticating users | |
JP2000148187A (ja) | 話者認識方法、その方法を用いた装置及びそのプログラム記録媒体 | |
JP3841342B2 (ja) | 音声認識装置および音声認識プログラム |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 06782633; Country of ref document: EP; Kind code of ref document: A1
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) |
| NENP | Non-entry into the national phase | Ref country code: DE
| NENP | Non-entry into the national phase | Ref country code: RU
| NENP | Non-entry into the national phase | Ref country code: JP
| 122 | Ep: pct application non-entry in european phase | Ref document number: 06782633; Country of ref document: EP; Kind code of ref document: A1