
WO2018174500A1 - System and program for implementing augmented reality three-dimensional sound reflecting real sound - Google Patents

System and program for implementing augmented reality three-dimensional sound reflecting real sound

Info

Publication number
WO2018174500A1
WO2018174500A1 (PCT/KR2018/003189)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
user
computing device
augmented reality
real
Prior art date
Application number
PCT/KR2018/003189
Other languages
English (en)
Korean (ko)
Inventor
이승학
Original Assignee
주식회사 라이커스게임
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170115842A external-priority patent/KR101963244B1/ko
Application filed by 주식회사 라이커스게임 filed Critical 주식회사 라이커스게임
Priority to CN201880001772.3A priority Critical patent/CN109076307A/zh
Publication of WO2018174500A1 publication Critical patent/WO2018174500A1/fr
Priority to US16/168,560 priority patent/US20190058961A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control

Definitions

  • The present invention relates to an augmented reality three-dimensional sound implementation system and program that reflects real sound.
  • Augmented reality refers to computer graphics technology that blends the real-world image a user sees with virtual images into a single image. Augmented reality synthesizes an image of a virtual object, or related information, onto a specific object in the real-world image.
  • Three-dimensional sound refers to a technology that provides three-dimensional sound to feel a sense of reality.
  • 3D sound is implemented by providing sound along a path transmitted from a location of sound generation to a user using vector values of a virtual reality image.
  • The problem to be solved by the present invention is to provide a system and a program that can implement three-dimensional augmented reality sound reflecting real sound in real time.
  • An augmented reality sound implementation system for solving the above problem includes: a first computing device of a first user; and a first sound device that is worn by the first user so that the first user can be provided with 3D augmented reality sound, is connected to the first computing device by wire or wirelessly, and includes a sound recording function.
  • The augmented reality sound implementation method includes: the first sound device obtaining real sound information indicating a real sound and transmitting the real sound information to the first computing device; the first computing device acquiring a first virtual sound indicating a sound generated in a virtual reality game played on the first computing device; the first computing device generating a 3D augmented reality sound based on the real sound information and the first virtual sound; and the first computing device providing the three-dimensional augmented reality sound to the first user through the first sound device.
  • Since the three-dimensional augmented reality sound is provided by whichever of the binaural method or the positioning method is appropriate for the distance between users playing the augmented reality game, three-dimensional augmented reality sound that reflects the real sound and the virtual sound in real time can be realized more realistically.
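The distance-based choice between the binaural method and the positioning method described above can be sketched as follows. This is an illustrative Python sketch; the function name and the threshold value are assumptions, not taken from the patent.

```python
# Sketch: pick how to derive the real sound's direction, based on whether
# the two users are closer than a preset distance (threshold is assumed).
PRESET_DISTANCE_M = 10.0

def choose_direction_source(distance_m: float, preset: float = PRESET_DISTANCE_M) -> str:
    """Closer than the preset distance: use the binaural recording itself.
    Otherwise: use the two users' (e.g. GPS) position information."""
    return "binaural" if distance_m < preset else "positioning"
```

A boundary case: a distance exactly equal to the preset is treated as "not closer", matching the patent's "not closer than the preset distance" branch.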
  • FIG. 1 is a schematic conceptual view for explaining a method of implementing augmented reality sound.
  • FIG. 2 is a block diagram illustrating an apparatus for implementing augmented reality sound.
  • FIG. 3 is a flowchart illustrating a first embodiment of a method for implementing augmented reality sound.
  • FIG. 4 is a flowchart illustrating a second embodiment of a method of implementing augmented reality sound.
  • The method may further include the first computing device obtaining direction characteristic information indicating the location where the real sound is generated, and the first computing device further considering the direction characteristic information in generating the 3D augmented reality sound.
  • The first computing device may determine whether the first user and a second user spaced apart from the first user are closer than a preset distance. When the first user and the second user are closer than the preset distance, the direction characteristic information of the real sound may be obtained based on the real sound information, and the real sound information may be of a binaural type measured by a plurality of microphones in the first sound device.
  • When the first user and the second user are not closer than the preset distance, the direction characteristic information of the real sound may be obtained based on the location information of the first user and the second user.
  • The first computing device may determine whether there is a position difference in which the relative positions of the first user and the second user in real space do not correspond to the relative positions of their avatars in the virtual space, and may generate the 3D augmented reality sound based on the position difference.
  • The position difference may arise when the second user uses a skill on the first user and the second user's avatar separates from the second user.
  • The position difference may also arise when the second user uses a skill on the first user and the second user's avatar moves a longer or shorter distance than the second user does.
  • The 3D augmented reality sound may be generated by blending the real sound with a second virtual sound generated to correspond to the position of the avatar.
  • A computer-readable medium may record a program for executing the augmented reality sound implementation method performed by the augmented reality sound implementation system described above.
  • An application for a terminal device, stored in a medium, may be provided to execute, in combination with a computing device that is hardware, the augmented reality sound implementation method performed by the augmented reality sound implementation system described above.
  • An augmented reality 3D sound implementation method reflecting reality sound according to an embodiment of the present invention is realized by the computing device 200.
  • the augmented reality sound implementation method may be implemented as an application, stored in the computing device 200, and performed by the computing device 200.
  • the computing device 200 may be provided as a mobile device such as a smartphone or a tablet, but is not limited thereto.
  • Any computing device 200 that includes a camera, outputs sound, and processes and stores data is sufficient. That is, the computing device 200 may include a camera and may be provided as a wearable device, such as glasses or a band, that outputs sound. Computing devices 200 of types not illustrated here may also be used.
  • FIG. 1 is a schematic conceptual view for explaining a method of implementing augmented reality sound.
  • a plurality of users 10 and 20 carry sound devices 100-1 and 100-2 and computing devices 200-1 and 200-2 to experience augmented reality content.
  • the present invention is not limited thereto, and the method of implementing augmented reality sound may be substantially the same for two or more user environments.
  • the acoustic device 100 may be provided in the form of headphones, a headset, earphones, and the like.
  • The sound device 100 may not only output sound through a built-in speaker but also capture and record surrounding sound through a built-in microphone.
  • The sound device 100 may be provided as a binaural type to increase the sense of reality. By using the binaural effect and recording the left and right sounds separately, sound including direction characteristic information may be obtained.
  • The sound output device and the sound recording device of the sound device 100 may also be provided as separate devices.
  • the sound device 100 may obtain reality sound information generated by the users 10 and 20.
  • The sound device 100 may obtain real sound information generated around the users 10 and 20. That is, the sound source may be any location where a real sound occurs.
  • the sound source is not limited to sound generated by the plurality of users 10 and 20.
  • The real sound information may indicate sound information actually occurring in reality.
  • The first sound device 100-1 of the first user 10 may obtain a real sound generated by the second user 20.
  • the second user 20 may be a user located at a position spaced apart from the first user 10.
  • the acoustic device 100-1 of the first user 10 may also acquire direction characteristic information of the real sound generated by the second user 20.
  • The first computing device 200-1 of the first user 10 may generate the 3D augmented reality sound for the first user 10 by synthesizing, based on the direction characteristic information of the real sound obtained from the first sound device 100-1, the real sound information and the first virtual sound information indicating sound generated in the augmented reality game (e.g., background sound, effect sound, etc.).
  • The plurality of computing devices 200-1 and 200-2, or a server, may acquire the positions of the first user 10 and the second user 20 and generate relative position information by comparing them.
  • any well known positioning system can be used, including, for example, a GPS system.
  • The plurality of computing devices 200-1 and 200-2, or the server, may acquire the three-dimensional positions of the first user 10 and the second user 20 and compare them with each other to generate relative three-dimensional position information.
  • For example, relative location information indicating that the second user 20 is located in the 8 o'clock direction, at a distance of 50 m, and at an altitude 5 m lower, relative to the first user 10, may be generated.
  • The second user 20 may generate a real sound.
  • the direction characteristic information of the real sound acquired by the first user 10 is determined based on the relative position information.
  • The 3D augmented reality sound for the first user 10 can be implemented by synthesizing the real sound information that the first user 10 obtained from the second user 20, the direction characteristic information of the real sound, and the first virtual sound information. According to the determined direction characteristic information of the real sound, elements of the real sound such as amplitude, phase, or frequency may be adjusted.
  • The augmented reality sound implementation method can implement three-dimensional augmented reality sound that reflects the real sound in real time by using the binaural type sound device 100 or the relative position information of the plurality of users 10 and 20.
  • the above-described binaural type sound device 100 and the relative position information of the first user 10 and the second user 20 may be used together.
  • FIG. 2 is a block diagram illustrating an apparatus for implementing augmented reality sound.
  • The sound device 100 may include at least one of a control unit 110, a storage unit 120, an input unit 130, an output unit 140, a transceiver 150, a GPS unit 160, and the like.
  • Each component included in the acoustic device 100 may be connected by a bus to communicate with each other.
  • the controller 110 may execute a program command stored in the storage 120.
  • The controller 110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to embodiments of the present invention are performed.
  • the storage unit 120 may be configured of at least one of a volatile storage medium and a nonvolatile storage medium.
  • the storage unit 120 may be configured as at least one of a read only memory (ROM) and a random access memory (RAM).
  • the input unit 130 may be a recording device capable of recognizing and recording voice.
  • the input unit 130 may be a microphone.
  • the output unit 140 may be an output device capable of outputting voice.
  • the output device may include a speaker or the like.
  • the transceiver 150 may be connected to the computing device 200 or a server to perform communication.
  • the GPS unit 160 may track the location of the sound device 100.
  • The computing device 200 may include at least one of a control unit 210, a storage unit 220, an input unit 230, an output unit 240, a transceiver unit 250, a GPS unit 260, a camera unit 270, and the like.
  • Each component included in the computing device 200 may be connected by a bus to communicate with the others.
  • the output unit 240 may be an output device capable of outputting a screen.
  • the output device may include a display or the like.
  • the controller 210 may execute a program command stored in the storage 220.
  • The controller 210 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to embodiments of the present invention are performed.
  • the storage unit 220 may be configured of at least one of a volatile storage medium and a nonvolatile storage medium.
  • the storage unit 220 may be configured as at least one of a read only memory (ROM) and a random access memory (RAM).
  • The transceiver 250 may be connected to another computing device 200, the sound device 100, or a server to perform communication.
  • the GPS unit 260 may track the location of the computing device 200.
  • the camera unit 270 may acquire a real image.
  • the augmented reality sound implementation method may be realized by linking the computing device with another computing device or server.
  • FIG. 3 is a flowchart illustrating a first embodiment of a method for implementing augmented reality sound.
  • the first sound device 100-1 of the first user 10 may obtain reality sound information.
  • the real sound information may be a real sound generated from the second user 20 or a real sound generated around the first user 10.
  • the first sound device 100-1 may transmit reality sound information to the first computing device 200-1 of the first user 10.
  • The first computing device 200-1 may obtain the real sound information from the first sound device 100-1 (S300).
  • The first computing device 200-1 may determine whether another user (e.g., the second user 20) exists within a close distance of the first user 10 (S310).
  • the close distance may be a preset distance.
  • the first computing device 200-1 may acquire direction characteristic information of the real sound based on the real sound information (S320).
  • the real sound information may be binaural type sound information measured by the plurality of input units 130 of the first sound device 100-1.
  • the first computing device 200-1 may acquire first virtual sound information indicating sound generated in the virtual reality game (S321).
  • the first computing device 200-1 may generate the 3D augmented reality sound based on at least one of the real sound information, the direction characteristic information, and the first virtual sound information (S322).
  • The first computing device 200-1 may generate the three-dimensional augmented reality sound by blending at least two of the real sound information, the direction characteristic information, and the first virtual sound information.
  • For example, if the first user 10 obtains, from the north of the first user 10, a sound source of the first verse of the Korean national anthem ("Until the East Sea's waters and Baekdusan are dry and worn away, may God protect and preserve us"), the first computing device 200-1 may blend the direction characteristic information, the real sound information, and the first virtual sound information indicating sound generated in the virtual reality game so that the sound source sounds as if it were generated in the north, thereby generating the 3D augmented reality sound.
  • the first computing device 200-1 may provide 3D augmented reality sound to the first user 10 through the first acoustic device 100-1 (S323).
  • The second computing device 200-2 may obtain location information of the first user 10 and the second user 20 (S330).
  • the first computing device 200-1 may acquire direction characteristic information of the real sound based on the location information of the first user 10 and the second user 20 (S331).
  • the first computing device 200-1 may acquire first virtual sound information indicating sound generated in the virtual reality game (S332).
  • the first computing device 200-1 may generate the 3D augmented reality sound based on at least one of the real sound information, the direction characteristic information, and the first virtual sound information (S333).
  • The first computing device 200-1 may generate the three-dimensional augmented reality sound by blending at least two of the real sound information, the direction characteristic information, and the first virtual sound information.
  • For example, when the first user 10 is located to the right of the second user 20, the first computing device 200-1 may generate the 3D augmented reality sound in consideration of the direction characteristic information so that the sound source sounds as if it were generated on the left side.
  • the first computing device 200-1 may provide a 3D augmented reality sound to the first user 10 through the first acoustic device 100-1 (S334).
  • FIG. 4 is a flowchart illustrating a second embodiment of a method of implementing augmented reality sound.
  • the first sound device 100-1 of the first user 10 may obtain reality sound information.
  • the real sound information may be a real sound generated from the first user 10 or a real sound generated around the first user 10.
  • the first sound device 100-1 may transmit reality sound information to the first computing device 200-1 of the first user 10.
  • The first computing device 200-1 may obtain the real sound information from the first sound device 100-1 (S300).
  • The first computing device 200-1 may determine whether there is a position difference in which the relative positions of the plurality of users 10 and 20 in real space do not correspond to the relative positions of the avatars of the users 10 and 20 in the virtual space of the augmented reality game (S301).
  • The position difference may arise when the second user 20 uses a skill on the first user 10 and the avatar of the second user 20 separates from the second user 20.
  • a specific example of the case where the avatar is separated may be as follows.
  • the second user 20 may use the skill with respect to the first user 10.
  • For example, the avatar of the second user 20 may separate and move toward the avatar of the first user 10 to use the skill.
  • The position difference may also arise when the second user 20 uses a skill and the avatar of the second user 20 moves instantaneously.
  • Such instantaneous movement is commonly referred to in games as teleportation. Teleportation means moving to another location in an instant and is usually used to travel very far.
  • The first computing device 200-1 may generate a 3D augmented reality sound in consideration of the position of the avatar of the second user 20 and its position difference from the second user 20.
  • The position difference may also arise when the second user 20 uses a skill on the first user 10 and the avatar of the second user 20 moves a longer or shorter distance than the second user 20 does.
  • the first computing device 200-1 may consider sounds that may occur while the avatar of the second user 20 moves rapidly.
  • the first computing device 200-1 may acquire location information of the first user 10 and the second user 20 (S302).
  • the first computing device 200-1 may generate a 3D augmented reality sound based on the position difference (S303).
  • The first computing device 200-1 may generate the 3D augmented reality sound by blending the real sound with the second virtual sound generated to correspond to the positions of the avatars of the plurality of users 10 and 20.
  • the first computing device 200-1 may perform acoustic blending according to the first-person or third-person situation.
  • For example, when the first user 10 or the second user 20 uses a skill so that the position of the avatar of the first user 10 or the second user 20 differs from the user's own position, the first computing device 200-1 may generate a virtual sound and blend it with the real sound according to the third-person situation.
  • the first computing device 200-1 may provide the 3D augmented reality sound to the first user through the first acoustic device 100-1 (S304).
  • Steps S310 to S334 may then proceed in the same manner as described above with reference to FIG. 3.
  • the steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, in a software module executed by hardware, or by a combination thereof.
  • The software module may reside in random access memory (RAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Stereophonic System (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an augmented reality sound implementation system for executing an augmented reality sound implementation method. The system comprises: a first computing device of a first user; and a first sound device that is worn by the first user so that the first user can be provided with three-dimensional augmented reality sound, that is connected to the first computing device by wire or wirelessly, and that includes a sound recording function. The method comprises: a step in which the first sound device acquires real sound information indicating a real sound and transmits the real sound information to the first computing device; a step in which the first computing device acquires a first virtual sound indicating a sound generated in a virtual reality game executed on the first computing device; a step in which the first computing device generates a three-dimensional augmented reality sound based on the real sound information and the first virtual sound; and a step in which the first computing device provides the three-dimensional augmented reality sound to the first user through the first sound device.
PCT/KR2018/003189 2017-03-20 2018-03-19 Système et programme pour mettre en œuvre un son tridimensionnel à réalité augmentée de reflet d'un son réel WO2018174500A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001772.3A CN109076307A (zh) 2017-03-20 2018-03-19 反映现实音效的增强现实三维音效实现系统及程序
US16/168,560 US20190058961A1 (en) 2017-03-20 2018-10-23 System and program for implementing three-dimensional augmented reality sound based on realistic sound

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR10-2017-0034398 2017-03-20
KR20170034398 2017-03-20
KR10-2017-0102892 2017-08-14
KR20170102892 2017-08-14
KR1020170115842A KR101963244B1 (ko) 2017-03-20 2017-09-11 현실 음향을 반영한 증강 현실 3차원 음향 구현 시스템 및 프로그램
KR10-2017-0115842 2017-09-11

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/168,560 Continuation US20190058961A1 (en) 2017-03-20 2018-10-23 System and program for implementing three-dimensional augmented reality sound based on realistic sound

Publications (1)

Publication Number Publication Date
WO2018174500A1 (fr) 2018-09-27

Family

ID=63585873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/003189 WO2018174500A1 (fr) 2017-03-20 2018-03-19 Système et programme pour mettre en œuvre un son tridimensionnel à réalité augmentée de reflet d'un son réel

Country Status (1)

Country Link
WO (1) WO2018174500A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020157035A1 (fr) * 2019-02-01 2020-08-06 Nokia Technologies Oy Appareil, procédé ou programme informatique permettant une communication audio en temps réel entre des utilisateurs en présence d'un contenu audio immersif

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006025281A (ja) * 2004-07-09 2006-01-26 Hitachi Ltd 情報源選択システム、および方法
US20120183161A1 (en) * 2010-09-03 2012-07-19 Sony Ericsson Mobile Communications Ab Determining individualized head-related transfer functions
JP2016048534A (ja) * 2013-12-25 2016-04-07 キヤノンマーケティングジャパン株式会社 情報処理システム、その制御方法、及びプログラム、並びに情報処理装置、その制御方法、及びプログラム
US20160212272A1 (en) * 2015-01-21 2016-07-21 Sriram Srinivasan Spatial Audio Signal Processing for Objects with Associated Audio Content
US20170045941A1 (en) * 2011-08-12 2017-02-16 Sony Interactive Entertainment Inc. Wireless Head Mounted Display with Differential Rendering and Sound Localization



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18772168

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 17.12.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 18772168

Country of ref document: EP

Kind code of ref document: A1