
CN113115175B - 3D sound effect processing method and related products - Google Patents


Info

Publication number: CN113115175B
Application number: CN202110395644.5A
Authority: CN (China)
Prior art keywords: target, reverberation, data, sound, effect parameter
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113115175A
Inventor: 严锋贵
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd; application granted; publication of CN113115175B

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 2430/00: Signal processing covered by H04R, not provided for in its groups

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

An embodiment of the present application discloses a 3D sound effect processing method and related products. The method includes: acquiring mono data of a sound source; determining a target content scene type corresponding to the mono data; determining a target reverberation effect parameter according to the target content scene type; and processing the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data. With the embodiments of the present application, the reverberation effect parameter corresponding to the content scene can be determined, and reverberation binaural data can be generated according to that parameter, thereby realizing a reverberation effect suited to the content scene and a more realistic stereoscopic impression.

Description

3D sound effect processing method and related products

Technical Field

The present application relates to the technical field of virtual/augmented reality, and in particular to a 3D sound effect processing method and related products.

Background

With the widespread adoption of electronic devices (such as mobile phones and tablet computers), the applications such devices can support are growing in number, and their functions are becoming ever more powerful. Electronic devices are developing toward diversification and personalization and have become indispensable electronic products in users' daily lives.

With the development of technology, virtual reality has also advanced rapidly on electronic devices. However, in existing virtual reality products, the audio data received by earphones is often 2D audio data, which cannot give users a realistic sense of sound and therefore degrades the user experience.

Summary of the Invention

Embodiments of the present application provide a 3D sound effect processing method and related products, which can synthesize 3D sound effects and improve the user experience.

In a first aspect, an embodiment of the present application provides a 3D sound effect processing method, including:

acquiring mono data of a sound source;

determining a target content scene type corresponding to the mono data;

determining a target reverberation effect parameter according to the target content scene type; and

processing the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

In a second aspect, an embodiment of the present application provides a 3D sound effect processing apparatus, which includes an acquisition unit, a first determination unit, a second determination unit, and a processing unit, wherein:

the acquisition unit is configured to acquire mono data of a sound source;

the first determination unit is configured to determine a target content scene type corresponding to the mono data;

the second determination unit is configured to determine a target reverberation effect parameter according to the target content scene type; and

the processing unit is configured to process the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps of the first aspect of the embodiments of the present application.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.

In a fifth aspect, an embodiment of the present application provides a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program. The computer program is operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.

It can be seen that the 3D sound effect processing method and related products described in the embodiments of the present application acquire mono data of a sound source, determine the target content scene type corresponding to the mono data, determine a target reverberation effect parameter according to the target content scene type, and process the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and reverberation binaural data can be generated according to that parameter, realizing a reverberation effect suited to the content scene with a more realistic stereoscopic impression.

Brief Description of the Drawings

To describe the embodiments of the present application or the technical solutions in the prior art more clearly, the accompanying drawings required by the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1A is a schematic structural diagram of an electronic device provided by an embodiment of the present application;

FIG. 1B is a schematic flowchart of a 3D sound effect processing method disclosed in an embodiment of the present application;

FIG. 1C is a schematic diagram illustrating a multi-path binaural data division method disclosed in an embodiment of the present application;

FIG. 1D is a schematic diagram illustrating a 3D sound effect processing method disclosed in an embodiment of the present application;

FIG. 1E is a schematic diagram illustrating another 3D sound effect processing method disclosed in an embodiment of the present application;

FIG. 2 is a schematic flowchart of another 3D sound effect processing method disclosed in an embodiment of the present application;

FIG. 3 is a schematic flowchart of another 3D sound effect processing method disclosed in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a 3D sound effect processing apparatus disclosed in an embodiment of the present application.

Detailed Description

To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish different objects, not to describe a specific order. Furthermore, the terms "including" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.

Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment that is mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.

The electronic devices involved in the embodiments of the present application may include various handheld devices with wireless communication functions (such as smartphones), vehicle-mounted devices, virtual reality (VR)/augmented reality (AR) devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, R&D/test platforms, servers, and so on. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.

In a specific implementation, in the embodiments of the present application, the electronic device may filter the audio data (the sound emitted by the sound source) with an HRTF (Head Related Transfer Function) filter to obtain virtual surround sound, also called surround sound or panoramic sound, achieving a three-dimensional sound effect. The time-domain counterpart of the HRTF is the HRIR (Head Related Impulse Response). Alternatively, the audio data may be convolved with a Binaural Room Impulse Response (BRIR), which consists of three components: the direct sound, early reflections, and reverberation (reverb).
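The HRIR-convolution route mentioned above can be sketched in a few lines. The impulse responses below are toy values invented for illustration (a real HRIR pair is measured per direction and is hundreds of samples long); this is a minimal sketch, not the patent's implementation.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair to get binaural data.

    A measured BRIR pair could be used the same way; it would additionally
    fold the room's early reflections and reverb tail into the convolution.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, len(mono) + len(hrir) - 1)

# Toy HRIRs: the right ear gets a one-sample delay and attenuation,
# mimicking a source slightly to the listener's left.
mono = np.array([1.0, 0.5, 0.25])
hrir_l = np.array([1.0, 0.0])
hrir_r = np.array([0.0, 0.6])
binaural = render_binaural(mono, hrir_l, hrir_r)
```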

Please refer to FIG. 1A, which is a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device includes a control circuit and an input-output circuit, and the input-output circuit is connected to the control circuit.

The control circuit may include storage and processing circuits. The storage circuit in the storage and processing circuits may be a memory, such as hard-drive storage, non-volatile memory (for example, flash memory or other electronically programmable read-only memory used to form a solid-state drive), or volatile memory (for example, static or dynamic random-access memory), which is not limited in the embodiments of the present application. The processing circuit in the storage and processing circuits may be used to control the operation of the electronic device. The processing circuit may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.

The storage and processing circuits may be used to run software on the electronic device, such as applications for playing an incoming-call ringtone, playing a short-message alert, playing an alarm-clock alert, playing media files, making voice-over-internet-protocol (VoIP) telephone calls, and operating system functions. This software may be used to perform control operations such as playing an incoming-call ringtone, playing a short-message alert, playing an alarm-clock alert, playing media files, making voice telephone calls, and other functions of the electronic device, which are not limited in the embodiments of the present application.

The input-output circuit may be used to enable the electronic device to input and output data, that is, to allow the electronic device to receive data from an external device and to output data from the electronic device to an external device.

The input-output circuit may further include sensors. The sensors may include an ambient light sensor, a light- or capacitance-based infrared proximity sensor, an ultrasonic sensor, a touch sensor (for example, a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, a gravity sensor, and other sensors. The input-output circuit may further include audio components, which may be used to provide audio input and output functions for the electronic device. The audio components may also include a tone generator and other components for generating and detecting sound.

The input-output circuit may also include one or more display screens. The display screen may include one or a combination of a liquid crystal display, an organic light-emitting diode display, an electronic ink display, a plasma display, or a display using other display technologies. The display screen may include an array of touch sensors (that is, the display screen may be a touch display screen). The touch sensor may be a capacitive touch sensor formed from an array of transparent touch sensor electrodes (for example, indium tin oxide (ITO) electrodes), or may be a touch sensor formed using other touch technologies, such as acoustic-wave touch, pressure-sensitive touch, resistive touch, or optical touch, which are not limited in the embodiments of the present application.

The input-output circuit may further include a communication circuit, which may be used to provide the electronic device with the ability to communicate with external devices. The communication circuit may include analog and digital input-output interface circuits, and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuit in the communication circuit may include a radio-frequency transceiver circuit, a power amplifier circuit, a low-noise amplifier, switches, filters, and antennas. For example, the wireless communication circuit in the communication circuit may include a circuit for supporting near field communication (NFC) by transmitting and receiving near-field-coupled electromagnetic signals; for example, the communication circuit may include a near-field communication antenna and a near-field communication transceiver. The communication circuit may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and the like.

The input-output circuit may further include other input-output units, such as buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light-emitting diodes, and other status indicators.

The electronic device may further include a battery (not shown), which is used to supply electrical energy to the electronic device.

The embodiments of the present application are described in detail below.

Please refer to FIG. 1B, which is a schematic flowchart of a 3D sound effect processing method disclosed in an embodiment of the present application and applied to the electronic device described in FIG. 1A. The 3D sound effect processing method includes the following steps 101 to 104.

101. Acquire mono data of a sound source.

The embodiments of the present application may be applied to virtual reality/augmented reality scenarios or 3D recording scenarios. In the embodiments of the present application, the sound source may be a sounding body in a virtual scene, for example, an airplane in a game scene. The sound source may be a fixed sound source or a moving sound source, or it may be a sounding body in the physical environment.

102. Determine a target content scene type corresponding to the mono data.

In the embodiments of the present application, the content scene type may be at least one of the following: movie, human life, entertainment, military, daily life, astronomy, geography, and so on, which is not limited here. For example, each piece of mono data may correspond to a frequency band, and different frequency bands may correspond to different content scene types. Specifically, the electronic device pre-stores a mapping relationship between frequency bands and content scene types, and then determines, according to that mapping relationship, the target content scene type corresponding to the frequency band of the mono data.
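A frequency-band lookup of the kind described could look like the sketch below. The band boundaries and scene labels are assumptions; the patent only states that such a mapping is pre-stored, not its contents.

```python
# Hypothetical band boundaries (Hz) -> scene type; placeholder values only.
BAND_TO_SCENE = [
    ((0, 300), "movie"),
    ((300, 2000), "life"),
    ((2000, 8000), "entertainment"),
]

def scene_for_band(dominant_freq_hz):
    """Look up the content scene type for the mono data's dominant frequency band."""
    for (lo, hi), scene in BAND_TO_SCENE:
        if lo <= dominant_freq_hz < hi:
            return scene
    return "default"  # fall back when no band matches

print(scene_for_band(440))  # 440 Hz falls in the 300-2000 Hz band
```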

Optionally, the above step 102 of determining the target content scene type corresponding to the mono data may include the following steps:

21. performing semantic parsing on the mono data to obtain a plurality of keywords;

22. determining, according to a preset mapping relationship between keywords and content scene types, the content scene type corresponding to each of the plurality of keywords, to obtain a plurality of content scene types; and

23. taking the content scene type that occurs most often among the plurality of content scene types as the target content scene type.

Since the mono data is audio data, the electronic device may perform semantic parsing on it to obtain a plurality of keywords. The electronic device may also pre-store the preset mapping relationship between keywords and content scene types, then determine, according to that mapping relationship, the content scene type corresponding to each of the plurality of keywords to obtain a plurality of content scene types, and take the content scene type that occurs most often among them as the target content scene type.
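Steps 21 to 23 (keyword extraction, mapping, majority vote) might be sketched as follows, assuming the semantic parsing has already produced a keyword list; the keyword table below is hypothetical.

```python
from collections import Counter

# Hypothetical keyword -> scene-type mapping; the patent pre-stores such a
# table but does not enumerate its entries.
KEYWORD_TO_SCENE = {
    "explosion": "military",
    "rifle": "military",
    "laughter": "entertainment",
    "applause": "entertainment",
}

def classify_scene(keywords):
    """Map each parsed keyword to a scene type, then take the most frequent one."""
    scenes = [KEYWORD_TO_SCENE[k] for k in keywords if k in KEYWORD_TO_SCENE]
    if not scenes:
        return None  # no known keywords -> no scene decision
    return Counter(scenes).most_common(1)[0][0]
```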

103. Determine a target reverberation effect parameter according to the target content scene type.

The above reverberation effect parameter may include at least one of the following: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low-mix ratio, reverberation time, high-frequency decay point, crossover point, original dry sound level, early reflection sound level, reverberation level, sound field width, output sound field, tail sound, and so on, which is not limited here. In a specific implementation, different content scenes may correspond to different reverberation effect parameters. In this way, the reverberation effect differs across scenes, a reverberation effect appropriate to the scene can be achieved, and the 3D impression is more realistic.

Optionally, the above step 103 of determining the target reverberation effect parameter according to the target content scene type may be implemented as follows:

determining, according to a preset mapping relationship between content scene types and reverberation effect parameters, the target reverberation effect parameter corresponding to the target content scene type.

The electronic device may pre-store the preset mapping relationship between content scene types and reverberation effect parameters, and then determine the target reverberation effect parameter corresponding to the target content scene type according to that mapping relationship.
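The pre-stored scene-to-parameter mapping could be as simple as a lookup table. The parameter names and values below are invented placeholders; the patent names the kinds of reverberation parameters but gives no concrete values.

```python
# Hypothetical parameter values per scene type; placeholders for illustration.
SCENE_TO_REVERB = {
    "movie":         {"reverb_time_s": 1.8, "early_reflect_ms": 30, "wet_level": 0.5},
    "military":      {"reverb_time_s": 0.9, "early_reflect_ms": 15, "wet_level": 0.3},
    "entertainment": {"reverb_time_s": 1.2, "early_reflect_ms": 20, "wet_level": 0.4},
}

def reverb_params_for(scene_type):
    """Return the pre-stored reverberation effect parameters for a scene type,
    falling back to a default entry for unknown scenes."""
    return SCENE_TO_REVERB.get(scene_type, SCENE_TO_REVERB["movie"])
```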

104. Process the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

The electronic device may process the mono data based on an HRTF algorithm to obtain binaural data, and may additionally process the binaural data with the target reverberation effect parameter to obtain reverberation binaural data.
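As a stand-in for the unspecified reverberation processing, the sketch below applies a single feedback comb filter (a classic artificial-reverb building block) with a dry/wet mix driven by a wet-level parameter. This is a named substitute technique under stated assumptions, not the patent's algorithm.

```python
import numpy as np

def comb_reverb(x, delay_samples, feedback, wet_level):
    """Minimal feedback comb filter: y[n] = x[n] + feedback * y[n - delay].
    Returns a dry/wet mix controlled by wet_level (0 = dry, 1 = fully wet)."""
    y = np.copy(x).astype(float)
    for n in range(delay_samples, len(y)):
        y[n] += feedback * y[n - delay_samples]
    return (1.0 - wet_level) * x + wet_level * y

x = np.zeros(6)
x[0] = 1.0  # unit impulse, to expose the decaying echo train
out = comb_reverb(x, delay_samples=2, feedback=0.5, wet_level=1.0)
```

In a full reverberator, several comb filters with different delays would run in parallel, followed by allpass filters; the feedback gain would be derived from the scene's reverberation time.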

Optionally, the above step 104 of processing the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data may include the following steps:

41. acquiring first three-dimensional coordinates of the sound source;

42. acquiring second three-dimensional coordinates of a target object, where the first three-dimensional coordinates and the second three-dimensional coordinates are based on the same coordinate origin; and

43. generating target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter.

Taking a virtual scene as an example, since every object in the virtual scene can correspond to a set of three-dimensional coordinates, the first three-dimensional coordinates of the sound source can be acquired, and when the sound source emits sound, the mono data it produces can be acquired. The target object can also correspond to a set of three-dimensional coordinates, namely the second three-dimensional coordinates. Of course, the first and second three-dimensional coordinates are different positions, and they are based on the same coordinate origin. The target reverberation binaural data is then generated according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter; specifically, this can be implemented by an HRTF algorithm.
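Given the two sets of coordinates from steps 41 and 42, the direction needed to select an HRTF can be derived as azimuth, elevation, and distance. The sketch below assumes a z-up Cartesian coordinate system, which the patent does not specify.

```python
import math

def source_direction(source_xyz, listener_xyz):
    """Azimuth/elevation (degrees) and distance of the source as seen from the
    listener. Both coordinate triples share the same origin, as in steps 41-42."""
    dx = source_xyz[0] - listener_xyz[0]
    dy = source_xyz[1] - listener_xyz[1]
    dz = source_xyz[2] - listener_xyz[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = math.degrees(math.asin(dz / dist)) if dist else 0.0
    return azimuth, elevation, dist

# A source one unit ahead and one unit to the side, at ear height.
az, el, d = source_direction((1.0, 1.0, 0.0), (0.0, 0.0, 0.0))
```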

Optionally, when the target object is in a game scene, the above step 42 of acquiring the second three-dimensional coordinates of the target object may include the following steps:

421. acquiring a map corresponding to the game scene; and

422. determining the coordinate position corresponding to the target object in the map to obtain the second three-dimensional coordinates.

When the target object is in a game scene, the target object can be regarded as a character in the game. In a specific implementation, the game scene can correspond to a three-dimensional map, so the electronic device can acquire the map corresponding to the game scene and determine the coordinate position of the target object in that map to obtain the second three-dimensional coordinates. In this way, for different games, the character's position can be known in real time; in the embodiments of the present application, 3D sound effects can be generated for the character's specific position, so that users feel immersed while gaming and the game world feels more lifelike.

Optionally, the above step 43 of generating the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter may include the following steps:

431. generating, from the mono data, multiple paths of binaural data between the first three-dimensional coordinates and the second three-dimensional coordinates, where each path of binaural data corresponds to a unique propagation direction; and

432. synthesizing the multiple paths of binaural data into the target reverberation binaural data according to the target reverberation effect parameter.

The original sound data of the sound source is mono data, and binaural data can be obtained from it through algorithmic processing (for example, an HRTF algorithm). In the real environment, sound propagates in all directions, and during propagation phenomena such as reflection, refraction, interference, and diffraction also occur. Therefore, in the embodiments of the present application, only the multiple paths of binaural data that pass between the first three-dimensional coordinates and the second three-dimensional coordinates are used for the final synthesis of the target binaural data, and those multiple paths are synthesized into the target reverberation binaural data according to the target reverberation effect parameter.

Optionally, the above step 432 of synthesizing the multiple channels of binaural data into the target reverberation binaural data according to the target reverberation effect parameter may include the following steps:

A11. Take a cross-section along the axis formed by the first three-dimensional coordinates and the second three-dimensional coordinates, and divide the multiple channels of binaural data into a first binaural data set and a second binaural data set, each of which includes at least one channel of binaural data.

A12. Synthesize the first binaural data set to obtain first mono data.

A13. Synthesize the second binaural data set to obtain second mono data.

A14. Synthesize the first mono data and the second mono data according to the target reverberation effect parameter to obtain the reverberation binaural data.

After the first three-dimensional coordinates and the second three-dimensional coordinates are known, a cross-section can be taken along the axis they form. Since the propagation direction of the sound is fixed, the propagation trajectories exhibit a degree of symmetry about a certain axis. As shown in FIG. 1C, the first three-dimensional coordinates and the second three-dimensional coordinates form an axis, and a cross-section along this axis divides the multiple channels of binaural data into the first binaural data set and the second binaural data set. Ignoring external factors such as refraction, reflection, and diffraction, the two sets may contain the same number of channels, and the binaural data in the two sets are symmetric to each other; each set includes at least one channel of binaural data. In a specific implementation, the electronic device synthesizes the first binaural data set to obtain the first mono data. The electronic device may include left and right earphones, and the first mono data may be played mainly by the left earphone; correspondingly, the electronic device synthesizes the second binaural data set to obtain the second mono data, which may be played mainly by the right earphone. Finally, the first mono data and the second mono data are synthesized according to the target reverberation effect parameter to obtain the target reverberation channel data. Specifically, as shown in FIG. 1D, the electronic device may perform channel processing on the mono data (for example, the first mono data and the second mono data), where channel processing mainly refers to converting mono data into binaural data, to obtain two pieces of binaural data; process each channel of mono data according to the target reverberation effect parameter to obtain reverberation audio data; and synthesize the two pieces of binaural data with the reverberation audio data to obtain the target reverberation channel data.
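Steps A11-A14 can be sketched as follows. Tagging each path with its azimuth and splitting by the sign of that azimuth is one simple way to realize the cross-section division; the sign-based split, the averaging mixdown, and the single wet-gain reverberation model are illustrative assumptions, not the patent's exact method.

```python
import numpy as np

def split_and_mix(paths, azimuths, reverb_gain=0.3):
    """Divide binaural paths by the source/listener axis plane and mix each side.

    `paths` is a list of (left, right) arrays; `azimuths` gives each path's
    direction in degrees, with the sign indicating which side of the
    cross-section plane the path lies on.
    """
    # A11: divide into the first and second binaural data sets.
    first_set = [p for p, az in zip(paths, azimuths) if az < 0]
    second_set = [p for p, az in zip(paths, azimuths) if az >= 0]
    # A12/A13: synthesize each set into one mono signal (average the pairs).
    mix = lambda s: np.mean([0.5 * (l + r) for l, r in s], axis=0)
    first_mono, second_mono = mix(first_set), mix(second_set)
    # A14: combine into a binaural output, scaled by the reverberation gain;
    # first mono data feeds the left earphone, second mono data the right.
    return reverb_gain * first_mono, reverb_gain * second_mono

n = 480
azimuths = [-60, -30, 30, 60]
paths = [(np.random.randn(n), np.random.randn(n)) for _ in azimuths]
left_out, right_out = split_and_mix(paths, azimuths)
```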

Optionally, the above step A12 of synthesizing the first binaural data set to obtain the first mono data may include the following steps:

B1. Determine the energy value of each channel of binaural data in the first binaural data set to obtain multiple energy values.

B2. Select the energy values greater than a first energy threshold from the multiple energy values to obtain multiple first target energy values.

B3. Determine the first binaural data corresponding to the multiple first target energy values, and synthesize the first binaural data to obtain the first mono data.

The above first energy threshold may be set by the user or set by default in the system. In a specific implementation, the electronic device may determine the energy value of each channel of binaural data in the first binaural data set to obtain multiple energy values, select from them the energy values greater than the first energy threshold to obtain multiple first target energy values, determine the first binaural data corresponding to those energy values, and synthesize that data to obtain the first mono data.
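Steps B1-B3 amount to an energy-gated mixdown, sketched below. The mean-square energy measure and the averaging mixdown are assumptions; the patent only requires that low-energy paths be excluded before synthesis.

```python
import numpy as np

def mix_high_energy_paths(paths, energy_threshold):
    """Keep only binaural paths above an energy threshold, then mix to mono."""
    # B1: energy value of each path (mean square over both channels).
    energies = [np.mean(l**2 + r**2) for l, r in paths]
    # B2: keep the paths whose energy exceeds the first energy threshold.
    kept = [p for p, e in zip(paths, energies) if e > energy_threshold]
    # B3: synthesize the kept paths into one mono signal.
    return np.mean([0.5 * (l + r) for l, r in kept], axis=0)

paths = [
    (np.ones(100), np.ones(100)),                # high-energy path, kept
    (0.01 * np.ones(100), 0.01 * np.ones(100)),  # below threshold, dropped
]
mono = mix_high_energy_paths(paths, energy_threshold=0.5)
```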

Optionally, the above step A13 can also be implemented based on the above steps B1 to B3, and details are not repeated here.

Optionally, between the above steps 41 and 43, the following step may also be included:

Obtain the face orientation of the target object.

Then, the above step 43 of generating the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter may be implemented as follows:

Generate the target reverberation binaural data according to the face orientation, the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter.

In a specific implementation, the 3D sound effect heard by the user differs with the user's face orientation. In view of this, this embodiment of the present application takes the face orientation of the target object into account, and the electronic device can detect it. Specifically, in a game scene, the orientation of the target object relative to the sound source can be detected and used as the face orientation of the target object; if the electronic device is a head-mounted device, for example, head-mounted virtual reality glasses, a virtual reality helmet, or a virtual reality headband display device, the head orientation can be detected directly. Various sensors can be used to detect the head orientation, including but not limited to resistive sensors, mechanical sensors, photosensitive sensors, ultrasonic sensors, and muscle sensors, which are not limited here; a single sensor or a combination of several sensors may be used. The head orientation can be detected at preset time intervals, and the preset time interval may be set by the user or set by default in the system.
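One way face orientation can enter step 43 is to rotate the source direction by the listener's head yaw before the binaural rendering, so that the same source is heard from a different angle as the head turns. The flat-plane geometry and yaw-only rotation below are simplifying assumptions for illustration.

```python
import math

def relative_azimuth(source_xy, listener_xy, face_yaw_deg):
    """Azimuth of the source relative to the listener's facing direction.

    0 degrees means straight ahead; positive is to the listener's right.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    world_az = math.degrees(math.atan2(dx, dy))  # 0 deg = world +y direction
    # Turning the head right by `face_yaw_deg` reduces the relative azimuth;
    # wrap the result into (-180, 180].
    return (world_az - face_yaw_deg + 180.0) % 360.0 - 180.0

# A source straight ahead moves to the listener's left when the head turns right.
ahead = relative_azimuth((0.0, 1.0), (0.0, 0.0), 0.0)    # 0.0
turned = relative_azimuth((0.0, 1.0), (0.0, 0.0), 90.0)  # -90.0
```

The resulting relative azimuth would then select the per-direction processing (for example, the HRTF filters) in place of the raw source direction.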

Optionally, the above step 432 of synthesizing the multiple channels of binaural data into the target reverberation binaural data according to the target reverberation effect parameter may include the following steps:

A21. Determine the energy value of each channel of binaural data in the multiple channels of binaural data to obtain multiple energy values.

A22. Determine, according to a preset mapping relationship between energy values and reverberation effect parameter adjustment coefficients, the reverberation effect parameter adjustment coefficient corresponding to each of the multiple energy values to obtain multiple reverberation effect parameter adjustment coefficients.

A23. Determine the reverberation effect parameter corresponding to each channel of binaural data according to the multiple reverberation effect parameter adjustment coefficients and the target reverberation effect parameter to obtain multiple first reverberation effect parameters.

A24. Process the multiple channels of binaural data according to the multiple first reverberation effect parameters to obtain multiple channels of reverberation binaural data, each first reverberation effect parameter corresponding to a unique channel of binaural data.

A25. Synthesize the multiple channels of binaural data to obtain the target reverberation binaural data.

The electronic device may pre-store a preset mapping relationship between energy values and reverberation effect parameter adjustment coefficients. The reverberation effect parameter adjustment coefficient acts on the reverberation effect parameter and takes a value in the range of 0 to 1; specifically, reverberation effect parameter × reverberation effect parameter adjustment coefficient = actual reverberation effect parameter, and the corresponding binaural data is processed with the actual reverberation effect parameter to obtain reverberation binaural data. In a specific implementation, the electronic device determines the energy value of each channel of binaural data in the multiple channels of binaural data to obtain multiple energy values, determines the adjustment coefficient corresponding to each energy value according to the above mapping relationship to obtain multiple reverberation effect parameter adjustment coefficients, and determines the reverberation effect parameter corresponding to each channel of binaural data from the multiple adjustment coefficients and the target reverberation effect parameter, obtaining multiple first reverberation effect parameters, that is, first reverberation effect parameter = target reverberation effect parameter × reverberation effect parameter adjustment coefficient. The multiple channels of binaural data are then processed according to the multiple first reverberation effect parameters to obtain multiple channels of reverberation binaural data, each first reverberation effect parameter corresponding to a unique channel of binaural data, and the multiple channels are synthesized to obtain the target reverberation binaural data. In this way, a different reverberation effect is achieved in each direction according to each channel's energy value, and the finally synthesized reverberation binaural data has a more realistic stereoscopic sense. Specifically, as shown in FIG. 1E, the electronic device may perform channel processing on the mono data (for example, the first mono data and the second mono data), where channel processing mainly refers to converting mono data into binaural data, to obtain multiple pieces of binaural data; process each channel of data according to the reverberation effect parameters to obtain multiple pieces of reverberation audio data; and synthesize the multiple pieces of binaural data with the multiple pieces of reverberation audio data to obtain the target reverberation channel data.
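Steps A21-A25, including the relation first reverberation effect parameter = target reverberation effect parameter × adjustment coefficient, can be sketched as below. The piecewise energy-to-coefficient table and the use of the per-channel parameter as a simple wet gain are illustrative assumptions; the patent leaves the exact mapping and reverberation processing open.

```python
import numpy as np

# Preset mapping: (minimum energy, adjustment coefficient in [0, 1]).
# Higher-energy channels get a larger share of the target reverberation.
ENERGY_TO_COEFF = [(0.0, 0.2), (0.5, 0.6), (2.0, 1.0)]

def coeff_for_energy(energy):
    # A22: pick the coefficient of the highest bracket the energy reaches.
    coeff = ENERGY_TO_COEFF[0][1]
    for lo, c in ENERGY_TO_COEFF:
        if energy >= lo:
            coeff = c
    return coeff

def apply_per_path_reverb(paths, target_reverb_param):
    out_l = out_r = 0.0
    for l, r in paths:
        energy = np.mean(l**2 + r**2)                            # A21
        param = target_reverb_param * coeff_for_energy(energy)   # A23
        out_l = out_l + param * l                                # A24 (wet gain)
        out_r = out_r + param * r
    return out_l, out_r                                          # A25: summed

paths = [(np.ones(10), np.ones(10)), (2 * np.ones(10), 2 * np.ones(10))]
left, right = apply_per_path_reverb(paths, target_reverb_param=0.5)
```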

It can be seen that the 3D sound effect processing method described in this embodiment of the present application obtains the mono data of the sound source, determines the target content scene type corresponding to the mono data, determines the target reverberation effect parameter according to the target content scene type, and processes the mono data according to the target reverberation effect parameter to obtain the target reverberation binaural data. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and the reverberation binaural data can be generated according to it, thereby achieving a reverberation effect suited to the content scene and a more realistic stereoscopic sense.

Consistent with the above, FIG. 2 is a schematic flowchart of a 3D sound effect processing method disclosed in an embodiment of the present application. Applied to the electronic device shown in FIG. 1A, the 3D sound effect processing method includes the following steps 201-206.

201. Obtain the mono data of a sound source.

202. Determine the target content scene type corresponding to the mono data.

203. Determine a target reverberation effect parameter according to the target content scene type.

204. Obtain the first three-dimensional coordinates of the sound source.

205. Obtain the second three-dimensional coordinates of a target object, the first three-dimensional coordinates and the second three-dimensional coordinates being based on the same coordinate origin.

206. Generate target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter.

For a specific description of the above steps 201 to 206, reference may be made to the corresponding description of the 3D sound effect processing method described in FIG. 1B, and details are not repeated here.

It can be seen that the 3D sound effect processing method described in this embodiment of the present application obtains the mono data of the sound source, determines the target content scene type corresponding to the mono data, determines the target reverberation effect parameter according to the target content scene type, obtains the first three-dimensional coordinates of the sound source and the second three-dimensional coordinates of the target object, the two being based on the same coordinate origin, and generates the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and the reverberation binaural data can be generated according to it, thereby achieving a reverberation effect suited to the content scene and a more realistic stereoscopic sense.

Consistent with the above, FIG. 3 is a schematic flowchart of a 3D sound effect processing method disclosed in an embodiment of the present application. Applied to the electronic device shown in FIG. 1A, the 3D sound effect processing method includes the following steps 301-306.

301. Obtain the mono data of a sound source.

302. Perform semantic parsing on the mono data to obtain multiple keywords.

303. Determine, according to a preset mapping relationship between keywords and content scene types, the content scene type corresponding to each of the multiple keywords to obtain multiple content scene types.

304. Use the content scene type that occurs most often among the multiple content scene types as the target content scene type.

305. Determine the target reverberation effect parameter corresponding to the target content scene type according to a preset mapping relationship between content scene types and reverberation effect parameters.

306. Process the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

For a specific description of the above steps 301 to 306, reference may be made to the corresponding description of the 3D sound effect processing method described in FIG. 1B, and details are not repeated here.

It can be seen that the 3D sound effect processing method described in this embodiment of the present application obtains the mono data of the sound source, performs semantic parsing on the mono data to obtain multiple keywords, determines the content scene type corresponding to each keyword according to the preset mapping relationship between keywords and content scene types to obtain multiple content scene types, uses the content scene type that occurs most often as the target content scene type, determines the target reverberation effect parameter corresponding to the target content scene type according to the preset mapping relationship between content scene types and reverberation effect parameters, and processes the mono data according to the target reverberation effect parameter to obtain the target reverberation binaural data. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and the reverberation binaural data can be generated according to it, thereby achieving a reverberation effect suited to the content scene and a more realistic stereoscopic sense.
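Steps 302-305 can be sketched as a keyword majority vote followed by a table lookup. The keyword table, scene names, and reverberation parameter values below are illustrative placeholders, not values from the patent.

```python
from collections import Counter

# Preset mapping between keywords and content scene types (step 303).
KEYWORD_TO_SCENE = {"gunfire": "battle", "footsteps": "battle",
                    "applause": "concert", "chorus": "concert"}
# Preset mapping between content scene types and reverberation parameters (step 305).
SCENE_TO_REVERB = {"battle": {"wet": 0.2, "decay_s": 0.8},
                   "concert": {"wet": 0.6, "decay_s": 2.5}}

def reverb_params_for_keywords(keywords):
    """Return (target scene type, target reverberation parameters)."""
    scenes = [KEYWORD_TO_SCENE[k] for k in keywords if k in KEYWORD_TO_SCENE]
    # Step 304: the scene type that occurs most often wins.
    target_scene = Counter(scenes).most_common(1)[0][0]
    return target_scene, SCENE_TO_REVERB[target_scene]

scene, params = reverb_params_for_keywords(["applause", "chorus", "gunfire"])
# "concert" wins the vote 2-1, so its reverberation parameters are selected.
```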

Referring to FIG. 4, FIG. 4 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application. As shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:

obtaining the mono data of a sound source;

determining the target content scene type corresponding to the mono data;

determining a target reverberation effect parameter according to the target content scene type;

processing the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

It can be seen that the electronic device described in this embodiment of the present application obtains the mono data of the sound source, determines the target content scene type corresponding to the mono data, determines the target reverberation effect parameter according to the target content scene type, and processes the mono data according to the target reverberation effect parameter to obtain the target reverberation binaural data. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and the reverberation binaural data can be generated according to it, thereby achieving a reverberation effect suited to the content scene and a more realistic stereoscopic sense.

In a possible example, in determining the target content scene type corresponding to the mono data, the above programs include instructions for performing the following steps:

performing semantic parsing on the mono data to obtain multiple keywords;

determining, according to a preset mapping relationship between keywords and content scene types, the content scene type corresponding to each of the multiple keywords to obtain multiple content scene types;

using the content scene type that occurs most often among the multiple content scene types as the target content scene type.

In a possible example, in determining the target reverberation effect parameter according to the target content scene type, the above programs include instructions for performing the following step:

determining the target reverberation effect parameter corresponding to the target content scene type according to a preset mapping relationship between content scene types and reverberation effect parameters.

In a possible example, in processing the mono data according to the target reverberation effect parameter to obtain the target binaural data, the above programs include instructions for performing the following steps:

obtaining the first three-dimensional coordinates of the sound source;

obtaining the second three-dimensional coordinates of a target object, the first three-dimensional coordinates and the second three-dimensional coordinates being based on the same coordinate origin;

generating the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter.

In a possible example, in generating the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter, the above programs include instructions for performing the following steps:

generating, from the mono data, multiple channels of binaural data between the first three-dimensional coordinates and the second three-dimensional coordinates, each channel of binaural data corresponding to a unique propagation direction;

synthesizing the multiple channels of binaural data into the target reverberation binaural data according to the target reverberation effect parameter.

In a possible example, in synthesizing the multiple channels of binaural data into the target reverberation binaural data according to the target reverberation effect parameter, the above programs include instructions for performing the following steps:

taking a cross-section along the axis formed by the first three-dimensional coordinates and the second three-dimensional coordinates, and dividing the multiple channels of binaural data into a first binaural data set and a second binaural data set, each of which includes at least one channel of binaural data;

synthesizing the first binaural data set to obtain first mono data;

synthesizing the second binaural data set to obtain second mono data;

synthesizing the first mono data and the second mono data according to the target reverberation effect parameter to obtain the reverberation binaural data.

In a possible example, in synthesizing the multiple channels of binaural data into the target reverberation binaural data according to the target reverberation effect parameter, the above programs include instructions for performing the following steps:

determining the energy value of each channel of binaural data in the multiple channels of binaural data to obtain multiple energy values;

determining, according to a preset mapping relationship between energy values and reverberation effect parameter adjustment coefficients, the reverberation effect parameter adjustment coefficient corresponding to each of the multiple energy values to obtain multiple reverberation effect parameter adjustment coefficients;

determining the reverberation effect parameter corresponding to each channel of binaural data according to the multiple reverberation effect parameter adjustment coefficients and the target reverberation effect parameter to obtain multiple first reverberation effect parameters;

processing the multiple channels of binaural data according to the multiple first reverberation effect parameters to obtain multiple channels of reverberation binaural data, each first reverberation effect parameter corresponding to a unique channel of binaural data;

synthesizing the multiple channels of binaural data to obtain the target reverberation binaural data.

The foregoing mainly introduces the solutions of the embodiments of the present application from the perspective of the method-side execution process. It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments provided herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.

In this embodiment of the present application, the electronic device may be divided into functional units according to the foregoing method examples. For example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in this embodiment of the present application is illustrative and is merely a logical function division; other division methods may be used in actual implementation.

Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a 3D sound effect processing apparatus disclosed in an embodiment of the present application, applied to the electronic device shown in FIG. 1A. The 3D sound effect processing apparatus 500 includes an obtaining unit 501, a first determining unit 502, a second determining unit 503, and a processing unit 504, where:

the obtaining unit 501 is configured to obtain the mono data of a sound source;

the first determining unit 502 is configured to determine the target content scene type corresponding to the mono data;

the second determining unit 503 is configured to determine a target reverberation effect parameter according to the target content scene type;

the processing unit 504 is configured to process the mono data according to the target reverberation effect parameter to obtain target reverberation binaural data.

In a possible example, in determining the target content scene type corresponding to the mono data, the first determining unit 502 is specifically configured to:

对所述单声道数据进行语义解析,得到多个关键字;Semantic parsing is performed on the mono data to obtain a plurality of keywords;

按照预设的关键字与内容场景类型之间的映射关系,确定所述多个关键字中每一关键字对应的内容场景类型,得到多个内容场景类型;According to the preset mapping relationship between keywords and content scene types, determine the content scene type corresponding to each keyword in the plurality of keywords, and obtain a plurality of content scene types;

将所述多个内容场景类型中出现次数最多的内容场景类型作为所述目标内容场景类型。The content scene type with the largest number of occurrences among the plurality of content scene types is used as the target content scene type.
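The three steps above (semantic parsing into keywords, keyword-to-scene-type lookup, majority vote) can be sketched as follows; the keyword table and scene names are invented for illustration, and only the voting logic reflects the text:

```python
from collections import Counter

# Illustrative stand-in for the preset keyword -> content scene type mapping.
KEYWORD_TO_SCENE = {
    "applause": "concert", "guitar": "concert",
    "rain": "outdoor", "thunder": "outdoor",
}

def target_scene_type(keywords):
    """Map each parsed keyword to its scene type, then take the scene type
    that occurs most often as the target content scene type."""
    scenes = [KEYWORD_TO_SCENE[k] for k in keywords if k in KEYWORD_TO_SCENE]
    if not scenes:
        return None
    return Counter(scenes).most_common(1)[0][0]
```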

在一个可能的示例中,在所述根据所述目标内容场景类型确定目标混响效果参数方面,所述第二确定单元503具体用于:In a possible example, in the aspect of determining the target reverberation effect parameter according to the target content scene type, the second determining unit 503 is specifically configured to:

按照预设的内容场景类型与混响效果参数之间的映射关系,确定所述目标内容场景类型对应的所述目标混响效果参数。The target reverberation effect parameter corresponding to the target content scene type is determined according to the preset mapping relationship between the content scene type and the reverberation effect parameter.
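The preset mapping above amounts to a direct table lookup. A minimal sketch, with invented scene names and parameter values covering only a small subset of the reverberation effect parameters named in the claims:

```python
# Illustrative preset mapping: content scene type -> reverberation effect
# parameters (values are hypothetical, not taken from the embodiment).
SCENE_TO_REVERB = {
    "concert": {"reverb_time_s": 2.0, "early_reflection_db": -12.0, "wet_level": 0.45},
    "outdoor": {"reverb_time_s": 0.4, "early_reflection_db": -20.0, "wet_level": 0.10},
}

def target_reverb_params(scene_type):
    """Second determining unit 503: look up the target reverberation
    effect parameters for the target content scene type."""
    return SCENE_TO_REVERB[scene_type]
```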

在一个可能的示例中,在所述依据所述目标混响效果参数对所述单声道数据进行处理,得到目标双声道数据方面,所述处理单元504具体用于:In a possible example, in the aspect of processing the mono data according to the target reverberation effect parameter to obtain target binaural data, the processing unit 504 is specifically configured to:

获取所述声源的第一三维坐标;obtaining the first three-dimensional coordinates of the sound source;

获取目标对象的第二三维坐标，所述第一三维坐标与所述第二三维坐标基于同一坐标原点；acquiring second three-dimensional coordinates of a target object, wherein the first three-dimensional coordinates and the second three-dimensional coordinates are based on the same coordinate origin;

依据所述第一三维坐标、所述第二三维坐标以及所述单声道数据、所述目标混响效果参数生成目标混响双声道数据。The target reverberation binaural data is generated according to the first three-dimensional coordinates, the second three-dimensional coordinates, the monaural data, and the target reverberation effect parameter.

在一个可能的示例中，在所述依据所述第一三维坐标、所述第二三维坐标以及所述单声道数据、所述目标混响效果参数生成目标混响双声道数据方面，所述处理单元504具体用于：In a possible example, in the aspect of generating the target reverberation binaural data according to the first three-dimensional coordinates, the second three-dimensional coordinates, the mono data, and the target reverberation effect parameter, the processing unit 504 is specifically configured to:

将所述单声道数据生成所述第一三维坐标与所述第二三维坐标之间的多路双声道数据，每路双声道数据对应唯一传播方向；generating, from the mono data, multiple channels of binaural data between the first three-dimensional coordinates and the second three-dimensional coordinates, wherein each channel of binaural data corresponds to a unique propagation direction;

根据所述目标混响效果参数将所述多路双声道数据合成所述目标混响双声道数据。The target reverberation binaural data is synthesized from the multi-channel binaural data according to the target reverberation effect parameter.
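One hypothetical way to realize the two steps above is to derive each propagation path from the mono data with a direction-dependent delay and then sum the paths under the reverberation wet level. The delay model is a drastic simplification invented here; real per-direction rendering (e.g. HRTF-based) is far richer:

```python
def make_paths(mono, direction_delays):
    """For each propagation direction (an integer sample delay standing in
    for a direction-dependent path length), derive one channel of data from
    the mono signal by delaying it."""
    return [[0.0] * delay + list(mono) for delay in direction_delays]

def synthesize(paths, wet_level):
    """Sum the per-direction channels and apply the reverberation wet level."""
    n = max(len(p) for p in paths)
    mixed = [sum(p[i] for p in paths if i < len(p)) for i in range(n)]
    return [wet_level * s for s in mixed]
```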

在一个可能的示例中，在所述根据所述目标混响效果参数将所述多路双声道数据合成所述目标混响双声道数据方面，所述处理单元504具体用于：In a possible example, in the aspect of synthesizing the multi-channel binaural data into the target reverberation binaural data according to the target reverberation effect parameter, the processing unit 504 is specifically configured to:

以所述第一三维坐标与所述第二三维坐标为轴线作横截面，将所述多路双声道数据进行划分，得到第一双声道数据集合和第二双声道数据集合，所述第一双声道数据集合、所述第二声道数据集合均包括至少一路双声道数据；dividing the multi-channel binaural data by a cross-section taken along the axis defined by the first three-dimensional coordinates and the second three-dimensional coordinates, to obtain a first binaural data set and a second binaural data set, wherein the first binaural data set and the second binaural data set each include at least one channel of binaural data;

将所述第一双声道数据集合进行合成,得到第一单声道数据;synthesizing the first binaural data set to obtain first monaural data;

将所述第二双声道数据集合进行合成,得到第二单声道数据;synthesizing the second binaural data set to obtain second monaural data;

根据所述目标混响效果参数将所述第一单声道数据、所述第二单声道数据进行合成,得到所述混响双声道数据。The first monaural data and the second monaural data are synthesized according to the target reverberation effect parameter to obtain the reverberated binaural data.
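Under the hypothetical simplification that each propagation path is tagged with the side of the source-listener cross-section it lies on, the split-and-merge described above could look like the sketch below; the averaging mixdown and the wet-level combination are invented stand-ins for the undisclosed synthesis:

```python
def split_and_synthesize(paths, wet_level):
    """paths: list of (side, samples) pairs, where side is +1 or -1 depending
    on which side of the cross-section through the source/listener axis the
    propagation path lies. Returns (left, right) reverberant binaural data."""
    first_set = [samples for side, samples in paths if side > 0]
    second_set = [samples for side, samples in paths if side <= 0]

    def mixdown(group):  # average the channels of one set into mono
        n = len(group[0])
        return [sum(ch[i] for ch in group) / len(group) for i in range(n)]

    first_mono = mixdown(first_set)    # first mono data
    second_mono = mixdown(second_set)  # second mono data
    # combine the two mono signals according to the reverberation parameter
    return ([wet_level * s for s in first_mono],
            [wet_level * s for s in second_mono])
```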

在一个可能的示例中，在所述根据所述目标混响效果参数将所述多路双声道数据合成所述目标混响双声道数据方面，所述处理单元504具体用于：In a possible example, in the aspect of synthesizing the multi-channel binaural data into the target reverberation binaural data according to the target reverberation effect parameter, the processing unit 504 is specifically configured to:

确定多路双声道数据中每一路双声道数据的能量值,得到多个能量值;Determine the energy value of each channel of dual-channel data in the multi-channel dual-channel data, and obtain multiple energy values;

按照预设的能量值与混响效果参数调节系数之间的映射关系,确定所述多个能量值中每一能量值对应的混响效果参数调节系数,得到多个混响效果参数调节系数;According to the mapping relationship between the preset energy value and the reverberation effect parameter adjustment coefficient, determine the reverberation effect parameter adjustment coefficient corresponding to each energy value in the plurality of energy values, and obtain a plurality of reverberation effect parameter adjustment coefficients;

依据所述多个混响效果参数调节系数、所述目标混响效果参数确定每一路双声道数据对应的混响效果参数,得到多个第一混响效果参数;determining a reverberation effect parameter corresponding to each channel of binaural data according to the plurality of reverberation effect parameter adjustment coefficients and the target reverberation effect parameter, to obtain a plurality of first reverberation effect parameters;

依据所述多个第一混响效果参数对所述多路双声道数据进行处理,得到多路混响双声道数据,每一第一混响效果参数对应唯一一路双声道数据;processing the multi-channel binaural data according to the plurality of first reverberation effect parameters to obtain multi-channel reverberation binaural data, and each first reverberation effect parameter corresponds to a unique channel of binaural data;

将所述多路双声道数据进行合成,得到所述目标混响双声道数据。The multi-channel binaural data is synthesized to obtain the target reverberation binaural data.
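A minimal numeric sketch of the energy-driven per-channel adjustment described above. The energy-to-coefficient mapping is invented; the embodiment only states that such a preset mapping exists, and the "parameter" here is reduced to a single wet-level scalar for brevity:

```python
def energy(samples):
    """Energy of one channel of binaural data (sum of squares)."""
    return sum(s * s for s in samples)

def adjustment_coefficient(e):
    """Illustrative preset energy -> reverberation adjustment coefficient."""
    return 1.2 if e > 1.0 else 0.8

def first_reverb_params(channels, target_wet_level):
    """Scale the target reverberation effect parameter per channel by that
    channel's adjustment coefficient, yielding one first reverberation
    effect parameter per channel."""
    return [adjustment_coefficient(energy(ch)) * target_wet_level
            for ch in channels]
```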

可以看出，本申请实施例中所描述的3D音效处理装置，应用于电子设备，获取声源的单声道数据，确定单声道数据对应的目标内容场景类型，根据目标内容场景类型确定目标混响效果参数，依据目标混响效果参数对单声道数据进行处理，得到目标混响双声道数据，如此，可以确定与内容场景对应的混响效果参数，并依据该混响效果参数生成混响双声道数据，从而，实现了与内容场景相宜的混响效果，立体感更加真实。It can be seen that the 3D sound effect processing apparatus described in the embodiments of the present application, applied to an electronic device, obtains mono data of a sound source, determines the target content scene type corresponding to the mono data, determines the target reverberation effect parameter according to the target content scene type, and processes the mono data according to the target reverberation effect parameter to obtain the target reverberation binaural data. In this way, the reverberation effect parameter corresponding to the content scene can be determined, and reverberation binaural data can be generated according to that parameter, so that a reverberation effect suited to the content scene is achieved and the sense of space is more realistic.

需要注意的是，本申请实施例所描述的电子设备是以功能单元的形式呈现。这里所使用的术语“单元”应当理解为尽可能最宽的含义，用于实现各个“单元”所描述功能的对象例如可以是集成电路ASIC，单个电路，用于执行一个或多个软件或固件程序的处理器（共享的、专用的或芯片组）和存储器，组合逻辑电路，和/或提供实现上述功能的其他合适的组件。It should be noted that the electronic devices described in the embodiments of the present application are presented in the form of functional units. The term "unit" as used herein should be understood in the broadest possible sense; the object implementing the functions described for each "unit" may be, for example, an application-specific integrated circuit (ASIC), a single circuit, a processor (shared, dedicated, or chipset) and memory executing one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the functions described above.

其中,获取单元501、第一确定单元502、第二确定单元503和处理单元504可以是控制电路或处理器。The obtaining unit 501 , the first determining unit 502 , the second determining unit 503 and the processing unit 504 may be a control circuit or a processor.

本申请实施例还提供一种计算机存储介质，其中，该计算机存储介质存储用于电子数据交换的计算机程序，该计算机程序使得计算机执行如上述方法实施例中记载的任何一种3D音效处理方法的部分或全部步骤。Embodiments of the present application further provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any of the 3D sound effect processing methods described in the foregoing method embodiments.

本申请实施例还提供一种计算机程序产品，所述计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质，所述计算机程序可操作来使计算机执行如上述方法实施例中记载的任何一种3D音效处理方法的部分或全部步骤。Embodiments of the present application further provide a computer program product, the computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any of the 3D sound effect processing methods described in the foregoing method embodiments.

需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本申请所必须的。It should be noted that, for the sake of simple description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present application is not limited by the described action sequence. Because in accordance with the present application, certain steps may be performed in other orders or concurrently. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.

在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。In the above-mentioned embodiments, the description of each embodiment has its own emphasis. For parts that are not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.

在本申请所提供的几个实施例中，应该理解到，所揭露的装置，可通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative; the division of the units is only a logical function division, and there may be other division methods in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be in electrical or other forms.

所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件程序模块的形式实现。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above-mentioned integrated units can be implemented in the form of hardware, and can also be implemented in the form of software program modules.

所述集成的单元如果以软件程序模块的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储器中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储器中，包括若干指令用以使得一台计算机设备（可为个人计算机、服务器或者网络设备等）执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储器包括：U盘、只读存储器（read-only memory，ROM）、随机存取存储器（random access memory，RAM）、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。The integrated unit, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned memory includes media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.

本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成，该程序可以存储于一计算机可读存储器中，存储器可以包括：闪存盘、ROM、RAM、磁盘或光盘等。Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing relevant hardware; the program may be stored in a computer-readable memory, and the memory may include a flash disk, ROM, RAM, magnetic disk, optical disc, etc.

以上对本申请实施例进行了详细介绍，本文中应用了具体个例对本申请的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本申请的方法及其核心思想；同时，对于本领域的一般技术人员，依据本申请的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本申请的限制。The embodiments of the present application have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present application. The descriptions of the above embodiments are only intended to help understand the method of the present application and its core ideas. Meanwhile, persons of ordinary skill in the art will, based on the ideas of the present application, make changes to the specific implementation and scope of application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (13)

1. A 3D sound effect processing method, characterized by comprising the following steps:
acquiring single track data of a sound source;
processing the mono channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, wherein the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the mono channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
determining a target content scene type corresponding to the mono data includes:
performing semantic analysis on the single sound channel data to obtain a plurality of keywords;
determining a content scene type corresponding to each keyword in the plurality of keywords according to a mapping relation between preset keywords and the content scene type to obtain a plurality of content scene types;
wherein the target content scene type is a content scene type that occurs the most frequently among the plurality of content scene types.
2. The method of claim 1, wherein the target reverberation effect parameter depends on the target content scene type and comprises:
and determining the target reverberation effect parameter corresponding to the target content scene type according to a mapping relation between a preset content scene type and the reverberation effect parameter.
3. The method of any of claims 1-2, wherein processing the mono data according to the target reverberation effect parameter to obtain the target binaural data comprises:
acquiring a first three-dimensional coordinate of the sound source;
acquiring a second three-dimensional coordinate of the target object, wherein the first three-dimensional coordinate and the second three-dimensional coordinate are based on the same coordinate origin;
and generating target reverberation binaural data according to the first three-dimensional coordinate, the second three-dimensional coordinate, the single sound channel data and the target reverberation effect parameter.
4. The method of claim 3,
generating target reverberation binaural data according to the first three-dimensional coordinate, the second three-dimensional coordinate, the mono data, and the target reverberation effect parameter, including:
generating multi-channel two-channel data between the first three-dimensional coordinate and the second three-dimensional coordinate by using the single-channel data, wherein each channel of two-channel data corresponds to a unique propagation direction;
and synthesizing the multi-channel two-channel data into the target reverberation two-channel data according to the target reverberation effect parameter.
5. The method of claim 4, wherein said synthesizing the multi-channel binaural data into the target reverberant binaural data according to the target reverberation effect parameter comprises:
dividing the multichannel two-channel data by a cross section taken along the axis defined by the first three-dimensional coordinate and the second three-dimensional coordinate, to obtain a first two-channel data set and a second two-channel data set, wherein the first two-channel data set and the second two-channel data set both comprise at least one channel of two-channel data;
synthesizing the first double-channel data set to obtain first single-channel data;
synthesizing the second double-channel data set to obtain second single-channel data;
and synthesizing the first single-channel data and the second single-channel data according to the target reverberation effect parameter to obtain the reverberation double-channel data.
6. The method of claim 4, wherein said synthesizing the multi-channel binaural data into the target reverberant binaural data according to the target reverberation effect parameter comprises:
determining the energy value of each path of double-channel data in the multi-path double-channel data to obtain a plurality of energy values;
determining a reverberation effect parameter adjusting coefficient corresponding to each energy value in the plurality of energy values according to a mapping relation between preset energy values and reverberation effect parameter adjusting coefficients to obtain a plurality of reverberation effect parameter adjusting coefficients;
determining a reverberation effect parameter corresponding to each path of binaural data according to the reverberation effect parameter adjusting coefficients and the target reverberation effect parameter to obtain a plurality of first reverberation effect parameters;
processing the multichannel two-channel data according to the plurality of first reverberation effect parameters to obtain multichannel reverberation two-channel data, wherein each first reverberation effect parameter corresponds to a unique channel of two-channel data;
and synthesizing the multi-channel binaural data to obtain the target reverberation binaural data.
7. A 3D sound effect processing method, characterized by comprising the following steps:
acquiring single track data of a sound source;
processing the single-channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, wherein the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the single-channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
wherein the target reverberation effect parameter depends on a target content scene type, including: and determining the target reverberation effect parameter corresponding to the target content scene type according to a preset mapping relation between the content scene type and the reverberation effect parameter.
8. A 3D sound effect processing method, characterized by comprising the following steps:
acquiring single track data of a sound source;
processing the mono channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, wherein the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the mono channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
wherein the processing the mono channel data according to the target reverberation effect parameter to obtain the target binaural data includes:
acquiring a first three-dimensional coordinate of the sound source;
acquiring a second three-dimensional coordinate of the target object, wherein the first three-dimensional coordinate and the second three-dimensional coordinate are based on the same coordinate origin;
and generating target reverberation binaural data according to the first three-dimensional coordinate, the second three-dimensional coordinate, the single sound channel data and the target reverberation effect parameter.
9. A 3D sound effect processing device, wherein the 3D sound effect processing device comprises: an acquisition unit, a first determination unit, a second determination unit and a processing unit, wherein,
the acquisition unit is used for acquiring single sound channel data of a sound source;
the processing unit is configured to process the mono channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, where the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the mono channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
determining a target content scene type corresponding to the mono data includes:
performing semantic analysis on the single sound channel data to obtain a plurality of keywords;
determining a content scene type corresponding to each keyword in the plurality of keywords according to a mapping relation between preset keywords and the content scene type to obtain a plurality of content scene types;
wherein the target content scene type is a content scene type that occurs the most frequently among the plurality of content scene types.
10. A 3D sound effect processing device, wherein the 3D sound effect processing device comprises: an acquisition unit, a first determination unit, a second determination unit and a processing unit, wherein,
the acquisition unit is used for acquiring single sound channel data of a sound source;
the processing unit is configured to process the mono channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, where the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the mono channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
wherein the target reverberation effect parameter depends on a target content scene type, including: and determining the target reverberation effect parameter corresponding to the target content scene type according to a preset mapping relation between the content scene type and the reverberation effect parameter.
11. A 3D sound effect processing device, wherein the 3D sound effect processing device comprises: an acquisition unit, a first determination unit, a second determination unit and a processing unit, wherein,
the acquisition unit is used for acquiring single sound channel data of a sound source;
the processing unit is configured to process the mono channel data according to a target reverberation effect parameter to obtain target reverberation binaural data, where the target reverberation effect parameter depends on a target content scene type, and the target content scene type corresponds to the mono channel data;
wherein,
the target reverberation effect parameter comprises at least one of: input level, low-frequency cut point, high-frequency cut point, early reflection time, diffusion degree, low mix ratio, reverberation time, original dry sound volume, early reflection sound volume, high-frequency attenuation point, crossover point, reverberation volume, sound field width, output sound field, and tail sound;
wherein the processing the mono channel data according to the target reverberation effect parameter to obtain the target binaural data includes:
acquiring a first three-dimensional coordinate of the sound source;
acquiring a second three-dimensional coordinate of the target object, wherein the first three-dimensional coordinate and the second three-dimensional coordinate are based on the same coordinate origin;
and generating target reverberation binaural data according to the first three-dimensional coordinate, the second three-dimensional coordinate, the single sound channel data and the target reverberation effect parameter.
12. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-8.
13. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
CN202110395644.5A 2018-09-25 2018-09-25 3D sound effect processing method and related products Expired - Fee Related CN113115175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110395644.5A CN113115175B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related products

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811115848.3A CN109327766B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related product
CN202110395644.5A CN113115175B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related products

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811115848.3A Division CN109327766B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related product

Publications (2)

Publication Number Publication Date
CN113115175A CN113115175A (en) 2021-07-13
CN113115175B true CN113115175B (en) 2022-05-10

Family

ID=65265299

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110395644.5A Expired - Fee Related CN113115175B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related products
CN201811115848.3A Active CN109327766B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related product

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811115848.3A Active CN109327766B (en) 2018-09-25 2018-09-25 3D sound effect processing method and related product

Country Status (2)

Country Link
CN (2) CN113115175B (en)
WO (1) WO2020063027A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115175B (en) * 2018-09-25 2022-05-10 Oppo广东移动通信有限公司 3D sound effect processing method and related products
CN108924705B (en) * 2018-09-25 2021-07-02 Oppo广东移动通信有限公司 3D sound effect processing method and related products
CN115696170A (en) * 2021-07-22 2023-02-03 腾讯科技(深圳)有限公司 Sound effect processing method, sound effect processing device, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005244786A (en) * 2004-02-27 2005-09-08 Cyberlink Corp Stereo sound effect generation apparatus and its generation method
FR2903562A1 (en) * 2006-07-07 2008-01-11 France Telecom BINARY SPATIALIZATION OF SOUND DATA ENCODED IN COMPRESSION.
EP2304975A2 (en) * 2008-07-31 2011-04-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Signal generation for binaural signals
CN104869524A (en) * 2014-02-26 2015-08-26 腾讯科技(深圳)有限公司 Processing method and device for sound in three-dimensional virtual scene
CN107889044A (en) * 2017-12-19 2018-04-06 维沃移动通信有限公司 The processing method and processing device of voice data
CN109327766A (en) * 2018-09-25 2019-02-12 Oppo广东移动通信有限公司 3D sound effect processing method and related product

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030007648A1 (en) * 2001-04-27 2003-01-09 Christopher Currell Virtual audio system and techniques
US8036767B2 (en) * 2006-09-20 2011-10-11 Harman International Industries, Incorporated System for extracting and changing the reverberant content of an audio input signal
KR101146841B1 (en) * 2007-10-09 2012-05-17 돌비 인터네셔널 에이비 Method and apparatus for generating a binaural audio signal
EP2175670A1 (en) * 2008-10-07 2010-04-14 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Binaural rendering of a multi-channel audio signal
US8666081B2 (en) * 2009-08-07 2014-03-04 Lg Electronics, Inc. Apparatus for processing a media signal and method thereof
BR112013017070B1 (en) * 2011-01-05 2021-03-09 Koninklijke Philips N.V AUDIO SYSTEM AND OPERATING METHOD FOR AN AUDIO SYSTEM
US8958567B2 (en) * 2011-07-07 2015-02-17 Dolby Laboratories Licensing Corporation Method and system for split client-server reverberation processing
US9319819B2 (en) * 2013-07-25 2016-04-19 Etri Binaural rendering method and apparatus for decoding multi channel audio
US10079941B2 (en) * 2014-07-07 2018-09-18 Dolby Laboratories Licensing Corporation Audio capture and render device having a visual display and user interface for use for audio conferencing
EP3172730A1 (en) * 2014-07-23 2017-05-31 PCMS Holdings, Inc. System and method for determining audio context in augmented-reality applications
EP2980789A1 (en) * 2014-07-30 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for enhancing an audio signal, sound enhancing system
CN104360994A (en) * 2014-12-04 2015-02-18 科大讯飞股份有限公司 Natural language understanding method and natural language understanding system
US9602947B2 (en) * 2015-01-30 2017-03-21 Gaudi Audio Lab, Inc. Apparatus and a method for processing audio signal to perform binaural rendering
EP3472832A4 (en) * 2016-06-17 2020-03-11 DTS, Inc. DISTANCE SWIVELING USING NEAR / FIELD PLAYBACK
CN106303789A (en) * 2016-10-31 2017-01-04 维沃移动通信有限公司 Recording method, earphone and mobile terminal
CN107016990B (en) * 2017-03-21 2018-06-05 腾讯科技(深圳)有限公司 Audio signal generation method and device
CN107281753B (en) * 2017-06-21 2020-10-23 网易(杭州)网络有限公司 Scene sound effect reverberation control method and device, storage medium and electronic equipment
CN107360494A (en) * 2017-08-03 2017-11-17 北京微视酷科技有限责任公司 3D sound effect processing method, apparatus and system, and audio system
CN108040317B (en) * 2017-12-22 2019-09-27 南京大学 Hybrid sound-field widening method for auditory perception
GB201810621D0 (en) * 2018-06-28 2018-08-15 Univ London Queen Mary Generation of audio data
CN108924705B (en) * 2018-09-25 2021-07-02 Oppo广东移动通信有限公司 3D sound effect processing method and related products
CN109104687B (en) * 2018-09-25 2021-04-13 Oppo广东移动通信有限公司 Sound effect processing method and related product

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005244786A (en) * 2004-02-27 2005-09-08 Cyberlink Corp Stereo sound effect generation apparatus and its generation method
FR2903562A1 (en) * 2006-07-07 2008-01-11 France Telecom Binaural spatialization of compression-encoded sound data
EP2304975A2 (en) * 2008-07-31 2011-04-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Signal generation for binaural signals
CN104869524A (en) * 2014-02-26 2015-08-26 腾讯科技(深圳)有限公司 Processing method and device for sound in three-dimensional virtual scene
CN107889044A (en) * 2017-12-19 2018-04-06 维沃移动通信有限公司 The processing method and processing device of voice data
CN109327766A (en) * 2018-09-25 2019-02-12 Oppo广东移动通信有限公司 3D sound effect processing method and related product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Current Status and Development of Key Technologies for 3D Audio in Virtual Reality; Zhang Yang et al.; Audio Engineering (《电声技术》); 2017-06-17 (No. 06); full text *

Also Published As

Publication number Publication date
WO2020063027A1 (en) 2020-04-02
CN109327766A (en) 2019-02-12
CN109327766B (en) 2021-04-30
CN113115175A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
US10993063B2 (en) Method for processing 3D audio effect and related products
CN109104687B (en) Sound effect processing method and related product
CN109246580B (en) 3D sound effect processing method and related products
CN109121069B (en) 3D sound effect processing method and related products
CN115250412B (en) Audio processing method, device, wireless headset and computer readable medium
CN109327795B (en) Sound effect processing method and related product
CN106911956B (en) Audio data playing method and device and mobile terminal
CN113115175B (en) 3D sound effect processing method and related products
CN108810860B (en) Audio transmission method, terminal device and master earphone
CN107889044B (en) Audio data processing method and device
CN106303789A (en) Recording method, earphone and mobile terminal
CN108319445A (en) Audio playing method and mobile terminal
CN108924705B (en) 3D sound effect processing method and related products
CN109254752A (en) 3D sound effect processing method and related product
CN114339582B (en) Dual-channel audio processing method, device and medium for generating direction sensing filter
CN109121042B (en) Voice data processing method and related products
CN109327794B (en) 3D sound effect processing method and related product
CN110428802B (en) Sound reverberation method, device, computer equipment and computer storage medium
CN108882112B (en) Audio playback control method, device, storage medium and terminal device
CN111756929A (en) Multi-screen terminal audio playing method and device, terminal equipment and storage medium
CN109243413B (en) 3D sound effect processing method and related products
JP4966705B2 (en) Mobile communication terminal and program
CN109286841B (en) Film sound effect processing method and related products
HK40003435A (en) 3D sound effect processing method and related products
CN113411447B (en) Sound channel switching method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220510