HK1103154A - Response waveform synthesis method and apparatus - Google Patents
- Publication number
- HK1103154A (application HK07111639.7A)
- Authority
- HK
- Hong Kong
Abstract
Using frequency characteristics determined for individual ones of a plurality of analyzed bands of a predetermined audio frequency range, with frequency resolution that becomes finer as the frequency of the analyzed band decreases, a synthesized band is set for each one, or each plurality, of the analyzed bands, and a time-axial response waveform is determined for each of the synthesized bands. The response waveforms of the synthesized bands are then added together to provide a response waveform for the whole of the audio frequency range.
Description
Technical Field
The present invention relates generally to a response waveform synthesizing method and apparatus for synthesizing a time-axis impulse response waveform from acoustic characteristics in a frequency domain, an acoustic design assisting apparatus and method using the response waveform synthesizing method, and a storage medium storing an acoustic design assisting program.
Background
In order to install speakers in a hall, an event site, or other venue (or acoustic facility), it has been common so far for an audio engineer or designer to select an appropriate speaker system according to the shape, size, etc. of the venue (or acoustic facility), and then design the position and direction in which the selected speaker system is to be installed, the equalizer characteristics of the speaker system to be installed, and the like.
Since the design work requires skill and heavy calculation, various acoustic design assisting apparatus and programs have been proposed, such as those disclosed in Japanese Patent Application Laid-open No. 2002-366162 (hereinafter "patent document 1") and other publications (hereinafter "patent documents 2-4"). With such acoustic design assisting apparatus and programs, it is desirable to visually display in advance, on a display device, acoustic characteristics on a surface (hereinafter "speaker sound receiving surface" or "sound receiving surface") on which seats or the like are arranged and which receives sound from speakers to be installed in an acoustic hall or other venue (or acoustic facility), in accordance with characteristics of a selected speaker system. The acoustic characteristics of the selected speaker system can thus be simulated, facilitating selection of the speaker system before audio equipment such as the speaker system is carried to the venue (i.e., the real acoustic space) such as the acoustic hall. Furthermore, it is desirable to use such apparatus and programs to simulate the acoustic tuning state of the system even after installation of the selected speaker system at the venue, so that the simulation is reflected in the acoustic tuning of the system.
The aforementioned 2002-366162 publication (i.e., patent document 1) discloses that data of impulse responses at respective positions around each speaker are obtained in advance, and sound image localization parameters of a sound receiving surface are automatically calculated from the obtained impulse response data. According to the disclosure in this document, impulse responses subjected to FFT (fast Fourier transform) processing are prestored as templates. The above-mentioned patent document 2 discloses an acoustic system design assistance apparatus that automates equipment selection and design work using a GUI (graphical user interface). The above-mentioned patent document 3 discloses an apparatus that automatically calculates desired sound image localization parameters. In addition, the above-mentioned patent document 4 discloses an acoustic adjusting apparatus that automatically adjusts acoustic frequency characteristics in a short time, using data on the difference between a sound signal output from a speaker and the sound signal picked up by a microphone at an actual live venue or meeting place.
Moreover, acoustic design assisting programs arranged in the following manner are currently in practical use. Although their application is limited to speaker systems of a planar or two-dimensional linear arrangement type, each of these programs calculates, for a predetermined sound receiving area of a sound receiving surface, the required number of speakers and the direction, horizontal balance, equalizer (EQ) parameters, and delay parameters of each speaker, given as input the sectional shape of an acoustic space such as a music hall.
With the above-mentioned conventionally known acoustic design assistance apparatus, a function is required that simulates the acoustic characteristics of the sound from the speakers as received at a given sound receiving point (e.g., a seat) and allows test listening of the simulated sound, so that the kind of sound that will be heard at the sound receiving point can be checked in advance.
In many of the above-mentioned conventionally known acoustic design assistance apparatus, the frequency characteristics are analyzed by dividing the audible frequency range into a plurality of partial frequency bands and performing FFT analysis on each partial frequency band with a different number of sampling points, so that the frequency resolution becomes finer as the frequency of the partial band decreases. However, if the frequency characteristics obtained for the plurality of partial frequency bands are independently subjected to inverse FFT transforms and the results simply added together, discontinuities or discrete points occur in the frequency characteristics, which easily causes unwanted noise and unnatural sound.
Disclosure of Invention
In view of the above-described problems, it is an object of the present invention to provide an improved response waveform synthesizing method and apparatus capable of obtaining a non-discontinuous waveform from frequency characteristics obtained from a plurality of divided partial frequency bands. Another object of the present invention is to provide a storage medium storing a program for causing a computer to execute a response waveform synthesis method and an acoustic design assistance technique using the method.
In order to achieve the above object, the present invention provides an improved response waveform synthesis method comprising: an inverse FFT transform step of setting a synthesized band for each one, or each plurality, of a plurality of analysis bands divided from a predetermined audio frequency range, using frequency characteristics determined for the individual analysis bands with a frequency resolution that becomes finer as the frequency of the analysis band decreases, and then determining a time-axis response waveform for each synthesized band; and an addition synthesis step of adding together the response waveforms of the synthesized bands, thereby providing a response waveform for the entire audio frequency range.
According to the present invention, the frequency characteristics determined for the individual analysis bands are not used directly as they are; rather, a synthesized band is set for each one, or each plurality, of the analysis bands, and a time-axis waveform is determined for each synthesized band. Therefore, the present invention can synthesize a smooth, non-discontinuous response waveform from the frequency characteristics obtained by dividing the audio frequency range into a plurality of partial (analysis) frequency bands.
Preferably, the inverse FFT transform step determines a time-axis response waveform for each synthesized band i (i = 1, 2, …, n), covering the (i-1)-th and i-th analysis bands, using the frequency characteristics determined for the individual analysis bands 0 to n divided from the audio range, and the addition synthesis step adds together the response waveforms of the synthesized bands i (i = 1, 2, …, n) determined by the inverse FFT transform step, thereby providing a response waveform for the entire audio range. Because the same analysis band i is used, in an overlapping manner, for connecting the i-th and (i+1)-th synthesized bands, the present invention can synthesize a smooth response waveform without discrete characteristics in the boundary regions between the bands, even though a response waveform is determined for each band separately.
Preferably, the inverse FFT transform step determines a response waveform for each synthesized band i (i = 1, 2, 3, …, n) by multiplying the part of the synthesized band corresponding to the (i-1)-th analysis band by a sine-squared function (sin²θ) serving as a rising portion of the window, and multiplying the part of the synthesized band corresponding to the i-th analysis band by a cosine-squared function (cos²θ) serving as a falling portion of the window. Because sin²θ + cos²θ = 1, even when the same analysis band is used in an overlapping manner for the adjacent i-th and (i+1)-th synthesized bands, the present invention can accurately reproduce the frequency characteristics of the original analysis bands by additively synthesizing the response waveforms of the individual synthesized bands.
According to another aspect of the present invention, there is provided an improved response waveform synthesizing apparatus comprising: a frequency characteristic storage section that stores frequency characteristics determined for individual ones of a plurality of analysis bands divided from a predetermined audio frequency range, the frequency characteristics having a frequency resolution that becomes finer as the frequency of the analysis band decreases; an inverse FFT transform operation section that sets a synthesized band for each one, or each plurality, of the analysis bands and then determines a time-axis response waveform for each synthesized band; and an addition synthesis section that adds together the response waveforms of the synthesized bands, thereby providing a response waveform for the entire audio range.
Preferably, the response waveform synthesizing apparatus further includes: a characteristic storage section that stores respective characteristics of a plurality of types of speakers; a speaker selection assisting section that selects selectable speaker candidates according to shape information of a space where speakers are to be placed; a speaker selection section that receives a selection operation for selecting a speaker from the selectable speaker candidates; a speaker installation angle optimizing section that determines an installation orientation of the speaker, based on the characteristics of the speaker selected via the speaker selection section, so as to minimize variation in sound level among individual positions on the sound receiving surface of the space; and a frequency characteristic calculating section that calculates a frequency characteristic at a predetermined position in the space, for each of the plurality of analysis bands divided from the audio frequency range, based on the shape information of the space and the installation orientation determined by the speaker installation angle optimizing section. Here, the frequency characteristic storage section stores the frequency characteristic calculated by the frequency characteristic calculating section for each analysis band. This arrangement makes it possible to simulate the sound produced by a designed speaker arrangement. Accordingly, by applying the response waveform synthesis technique of the present invention, an improved acoustic design assisting apparatus or method can be achieved.
Preferably, the response waveform synthesizing apparatus further includes a sound signal processing section including a filter in which the response waveform characteristic for the entire audio frequency range, provided by the addition synthesis section, has been set. A desired sound signal is input to the sound signal processing section, processed by the filter, and then output from the sound signal processing section. This arrangement allows test listening of simulated sound as it would be heard with the designed speaker arrangement.
The present invention may be constructed and implemented not only as the method invention described above but also as an apparatus invention. Also, the present invention may be arranged and implemented as a software program for execution by a processor such as a computer or DSP, and as a storage medium storing such a software program. The processor used in the present invention may be a dedicated processor with dedicated logic built into hardware, not to mention a computer or other general-purpose processor capable of running a desired software program.
An embodiment of the present invention will be described below, but it should be understood that the present invention is not limited to the embodiment, and various modifications may be made without departing from the basic concept. The scope of the invention is therefore intended to be determined solely by the appended claims.
Drawings
For a better understanding of the objects and other features of the invention, preferred embodiments will be described in more detail below with reference to the accompanying drawings, in which:
FIG. 1 is a diagram illustrating a response waveform synthesis method according to an embodiment of the present invention, particularly outlining an analysis frequency band, a synthesis frequency band, and a window function;
FIG. 2 is a flow diagram illustrating an example sequence of operations for synthesizing an impulse response waveform;
FIG. 3A is a block diagram illustrating an example internal arrangement of an acoustic design assistance apparatus according to an embodiment of the present invention;
FIG. 3B is a diagram showing a data structure of venue (or acoustic facility) basic shape data;
FIG. 4 is a flow chart showing the general operation of the acoustic design assistance apparatus;
FIG. 5 is a diagram illustrating an example GUI for setting the general shape of a space in which speakers are to be placed;
FIG. 6 is a diagram illustrating an example GUI for entering shape parameters to set the general shape of a space in which speakers are to be placed;
FIG. 7 is a diagram illustrating an example GUI for visual display for selection and configuration of speakers;
FIG. 8 is a diagram showing a data structure of a speaker data table;
FIG. 9 is a conceptual diagram illustrating an operation sequence for automatically calculating the installation angles between the speaker units of a speaker array;
FIG. 10A is a flowchart showing a process for optimizing the frequency characteristics at the axial point of a single speaker;
FIG. 10B is a diagram showing an example of equalizer parameter settings for frequency characteristic optimization;
FIG. 11 is a diagram showing an example sound receiving surface area divided by lattice points;
FIG. 12 is a flow chart showing a sequence of operations for optimizing speaker angles;
FIG. 13 is a flowchart showing the operation of the acoustic design assistance apparatus when displaying the GUI screens of FIGS. 5 and 6; and
FIG. 14 is a flowchart showing the operation of the acoustic design assistance apparatus when the speaker selection screen of FIG. 7 is displayed.
Detailed Description
First, a response waveform synthesis method according to an embodiment of the present invention will be described. FIG. 1 illustrates the method, which generally comprises the steps of: dividing a predetermined audio frequency range (e.g., 0 Hz - 22050 Hz) into a plurality of partial frequency bands (hereinafter "analysis bands"), and then synthesizing a time-domain impulse response waveform over the entire audio frequency range on the basis of given frequency characteristics determined for the individual analysis bands. In the example of FIG. 1, the sampling frequency of the audio signal processing system is assumed to be 44.1 kHz, so the upper limit of the audio frequency range is half the 44.1 kHz sampling frequency, i.e., 22050 Hz. If the sampling frequency of the audio signal processing system differs, the predetermined audio frequency range differs accordingly.
In this case, the audio frequency range of 0 Hz - 22050 Hz is divided on an octave-by-octave basis, with 1000 Hz serving as the reference for the octave division, into nine octave-wide analysis bands; the lowest and highest analysis bands (i.e., analysis band 0 and analysis band 10) are each narrower than an octave (hereinafter, such a band narrower than an octave is referred to as a "fractional band"). Strictly speaking, therefore, the audio frequency range of 0 Hz - 22050 Hz is divided into a total of 11 analysis bands, from analysis band 0 to analysis band 10, as shown in "Table 1".
[ Table 1]
| Frequency band name | Low end frequency | High end frequency | FFT size | Frequency resolution |
| AB(n) | FL(n) (Hz) | FH(n) (Hz) | FS (points) | FA(n) (Hz/point) |
| Analysis band 0 | 0 | 31.25 | 65536 | 0.672912598 |
| Analysis band 1 | 31.25 | 62.5 | 65536 | 0.672912598 |
| Analysis band 2 | 62.5 | 125 | 32768 | 1.345825195 |
| Analysis band 3 | 125 | 250 | 16384 | 2.691650391 |
| Analysis band 4 | 250 | 500 | 8192 | 5.383300781 |
| Analysis band 5 | 500 | 1000 | 4096 | 10.76660156 |
| Analysis band 6 | 1000 | 2000 | 2048 | 21.53320313 |
| Analysis band 7 | 2000 | 4000 | 1024 | 43.06640625 |
| Analysis band 8 | 4000 | 8000 | 512 | 86.1328125 |
| Analysis band 9 | 8000 | 16000 | 256 | 172.265625 |
| Analysis band 10 | 16000 | 22050 | 256 | 172.265625 |
The analysis bands are in the above-mentioned octave relationships, with boundary frequencies of 31.25 Hz, 62.5 Hz, 125 Hz, 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 8000 Hz, and 16000 Hz, and the "FFT size" increases as the frequency of the analysis band decreases. Here, the "FFT size" refers to the number of time-domain sample data used in the FFT analysis.
More specifically, in the example of FIG. 1, the FFT size doubles for each one-octave decrease in frequency. As indicated in Table 1 above, the FFT size of analysis band 9 (8000-16000 Hz) is 256 points, while the FFT size of analysis band 8 (4000-8000 Hz) is 512 points, i.e., twice 256 points. As the analysis bands decrease octave by octave, the FFT size successively doubles to 1024 points, 2048 points, 4096 points, and so on. The FFT size of analysis band 1, the lowest octave-wide band, is 65536 points.
With this arrangement, the frequency characteristics of the lower bands can be analyzed with finer frequency resolution, whereas the frequency characteristics of the higher bands are analyzed with a resolution that grows coarser in proportion to frequency.
Note that analysis band 0 (0 Hz - 31.25 Hz), i.e., the band below analysis band 1, has the same FFT size as analysis band 1. Likewise, analysis band 10 (i.e., the band above analysis band 9) has the same FFT size as analysis band 9.
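The band layout of Table 1 can be reproduced programmatically. The following is a minimal Python sketch under the assumptions stated above (44.1 kHz sampling rate, FFT size doubling per downward octave step, fractional bands reusing their neighbors' sizes); the variable names are illustrative only.

```python
# Build (low, high, fft_size, resolution) for analysis bands 0-10.
FS = 44100            # sampling frequency of the audio system (Hz)
NYQUIST = FS / 2      # 22050 Hz, the upper limit of the audio range

edges = [31.25 * 2 ** k for k in range(10)]     # octave boundaries 31.25 ... 16000 Hz
lows = [0.0] + edges                            # analysis band 0 starts at 0 Hz
highs = edges + [NYQUIST]                       # analysis band 10 ends at 22050 Hz

# FFT size doubles per octave step downward; fractional bands 0 and 10
# reuse the sizes of bands 1 and 9 respectively.
fft_sizes = [65536, 65536] + [65536 >> k for k in range(1, 9)] + [256]

# Frequency resolution of each band is FS / FFT size (Hz per point).
bands = [(lo, hi, size, FS / size) for lo, hi, size in zip(lows, highs, fft_sizes)]
```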
Now, a process for synthesizing an impulse response waveform from the frequency characteristics obtained for the divided analysis bands will be described with reference to FIG. 1 and Table 2. The frequency characteristics of the plurality of analysis bands on which the impulse waveform synthesis of this embodiment is to be performed may be ones previously obtained by any of the above-described prior art techniques, provided that the frequency characteristics are determined for the individual analysis bands of the audio range with a frequency resolution that becomes finer as the frequency of the analysis band decreases. For example, because a technique of prestoring FFT-transformed impulse responses as templates is known from patent document 1 (i.e., the above-mentioned 2002-366162 publication), frequency characteristics obtained with such templates may be used. Alternatively, frequency characteristics appropriately generated by the user himself or herself may be used for the impulse waveform synthesis of this embodiment.
According to the present embodiment, the impulse response waveform is synthesized by generating the frequency characteristics of 10 synthesis bands, each combining the frequency characteristics of two adjacent ones of the above-described 11 analysis bands, and then performing an inverse FFT transform on the frequency characteristics of each synthesis band. Each synthesized band overlaps the immediately adjacent upper and lower synthesized bands; the values of the frequency characteristic of one of the adjacent synthesis bands are multiplied by a window function sin²θ and the values of the frequency characteristic of the other by a window function cos²θ, so that the synthesized bands are connected to each other with a smooth transition (i.e., crossfade connection). Because sin²θ + cos²θ = 1, a smooth impulse response waveform in which the initial frequency characteristics are reproduced can be synthesized by additively combining the time-axis impulse response waveforms calculated by performing an inverse FFT transform on the frequency characteristic of each synthesized band.
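The crossfade property described above can be checked numerically. The sketch below, with a hypothetical helper `theta`, evaluates the window angle on the logarithmic frequency axis over the 31.25-62.5 Hz overlap of synthesis bands 1 and 2, and confirms that the falling cos² window of one band and the rising sin² window of the other always sum to unity.

```python
import math

def theta(f, f_lo, f_hi):
    # Window angle on the logarithmic frequency axis:
    # 0 at the overlap's low edge, pi/2 at its high edge.
    return (math.pi / 2) * (math.log10(f / f_lo) / math.log10(f_hi / f_lo))

# 31.25-62.5 Hz overlap shared by synthesis bands 1 and 2: band 1 applies
# cos^2(theta) (falling envelope), band 2 applies sin^2(theta) (rising envelope).
for f in (31.25, 40.0, 50.0, 62.5):
    t = theta(f, 31.25, 62.5)
    # The two windows sum to unity at every frequency in the overlap.
    assert abs(math.cos(t) ** 2 + math.sin(t) ** 2 - 1.0) < 1e-12
```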
[ Table 2]
| Frequency band number | Low end frequency (Hz) | High end frequency (Hz) | Number of sampling points | Low-side band (Hz) | High-side band (Hz) |
| Synthesis band 1 | 0 | 62.5 | 65536 | Flat part: 0-31.25 | Falling part: 31.25-62.5 |
| Synthesis band 2 | 31.25 | 125 | 32768 | Rising part: 31.25-62.5 | Falling part: 62.5-125 |
| Synthesis band 3 | 62.5 | 250 | 16384 | Rising part: 62.5-125 | Falling part: 125-250 |
| Synthesis band 4 | 125 | 500 | 8192 | Rising part: 125-250 | Falling part: 250-500 |
| Synthesis band 5 | 250 | 1000 | 4096 | Rising part: 250-500 | Falling part: 500-1000 |
| Synthesis band 6 | 500 | 2000 | 2048 | Rising part: 500-1000 | Falling part: 1000-2000 |
| Synthesis band 7 | 1000 | 4000 | 1024 | Rising part: 1000-2000 | Falling part: 2000-4000 |
| Synthesis band 8 | 2000 | 8000 | 512 | Rising part: 2000-4000 | Falling part: 4000-8000 |
| Synthesis band 9 | 4000 | 16000 | 256 | Rising part: 4000-8000 | Falling part: 8000-16000 |
| Synthesis band 10 | 8000 | 22050 | 256 | Rising part: 8000-16000 | Flat part: 16000-22050 |
The individual synthesized bands cover the ranges shown in FIG. 1 and Table 2. Synthesis band 1 and synthesis band 2 overlap each other in the 31.25 Hz - 62.5 Hz region. The real and imaginary parts of the frequency characteristic in the overlapping "31.25 Hz - 62.5 Hz" region, located in the second half of synthesis band 1, are each multiplied by the window function cos²θ and thus given a falling envelope. On the other hand, the real and imaginary parts of the frequency characteristic in the overlapping "31.25-62.5 Hz" region located in the first half of synthesis band 2, corresponding to the second half of synthesis band 1, are each multiplied by the window function sin²θ and thus given a rising envelope. The "0 Hz - 31.25 Hz" region of synthesis band 1 is a flat portion, and the result of the FFT transform using 65536 sample data is used there directly.
Because the inverse FFT transform is an arithmetic operation on discrete values, it is performed on synthesis band 1, synthesis band 2, and so on using the frequency-axis discrete sample data described below. Also, because the analysis bands and synthesis bands are arranged at equal intervals on the common logarithmic axis, as shown in FIG. 1, the window functions are defined so as to form sine-squared and cosine-squared waveforms on that logarithmic axis.
[ Synthesis band 1]
(1) The flat portion ranges from 0 Hz to 31.25 Hz; the FFT size is 65536 points, the sample numbers j of analysis band 0 are 1, 2, …, 45, 46, and the sampling interval is about 0.67 Hz. The values of the sample data are used as they are.
(2) The falling portion ranges from 31.25 Hz to 62.5 Hz; the FFT size is 65536 points, the sample numbers j of analysis band 1 are 47, 48, …, 91, 92, and the sampling interval is about 0.67 Hz.
Real[j] = Real[j] * cos²(θ)
Img[j] = Img[j] * cos²(θ)
θ = PAI/2 * [{log10(j*ΔFreq[1]) - log10(31.25)} / {log10(62.5) - log10(31.25)}], where PAI is the circular constant π.
ΔFreq[1] = 44100/65536
That is, in the first half (i.e., the low-side band) of synthesis band 1, 46 sample data are obtained by sampling the frequency characteristic of analysis band 0, ranging from 0 Hz to 31.25 Hz, at intervals of about 0.67 Hz, and their envelope is kept flat. For convenience, the numbers 1, 2, …, 46 are assigned as sample numbers j to the 46 sample data thus obtained. In the second half (i.e., the high-side band) of synthesis band 1, 46 sample data are obtained by sampling the frequency characteristic of analysis band 1, ranging from 31.25 Hz to 62.5 Hz, and these sample data are given a falling envelope. For convenience, the numbers 47, 48, …, 92 are assigned as sample numbers j to the 46 sample data of this second half (i.e., the high-side band). The second half of synthesis band 1 (i.e., the high-side band) is the band overlapping the first half of the next synthesis band 2 (i.e., its low-side band).
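As a check of the indexing above, the falling-portion gains of synthesis band 1 can be computed directly. The sketch below uses a hypothetical helper `falling_gain`; it confirms that the cos² envelope is near 1 at j = 47 (just above 31.25 Hz) and near 0 at j = 92 (just below 62.5 Hz).

```python
import math

FS = 44100
FFT_SIZE = 65536
DFREQ = FS / FFT_SIZE     # ~0.67 Hz spacing of the frequency-axis samples

def falling_gain(j):
    # cos^2 envelope applied to sample j of the falling portion (31.25-62.5 Hz),
    # with the angle computed on the logarithmic frequency axis.
    theta = (math.pi / 2) * ((math.log10(j * DFREQ) - math.log10(31.25))
                             / (math.log10(62.5) - math.log10(31.25)))
    return math.cos(theta) ** 2

# Gains for the 46 falling-portion samples, j = 47..92.
gains = {j: falling_gain(j) for j in range(47, 93)}
```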
[ Synthesis band 2]
(1) The rising portion ranges from 31.25 Hz to 62.5 Hz; the FFT size is 65536 points, and the sample numbers j of analysis band 1 are 48, 50, …, 90, 92 (every other one of the 46 sample data used in synthesis band 1 is used, giving a total of 23 sample data, so the sampling interval is about 1.34 Hz).
Real[j] = Real[j] * sin²(θ)
Img[j] = Img[j] * sin²(θ)
θ=PAI/2*[{log10(j*ΔFreq[1])-log10(31.25)}/{log10(62.5)-log10(31.25)}]
ΔFreq[1]=44100/65536
(2) The falling portion ranges from 62.5 Hz to 125 Hz; the FFT size is 32768 points, the sample numbers j of analysis band 2 are 47, 48, …, 91, 92, and the sampling interval is about 1.34 Hz.
Because the sampling interval (in frequency) of synthesis band 2 is twice that of synthesis band 1, even though sample data with the same sample numbers as those used for synthesis band 1 are used here, the waveform obtained by the inverse FFT transform has twice the frequency of that of synthesis band 1.
Real[j] = Real[j] * cos²(θ)
Img[j] = Img[j] * cos²(θ)
θ=PAI/2*[{log10(j*ΔFreq[2])-log10(62.5)}/{log10(125)-log10(62.5)}]
ΔFreq[2]=44100/32768
That is, in the first half (i.e., the low-side band) of synthesis band 2, 23 sample data are obtained by sampling the frequency characteristic of analysis band 1, ranging from 31.25 Hz to 62.5 Hz, at intervals of about 1.34 Hz, and the sample data thus obtained are given a rising envelope. For convenience, if the same numbering as used in synthesis band 1 is kept for the sample numbers j, these sample data take the even sample numbers 48, 50, …, 90, 92. In the second half (i.e., the high-side band) of synthesis band 2, 46 sample data are obtained by sampling the frequency characteristic of analysis band 2, ranging from 62.5 Hz to 125 Hz, and these sample data are given a falling envelope. Again for convenience, the numbers 47, 48, …, 92 are assigned as sample numbers j to the 46 sample data thus obtained. The second half of synthesis band 2 (i.e., the high-side band) is the band overlapping the first half of the next synthesis band 3 (its low-side band).
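The every-other-sample reuse described above can be expressed compactly; `full` and `every_other` below are illustrative names, not identifiers from the patent.

```python
# Sample numbers j = 47..92 index the ~0.67 Hz grid of analysis band 1 in
# synthesis band 1; synthesis band 2 runs on a ~1.34 Hz grid (twice as coarse),
# so its rising portion keeps only every other sample: j = 48, 50, ..., 92.
full = list(range(47, 93))       # the 46 samples used by synthesis band 1
every_other = full[1::2]         # the 23 samples reused by synthesis band 2
```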
In a similar manner to synthesis band 2 described above, for each of synthesis bands 3 to 9, 23 sample data are obtained from the frequency characteristic of the analysis band serving as the first half (low-side band) and 46 sample data from the frequency characteristic of the analysis band serving as the second half (high-side band), so that the first and second halves share the same sampling interval (in frequency). The sample data of the first half (low-side band) are then given a rising envelope, and the sample data of the second half (high-side band) a falling envelope. The FFT size, sampling interval (frequency), θ calculation, and so on, however, differ among these bands; the following paragraphs note only the differences.
[ Synthesis band 3]
The sampling interval is 2.69 Hz.
(1) The rising portion ranges from 62.5Hz-125 Hz. The FFT size is 32768, with every other sample used.
θ=PAI/2*[{log10(j*ΔFreq[2])-log10(62.5)}/{log10(125)-log10(62.5)}]
ΔFreq[2]=44100/32768
(2) The falling portion ranges from 125Hz to 250 Hz. The FFT size is 16384.
θ=PAI/2*[{log10(j*ΔFreq[3])-log10(125)}/{log10(250)-log10(125)}]
ΔFreq[3]=44100/16384
[ Synthesis band 4]
The sampling interval is 5.38 Hz.
(1) The rising portion ranges from 125Hz-250 Hz. The FFT size is 16384, with every other sample used.
θ=PAI/2*[{log10(j*ΔFreq[3])-log10(125)}/{log10(250)-log10(125)}]
ΔFreq[3]=44100/16384
(2) The falling portion ranges from 250Hz to 500 Hz. The FFT size is 8192.
θ=PAI/2*[{log10(j*ΔFreq[4])-log10(250)}/{log10(500)-log10(250)}]
ΔFreq[4]=44100/8192
[ Synthesis band 5]
The sampling interval is 10.76 Hz.
(1) The rising portion ranges from 250Hz-500 Hz. The FFT size is 8192, with every other sample used.
θ=PAI/2*[{log10(j*ΔFreq[4])-log10(250)}/{log10(500)-log10(250)}]
ΔFreq[4]=44100/8192
(2) The falling portion ranges from 500Hz to 1000 Hz. The FFT size is 4096.
θ=PAI/2*[{log10(j*ΔFreq[5])-log10(500)}/{log10(1000)-log10(500)}]
ΔFreq[5]=44100/4096
[ Synthesis band 6]
The sampling interval was 21.53 Hz.
(1) The rising portion ranges from 500Hz to 1000 Hz. The FFT size is 4096, with every other sample used.
θ=PAI/2*[{log10(j*ΔFreq[5])-log10(500)}/{log10(1000)-log10(500)}]
ΔFreq[5]=44100/4096
(2) The falling portion ranges from 1000Hz to 2000 Hz. The FFT size is 2048.
θ=PAI/2*[{log10(j*ΔFreq[6])-log10(1000)}/{log10(2000)-log10(1000)}]
ΔFreq[6]=44100/2048
[ Synthesis band 7]
The sampling interval was 43.07 Hz.
(1) The rising portion ranges from 1000Hz to 2000 Hz. The FFT size is 2048 and every other sample is used.
θ=PAI/2*[{log10(j*ΔFreq[6])-log10(1000)}/{log10(2000)-log10(1000)}]
ΔFreq[6]=44100/2048
(2) The falling portion ranges from 2000Hz to 4000 Hz. The FFT size is 1024.
θ=PAI/2*[{log10(j*ΔFreq[7])-log10(2000)}/{log10(4000)-log10(2000)}]
ΔFreq[7]=44100/1024
[ Synthesis band 8]
The sampling interval was 86.13 Hz.
(1) The rising portion ranges from 2000Hz to 4000 Hz. The FFT size is 1024, while every other sample is used.
θ=PAI/2*[{log10(j*ΔFreq[7])-log10(2000)}/{log10(4000)-log10(2000)}]
ΔFreq[7]=44100/1024
(2) The falling portion ranges from 4000Hz to 8000 Hz. The FFT size is 512.
θ=PAI/2*[{log10(j*ΔFreq[8])-log10(4000)}/{log10(8000)-log10(4000)}]
ΔFreq[8]=44100/512
[ Synthesis band 9]
The sampling interval is 172.27 Hz.
(1) The rising portion ranges from 4000 Hz to 8000 Hz. The FFT size is 512, with every other sample used.
θ=PAI/2*[{log10(j*ΔFreq[8])-log10(4000)}/{log10(8000)-log10(4000)}]
ΔFreq[8]=44100/512
(2) The falling portion ranges from 8000 Hz to 16000 Hz. The FFT size is 256.
θ=PAI/2*[{log10(j*ΔFreq[9])-log10(8000)}/{log10(16000)-log10(8000)}]
ΔFreq[9]=44100/256
In the next highest frequency synthesis band 10, there is no overlapping band on its high side, so the upper half forms a flat portion.
[ Synthesis band 10]
The sampling interval is 172.27 Hz. The FFT size is 256.
(1) The rising portion ranges from 8000 Hz to 16000 Hz, using the sample numbers j = 48, 49, 50, …, 90, 91, 92 of analysis band 9.
Real[j]=Real[j]*sin²(θ)
Img[j]=Img[j]*sin²(θ)
θ=PAI/2*[{log10(j*ΔFreq[9])-log10(8000)}/{log10(16000)-log10(8000)}]
ΔFreq[9]=44100/256
(2) The flat portion ranges from 16000 Hz to 22050 Hz, the FFT size is 256, and the sample numbers are j = 93, 94, …, 128, 129. These values are used as they are.
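Purely for illustration, the sin²/cos² cross-fade defined by the formulas above can be sketched as follows (PAI in the formulas is π; the function names here are invented):

```python
import math

def rising_weight(freq_hz, f_lo, f_hi):
    """sin^2 window applied to the rising (low-side) overlap portion;
    theta sweeps 0..PAI/2 on a log-frequency scale, as in the formulas."""
    theta = (math.pi / 2) * ((math.log10(freq_hz) - math.log10(f_lo))
                             / (math.log10(f_hi) - math.log10(f_lo)))
    return math.sin(theta) ** 2

def falling_weight(freq_hz, f_lo, f_hi):
    """cos^2 window applied to the falling (high-side) overlap portion."""
    theta = (math.pi / 2) * ((math.log10(freq_hz) - math.log10(f_lo))
                             / (math.log10(f_hi) - math.log10(f_lo)))
    return math.cos(theta) ** 2

# At any frequency inside an overlap band the two weights sum to 1, so
# adjacent synthesis bands cross-fade without a level error at the seam:
f = 300.0  # somewhere inside the 250 Hz - 500 Hz overlap
print(round(rising_weight(f, 250, 500) + falling_weight(f, 250, 500), 10))  # 1.0
```

This complementary-power property is what lets the overlapping halves of adjacent synthesis bands be added without coloration at the band boundary.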
In the present embodiment, an inverse FFT arithmetic operation is performed, for each of the aforementioned ten synthesis bands, on the sample data (along the frequency axis) of the frequency characteristic of that band, thereby obtaining the time-axis response waveform of the individual synthesis band. The response waveforms of these synthesis bands are then additively synthesized to obtain the impulse response waveform of the entire audio range.
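The per-band inverse FFT and additive synthesis just described can be outlined as below; this is a sketch with invented names and dummy spectra, not the patented data format:

```python
import numpy as np

FS = 44100  # Hz, sampling rate assumed in the description

def synthesize(band_spectra):
    """Inverse-FFT each synthesis band and additively combine the results.
    Each entry of `band_spectra` holds the one-sided complex bins
    (fft_size//2 + 1 values) of one band, already windowed as described.
    The bands use different FFT sizes, but every inverse transform yields
    samples at the same rate FS, so the shorter waveforms are simply
    zero-padded to the longest one and summed."""
    waves = [np.fft.irfft(s, n=2 * (len(s) - 1)) for s in band_spectra]
    n = max(len(w) for w in waves)
    out = np.zeros(n)
    for w in waves:
        out[:len(w)] += w
    return out

# Hypothetical input: two bands with the FFT sizes of synthesis bands 9
# and 10 above (512 and 256); the bin contents are dummies.
bands = [np.zeros(257, dtype=complex), np.zeros(129, dtype=complex)]
bands[0][60] = 1.0
bands[1][100] = 1.0
ir = synthesize(bands)
print(len(ir))  # 512
```

The point of the varying FFT sizes is that the low bands get fine frequency resolution without forcing a huge transform on the high bands.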
Fig. 2 is a flowchart showing an example operation sequence for obtaining an impulse response waveform of a single synthesis band from the frequency characteristics of the corresponding analysis bands, and then an impulse response waveform of the entire audio frequency range. More specifically, the flowchart shows a process for determining with what response characteristic the sound output from the individual speaker units constituting a speaker array appears at a specific sound receiving point.
First, the characteristics of one of the plurality of speaker units are read out at step s201. These characteristics are determined in advance for each analysis frequency band by convolving the characteristics of the equalizer with the frequency characteristics of the speaker unit installed at a predetermined orientation with respect to the direction toward the sound receiving point.
Next, any one of the synthesis bands 1-10 is selected at step s202, and the center frequency of the selected synthesis band (i.e., the frequency at the boundary between the two adjacent analysis bands corresponding to the selected synthesis band) is determined. Subsequently, except for the portion taken from analysis band 0, the low-side frequency band (rising portion) below the determined center frequency (31.25 Hz, 62.5 Hz, 125 Hz, …, or 16000 Hz) is multiplied by the window function sin²θ (step s203), and every other data sample of the multiplied low-side band is selected (step s204). On the other hand, except for the portion taken from analysis band 10, the high-side frequency band (falling portion) above the determined center frequency is multiplied by the window function cos²θ (step s205).
Then, an inverse FFT transform arithmetic operation is performed on the data of the thus obtained synthesized band (step s206), thereby obtaining a time-axis impulse response waveform of the band.
It is determined at step s208 whether the operations of steps s202 to s207 have been completed for all synthesized bands. The operations of steps s202 to s207 are repeated until "yes" is determined at step s208. Once step s208 determines "yes", the impulse response waveforms obtained for all the synthesized bands are additively synthesized, resulting in an impulse response waveform for the entire audio range (step s209). Then, the head-related transfer function is convolved with the impulse response waveform of the entire audio range (steps s209a and s210). Then, a delay based on the distance between the speaker and the sound receiving point is given to the impulse response waveform (step s211), thereby providing impulse responses for the two (i.e., left and right) channels from the speaker unit to the sound stage of the listener located at the sound receiving point.
It is determined at step s212 whether the operations of steps s201 to s211 have been completed for all the speaker units. The operations of steps s201 to s211 are repeated until "yes" is determined at step s212. Once step s212 determines "yes", the impulse responses determined for all the speaker units are added together (step s213) to provide impulse responses from the speaker array for the two (i.e., left and right) channels in the listener's sound stage.
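Steps s209a-s213 of this sequence might be outlined as follows; the array shapes, the 343 m/s speed of sound, and all names are assumptions for illustration:

```python
import numpy as np

FS = 44100        # Hz, sampling rate assumed throughout
C_SOUND = 343.0   # m/s, nominal speed of sound (not specified in the text)

def binaural_ir(unit_irs, hrtf_lr, distances, n_out=8192):
    """For each speaker unit: convolve its full-range impulse response
    with a left/right HRTF pair (steps s209a/s210), delay it by the
    propagation time for its distance (step s211), and accumulate the
    results over all units (step s213)."""
    out = np.zeros((2, n_out))
    for ir, (h_l, h_r), d in zip(unit_irs, hrtf_lr, distances):
        delay = int(round(d / C_SOUND * FS))
        for ch, h in enumerate((h_l, h_r)):
            x = np.convolve(ir, h)
            n = min(len(x), n_out - delay)
            if n > 0:
                out[ch, delay:delay + n] += x[:n]
    return out

# Dummy data: one unit with a unit-impulse response, trivial HRTFs, and a
# distance of 3.43 m (a 10 ms propagation delay).
ir = binaural_ir([np.array([1.0])],
                 [(np.array([1.0]), np.array([0.5]))],
                 [3.43])
print(int(ir[0].argmax()))  # 441
```

The 441-sample peak position corresponds to the 10 ms propagation delay at the 44.1 kHz rate.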
The acoustic design assisting apparatus of the invention constitutes a sound stage simulator using the impulse responses determined in this way as filter coefficients. That is, the acoustic design assisting apparatus of the present invention constitutes a filter that uses the impulse response as its filter coefficients, performs filter processing on a musical sound or tone (dry source), and outputs the processed tone to headphones. Thus, the designer can know in advance, through test listening, what kind of sound will be output by the designed speaker system.
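The filtering described here amounts to FIR-convolving a dry source with the simulated binaural impulse response. A minimal sketch with invented names (direct convolution; a practical implementation would use block-wise FFT convolution for long responses):

```python
import numpy as np

def auralize(dry, ir_lr):
    """Filter a dry (anechoic) source through a simulated binaural
    impulse response so the design can be auditioned on headphones.
    `ir_lr` holds the left and right impulse responses."""
    left = np.convolve(dry, ir_lr[0])
    right = np.convolve(dry, ir_lr[1])
    return np.stack([left, right])

# Dummy data: a 0.1 s, 440 Hz tone and trivial one-tap "impulse responses".
tone = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)
out = auralize(tone, np.array([[1.0], [0.5]]))
print(out.shape)  # (2, 4410)
```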
Now, an acoustic design assistance apparatus to which the above-described response waveform synthesis method is applied will be explained. This acoustic design assistance apparatus 1 is intended to facilitate design such as selection and setting of devices in the case where a speaker system (sound reinforcement system) is installed in a venue (or acoustic facility) such as a concert hall or a conference hall. When outputting sound in a venue using the designed speaker system, the acoustic design assisting apparatus 1 is used to simulate a sound field formed in the venue, visually display the simulation result on a display, and audibly output the simulation result through headphones.
Fig. 3A is a block diagram showing a general setup example of an acoustic design assistance apparatus. As shown in the figure, the acoustic design assistance apparatus 1 includes a display 101, an operation section 102, a CPU 103, an external storage device 104 such as a hard disk drive (HDD), a memory 105, and a sound output device 106. The operation section 102, hard disk (HDD) 104, memory 105, and sound output device 106 are connected to the CPU 103.
The display device 101 is, for example, in the form of a general-purpose liquid crystal display that displays screens for assisting input of various setting conditions (see fig. 5 to 7).
The operation section 102 receives inputs of various setting conditions, an input indicating a sound stage simulation, an input indicating optimization of a speaker layout, and selection of a simulation result display style.
The CPU 103 executes a program stored in the HDD 104. In response to an instruction given via the operation section 102, the CPU 103 executes a corresponding one of the programs in combination with the other hardware resources of the acoustic design assistance apparatus 1.
The HDD 104 stores the acoustic design assisting program 10, speaker characteristic data (hereinafter referred to as "SP data") 107 obtained by FFT conversion of an impulse response or the like around the speaker, equalizer data 108 as data suitable for an equalizer of the speaker, a speaker data table 109, and a meeting place basic shape data table 110.
The memory 105 has a region set to execute the acoustic design assistance program 10 and a region set to temporarily store (buffer) data generated in the acoustic design assistance processing. SP data 107, equalizer data 108, and the like are stored (buffered) in the memory 105. Note that the equalizer data 108 is data obtained by arithmetically operating the settings of the equalizer, which is intended to adjust the frequency characteristics of the sound signal output from the speaker array according to a desired design.
The sound output device 106 generates a sound signal from the sound source data stored in the HDD 104. The sound output device 106 includes a DSP (digital signal processor) and a D/a converter, and has a signal processing function 1061 for equalizing, delaying, and the like, a sound signal. For example, in the case where the sound field in a predetermined position of the sound receiving surface is audibly determined as a result of simulation in the acoustic design assisting apparatus 1 through headphones, speakers, or the like, a sound signal subjected to signal processing is output to the headphones, speakers, or the like.
Note that the sound output device 106 need not be in the form of hardware, but may be implemented by software. The acoustic design assistance apparatus 1 may further include a sound signal input interface so that an external input sound signal can be output from the sound output device 106.
Here, the SP data 107 stored in the hard disk 104 is frequency characteristic data of a variety of speakers selectable in the acoustic design assistance apparatus 1. As described above with respect to the response waveform synthesis method, the audio frequency range of 0 Hz-22050 Hz is divided into analysis bands on an octave-by-octave basis, with 1000 Hz serving as a standard unit for the octave-by-octave division, and the data of the individual analysis bands are stored as the SP data 107 in the hard disk 104. The divided bands and FFT sizes of the individual analysis bands are as shown in "table 1" above. At the time of acoustic design, the SP data on the one direction corresponding to a desired sound receiving point from a speaker selected by the user is read out from the HDD 104 and stored in the memory 105. For convenience, the SP data stored in the memory 105 is denoted by reference numeral 107B, while the SP data for all the directions from a single speaker, stored in the HDD 104, is denoted by reference numeral 107A.
The speaker data table 109 serves as a database for selecting speakers suitable for a specific venue (or acoustic facility) when the shape and size of the venue have been selected. As an example, data of speaker arrays each including a plurality of speaker units is stored in the speaker data table 109. However, the acoustic design assistance apparatus 1 of the present invention is not necessarily limited to applications in which a speaker array is used.
The meeting place (or acoustic facility) basic shape data table 110 includes a set of meeting place (or acoustic facility) shape names, coordinate data representing the size of the meeting place, and an image bitmap representing the shape inside the meeting place. The coordinate data also includes data for setting the spatial shape in the venue.
Fig. 4 is a flowchart showing a general operation sequence example of the design assistance process performed by the acoustic design assistance apparatus 1. The acoustic design assistance apparatus 1 performs three main steps ST1 to ST3. In step ST1, simulation conditions are set. In the next step ST2, parameter data representing the characteristics for displaying the simulation result is calculated in accordance with the set simulation conditions. At this time, the SP data 107B for one specific direction is selected from all the direction-specific SP data 107A stored in the HDD 104, and the equalizer data 108 is calculated.
In step ST3, the simulation result of the acoustic design assistance apparatus 1 is output to the display device 101 or the headphones. The above-described response waveform synthesis method is applied when the simulation result is acoustically output to the headphones.
In the simulation condition setting operation of step ST1, various conditions necessary for the simulation are set at steps ST11 to ST14. Specifically, information of the space in which the speakers are installed, for example the shape of the meeting place (hereinafter simply referred to as "spatial shape"), is set in step ST11. More specifically, the general shape of the space is selected, and details of the shape are numerically input (see figs. 5 and 6). In step ST12, a speaker is selected, and the mounting position of the selected speaker is set. In step ST13, the installation conditions of the individual selected speakers are set; an installation condition is, for example, a mounting angle between the speaker units within the speaker array (hereinafter also referred to as "internal speaker unit mounting angle"). In the next step ST14, simulation conditions are set, such as whether or not interference between the speaker units is to be considered and how finely the grid points (see fig. 11) are arranged on the sound receiving surface.
Once all the conditions are set in the condition setting operation at step ST1, simulation is performed at step ST2, and the simulation result is displayed on the display device 101 or output via headphones at step ST3.
Heretofore, it has been common for a designer or engineer to find the optimum design by repeating the operations of steps ST1 through ST3 in a trial-and-error manner. In the acoustic design assistance apparatus 1 of the present invention, however, the setting data of the speaker installation angles and characteristics are automatically optimized in step ST15 in accordance with the information of the spatial shape set in step ST11, so that the setting is assisted.
The automatic optimization and assist operation of step ST15 includes steps ST16 and ST17. In step ST16, from among the speakers registered in the speaker data table, speaker candidates for use in the meeting place can be displayed on the display device 101. When a speaker is selected via the operation section 102, a possible scene for placing the selected speaker in the space selected in step ST11 is displayed on the display device 101.
In step ST17, the optimum combination of the installed speaker array angles (horizontal direction and vertical direction) and the optimum angles between the speaker units (i.e., the internal speaker unit installation angles) are automatically calculated. Here, the angle of the speaker array, which serves as a representative value for all speaker azimuth axes, indicates the angles, in the horizontal and vertical directions, of the azimuth axis of the desired reference speaker unit. The mounting angle between the speaker units means the angle (opening angle) between adjacent speaker units.
The following paragraphs describe steps ST11 to ST17, included in the condition setting operation of step ST1, in more detail with reference to fig. 5 and the subsequent figures. The reference numbers in the following figures generally correspond to the step numbers shown in fig. 4.
First, the spatial shape setting operation of step ST11 is described with reference to figs. 5 and 6. Fig. 5 shows an example of a GUI (graphical user interface) for setting the general shape of the space where the speakers are to be placed. The acoustic design assistance apparatus 1 displays a spatial shape setting screen 11A, as shown in the figure, on the display device 101 to allow the designer to select the outline of the space where the speakers are to be installed. A shape selection frame 11C is displayed near the upper end of the spatial shape setting screen 11A to allow the designer to select either a fan shape or a box shape. Once the designer selects the fan shape by selecting the mark "sector" in the shape selection frame 11C with a mouse or the like (not shown), a plurality of shape examples of fan-shaped acoustic facilities and the like are displayed in the detailed shape selection frame 11D. The designer is thus allowed to select a desired one of the shape examples displayed in the detailed shape selection frame 11D.
Once the designer selects one of the fan examples displayed on the detailed shape selection frame 11D, the display screen on the display device 101 is switched from the spatial shape setting screen 11A of fig. 5 to the spatial shape setting screen 11B of fig. 6.
On the spatial shape setting screen 11B, the shape of the selected acoustic facility is displayed as a drawing 11F in a spatial shape display frame 11E. The spatial shape setting screen 11B is displayed by the CPU 103 reading out meeting place basic shape data of the corresponding meeting place from the meeting place basic shape data table 110 stored in the HDD 104. The designer enters on the screen shape parameters that determine the size of the space in which the loudspeakers are to be placed or mounted.
On the space shape setting screen 11B, the designer is allowed to input the shape of the space where the speaker is to be placed in the form of a numerical value into the shape parameter input box 11G. Here, the designer can set parameters regarding the stage width, the height and depth of the acoustic facility, the height and slope (inclination) angle of the individual layers, and the like through numerical input. When the numerical value of the shape parameter is changed by these input operations, the spatial shape indicated by the drawing 11F is changed in accordance with the numerical value change. The parameters indicated in the shape parameter input box 11G are selected according to the shape of the venue (acoustic facility). For example, when the meeting place (acoustic facility) is a fan shape, a region in which an angle of the fan shape is to be input is displayed. Also, in the case where the meeting place (acoustic facility) has the second and third floors, the area where the shape data of the second and third floors are to be input is displayed. Parameters required in accordance with the shape of the venue (acoustic facility) are stored in association with the venue basic shape data 110.
Once the designer presses the decision button 11H after inputting all the shape parameters, the display on the display device 101 is switched from the spatial shape setting screen of fig. 6 to the speaker selection/installation setting screen 12 of fig. 7 corresponding to steps ST12 to ST16 of fig. 4. On the speaker selection/installation setting screen 12 of fig. 7, a usage selection frame 12A, a spatial shape display frame 11E, a shape data display frame 12B, a speaker installation position display frame 12C, and an optimum speaker candidate display frame 16 are displayed.
In the spatial shape display frame 11E, the spatial shape is displayed to scale with the actual spatial shape, in accordance with the spatial shape set via the screens of figs. 5 and 6.
The use selection box 12A is a display area for selecting use of an acoustic facility or the like, via which a designer can select either or both of "music" and "lecture" by selecting the markers "music" and/or "lecture". Here, the use "music" is intended for acoustic designs focusing on acoustic properties related to sound quality such as frequency characteristics of sound pressure level. Another use, "lecture" is intended for acoustic designs that focus on acoustic performance related to sound clarity.
The speaker installation position display frame 12C is a display area for selecting an appropriate position where a speaker is to be installed. The designer can select any one of "stage middle", "stage right", and "stage left" as a suitable position by selecting any one of "middle", "right", and "left" in the speaker mounting position display frame 12C.
When the designer selects each desired setting item in the use selection frame 12A and the speaker installation position display frame 12C by selecting an item mark with a mouse or the like, the optimum speaker candidates are displayed in the optimum speaker candidate display frame 16. The selection of the optimum speaker candidates corresponds to step ST16 of fig. 4 and is automatically effected by the acoustic design assistance apparatus 1.
The CPU 103 selects the optimum speaker candidate from the speaker data table 109 stored in the hard disk 104. The speaker data table 109 is constructed in the manner shown in fig. 8.
Data suitable for selecting a speaker according to information of a spatial shape set via the screens of fig. 5 and 6 is stored in the speaker data table 109, and the stored data includes data representing a speaker type name 109A, data of an area (i.e., an area size) 109B, data of a usage 109C, data of a mounting position 109D, and data of a horizontal-vertical ratio 109E.
If the area indicated by the shape data display box 12B (i.e., the area of the sound receiving surface) is 450 m² and "middle" is selected in the speaker installation position display frame 12C, then speaker D or speaker J can be selected from the speaker data table 109, as indicated in the optimal speaker candidate display frame 16 of fig. 7.
A GUI for displaying an example status when the speaker array is mounted will now be described with reference to fig. 7. One or more speaker candidates are displayed in the lower area of the speaker selection/installation setting screen 12, and when one of the speaker candidates has been selected, the selected speaker array 16A is displayed in the spatial shape display frame 11E in the same scale as the spatial shape 11F. In this way, it can be visually checked how the speaker array 16A is disposed in the space. The display of the speaker array 16A also corresponds to step ST16 of fig. 4. Step ST16 ends with the display of the speaker array 16A, and then control returns to step ST12.
Also, once the speaker array 16A has been displayed, a coverage area of the speaker array 16A can be selected via the spatial shape display frame 11E. Fig. 7 shows the coverage area 16E in the case where half of the sound receiving surface in the first-layer portion of the space has been selected. Alternatively, the user is allowed to select the entire space, the entire first-layer portion, the entire second-layer portion, or the entire third-layer portion; this selection corresponds to step ST12 of fig. 4. Then, at step ST17 of fig. 4, the CPU 103 of the acoustic design assistance apparatus 1 sets the speaker installation conditions, i.e., the angle of the speaker array and the installation angles between the individual speaker units of the speaker array.
The following paragraphs describe step ST17 in more detail with reference to fig. 9 to 13. Fig. 9 is a conceptual diagram illustrating an operation sequence for automatically calculating the angle of the speaker array and the setting of the installation angle between the speaker units of the speaker array.
The calculation performed at step ST17 of fig. 4 includes five calculation steps (A) to (E). These calculations are performed to determine the optimum values of the angle of the speaker array and the mounting angles between the speaker units of the speaker array in the case where the selected speaker array 16A of fig. 7 is mounted. The value adopted as the optimum is the one that most effectively achieves "homogenization and optimization of the sound pressure level in the selected sound receiving surface"; more specifically, as shown in (D) of fig. 9, it is the value that minimizes the standard deviation of the sound pressure level over the grid points provided on the entire sound receiving surface.
In the calculation operation of step ST17, the frequency characteristics of the sound pressure level at the axis points 17B, 17C, and 17D, i.e., the intersection points between the axes (corresponding to the azimuths) of the speaker units and the sound receiving surface, are optimized.
As shown in (a) in fig. 9, the selection of the mounting angles between the speaker units of the speaker array is performed by reading out the possible mounting angles between the speaker units acceptable for the selected speaker array 16A in fig. 7 from the speaker data table 109 in fig. 8, and then selecting from the read-out possible mounting angles. These mounting angles between the speaker units are specific or peculiar to the individual speaker arrays, and are set by the jig of the speaker array 16A at the time of actual mounting.
For convenience of description, the installation angle between the speaker units is represented by θint. Furthermore, the angle of the speaker array to be mounted has to be set in both the horizontal and vertical directions, and such a combination of horizontal and vertical angles is expressed as (θ, φ). Here, the mounting angle θ in the horizontal direction is in the range of −180° < θ ≤ 180°, and the mounting angle φ in the vertical direction is in the range of −90° < φ ≤ 90°. The orientation of the speaker array and the angles between the speaker units are thus determined by the angles (θint, θ, φ).
Fig. 9 (B) shows a case where a speaker array including three speaker units is used. In this case, two types of mounting angles θ int need to be set, namely, a relative angle θ int1 between the speaker units 16B and 16C and a relative angle θ int2 between the speaker units 16C and 16D.
In order to set the installation angles between the speaker units, the apparatus searches for the speaker array angle (θ, φ) and the installation angles θint between the speaker units (i.e., θint1 and θint2) that can minimize the above standard deviation while sequentially changing the angles as shown in (E) of fig. 9. The angle change pitch (or the minimum unit of angle change) for the mounting angles θint between the speaker units (i.e., θint1 and θint2) is determined from the speaker data table 109. To reduce the necessary computation time, the program may be designed to change the angles with a larger change pitch in the initial search stage.
The number of patterns or combinations in which the angles (θint, θ, φ) can be set is illustrated below with some specific examples. When speaker type D is selected as the speaker type name 109A from the speaker candidate display frame 16, the angles of the speaker array are sequentially changed in 30° steps (i.e., with a 30° change pitch) within the ranges −180° < θ ≤ 180° and −90° < φ ≤ 90°, as shown in (A) of fig. 9. Also, the installation angle between the speaker units may be sequentially changed in 2.5° steps (i.e., with a 2.5° change pitch) within a range of 30° to 60° per speaker unit. For example, the angles (θint, θ, φ) indicated by 17A in (A) of fig. 9 are set with 180° as the angle θ, 90° as the angle φ, and 60° as the angle θint. In this case, since the angle θ is changed with the 30° change pitch, it can be set to 12 different values in the range of −180° to 180°, and since the angle φ is changed with the 30° change pitch, it can be set to 7 different values in the range of −90° to 90°. Also, for speaker type D, whose settable range is 30° wide (30° to 60°) and whose change pitch is 2.5° as shown in fig. 8, the angle θint may be set to 13 different angles (i.e., (60 − 30)/2.5 + 1 = 13). Because there are two angles θint, namely θint1 and θint2, there are 13² combinations of these. Thus, the total number of settable angle combinations is 12 × 7 × (13 × 13) = 14,196. Furthermore, since the upper and lower speaker units 16B and 16D are generally mounted in a symmetrical combination with respect to the middle speaker unit 16C, it can be assumed that θint1 = θint2 when counting the settable angle combinations, so that the total number of settable combinations becomes 12 × 7 × 13 = 1,092.
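These counts can be checked with a few lines of arithmetic (φ is taken as 7 values, as counted above):

```python
# Reproducing the combination counts quoted for speaker type D.
theta_count = 360 // 30                     # -180° < θ ≤ 180° at a 30° pitch -> 12
phi_count = 7                               # φ values as counted in the text
theta_int_count = int((60 - 30) / 2.5) + 1  # 30°-60° range at a 2.5° pitch -> 13

total = theta_count * phi_count * theta_int_count ** 2     # θint1, θint2 free
symmetric = theta_count * phi_count * theta_int_count      # θint1 = θint2
print(total, symmetric)  # 14196 1092
```

The symmetry assumption θint1 = θint2 cuts the search space by a factor of 13, which is what makes the exhaustive scan of (E) in fig. 9 tractable.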
Then, the frequency characteristics of the sound pressure level at the axis points determined in (B) of fig. 9 are subjected to an optimization process as shown in (C) of fig. 9. Since the frequency characteristic optimization shown in (C) of fig. 9 will be described in detail later with reference to figs. 10A and 10B, only a brief description is given here. The frequency characteristic optimization shown in (C) of fig. 9 is intended to cause the index calculation shown in (D) of fig. 9 to be performed with increased efficiency; in other words, it is intended to "determine the equalizer characteristics, and the resulting frequency characteristics, for uniformizing the sound pressure level among the axis points 17B, 17C, and 17D". Since the individual speaker units 16B, 16C, and 16D of the speaker array 16A generally have wide directional characteristics, the sound of the speaker unit 16D can also reach the axis point 17B, and the sound of the speaker unit 16B can also reach the axis point 17D. Thus, in the case where the sound volume at the axis point 17B is relatively small and an operation for increasing only the sound pressure level of the speaker unit 16B is performed, the sound volumes at the other axis points 17C and 17D are also increased, which would cause an undesirable imbalance. Therefore, in the apparatus according to the present embodiment, patterns of the equalizer parameters of the individual speaker units 16B, 16C, and 16D are prepared. The frequency characteristics of the sound transmitted from the individual speaker units 16B, 16C, and 16D of the speaker array 16A, installed at the angles set in (A) of fig. 9, and received at the axis points 17B, 17C, and 17D are then calculated using the above-described SP data 107 of fig. 3 (i.e., data obtained by FFT conversion of impulse responses for all angles around the speaker), and the optimum pattern is thereby selected.
The operation flow shown in (C) of fig. 9 is described below.
First, in step S171, a reference frequency band fi is set (fi represents a discrete value, i = 1 to N). In this case, the reference frequency band fi may be set to any one of 62.5 Hz, 125 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, and 8 kHz according to the channels of the parametric equalizer.
In the next step S172, the equalizer parameter patterns (G1, G2, G3)fiHz for adjusting the gains in the reference band are set for the individual speaker units 16B, 16C, and 16D.
With respect to the equalizer parameter patterns thus set, in the next step S173, the frequency characteristics of the sound pressure levels at the above-described axis points 17B, 17C, and 17D are calculated, and then the optimum pattern that minimizes the dispersion or variation among the axis points 17B, 17C, and 17D in each reference band is selected. More specifically, the dispersion among the axis points 17B, 17C, and 17D is calculated for each reference band, and the square root of the absolute value of the dispersion is taken, thereby calculating the standard deviation for each reference band. Such a standard deviation indicates the degree of gain variation at a particular frequency, a smaller standard deviation indicating a smaller gain variation. Therefore, an equalizer parameter pattern that yields a smaller standard deviation can be regarded as more suitable.
Then, the best equalizer parameter pattern (G1, G2, G3)fiHz is selected independently for each frequency. Through the above operations, the equalizer parameters for the speaker units 16B, 16C, and 16D are determined at step S174.
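Steps S172-S174 might be sketched as follows under a deliberately simplified coupling model (each gain is assumed to affect only its own axis point, which the text notes is not true of real units); all names and data are illustrative:

```python
import numpy as np
from itertools import product

REF_BANDS = [62.5, 125, 250, 500, 1000, 2000, 8000]  # Hz, step S171 (context)

def best_gain_patterns(level_db, gains=(0.0, 3.0)):
    """`level_db[u][b]` is the flat-equalizer level (dB) at the axis point
    of speaker unit u in reference band b. For each band, every candidate
    (G1, G2, G3) pattern is tried, and the one minimizing the standard
    deviation of the levels across the axis points is kept."""
    level_db = np.asarray(level_db, dtype=float)
    n_units, n_bands = level_db.shape
    patterns = list(product(gains, repeat=n_units))
    chosen = []
    for b in range(n_bands):
        stds = [np.std(level_db[:, b] + np.array(p)) for p in patterns]
        chosen.append(patterns[int(np.argmin(stds))])
    return chosen

# Dummy data: unit 16B is 3 dB quiet in the first band, all flat in the second.
lv = [[87.0, 90.0], [90.0, 90.0], [90.0, 90.0]]
print(best_gain_patterns(lv)[0])  # (3.0, 0.0, 0.0)
```

The toy search correctly raises only the quiet unit's band gain, mirroring the "minimize the per-band standard deviation" criterion of step S173.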
Although the optimum equalizer parameter mode is selected for each frequency by the above-described parameter determination step, the equalizer parameters thus determined are set to the equalizer parameters (PEQ parameters) for each peak instead of each frequency in order to be set in the parametric equalizer (step S175). Subsequently, data representing the equalizer parameters (PEQ parameters) thus set is stored in the external storage device 104 or the like for the individual speaker units 16B, 16C, and 16D.
In the operation stage or process shown in (C) of fig. 9, although not particularly shown, a sound level optimization process is also performed according to the SP data 107.
Also, the equalizer parameters calculated in the manner shown in (C) of fig. 9 are FFT-transformed, and the thus FFT-transformed equalizer parameters are stored as the equalizer data 108 in the external storage device 104 of fig. 3. In this way, the simulation parameters can be calculated in the simulation parameter calculation operation of step ST2 by performing only convolution calculations in the frequency domain, and the calculation result can be output almost instantaneously. In many cases, the acoustic design assistance apparatus arrives at the optimum design by repeatedly performing the simulation while changing the simulation conditions a number of times, as described above; for such repeated simulation, FFT-transforming the equalizer parameters in advance is very effective.
In (D) of fig. 9, the standard deviation of the sound pressure level in the sound receiving surface region is calculated from the PEQ parameters of the individual speaker units 16B, 16C, and 16D, and the sound pressure level in the sound receiving surface region and the frequency characteristics thereof are calculated. The operations of steps S176-S178 are performed as follows for these purposes.
In step S176, a plurality of grid points 17J are set in the entire coverage area of the acoustic facility, as shown in fig. 11. The acoustic design of the entire sound receiving surface area is realized using the grid points 17J as sampling sound receiving points.
In step S177, the sound level at the individual grid points 17J is determined from the SP data 107 of fig. 8 and the like. More specifically, the sound level is determined by convolving the FFT-transformed equalizer data 108 with the SP data 107B of the corresponding direction for each speaker unit, and then additively combining the outputs from the individual speakers.
In the next step S178, the standard deviation α is calculated for the sound levels at the individual grid points 17J determined in step S177. A smaller value of the standard deviation α is preferable because it indicates smaller variation between points within the entire sound receiving surface.
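A hedged sketch of steps S177 and S178 with invented per-unit levels: the units' outputs are combined additively on a power basis at each grid point, and the standard deviation α is then taken over the grid points.

```python
import numpy as np

# Hypothetical sound levels (dB) contributed by three speaker units at
# five grid points 17J; in the real process these come from convolving
# the FFT-transformed equalizer data 108 with the directional SP data.
levels_db = np.array([
    [80.0, 78.0, 76.0],
    [79.0, 79.5, 75.0],
    [81.0, 77.0, 77.0],
    [78.5, 78.0, 76.5],
    [80.5, 78.5, 75.5],
])

# Additive combination of the units' outputs is done on power, not on
# dB values, before converting back to a level per grid point.
power = (10.0 ** (levels_db / 10.0)).sum(axis=1)
grid_levels = 10.0 * np.log10(power)

# Standard deviation alpha over the grid points (step S178): a smaller
# alpha means a more uniform sound receiving surface.
alpha = grid_levels.std()
```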
In (E) of fig. 9, the processes of (A) to (D) of fig. 9 are repeated after the horizontal and vertical angles (θi, φi) of the speaker units 16B, 16C, and 16D are reset or changed. By repeating these processes, an angle setting pattern that minimizes the standard deviation determined in the manner shown in (D) of fig. 9 is selected. In this case, in order to reduce the required calculation time, the angle search is implemented with the angle variation pitch of the speaker array to be mounted initially set to a relatively large value and then set to a small value.
As described above, the calculation of the optimum angle of the speaker array and the angles between the individual speaker units includes: setting the angle pattern as shown in (A) of fig. 9, then calculating the standard deviation of the sound level in the sound receiving surface area (i.e., the index representing the degree of sound pressure dispersion or variation) as shown in (D) of fig. 9, and finding the minimum value of the standard deviation. For these purposes, the axis points 17B, 17C, and 17D are set as representative points in the respective coverage areas of the individual speaker units. Then, equalizer characteristics that minimize the variation in the frequency characteristics between the axis points 17B, 17C, and 17D are determined as shown in (C) of fig. 9 and applied to the corresponding speaker units.
Referring to fig. 10A and 10B, the following paragraphs describe the process shown in (C) of fig. 9 in more detail. Fig. 10A is a flowchart showing a process for optimizing the frequency characteristic at the axis point as shown in (C) of fig. 9, and fig. 10B is a diagram showing an example of equalizer settings for optimizing the frequency characteristic.
In fig. 10A, the reference frequency band fi is sequentially set to 8 frequency bands (62.5 Hz to 8 kHz as described above) as the frequency gain indexes of the three speaker units 16B, 16C, and 16D (S171). The reference frequency band is the center frequency of each channel of the parametric equalizer, which is set to any one of, for example, 62.5 Hz, 125 Hz, 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, and 8 kHz as shown in fig. 10B.
In the illustrated example, the gain setting patterns (G1, G2, G3) for each reference frequency fi, described above in connection with step S172 shown in (C) of fig. 9, are set in the range of 0 dB to 10 dB with 1 dB as the minimum unit. Thus, 11³ patterns are set for each reference frequency (e.g. 62.5 Hz), so that 8 × 11³ patterns are set in total. Also, the FFT-transformed equalizer data of each speaker unit is stored as equalizer data 108 for each pattern.
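The pattern count follows directly: three gains each taking 11 values yields 11³ combinations per band, and 8 reference bands multiply that by 8. A short check (the counts themselves are taken from the text):

```python
from itertools import product

# Gains from 0 dB to 10 dB in 1 dB steps -> 11 values per speaker unit.
gain_values = range(0, 11)

# One pattern is a triple (G1, G2, G3) for the three units, so there are
# 11**3 patterns per reference band and 8 * 11**3 over all eight bands.
patterns = list(product(gain_values, repeat=3))
assert len(patterns) == 11 ** 3      # 1331 patterns per band
assert 8 * len(patterns) == 10648    # patterns over all 8 bands
```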
In step S173, the gain at the axis point is calculated with each mode to select the best one of the modes. This step may be divided into steps S1731 to S1733.
In step S1731, the frequency characteristics of the sound transmitted from the speaker array 16A and received at the individual axis points 17B, 17C, and 17D are calculated from the SP data 107 of fig. 3, and data of the frequency gain at each axis point for each reference frequency band fi is calculated and accumulated.
The frequency gain calculation is performed for each speaker unit by convolving together all of the following data: Fourier-transformed and time-delayed phase correction filter data, Fourier-transformed range attenuation correction filter data, the Fourier-transformed equalizer data 108, and the SP data 107B corresponding to the particular direction.
In the present embodiment, where the number of speaker units is three, the number of frequency gain data to be accumulated is 24 (i.e., three speaker units × eight frequency bands = 24).
In step S1732, the standard deviation between the frequency gain data at three points of each reference frequency band fi is determined.
In the next step S1733, the operations of steps S1731 to S1732 are repeated for all the 11³ different patterns set in the above step S172, thereby finding the one of the patterns that minimizes the standard deviation.
Thus, by the operations of steps S1731 to S1733, the equalizer gains for each reference band (represented by small black dots in fig. 10B) that minimize the standard deviation in the sound pressure level between the axis points 17B, 17C, and 17D can be determined. By repeating the operation for all of the above-described 8 reference bands, an optimum equalizer gain pattern can be determined at step S174 of fig. 10A. Then, according to the determined equalizer gain pattern, parameters for the Parametric Equalizer (PEQ) are determined for each peak at step S175. As described above with respect to (C) of fig. 9, these parameters are reorganized and then stored into the external storage device 104 for each speaker unit. Thereafter, the operation flow of fig. 10A ends.
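The exhaustive search of steps S1731 to S1733 can be sketched for a single reference band as follows; the base gains and the coupling matrix are invented stand-ins for the convolved SP data, not values from the patent:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical base gains (dB) at the three axis points for one
# reference band fi, before any equalization is applied.
base_gain = rng.uniform(-6.0, 0.0, size=3)

# Toy coupling: how strongly each unit's equalizer gain reaches each
# axis point (stand-in for the convolution results of step S1731).
coupling = rng.uniform(0.0, 1.0, size=(3, 3))  # [axis point, unit]

best_pattern, best_std = None, np.inf
for g1, g2, g3 in product(range(11), repeat=3):   # steps S1731-S1733
    gains = base_gain + coupling @ np.array([g1, g2, g3], float)
    std = gains.std()                             # step S1732
    if std < best_std:
        best_pattern, best_std = (g1, g2, g3), std
```

`best_pattern` plays the role of the optimum (G1, G2, G3) for this band; in the embodiment the same loop is repeated for all 8 reference bands.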
Referring to the flowchart of fig. 12, the following paragraphs describe in more detail how the angle of the speaker array and the mounting angle between the speaker units of the speaker array are set, and how the optimum angle is determined from among the set angles as shown in (a) to (E) of fig. 9.
Steps S21 to S26 correspond to the processing shown in (A) of fig. 9. In step S21, a pattern of speaker array angles (θ, φ) is set at a 30° variation pitch for each of the horizontal and vertical directions. Also, the mounting angle θint between the individual speaker units is set for each speaker array angle. At this time, a pattern of the mounting angle θint between the individual speaker units is prepared by selecting the mounting angle from the settable angle range specific to the speaker array 16A described above with respect to fig. 8. Here, the angle θ may be set at a 30° variation pitch within a range of -180° < θ ≤ 180°, and the angle φ may be set at a 30° variation pitch within a range of -90° < φ ≤ 90°.
Then, in step S22, the 5 optimum angle patterns (θ, φ) that achieve a reduced standard deviation in the sound level between the grid points (e.g., 17J in fig. 11) are selected from the set patterns. In selecting these 5 optimum angle patterns, it is necessary to set a plurality of mounting angles θint between the speaker units and then to select the optimum one of the thus-set mounting angles θint. Therefore, the subroutine of step S27 is executed for each speaker array angle pattern.
The subroutine of step S27 implements a procedure for determining the mounting angle between the speaker units. First, in step S271, a plurality of mounting angles θint between the speaker units are set for the speaker array angle pattern (θ, φ) selected in step S22.
At step S272, following the mounting angle setting between the speaker units, the standard deviation calculation flow of step S28 is executed for the angles (θint, θ, φ) set at steps S22 and S271. Here, each operation of step S28 is performed by changing only the angle θint while keeping the angles (θ, φ) fixed. Steps S281 to S283 of step S28 correspond to the processes shown in (B) to (D) of fig. 9, and thus are not described here again to avoid unnecessary repetition.
In the next step S273, the mounting angle θint between the speaker units at which the minimum standard deviation is obtained is extracted from the results calculated in step S272. Thereafter, the subroutine of step S27 temporarily ends, and is then restarted after shifting from one set of angles (θ, φ) to another.
Then, in step S23, a combination of angles located 15° before and after each angle of the pattern is newly set for each of the 5 angle patterns (θ, φ) selected in the above step S22. For example, if the optimum values of the angles (θ, φ) of a given one of the selected 5 optimum angle patterns are 30° and 45°, then the angles 15° and 45°, located 15° before and after the optimum angle 30°, are newly set for θ (i.e., a pattern of 15°, 30°, and 45°). Also, the angles 30° and 60°, located 15° before and after the optimum angle 45°, are newly set for φ (i.e., a pattern of 30°, 45°, and 60°), giving 9 different combinations per selected pattern. Therefore, a total of 5 × 9 different patterns of (θ, φ) are set. In the above-described subroutine of step S27, the mounting angle θint between the speaker units is set for each of the thus-set angle patterns (θ, φ) so as to optimize the mounting angle θint.
In step S24, the 5 best angle patterns (θ, φ) that achieve a reduced standard deviation in the sound level between the grid points (e.g., 17J in fig. 11) are selected from the patterns newly set in step S23, generally in the same manner as in step S22.
Step S25 is similar to step S23, but differs from step S23 in that a combination of angles located 5 ° (not 15 °) before and after a single angle of the selected pattern is reset. For example, if the optimum angle θ for a given one of the selected 5 angular patterns is 45 °, the patterns of 40 °, 45 °, and 50 ° are reset for θ.
In step S26, the angles (θint, θ, φ) set in step S25 are evaluated with the subroutine of step S27, generally in the same manner as in step S22 or S24. However, unlike step S22 or S24, this step S26 selects 1 (instead of 5) optimum angle pattern (θ, φ) so as to finally determine (θint, θ, φ).
As described above, the angle search is performed in the present embodiment with the angle change pitch of the speaker array to be mounted initially set to a relatively large value and subsequently set to a small value, thereby reducing the necessary search time. Moreover, such angle search can prevent the calculation from becoming impossible due to the problem of the calculation cost.
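The coarse-to-fine strategy (30° pitch, then ±15°, then ±5°) can be sketched with a hypothetical cost function standing in for the standard-deviation calculation of (D) of fig. 9:

```python
# Invented smooth cost with its minimum near (40, -20) degrees; the real
# process would evaluate the sound-level standard deviation instead.
def cost(theta, phi):
    return (theta - 40) ** 2 + (phi + 20) ** 2

# Refine each surviving candidate by +/- pitch in both angles and keep
# the `keep` best, mirroring steps S23-S26 of fig. 12.
def search(candidates, pitch, keep):
    scored = []
    for theta, phi in candidates:
        for dt in (-pitch, 0, pitch):
            for dp in (-pitch, 0, pitch):
                scored.append((cost(theta + dt, phi + dp),
                               (theta + dt, phi + dp)))
    scored.sort(key=lambda s: s[0])
    return [angles for _, angles in scored[:keep]]

# Coarse grid at a 30-degree pitch (steps S21-S22), keeping the best 5.
coarse = [(t, p) for t in range(-180, 181, 30) for p in range(-90, 91, 30)]
best5 = sorted(coarse, key=lambda a: cost(*a))[:5]
best5 = search(best5, 15, 5)    # refine by +/-15 degrees (steps S23-S24)
best = search(best5, 5, 1)[0]   # refine by +/-5 degrees  (steps S25-S26)
```

Evaluating the full 5° grid directly would cost far more evaluations; the staged refinement reaches the same neighborhood with a small, bounded number of candidates.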
From the above, it can be seen that the condition settings and automated optimization/assistance provided by the present embodiment in the manner described above with respect to fig. 4 to 12 substantially enable automation of condition settings that in the past had to be optimized by trial and error. Also, by acoustically outputting the optimization result of step ST3 of fig. 4, the present embodiment enables the optimization result to be confirmed through headphones.
Note that the numerical values, the number of speaker units, the fan or rectangular-box shapes of fig. 5, the GUIs of fig. 6 to 7, the operation flows shown in some of the figures, and the like are merely illustrative examples, and the present invention is of course not limited thereto. In particular, although the condition setting and pattern setting processing has been shown and described as a partially repetitive operation flow, once the settings are made, these conditions and patterns need not be set again in a repetitive procedure.
Now, the operation of the acoustic design assistance apparatus when the spatial shape setting screen of fig. 5 and 6 is displayed is described in the following paragraphs with reference to the flowchart of fig. 13. The operation flow of fig. 13 corresponds to the spatial shape setting operation of step ST11 shown in fig. 4.
First, as shown in fig. 5, the shape selection frame 11C is displayed, and it is determined whether the sector shape or the box shape is selected in step S111. If the fan shape is selected, step S111 determines yes, so that a plurality of examples of the fan shape as shown in fig. 3 are displayed in the shape selection box 11D. If the selected shape is not a fan shape, "no" is determined at step S111, so that a plurality of box-shaped examples (not shown) are displayed.
In step S114, it is determined whether any shape has been selected from the fan shape selection frame 11D in step S112 or the box shape selection frame in step S113. If no shape has been selected, step S114 determines "no", and the apparatus stands by. If step S114 determines that a shape has been selected, the screen of the display device 101 is switched to another screen, after which control passes to the next step S115.
In step S115, it is determined whether the numerical values indicating the shape of the space have been input. If not all the predetermined numerical values have been input, step S115 determines "no", and the apparatus stands by until all the numerical values are input. Once all the values have been input, the planar area size and the spatial vertical-horizontal ratio are calculated in step S116 on the basis of the values input in step S115.
In step S117, it is determined whether the decision button 11H has been pressed. If the decision button 11H has been pressed as determined in step S117, the operation flow ends. If the decision button 11H has not been pressed as determined in step S117, control reverts to S115 to receive any desired change to the input value until the decision button 11H is pressed.
Next, with reference to the flowchart of fig. 14, the operation of the acoustic design assistance apparatus of the invention when the speaker selection screen 12 of fig. 7 is displayed is described.
In steps S161 and S162, it is determined whether a desired item has been selected in the use selection frame 12A and the speaker installation position selection frame 12C of the speaker selection screen 12. If no selection is made in the foregoing block, it is determined as "no" in steps S161 and S162, and then the apparatus is in standby. If it is determined to be "yes" in both steps S161 and S162, the control proceeds to step S163.
In step S163, a speaker array satisfying the conditions input in steps S161 and S162 is selected, and the speaker array thus selected is displayed as an optimum speaker candidate as shown in fig. 7 (step S164).
Claims (14)
1. A response waveform synthesis method comprising the steps of:
an inverse FFT transform step of setting a synthesized band for each or every several analysis bands using frequency characteristics determined for a single one of a plurality of analysis bands divided from a predetermined audio range, the frequency characteristics being determined for a single analysis band to have a frequency resolution that becomes finer in the order in which the analysis band frequencies decrease, and then determining a time-axis response waveform for each synthesized band; and
an addition synthesis step of adding together the response waveforms of the synthesized bands, thereby providing a response waveform for the entire audio range.
2. The response waveform synthesizing method according to claim 1, wherein the inverse FFT transform step determines a time-axis response waveform for each synthesized band i (i = 1, 2, …, n) having, as its bands, the (i-1)th analysis band and the i-th analysis band, using frequency characteristics determined for individual ones of the analysis bands (0 to n) divided from the audio range, and
the addition synthesis step adds together the response waveforms of the synthesized bands i (i = 1, 2, …, n) determined by the inverse FFT transform step, thereby providing a response waveform for the entire audio frequency range.
3. The response waveform synthesizing method according to claim 2, wherein a part of the synthesized band corresponding to the (i-1)th analysis band is multiplied by a sine square function (sin²θ) forming a rising part of the waveform, and a part of the synthesized band corresponding to the i-th analysis band is multiplied by a cosine square function (cos²θ) forming a falling part of the waveform, to thereby determine a response waveform for each of the synthesized bands i (i = 1, 2, 3, …, n).
4. The response waveform synthesizing method according to claim 2 or 3, wherein the 1 st to (n-1) th analysis bands are divided from an audio range on an octave-by-octave basis, and a frequency characteristic of each analysis band is determined by FFT analysis, and
wherein an amount of FFT sample data to be used in FFT analysis of the k-th analysis band (k = 1, 2, …, n-2) is twice an amount of FFT sample data to be used in FFT analysis of the (k+1)th analysis band.
5. The response waveform synthesizing method according to claim 4, wherein, in the inverse FFT transforming step, the part of a synthesized band i (i = 1, 2, 3, …, n-1) corresponding to the (i-1)th analysis band uses the frequency characteristic values discretely present on the frequency axis in a thinned-out manner, so as to be equal in number to the frequency characteristic values discretely present on the frequency axis in the part corresponding to the i-th analysis band.
6. A response waveform synthesizing apparatus comprising:
a frequency characteristic storage section that stores a frequency characteristic determined for a single one of a plurality of analysis bands divided from a predetermined audio frequency range, the frequency characteristic being determined to have a frequency resolution that becomes finer in an order in which analysis band frequencies decrease;
an inverse FFT transform operation section that sets a synthesized band for each or every several analyzed bands and then determines a time-axis response waveform for each synthesized band; and
an addition synthesis section that adds together the response waveforms of the synthesized bands, thereby providing a response waveform for the entire audio range.
7. The response waveform synthesizing apparatus according to claim 6, wherein the inverse FFT transform operation section determines a time-axis response waveform for each synthesized band i (i = 1, 2, …, n) having, as its bands, the (i-1)th analysis band and the i-th analysis band, using frequency characteristics determined for individual ones of the analysis bands (0 to n) divided from the audio range, and
the addition synthesis section adds together the response waveforms of the synthesized bands i (i = 1, 2, …, n) determined by the inverse FFT transform operation section, thereby providing a response waveform for the entire audio frequency range.
8. The response waveform synthesis apparatus of claim 6, further comprising:
a characteristic storage section that stores respective characteristics of a plurality of types of speakers;
a speaker selection assisting section that selects selectable speaker candidates according to shape information of a space where speakers are to be placed;
a speaker selection section that receives a selection operation for selecting one speaker from the selectable speaker candidates;
a speaker installation angle optimizing section that determines an installation orientation of the speaker so as to minimize a sound level variation at a single position of the sound receiving surface of the space, based on the characteristics of the speaker selected via the speaker selecting section; and
a frequency characteristic calculating section that calculates a frequency characteristic at a predetermined position of the space for each of a plurality of analysis frequency bands divided from an audio frequency range, based on the shape information of the space and the installation orientation of the speaker determined by the speaker installation angle optimizing section,
wherein the frequency characteristic storage section stores the frequency characteristic calculated by the frequency characteristic calculation section for each analysis frequency band.
9. The response waveform synthesizing apparatus according to claim 8, further comprising a sound signal processing section which includes a filter in which response waveform characteristics for the entire audio frequency range provided by the addition synthesizing section have been set, and wherein a desired sound signal is input to the sound signal processing section so that the input sound signal is processed by the filter, and the processed sound signal is then output from the sound signal processing section.
10. The response waveform synthesizing apparatus according to claim 8, wherein the inverse FFT transform operation section determines a time-axis response waveform for each synthesized band i (i = 1, 2, …, n) having, as its bands, the (i-1)th analysis band and the i-th analysis band, using frequency characteristics determined for individual ones of a plurality of analysis bands (0 to n) divided from the audio range, and wherein
the addition synthesis section adds together the response waveforms of the synthesized bands i (i = 1, 2, …, n) determined by the inverse FFT transform operation section, thereby providing a response waveform for the entire audio frequency range.
11. A response waveform synthesis method comprising the steps of:
a first step of selecting selectable speaker candidates according to shape information of a space where a speaker is to be placed;
a second step of receiving a selection operation for selecting one speaker from the selectable speaker candidates;
a third step of selecting an installation orientation of the speaker so as to minimize a sound level variation at a single position of the sound receiving surface of the space, according to the characteristics of the speaker selected through the second step;
a fourth step of calculating a frequency characteristic at a predetermined position of the space for each of a plurality of analysis frequency bands divided from a predetermined audio frequency range, based on the shape information of the space and the installation orientation of the speaker determined by the third step;
an inverse FFT transform step of setting a synthesized frequency band for each or every several analysis frequency bands and then determining a time axis response waveform for each synthesized frequency band; and
an addition synthesis step of adding together the response waveforms of the synthesized bands, thereby providing a response waveform for the entire audio range.
12. The response waveform synthesizing method according to claim 11, further comprising:
a setting step of setting, in a filter, characteristics of the response waveform for the entire audio range provided by the addition synthesis step; and
an input step of inputting a desired sound signal, processing the input sound signal by the filter, and then outputting the processed sound signal.
13. The response waveform synthesizing method according to claim 11, wherein the fourth step calculates the frequency characteristics of the individual analysis bands with the frequency resolution becoming finer in the order in which the analysis band frequencies decrease.
14. The response waveform synthesizing method according to claim 11, wherein the inverse FFT transform step determines a time-axis response waveform for each synthesized band i (i = 1, 2, …, n) having, as its bands, the (i-1)th analysis band and the i-th analysis band, using frequency characteristics determined for individual ones of a plurality of analysis bands (0 to n) divided from the audio range, and
the addition synthesis step adds together the response waveforms of the synthesized bands i (i = 1, 2, …, n) determined by the inverse FFT transform step, thereby providing a response waveform for the entire audio frequency range.
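As a rough, non-authoritative illustration of the claimed synthesis: because the inverse FFT is linear, per-band waveforms whose sin²/cos² crossfade weights sum to one recombine, after addition, into the whole-range response. The flat characteristic and crossover frequencies below are invented for the demonstration:

```python
import numpy as np

N = 1024
freqs = np.fft.rfftfreq(N, d=1.0 / 48000.0)
flat = np.ones_like(freqs)            # hypothetical flat characteristic

# Two synthesized bands split near 1 kHz with an overlap region where
# the lower band falls as cos^2 and the upper band rises as sin^2, so
# the two weights sum to one at every frequency.
lo, hi = 800.0, 1250.0
t = np.clip((freqs - lo) / (hi - lo), 0.0, 1.0) * (np.pi / 2.0)
w_hi = np.sin(t) ** 2
w_lo = np.cos(t) ** 2

# Inverse-FFT each weighted band to a time-axis response waveform, then
# add the waveforms together (the addition synthesis step).
wave_lo = np.fft.irfft(flat * w_lo, N)
wave_hi = np.fft.irfft(flat * w_hi, N)
total = wave_lo + wave_hi

# The summed waveform reproduces the whole-range response exactly.
assert np.allclose(total, np.fft.irfft(flat, N))
```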
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2006-030096 | 2006-02-07 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1103154A true HK1103154A (en) | 2007-12-14 |