WO1992000584A1 - Method and apparatus for acoustic holographic imaging in marine and other acoustic remote sensing equipment
- Publication number: WO1992000584A1 (application PCT/GB1991/001058)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/18—Methods or devices for transmitting, conducting or directing sound
- G10K11/26—Sound-focusing or directing, e.g. scanning
- G10K11/34—Sound-focusing or directing, e.g. scanning using electrical steering of transducer arrays, e.g. beam steering
- G10K11/341—Circuits therefor
- G10K11/346—Circuits therefor using phase variation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/88—Sonar systems specially adapted for specific applications
- G01S15/89—Sonar systems specially adapted for specific applications for mapping or imaging
Definitions
- the data processing and image generation means may be adapted to perform any of the particular optional method steps defined above.
- the data acquisition and storage means preferably includes means for amplifying the analogue signals received from the transducers of the array, analogue to digital conversion means for sampling the analogue signals, and means for multiplexing the signals from each transducer channel.
- the analogue output signals from each transducer channel are amplified and digitised separately, and the parallel, digital signals are digitally multiplexed prior to storage.
- the analogue output signals from the transducers are amplified separately in a first amplification stage, the parallel, amplified analogue signals are analogue multiplexed, and the multiplexed, analogue signal is further amplified prior to digitisation and storage.
- the further amplification stage is a digital switched gain stage wherein the gain is adjusted to compensate for attenuation of the reflected signals.
- the serial implementation preferably also includes similar pre- amplification and digital switched gain stages for each transducer channel prior to digitisation.
- the data acquisition and storage means preferably further includes timing and control means to control the multiplexing means, digitally switched gain and digitisation.
- Fig. 1 is a block diagram of an acoustic imaging system embodying the invention
- Figs. 2(a) and 2(b) are, respectively, schematic illustrations of alternative parallel and serial implementations of the data acquisition means of the system of Fig. 1;
- Fig. 3 is a more detailed block diagram of the serial data acquisition means of Fig. 2(b);
- Fig. 4 is a further schematic illustration of a serial data acquisition arrangement similar to that of Fig. 3;
- Fig. 5 is a flow chart illustrating the steps of a data processing and image generation method in accordance with the invention.
- Fig. 6 is a schematic illustration of two prior art approaches to beamforming in sonar systems
- Fig. 7 is an illustration of a typical source signal as used in the described embodiments of the invention.
- Fig. 8 is a simplified illustration of an insonified scene with a superimposed grid
- Fig. 9 is similar to Fig. 8, and illustrates stabilisation against movement of the transducer array
- Fig. 10 is similar to Figs. 8 and 9, and illustrates differential delays to different transducers of the array with the array yawed through an angle α.
- the overall system consists of two main modules:
- Fig. 1 is a block diagram illustrating the system, comprising a sonar head 10 consisting of an array of ultrasonic transducers and a pulse transmitter or projector.
- Data Acquisition and Storage means 12 and Data Processing and Image Generation means 14.
- the sonar head 10 itself is of conventional type and will not be described in further detail herein.
- the number of transducers in the array can vary, and useful basic imaging can be accomplished with relatively few transducers down to a minimum of two.
- the Data Acquisition and Storage means 12 is adapted to amplify, digitize, sample and store the signals received from the sonar head 10, and the Data Processing and Image Generation means 14 processes the stored samples to generate the required images. These are described in greater detail below.
- 5.2 Data Acquisition and Storage
- the Data Acquisition and Storage means 12 comprises:
- Fig. 2(a) shows a parallel implementation wherein each transducer of the sonar head 10 is connected to a digital multiplexer 16 via separate pre-amplification stages 18, digitally-switched-gain amplification stages 20 and analogue-to-digital conversion means (A/D) 22.
- the output from the multiplexer 16 is transmitted to digital memory for subsequent data processing.
- Fig. 2(b) shows a serial implementation wherein each transducer of the sonar head 10 is connected to an analogue multiplexer 24 via separate pre-amplification stages 26. The output from the multiplexer 24 is then transmitted via a single digitally-switched-gain amplification stage 28 to A/D 30, and thence to digital memory 32 for subsequent processing in the data processing and image generation means 14.
- the parallel implementation represents the best option in terms of A/D performance as the conversion speed need only be above the single channel Nyquist rate.
- the ability to match amplitude and phase characteristics in the amplifiers was at first thought to be a problem and for this reason the serial implementation was chosen instead. Additional tests on a prototype digital switched gain board, however, confirm that the problem of matching channels may not be as difficult as expected.
- the DMA operation is currently performed at a rate of 10 Mbytes/s into a modified graphics processor board, Scanbeam. It is technically possible to increase this rate to well over 20 Mbytes/s using off-the-shelf technology.
- the 10 MHz DMA restriction has fundamental implications on the overall system design.
- a 360kHz sonar array is used.
- the Nyquist low-pass theorem dictates a per-channel sampling rate of over twice 360 kHz; with the 10 MHz DMA restriction this limits the number of channels to the order of 12.
- the number of channels in turn sets the aperture size and hence the angular resolution. If the size of the DMA RAM storage is 1Mbyte, then in combination with the digitisation rate this sets the total range to the order of 20 metres.
- the sonar transmitter can, however, be thought of as a narrow-band source, so the sampling rate can be set according to the Nyquist band-pass theorem. Given a bandwidth of typically 5% of the carrier, a twenty-fold increase in the aperture size is quite feasible. For the purposes of the present example this was not implemented, as it was adequate for experimentation to restrict the number of channels to the order of nine.
- the total sonar is split into 3 parts: the array itself, the amplifier and digitisation module, and the digital storage and processing engine. Physically these three units are interconnected by cables.
- the output from each element of the array feeds individual 60 ohm twisted pairs using a differential cable driver, and into the input connector of the amplifier and digitisation module.
- the output of the A/D converter on the digitization unit is 8- bit ECL parallel data and is cabled into the input connector of the VME host processor.
- Fig. 3 illustrates the architecture of a serial system embodying the invention, in this case having an array of fifteen transducer elements, comprising a sonar head 40, data acquisition means 42 and data storage/processing means 44.
- the head 40 comprises an array of transducers and pre-amps 46, each connected to a separate output channel, and projectors and power-amps 48.
- the data acquisition means 42 which also controls the projectors 48, comprises separate input amplifiers connected to each transducer channel and disposed on three identical cards 50, 52 and 54.
- the outputs from the input amplifiers are connected to the inputs of analogue multiplexer 56, and the output from the multiplexer 56 to an analogue to digital converter 58 via digital switched gain card 60.
- the output from A/D 58 is connected to the memory of the data storage/processing means 44.
- the data acquisition means 42 further includes a power supply card 62, which provides power supplied for the analogue and digital data acquisition circuitry, a low pass filter and line driver card 64 connected to the projectors 48 of the head 40, and a main control board 66 which controls the timing of the various system components and generates pulses for driving the projectors 48 via the filter card 64,
- Fig. 4 is a schematic block diagram of a similar, n- channel serial system wherein the outputs from the transducers 70 are fed to an analogue multiplexer 72 via input amplification stages 74(a), 74(b) and 74(c).
- the twisted pair cables from the array are connected to input amplifiers.
- the input amplifiers have a total of 56 dB of matched gain for each channel prior to multiplexer input. This gain is split into three blocks; a first stage 74(a) of fixed 20 dB, a second stage of gain 74(b) selectable as either plus or minus 18 dB, and finally a buffer gain 74(c) of 18 dB driving 50 ohms.
- the selectable plus and minus 18 dB of gain is used to ensure that the power levels into the multiplexer 72 remain high for the full echo return time. This high level signal ensures optimum performance from the analogue switches in the multiplexer.
- acoustic echo returns are such that minus 18 dB of gain maintains a high input level into the multiplexer.
- the incoming power level falls off and to keep the signal levels high the plus 18 dB gain is then selected.
- These input gains may require a degree of adjustment in practice.
- the range at which the gain change occurs is set by a timing and control card 76.
- the analogue multiplexer 72 consists of a nine-to-one channel multiplexer (mux). Multiplexing is achieved using Siliconix quad analogue switches (SD5002).
- the SD5002 is a DMOS FET switch with switching times in the nanosecond range.
- the specification for the mux is that it must switch the nine channels with a 10 MHz clock rate.
- the mux. was designed to have a 12ns switching time and the timing of the A/D 78 set to correspond to the middle of the switching period.
- the gate drive must shift from plus 15 volts to minus 15 volts in less than 10ns, which is achieved by means of a discrete transistor circuit.
- the gate control signals are ECL timing signals from the timing and control card 76. After multiplexing, the single channel is passed through a 20 dB attenuator 80 before entering the Digital Switched Gain card (DSG) 82. This card is designed to compensate for the normal attenuation characteristics of acoustic waves in water. An analogue switched gain circuit based on log amplifier technology would have a dubious phase characteristic over the full swept gain range, so a digitally switched gain board is preferred.
- This board uses a number of high performance operational amplifiers (OPA675 op-amps) and analogue switches in different gain stages at the appropriate time in the attenuation curve. This board keeps the actual gain within 6 dB of the theoretical gain required. As the output of this card feeds the A/D converter 78, the DSG circuit ensures that the A/D maintains an acceptable dynamic range for the full target range. The timing for this card is derived from the control card 76. The gain selection times are pre-programmed in a ROM.
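A minimal sketch of how such a ROM gain schedule might be pre-computed, assuming two-way spherical spreading plus a nominal absorption coefficient. The 6 dB step size, sound speed and absorption value are illustrative assumptions, not figures from the patent; all code examples in this document are illustrative Python.

```python
import math

def tvg_gain_schedule(max_range_m=20.0, step_m=0.25,
                      sound_speed=1500.0, alpha_db_per_m=0.1,
                      gain_step_db=6.0):
    """Pre-compute a time-varied-gain table, quantised to the discrete
    steps of a digitally switched gain stage.

    Assumed loss model (illustrative only): 40*log10(r) two-way spreading
    plus 2*alpha*r two-way absorption."""
    schedule = []
    r = step_m
    while r <= max_range_m:
        loss_db = 40.0 * math.log10(r) + 2.0 * alpha_db_per_m * r
        quantised = gain_step_db * round(loss_db / gain_step_db)
        t_us = 2.0 * r / sound_speed * 1e6          # two-way travel time
        schedule.append((t_us, quantised))
        r += step_m
    return schedule

# Example: print the times at which the gain setting should change.
prev = None
for t_us, g in tvg_gain_schedule():
    if g != prev:
        print(f"t = {t_us:7.1f} us  ->  gain = {g:5.1f} dB")
        prev = g
```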
- the Analogue to Digital converter 78 is an 8-bit Datel 8303E, a development card. This card has an input analogue bandwidth of 40 MHz, a system requirement due to the slew rate specification at the output of the multiplexer. The output data is converted into ECL signals for output on the cable connecting to the host processor.
- the control and timing card 76 provides most of the system clocks and time references.
- This card also includes the transmitter EPROM which is pre-programmed with the windowed sine wave transmitter shape values.
- An 8-bit D/A circuit reads the output from the EPROM and this signal is sent to the projector power amplifier 80.
- the digital storage is performed using a Scanbeam graphics processor board modified by a separate main controller board to act as a 10 MHz DMA.
- Fig 5 gives a general block diagram for the image generation computation.
- the processing in the individual blocks varies in detail depending on the specific application - sidescan, sector-scan, seismic processing, chirp source etc. but the framework remains the same.
- This schematic diagram does not exclude other possible processing steps which may assist in specific instances, for example swath bathymetry.
- the array is steered to an angle, α, with respect to the array normal in order to point at the reflector, P, in the far-field. This point is sufficiently distant from the array for the curvature of the echo wave-front to be negligible across the array, so that the wave-front can be assumed to be plane.
- the signal received from the transducer, T_k, is delayed by a time (K-k)*t before being added in the sum amplifier.
- This specific set of time delays aligns the signals received from P, so that they constructively add together.
- the same is true of any far-field point subtending the same angle, α, with the array normal.
- signals received from a point subtending a different angle destructively interfere with each other.
- This form of processing creates a sonar receiver beam pointing in the direction, α.
- the width of the beam depends on the carrier frequency of the sonar signal which is echoed from P.
- the nominal value is usually given as λ/L radians, where λ is the wavelength of sound in the medium, and L is the length of the array.
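For comparison with the table-driven method described later, here is a minimal sketch of the conventional delay-and-sum beam just described, using whole-sample delays. Array geometry, sampling rate and steering angle are illustrative assumptions.

```python
import numpy as np

def delay_and_sum(samples, spacing_m, fs, steer_angle_rad, c=1500.0):
    """Form one far-field beam from an (n_transducers, n_samples) array by
    applying the delay (K - k) * tau of the text, with tau the inter-element
    delay for the chosen steering angle, then summing the channels."""
    n_tr, n_s = samples.shape
    tau = spacing_m * np.sin(steer_angle_rad) / c        # seconds per element
    out = np.zeros(n_s)
    for k in range(n_tr):
        shift = int(round((n_tr - 1 - k) * tau * fs))    # whole-sample delay
        out += np.roll(samples[k], shift)                # end wrap-around ignored
    return out

# Illustrative use: 9 channels, 1.44 MHz sampling, beam steered to 10 degrees.
rng = np.random.default_rng(0)
sig = rng.standard_normal((9, 4096))
beam = delay_and_sum(sig, spacing_m=0.002, fs=1.44e6,
                     steer_angle_rad=np.radians(10.0))
```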
- The standard engineering approach for the phased-array sonar is to implement the delay-sum architecture using tapped delay lines or other analogue, or hybrid digital/analogue, circuitry. Processing is carried out in real-time as the echo is received, and is only carried out for one specific beam angle at any time.
- the sum amplifier is followed by some form of detector which is used to estimate the power in the echo, averaged over a time-interval.
- the averaging time is roughly equal to the length of the transmitted pulse-packet, although the signal may later be smoothed over an interval equivalent to the resolution of the chart recorder.
- detection techniques are employed, ranging from simple half-wave rectifiers to correlation detectors.
- the simple system has the disadvantages that the digitisation rate is so high that the required sampling circuitry is very expensive, and that a large number of sample values must be stored to enable the computation to be carried out.
- estimation of the rms power in a given pixel is obtained by forming a sum of squares of the composite samples within the time window.
- Time-domain interpolation is only accurate when the sampling rate is high in comparison with the carrier frequency, so that the number of samples within a typical pulse envelope is also large. Hence the computation required to sum the squares of such samples is also expensive.
- suppose the signal, S'(t), is a copy of S(t) delayed by a time, h. Then, writing u_n = 2·π·f_n·h, the local frequency components of the delayed signal are:
- a'_n = a_n cos(u_n) + b_n sin(u_n)
- b'_n = b_n cos(u_n) - a_n sin(u_n)
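A minimal sketch of this phase-rotation correction, applying u_n = 2·π·f_n·h to one pair of stored components so that a fractional-sample delay h can be realised without re-interpolating the time samples. Function and variable names are illustrative.

```python
import math

def rotate_components(a_n, b_n, f_n, h):
    """Return (a'_n, b'_n), the in-phase/quadrature components of the signal
    delayed by h seconds, using u_n = 2*pi*f_n*h."""
    u = 2.0 * math.pi * f_n * h
    a_shift = a_n * math.cos(u) + b_n * math.sin(u)
    b_shift = b_n * math.cos(u) - a_n * math.sin(u)
    return a_shift, b_shift

# Example: a residual delay of a quarter carrier period swaps the roles of
# the in-phase and quadrature components (up to sign).
print(rotate_components(1.0, 0.0, f_n=360e3, h=1.0 / (4 * 360e3)))
```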
- the first stage of processing is to carry out a local spectral estimate for each transducer signal for each sample point in turn.
- the processing consists of estimating the in-phase and quadrature frequency components of different subsets of the stored transducer samples for a discrete set of frequency values. These components are calculated by convolving the sample sequence for each transducer with a set of FIR filters, one pair for each frequency required. One convolution kernel in each pair gives the in-phase component, a_n, for that frequency, f_n, and the other kernel gives the quadrature component, b_n.
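A minimal sketch of this first processing stage. The Hann window, kernel length and sampling rate are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def quadrature_kernels(f_n, fs, length):
    """One in-phase / quadrature FIR kernel pair for frequency f_n (Hz)."""
    t = (np.arange(length) - (length - 1) / 2) / fs
    w = np.hanning(length)
    return w * np.cos(2 * np.pi * f_n * t), w * np.sin(2 * np.pi * f_n * t)

def local_spectral_estimate(samples, freqs, fs, length=64):
    """Convolve one transducer's sample sequence with each kernel pair,
    giving the local in-phase (a_n) and quadrature (b_n) components at every
    sample point, as held in the Frequency Component Store."""
    store = {}
    for f_n in freqs:
        ck, sk = quadrature_kernels(f_n, fs, length)
        a = np.convolve(samples, ck, mode="same")
        b = np.convolve(samples, sk, mode="same")
        store[f_n] = (a, b)
    return store

# Illustrative use: a 360 kHz channel sampled at 1.44 MHz, single component.
fs = 1.44e6
x = np.cos(2 * np.pi * 360e3 * np.arange(8192) / fs)
components = local_spectral_estimate(x, freqs=[360e3], fs=fs)
```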
- each pixel in the image corresponds to a point (X,Y) in the insonified field.
- For a given image size with a given pixel point spacing, it is possible to precompute the time delay from transmission of the outgoing pulse for its echo to reach each transducer from each pixel point. For example, if there are 33 transducers, and 512 x 512 pixels in the image, then there are approximately 8 x 10^6 delay values to be computed. These precomputed delay values can be stored in a Look-Up Table. In practice the size of this look-up table can be reduced considerably by the use of difference techniques.
- this pre-computed look-up table can take account of the true geometry of the insonified scene, and allow for curvature of the echo wavefront, i.e. dynamic focussing can be achieved by correctly computing the look-up table.
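A minimal sketch of pre-computing such a Time-Delay Look-Up Table for a linear array and a Cartesian pixel grid, using the full two-way geometry (and hence giving dynamic focus). The array geometry, pixel spacing and the placement of the projector at the array centre are illustrative assumptions; the difference-coding that shrinks the table is not shown.

```python
import numpy as np

def delay_lookup_table(n_pixels_x, n_pixels_y, pixel_m,
                       n_transducers, spacing_m, c=1500.0):
    """Return delays[y, x, k]: two-way travel time (s) from the projector
    (taken at the array centre) to pixel (x, y) and back to transducer k."""
    xs = (np.arange(n_pixels_x) - n_pixels_x / 2) * pixel_m
    ys = np.arange(1, n_pixels_y + 1) * pixel_m            # in front of array
    tx = (np.arange(n_transducers) - (n_transducers - 1) / 2) * spacing_m

    px, py = np.meshgrid(xs, ys)                           # pixel coordinates
    r_out = np.sqrt(px**2 + py**2)                         # projector -> pixel
    r_back = np.sqrt((px[..., None] - tx)**2 + py[..., None]**2)
    return (r_out[..., None] + r_back) / c

# 33 transducers and a 512 x 512 grid give roughly 8 x 10^6 delay values,
# as in the text's example.
table = delay_lookup_table(512, 512, pixel_m=0.04,
                           n_transducers=33, spacing_m=0.002)
print(table.shape)
```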
- suppose t_k is the stored value of the time delay for the wave-front to arrive at the transducer, T_k, from a pixel point, P. Then this value t_k can be used to find the nearest sample to this time delay in the sample set, and hence the nearest set of local frequency components {a_k,n}, {b_k,n} of that transducer in the Frequency Component Store. In general the nearest sample will not have precisely the required time delay.
- the first stage of imaging is to estimate the spectral composition of the delay-sum signal. This can be carried out in the frequency domain by adding the in-phase and quadrature components over the set of transducers:
- A_n = Σ_k (a'_k,n), B_n = Σ_k (b'_k,n)   (5.2)
- the intensity of the pixel in the image corresponding to P is then made suitably dependent on the local power estimate, W (e.g. proportional to W, sqrt(W), log(W) etc. depending on the requirement).
- the processing can be subdivided into stages in many different ways. For example it is possible to carry out all processing for each frequency component in turn for all image points.
- the contribution A_n^2 + B_n^2 to the power, W, at each image point (Eq. 5.3) is then added into the image register. This processing continues until all frequency components have been processed. Alternatively the processing can be carried out for each pixel in turn for all frequencies. The processing can also be split between different processing units operating in parallel.
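A minimal sketch of the per-pixel combination of Eq. 5.2 and the power estimate of Eq. 5.3, tying together the delay look-up, the Frequency Component Store and the phase-rotation correction. Variable names and the data layout of the store are illustrative assumptions.

```python
import numpy as np

def pixel_power(delays_k, store, freqs, fs):
    """delays_k : required time of flight for each transducer (s), from the
                  Time-Delay Look-Up Table.
    store      : store[k][f_n] = (a_seq, b_seq) per transducer, as produced
                  by a local spectral estimate of each channel.
    Returns the local power W = sum_n (A_n**2 + B_n**2) for one pixel."""
    W = 0.0
    for f_n in freqs:
        A_n = B_n = 0.0
        for k, t_k in enumerate(delays_k):
            i = int(round(t_k * fs))                 # nearest stored sample
            h = t_k - i / fs                         # residual (fractional) delay
            a, b = store[k][f_n][0][i], store[k][f_n][1][i]
            u = 2 * np.pi * f_n * h                  # phase correction
            A_n += a * np.cos(u) + b * np.sin(u)
            B_n += b * np.cos(u) - a * np.sin(u)
        W += A_n**2 + B_n**2
    return W
```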
- the length of the kernel {k_m,j} is 2R+1.
- the entire set of component values for a given transducer can be generated by passing the sequence ⁇ s n ⁇ through a transverse filter loaded with the particular kernel values.
- the optimum kernel set to be used depends on many parameters:
- carrier frequency(ies); nature of imaging task (e.g. survey, isolated target identification, statistical classification of seafloor); sampling rate.
- the local echo power estimate is normally required averaged over the length on the ground corresponding to the pixel separation in the range direction.
- the time window in each transducer signal corresponding to the pixel spacing in the image is the space distance between pixel points divided by the 2-way speed of sound, e.g. 750 m/sec in water. In the above example, the time window is 266 microseconds.
- the number of samples in the time window depends on the sampling rate. Thus if the sampling rate is 20 kHz, we have 13 samples in each time window.
- the echo from the insonified scene will have a frequency composition which is related to the frequency composition of the source, so that a narrow-band source will produce narrow-band echoes, and a wide-band source will produce wide-band echoes.
- the four source signatures are:
- the first two categories are normally used in sector- scan and side-scan sonar.
- the seismic wavelet would be employed for sub-bottom profiling and could be employed for swath bathymetry.
- Chirp (swept frequency) sources may be used in each of the above applications.
- Minimum Time-Bandwidth Sources: a typical source signal has a single carrier frequency with an approximately gaussian envelope (Fig 7). Other pulse shapes with well rounded envelopes are also near minimum time-bandwidth. We assume that the length of the pulse is approximately the same as the pixel spacing in time (which is the usual situation). Then the time-bandwidth law asserts that the uncertainty in frequency of any spectral estimate made during that time window is equal to the bandwidth of the signal itself. Hence there is no point in attempting to measure more than one frequency component.
- Spectral estimation now reduces to the problem of determining the amplitude and phase of the carrier in the local echo.
- the standard solution to this problem is to use a quadrature matched filter. A pair of kernels are created whose values are just the sampled source signature, and the sampled source signature with the carrier phase-shifted by 90 degrees.
- This pair of kernels is now used to generate a single pair of Frequency Components in the Frequency Component Store.
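A minimal sketch of forming that kernel pair from a Gaussian-enveloped carrier like the one in Fig. 7. The pulse length and envelope width are illustrative assumptions.

```python
import numpy as np

def matched_filter_pair(f_c, fs, pulse_cycles=8.0):
    """Kernel pair: the sampled source signature, and the same signature
    with the carrier phase-shifted by 90 degrees."""
    dur = pulse_cycles / f_c
    t = np.arange(-dur / 2, dur / 2, 1.0 / fs)
    env = np.exp(-0.5 * (t / (dur / 6)) ** 2)      # approximately Gaussian envelope
    return env * np.cos(2 * np.pi * f_c * t), env * np.sin(2 * np.pi * f_c * t)

in_phase_kernel, quadrature_kernel = matched_filter_pair(360e3, 1.44e6)
```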
- the standard technique for local spectral estimation used in seismic processing is to carry out a local DFT within the time-window.
- the sampled signal is first padded with zeros so that the length is 2**n, e.g. 512 or 1024.
- the signal is tapered to zero at each end so that it becomes approximately periodic, and the standard DFT algorithm is applied.
- the combined operations of taper followed by DFT are equivalent to a set of convolution operations.
- the signal component of the echo is known to consist of a limited range of frequencies. Hence all frequencies outside this range are ignored. Careful choice of sampling frequency avoids aliasing problems.
- there are mathematical modifications to the Fourier kernels which correct for aliasing to some extent (Corrected DFT Technique) .
- An alternative approach is to use a set of quadrature bandpass filters to estimate the local spectrum.
- the number of such filters is equal to the time- bandwidth product of the signal to be estimated. This approach minimises the number of frequency components which are computed.
- the DFT solution may have some merit if special hardware (DFT chips) can be employed in the computation.
- a typical wavelet consists of a pair of sharp positive and negative peaks followed by a longer period ripple.
- the spectral composition may have a maximum around 3 kHz, but the spectrum will extend from around 1 kHz up to 10 kHz.
- Such a signal can be handled in precisely the same way as the band-limited signal discussed earlier, though here the minimum sampling frequency given by Nyquist is just twice the maximum frequency in the signal, and there is no special aliasing problem.
- the chirp source can be treated as a band-limited source with a rather wide bandwidth, and either a DFT kernel set, the Corrected DFT technique, or the Quadrature Bandpass Filter technique can be used to generate the kernel set.
- the length of the time- window used in the kernel set should be at least the length of the chirped pulse, even if the pixel spacing is closer, if the pulse compression technique described below is to work successfully.
- the final step in image generation should be modified to carry out pulse compression of the chirped echo.
- Pulse compression of a chirped echo is normally carried out by matched filtering of the echo with a copy of the chirped source signature. Using digital signal processing, both the echo and the source signature are sampled.
- Matched filtering in the time-domain is equivalent to multiplication of the complex spectrum of the echo by the complex spectrum of the chirp source in the frequency domain. This is easily achieved with the given processing scheme, since the set {A_n}, {B_n} defined in Sec. 5.3.8 is just the complex spectrum of the echo. If the complex spectrum of the chirp source, {P_n}, {Q_n}, is precomputed by the same technique as that used to estimate the local echo spectrum, then:
- Equation 5.4 gives the local power of the pulse compressed signal.
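Equation 5.4 itself is not reproduced in this excerpt; one plausible reading, sketched below, is to multiply the complex echo spectrum {A_n, B_n} by the conjugate of the pre-computed chirp spectrum {P_n, Q_n} via the cross-products mentioned above, and take the power of the summed result. This is an assumed form, not the patent's stated equation.

```python
import numpy as np

def compressed_power(A, B, P, Q):
    """A, B : in-phase / quadrature components of the composite echo.
    P, Q : pre-computed components of the chirp source, same frequencies.
    Returns the assumed local power of the pulse-compressed signal."""
    real = np.sum(A * P + B * Q)          # cross-products, in-phase part
    imag = np.sum(B * P - A * Q)          # cross-products, quadrature part
    return real**2 + imag**2
```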
- Any change to the image size, for example generating a zoomed display of part of the insonified scene, can be done by substituting a new look-up table.
- image stabilization against movement of the sonar platform can be done by updating the look-up tables as the platform moves.
- the table stores time delays. In fact it is more convenient to store values as (delay/sample period) but this is a detail.
- the following sections show how the look-up tables can be structured to reduce storage capacity and regeneration time at the expense of a slightly greater access time. The method of updating the tables to achieve image stabilization is also described.
- In a basic survey mode of operation, the look-up table is fixed for a given location of the imaged scene with respect to the transducer array. Hence the Delay Table could be precomputed for each standard mode of survey and stored in ROM. However, when a 'zoom' facility is used, the Delay Table for the zoomed window must be generated rapidly if the facility is to be of operational value. An even more serious problem arises if the imaged scene requires to be stabilized against movement of the transducer array, because the look-up table requires to be refreshed each ping.
- the present invention proposes a solution based on first and second order differences from a 'Base Table' .
- the grid spacing for the array may be a suitable grid for surveying the whole scene, or a much smaller grid suitable for zoomed, stabilised display.
- the basic grid may be allowed to cover more of the scene, and hence be bigger than the displayed window.
- the (R, θ) coordinates of each grid point with respect to S only are computed and stored in a pair of basic tables.
- This R, θ table will be referred to as the 'Polar Base Table'.
- the R co-ordinates can be stored in whatever units are most convenient. In this document, distance in the water medium and the corresponding time delay at the speed of sound are used interchangeably.
- R' = R - d·cos(φ - θ)   (1a)
- θ' = θ - d·sin(φ - θ)/R   (1b)
- where d is the distance moved by the array and φ is the direction of that movement in the same angular convention as θ.
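A minimal NumPy sketch of applying Equations (1a) and (1b) over the whole Polar Base Table to stabilise against a translation of the array. The sign and angle conventions follow the reconstruction above and should be treated as assumptions.

```python
import numpy as np

def translate_base_table(R, theta, d, phi):
    """First-order update of the Polar Base Table (R, theta) for a platform
    translation of d metres in direction phi (radians), per Eqs (1a), (1b):
        R'     = R - d * cos(phi - theta)
        theta' = theta - d * sin(phi - theta) / R"""
    R_new = R - d * np.cos(phi - theta)
    theta_new = theta - d * np.sin(phi - theta) / R
    return R_new, theta_new

# Illustrative use on a tiny table (R in metres, theta in radians).
R = np.array([[10.0, 10.5], [12.0, 12.5]])
theta = np.radians([[20.0, 25.0], [20.0, 25.0]])
R2, theta2 = translate_base_table(R, theta, d=0.3, phi=np.radians(5.0))
```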
- Figure 10 shows the original image of Figure 8, with the array superimposed.
- the array pointed along the Y-axis of the grid. Starting from this reference time, it is required to stabilize the image against rotation of the array in the (X,Y) plane.
- the array makes an angle, α, with the Y axis after several pings in a stabilised mode of operation.
- r(l)^2 = R^2 + l^2 + 2·R·l·cos(π/2 - θ - α), i.e.
- r(l)^2 = R^2 + l^2 + 2·R·l·sin(θ + α)   (3)
- R is the range to the array centre, given by the Polar Base Table. The solution to this equation is required for each value of l corresponding to a transducer location on the array. If the transducer spacing is a constant equal to d_0, and there are 2m + 1 transducers mounted on the array, then l takes on the values -m·d_0, ..., -d_0, 0, d_0, ..., m·d_0.
- Equation 6 can be used to generate all the r_j in the +ve direction. Changing the sign of d_0 gives the r_j in the -ve direction.
- Equations (9) are much faster to compute than Equation (7) .
- Equations (9) are easier to implement in fixed-point arithmetic.
- θ_(n+1) = arccos(x_(n+1) / r_(n+1))
- θ_(n+1) = arcsin(y_(n+1) / r_(n+1))
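The incremental forms referred to as Equations (6) to (9) are not reproduced in this excerpt, so the sketch below simply evaluates Equation (3) directly for every transducer offset l along the yawed array and takes differential delays relative to the centre element. It illustrates the quantity being computed, not the faster difference-equation scheme of the text.

```python
import numpy as np

def transducer_ranges(R, theta, alpha, d0, m):
    """Ranges r(l) from one pixel point to the 2m+1 transducers of an array
    yawed by alpha, using r(l)^2 = R^2 + l^2 + 2*R*l*sin(theta + alpha)."""
    l = d0 * np.arange(-m, m + 1)
    return np.sqrt(R**2 + l**2 + 2.0 * R * l * np.sin(theta + alpha))

# Differential delays relative to the array-centre reference element:
r = transducer_ranges(R=15.0, theta=np.radians(30), alpha=np.radians(5),
                      d0=0.002, m=20)
diff_delays = (r - r[len(r) // 2]) / 1500.0   # seconds, centre as reference
```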
- the absolute error in the delay values determines the precision with which the image is stabilized.
- the tolerable error is some fraction of the acoustic travel time corresponding to the image grid spacing, which is usually of the same magnitude as the pulse length. Error in computing the delay differences between transducers leads to phase errors which degrade the imaging process itself.
- the allowable difference error between transducers is therefore some fraction of the source signal carrier period.
- delay differences require to be at least one order of magnitude more precise than the absolute delay values.
- the delay differences can be converted directly into phase-differences between transducers when the spectral in-phase and quadrature components are calculated.
- Equation (1a) requires 2 add operations, 1 multiply, and a cosine table look-up for each pixel point. Assuming 50 nanoseconds per operation, the slowest execution time which can offer any kind of real-time capability, the computation time to stabilize for array translation is 50 ms.
- the next task is to generate the differential delays for each of the transducers in the array. Computation of initial conditions for the difference equations, (8) or (9) above, will also take 50-100 ms, assuming the necessary trigonometric function tables are available. Computation of delay differences costs one or two additions per transducer per pixel point (test and branch instructions may be avoided using in-line code). The total time for 256 x 256 pixel points and 40 transducers is around 1 sec. This time is not likely to dominate the holographic computation.
- the computing cost is 2 additions, one division operation and 2 table look-ups per move. Assuming 50 ns per operation, the cost is 250 ns per grid point, and 62.5 ms for a 512 x 512 window. If the division operation is too expensive, the time can be reduced using a similar approximation to the one used to find the individual transducer delays, but it is probably not worth the trouble, particularly as zoom windows are likely to be smaller than 512 x 512.
- a particular application of the above general imaging scheme is swath bathymetry.
- the required sonar array configuration is similar to sector-scan, except that the array points vertically downwards, with the projected fan-shaped beam normal to the direction of motion of the sonar platform. As the platform moves, a succession of images is obtained of the sea floor along lines below the array, approximately at right angles to the ship's track.
- the array itself may be a conventional linear array, or bent in an arc around the hull of the ship or towed fish. As no penetration is required, high frequencies can be employed, as for short-range sector-scan sonar.
- the imaging procedure is essentially the same as described above, except that the image is only required in the neighbourhood of the expected location of the sea-floor, hence the following procedure is adopted. After each ping, the image is reconstructed, and a suitable image processing algorithm is used to locate the sea-bottom horizon in the image. This horizon should be corrected to true depth and horizontal offset allowing for the attitude of the sonar (yaw, pitch and roll) .
- the imaging area for the next ping is controlled by the horizon found in the previous ping or pings, using any prediction techniques which may be appropriate.
- the aim is to reduce the number of pixels which are imaged, in order to speed up image reconstruction time, and hence increase the possible speed of the survey vehicle.
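A minimal sketch of gating the next ping's imaging window from previously detected horizons. The margin and the simple linear extrapolation are illustrative assumptions standing in for whatever prediction technique is appropriate.

```python
import numpy as np

def next_imaging_window(horizon_depths, margin_m=2.0):
    """horizon_depths : list of per-ping arrays of corrected sea-floor depth
    (metres) across the swath. Predicts the next horizon by linear
    extrapolation of the last two pings and returns per-column depth limits
    so that only pixels in this band need be reconstructed."""
    if len(horizon_depths) >= 2:
        predicted = 2 * horizon_depths[-1] - horizon_depths[-2]
    else:
        predicted = horizon_depths[-1]
    return predicted - margin_m, predicted + margin_m

lo, hi = next_imaging_window([np.full(256, 20.0), np.full(256, 20.5)])
```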
- the embodiments of the invention described herein relate generally to systems having linear transducer arrays, from which images of two-dimensional sections of the target area can be generated; however, the invention is equally and directly applicable to planar (n x m element) arrays, T-shaped arrays and the like, which may be used to produce three-dimensional images.
Abstract
The invention relates to acoustic imaging methods and apparatus intended to improve the performance and reduce the cost and complexity of such methods and apparatus. Accordingly, the method comprises the steps of: (a) between periodic acoustic pulses, sampling output signals from a plurality of transducers at a predetermined rate; (b) digitising and storing the samples; (c) for each pixel of an image (i) selecting a corresponding set of samples from each transducer; (ii) correcting each selected sample set by interpolation so that it is precisely aligned with the required time of flight from pixel point to transducer; (iii) deriving a composite signal representing the strength of the reflected pulse from the point; and (iv) deriving the intensity of the pixel from the composite signal. The apparatus comprises a sonar head (10), consisting of the transducers and a pulse transmitter, data acquisition and storage means (12), and data processing and image generation means (14).
Description
Method and Apparatus for Acoustic Holographic Imaging In Marine and other Acoustic Remote Sensing Equipment
Introduction
The present invention relates to acoustic imaging methods and apparatus. The methods and apparatus disclosed are primarily intended for marine sonar survey applications, but are also applicable to other applications of acoustic imaging.
1. Background of the Invention
1.1 State of the Art
Much of current research into signal processing of array/antenna sensor data is concerned with the theory of manipulating digitised baseband "in-phase" and "quadrature" components. Here mathematical tools taken from modern spectral analysis methods are applied to the array concept of "spatial frequency". Work on "super-resolution" and adaptive processing has become both very fruitful, and extremely popular in academic and industrial circles, no doubt because of well-funded military applications.
One of the best-known super-resolution algorithms is MUSIC (Multiple Signal Classification), based on eigenvalue/eigenvector analysis of signal statistics. MUSIC and other related algorithms aim at resolving far-field directional sources beyond the Rayleigh limit. The methodology is closely related to that of adaptive pattern nulling, where the characteristic feature of an adaptive array is the automatic rejection of directional interference.
If these algorithms are applied to the near-field situation, null-filling and directional mismatch are observed. For example, adaptive nulling results in much shallower nulls, which appear in the wrong positions. It was the realization that the planar wavefront approximation is frequently inadequate that spurred research into imaging near-field objects, initially for medical imaging and non-destructive testing of materials. One existing approach, which derived from early research, is called the "Synthetic Aperture Focussing Technique" (SAFT) [1]. This employs a single transducer which moves linearly along an object surface, while transmitting a spatially wide beam into the object, and receiving return echoes at a constant spatial interval. The received signal is digitised immediately and stored in 2D memory. A "focus table" for each pixel in the image is accessed during image reconstruction. Only one real-time SAFT imaging system has been reported [2], which operated at 2.7 - 4.3 MHz, using a very short 2.5 cycle source signature.
Another real-time focussing imaging system, developed by J.F. McDonald et al. [3], acquires data at 10 Hz on all 256 elements after a single 5 MHz ping and constructs a 2D hologram of the ultrasonic image using the "backward-wave propagation" technique [4,1] with fast digital hardware.
Another such "within pulse" technique using unmodulated data exists [5]; however, this is a theoretical approach to vector scanning sonar using the "spatial frequency" concept assuming a far-field source.
In conclusion the focusing systems find applications in the small-scale very-high frequency non-destructive testing
and medical imaging fields mentioned earlier [4,7], while the MUSIC/adaptive systems have been developed to acquire highly directional information about a few, usually military, targets.
Therefore, although there is a considerable amount of sophisticated signal and array processing research, little of it appears directly relevant to the sonar survey problem, characterized by kHz frequencies and long pulse lengths, and where there are immediate advantages in being able to reconstruct an image of the complete area insonified by each ping, with the area containing as many potential near/medium-field "targets" of interest as there are pixels in the image.
2. Objects of the Invention
2.1 General
2.1.1 Improvements to Sonar Imaging Eguipment
The main object of the invention is to improve the performance, and eventually to reduce the complexity and cost, of a wide range of acoustic remote sensing equipment, used to survey water volumes (e.g. fish shoal detection), the sea-floor, and the sub-bottom region below the sea-floor. The invention is applicable to standard acoustic sensing modes such as sector-scan sonar, side-scan sonar, and sub-bottom profiling.
2.1.2 Whole Scene Reconstruction
Using existing sonar equipment, it is only possible to image the insonified field of view in one particular
direction, unless some form of mechanical or electronic scanning is employed. The invention allows a whole insonified scene to be reconstructed from a set of digitised samples without the need for any form of special scanning equipment.
2.1.3 Swath Bathymetry
Swath bathymetry is an acoustic survey mode which allows the precise depth of the sea-floor or other water bottom to be measured in a strip on either side of the survey vessel's track. Such survey is normally carried out with special forms of "multi-beam" sonar, which require complex electronics to produce. The invention allows swath bathymetry to be carried out using either a towed sonar array, or an array mounted on the hull of the survey vessel, without the need for special electronics.
2.1.4 Simplicity and Cost
The Invention replaces conventional "front-end" electronics with a single Digital Sampling Unit. Subsequent processing is carried out digitally. This form of equipment can take advantage of the reducing cost and increasing performance of digital processing elements as technology develops.
2.1.5 Commonality and Flexibility
Using the Invention, almost identical processing modules can be used in the different acoustic sensing modes, such as sector-scan and side-scan. This brings considerable advantages to any manufacturer for product rationalization. In addition it allows processing modules to be shared
between different acoustic sensors in ROV (remotely operated vehicle) applications, and other situations where space and electrical power are limited.
2.1.6 Performance Advantages
Equipment embodying the Invention can offer superior performance to existing equipment, both in terms of image quality, and in the ability to select chosen regions of the whole image for larger-scale display and subsequent computer image-analysis.
2.1.7 Other Application Areas
The principles embodied in the Invention can also be applied to other acoustic imaging equipment, for example to medical ultrasonic scanners.
2.2 Advantages due to Table-driven Processing
2.2.1 General
Selection and accurate reconstruction of the acoustic image is carried out in the invention using one or more Time-Delay Look-Up Tables, stored in digital form. The ability to compute these Tables to embody geometric corrections, and to change or update the Tables for specific purposes, brings a wide range of performance advantages illustrated below.
2.2.2 Imaging in Reguired Coordinate System
A typical problem with existing sonar equipment is to convert the image from polar to cartesian coordinates for
operator interpretation, or subsequent computer processing. This conversion may require additional electronics, but in any case the image information is degraded as a result of the resampling implicit in the scan conversion procedure. Using the Invention, the coordinate system required for the image is allowed for in the Look-up Table values, and no conversion is necessary.
2.2.3 Dynamic Focus
Using existing sonar equipment, additional electronics are required to focus objects in the near field of the array where curvature of the echo wavefront is significant. Using the Invention, Dynamic Focus is achieved by correct computation of the Look-Up Tables.
2.2.4 Non-Linear Array Geometry
It is sometimes desirable to use sonar arrays which are non-linear in a geometric sense, for example to bend around the hull of a vessel. Using the Invention, the changes required to the imaging procedure can be allowed for in the Look-Up Table values.
2.2.5 Zoom Display
It is a common operational requirement to be able to survey the whole insonified scene out to maximum range, and then to be able to image a selected area of interest for display at larger scale. This facility can be provided by substituting new Look-Up Tables. The Invention includes a special form of Look-Up Table which reduces the digital processing required to achieve this.
2.2.6 Stabilized Display
By updating the Look-Up Tables as the sonar platform moves through the water medium, it is possible to stabilize the resultant acoustic image against either translation of the sonar platform, or change in attitude (Roll, Pitch, Yaw) or both. This possibility offers several distinct performance advantages, for example:
(a) detection of targets which are moving in the ground frame-of-reference by subtraction of successive stabilized images.
(b) improvement in the signal/noise ratio for stationary objects by integration (time-averaging) of successive images.
(c) simplification of the human interpreter's task, for example comparison of the image with some form of chart or map.
2.3 Improvements Related to Whole-Scene Imaging
The ability to image a whole area or sector of the insonified scene carries important advantages even in situations such as side-scan survey where full coverage can be obtained by movement of the sonar platform. In particular, an object or area of interest will be "viewed" from different angles as the platform moves, which permits either the view with the greatest contrast to be selected, or the fluctuations in echo strength to be used to assist in object classification. It is well known that echoes from man-made objects normally fluctuate with direction in a different manner from echoes from natural objects.
3. Summary of the Invention
In accordance with a first aspect of the invention there is provided an acoustic imaging method for marine survey and other purposes wherein a target area is insonified by periodic acoustic pulses, reflections of said pulses are detected by an array of at least two transducers which generate output signals in response thereto, and said output signals are processed so as to produce an image of all or part of the insonified target area, said image comprising an array of pixels each corresponding to a point in the insonified area and the intensity of each pixel representing the strength of the pulse reflected from said corresponding point, comprising the steps of
(a) sampling the output signals from each of said transducers at a predetermined rate during the period between said transmitted pulses;
(b) digitizing and storing said samples;
(c) for each pixel of the image,
(i) selecting a corresponding set of digitized samples from each transducer, the selection of the sample set being determined by the time of flight of a transmitted pulse to the point in the insonified area corresponding to the pixel and back to the particular transducer in the array;
(ii) correcting each selected sample set by some form of interpolation so that it is precisely aligned with the required time of flight;
(iii) deriving a composite signal representing the strength of the reflected pulse from said corresponding point; and
(iv) deriving the intensity of the pixel from said composite signal.
Preferably, steps (c) (ii) and (iii) above are executed by:
(i) estimating the in-phase and quadrature coefficients for one or more frequency components present in each sample set using quadrature matched filtering, Discrete Fourier Transform computation, or other technique which involves the convolution of the sample set with one or more pairs of sets of predetermined coefficients;
(ii) computing the frequency coefficients for the required time-shifted sample set by phase-shifting the pairs of coefficients for each frequency component estimated in (i) above;
(iii) combining the phase-shifted coefficients for each frequency component from all transducers to generate the corresponding coefficients for the composite signal;
(iv) deriving the strength of the reflected pulse from the frequency coefficients of the composite signal.
A preferred form of step (iv) above to achieve pulse compression of frequency-modulated (chirp) pulses comprises adding the cross-products of the in-phase and quadrature frequency components of the composite signal with the corresponding predetermined frequency component of the transmitted pulse.
An adaptation of the basic method for swath bathymetry comprises the steps of:
(a) orienting an acoustic transmitter array in such a manner that a narrow strip of the sea-floor is insonified at right angles to the direction of motion of the vessel or other vehicle used for survey;
(b) orienting an acoustic receiver array in such a manner as to image the sector below the array including the insonified sea-floor intercept;
(c) after each pulse transmission, imaging a selected area of the scene which includes the sea-floor intercept using the basic method defined above;
(d) locating the pixels in the image corresponding to the sea-floor intercept by means of standard signal or image processing techniques;
(e) using an horizon intercept determined from one or more previous insonifications to select the area for which the image needs to be computed after the next insonification.
A preferred method of determining the times of flight specified in step (c) (i) of the basic method uses digital Time-Delay Look-Up Tables which may be wholly or partly predetermined, such Look-Up Tables being adapted to offer specific imaging facilities including:
(a) geometric correction for near-field focus
(b) geometric slant-range to true-range correction for side-scan sonar
(c) geometric correction for non-linear array shapes
(d) imaging in the required frame-of-reference
(e) display stabilization against translation or change of attitude of the sonar array
(f) enlarged ("zoom") display of a part of a previous scene selected by the human operator or by automatic computer processing, such enlarged image being at the higher acoustic resolution available at the new scale.
A preferred method of generating the Time Delay Look-Up Tables is based on the time delay from a given pixel point to a selected reference element on the array plus the differential delays to other transducers of the array.
A further preferred method of generating and updating such Tables uses finite difference equations which allow for the appropriate angular relationships in the scene to be imaged.
In accordance with a second aspect of the invention, there is provided an acoustic imaging apparatus for marine survey and other purposes comprising transmitter means for transmitting periodic acoustic pulses, an array of at least two transducers for detecting reflections of said pulses from a target area and for generating output signals in response thereto, and means for processing said output signals so as to produce an image of at least a part of said target area, said image comprising an array of pixels each corresponding to a point in said target area and the intensity of each pixel representing the strength of the reflected pulse from said corresponding points; and further comprising:
data acquisition and storage means for sampling the output signals from each of said transducers at a predetermined rate during the period between successive transmitted pulses and for digitising and storing said samples; and wherein
said data processing and image generation means is adapted to select a set of digitised samples from each transducer for each pixel of the image, said selection being determined by the time of flight of a pulse from the transmitter means to the corresponding point in the target area and back to the array, to correct each selected sample set by some form of interpolation so that it is precisely aligned with the required time of flight, to derive a composite signal from the time-shifted sample sets for each pixel, representative of the strength of the reflected pulse from the corresponding point, and to derive an intensity value for each pixel from its corresponding composite signal.
The data processing and image generation means may be adapted to perform any of the particular optional method steps defined above.
The data acquisition and storage means preferably includes means for amplifying the analogue signals received from the transducers of the array, analogue to digital conversion means for sampling the analogue signals, and means for multiplexing the signals from each transducer channel.
In a parallel implementation, the analogue output signals from each transducer channel are amplified and digitised separately, and the parallel, digital signals are digitally multiplexed prior to storage.
In a serial implementation, the analogue output signals from the transducers are amplified separately in a first amplification stage, the parallel, amplified analogue signals are analogue multiplexed, and the multiplexed, analogue signal is further amplified prior to digitisation and storage. The further amplification stage is a digital switched gain stage wherein the gain is adjusted to compensate for attenuation of the reflected signals. The serial implementation preferably also includes similar pre- amplification and digital switched gain stages for each transducer channel prior to digitisation.
The data acquisition and storage means preferably further includes timing and control means to control the multiplexing means, digitally switched gain and digitisation.
These and further aspects and features of the invention will be apparent from the following description of embodiments of the invention, given by way of example only, with reference to the accompanying drawings.
4. Description of Drawings
Fig. 1 is a block diagram of an acoustic imaging system embodying the invention;
Figs. 2(a) and 2(b) are, respectively, schematic illustrations of alternative parallel and serial implementations of the data acquisition means of the system of Fig. 1;
Fig. 3 is a more detailed block diagram of a serial implementation of the data acquisition means of Fig. 2(b);
Fig. 4 is a further schematic illustration of a serial data acquisition arrangement similar to that of Fig. 3;
Fig. 5 is a flow chart illustrating the steps of a data processing and image generation method in accordance with the invention;
Fig. 6 is a schematic illustration of two prior art approaches to beamforming in sonar systems;
Fig. 7 is an illustration of a typical source signal as used in the described embodiments of the invention;
Fig. 8 is a simplified illustration of an insonified scene with a superimposed grid;
Fig. 9 is similar to Fig. 8, and illustrates stabilisation against movement of the transducer array; and
Fig. 10 is similar to Figs. 8 and 9, and illustrates differential delays to different transducers of the array with the array yawed through an angle α.
5. Detailed Description of Preferred Embodiments
5.5 System Overview
The overall system consists of two main modules:
(a) Data Acquisition and Storage
(b) Data Processing and Image Generation
Fig. 1 is a block diagram illustrating the system, comprising a sonar head 10 consisting of an array of ultrasonic transducers and a pulse transmitter or projector, Data Acquisition and Storage means 12, and Data Processing and Image Generation means 14. The sonar head 10 itself is of conventional type and will not be described in further detail herein. The number of transducers in the array can vary, and useful basic imaging can be accomplished with relatively few transducers, down to a minimum of two. The Data Acquisition and Storage means 12 is adapted to amplify, digitize, sample and store the signals received from the sonar head 10, and the Data Processing and Image Generation means 14 processes the stored samples to generate the required images. These are described in greater detail below.
5.2 Data Acquisition and Storage
The Data Acquisition and Storage means 12 comprises:
(a) Front-end amplifier and digitisation unit;
(b) Digital interface to the software processor to perform a direct memory access (DMA) operation.
As the holographic sonar concept requires both amplitude and phase information it is essential that in the sonar front-end all channels have identical transmission characteristics. Two configurations for the data acquisition means can be implemented, as shown in Figs. 2(a) and 2(b).
Fig. 2(a) shows a parallel implementation wherein each transducer of the sonar head 10 is connected to a digital multiplexer 16 via separate pre-amplification stages 18, digitally-switched-gain amplification stages 20 and analogue-to-digital conversion means (A/D) 22. The output from the multiplexer 16 is transmitted to digital memory for subsequent data processing.
Fig. 2(b) shows a serial implementation wherein each transducer of the sonar head 10 is connected to an analogue multiplexer 24 via separate pre-amplification stages 26. The output from the multiplexer 24 is then transmitted via a single digitally-switched-gain amplification stage 28 to A/D 30, and thence to digital memory 32 for subsequent processing in the data processing and image generation means 14.
The parallel implementation represents the best option in terms of A/D performance as the conversion speed need
only be above the single channel Nyquist rate. The ability to match amplitude and phase characteristics in the amplifiers was at first thought to be a problem, and for this reason the serial implementation was chosen instead. Additional tests on a prototype digital switched gain board, however, confirm that the problem of matching channels may not be as difficult as expected.
When opting for a serial implementation only the initial amplification 26 is in parallel, the signal then being analogue multiplexed into a single channel. The single channel is then amplified in a common time switched gain module 28 and digitised in A/D 30. The A/D 30 must however handle the time-division multiplexed signal and thus requires an n-fold (n being the number of channels) increase in conversion rate.
The DMA operation is currently performed at a rate of 10 Mbytes/s into a modified graphics processor board, Scanbeam. It is technically possible to increase this rate to well over 20 Mbytes/s using off-the-shelf technology.
The 10 MHz DMA restriction has fundamental implications on the overall system design. In the present example a 360 kHz sonar array is used. In order to capture the raw sonar data the digitisation rate for each channel must agree with the Nyquist sampling theorem. The Nyquist low-pass theorem dictates a rate of over twice 360 kHz. This limits the number of channels to the order of 12. The number of channels in turn sets the aperture size and hence the angular resolution. If the size of the DMA RAM storage is 1 Mbyte, then in combination with the digitisation rate this sets the total range to the order of 20 metres.
The sonar transmitter can however be thought of as a narrow-band source, so the sampling rate can be set according to the Nyquist band-pass theorem. Given a bandwidth of typically 5% of the carrier, we can see that a twenty-fold increase in the aperture size is quite feasible. For the purposes of the present example this was not implemented, as it was adequate for experimentation to restrict the number of channels to the order of nine.
The total sonar is split into three parts: the array itself, the amplifier and digitisation module, and the digital storage and processing engine. Physically these three units are interconnected by cables. The output from each element of the array feeds an individual 60 ohm twisted pair via a differential cable driver into the input connector of the amplifier and digitisation module. The output of the A/D converter on the digitisation unit is 8-bit ECL parallel data and is cabled into the input connector of the VME host processor.
Fig. 3 illustrates the architecture of a serial system embodying the invention, in this case having an array of fifteen transducer elements, comprising a sonar head 40, data acquisition means 42 and data storage/processing means 44.
The head 40 comprises an array of transducers and pre-amps 46, each connected to a separate output channel, and projectors and power-amps 48. The data acquisition means 42, which also controls the projectors 48, comprises separate input amplifiers connected to each transducer channel and disposed on three identical cards 50, 52 and 54. The outputs from the input amplifiers are connected to the inputs of analogue multiplexer 56, and the output from the
multiplexer 56 to an analogue to digital converter 58 via digital switched gain card 60. The output from A/D 58 is connected to the memory of the data storage/processing means 44. The data acquisition means 42 further includes a power supply card 62, which provides power supplies for the analogue and digital data acquisition circuitry, a low pass filter and line driver card 64 connected to the projectors 48 of the head 40, and a main control board 66 which controls the timing of the various system components and generates pulses for driving the projectors 48 via the filter card 64.
Fig. 4 is a schematic block diagram of a similar, n- channel serial system wherein the outputs from the transducers 70 are fed to an analogue multiplexer 72 via input amplification stages 74(a), 74(b) and 74(c).
The twisted pair cables from the array are connected to input amplifiers. The input amplifiers have a total of 56 dB of matched gain for each channel prior to the multiplexer input. This gain is split into three blocks; a first stage 74(a) of fixed 20 dB, a second stage of gain 74(b) selectable as either plus or minus 18 dB, and finally a buffer gain 74(c) of 18 dB driving 50 ohms. The selectable plus and minus 18 dB of gain is used to ensure that the power levels into the multiplexer 72 remain high for the full echo return time. This high level signal ensures optimum performance from the analogue switches in the multiplexer. For target returns close to the array the acoustic echo returns are such that minus 18 dB of gain maintains a high input level into the multiplexer. As echo returns are received from more distant targets the incoming power level falls off and, to keep the signal levels high, the plus 18 dB gain is then selected. These input gains may
require a degree of adjustment in practice. The range at which the gain change occurs is set by a timing and control card 76.
In the present system the analogue multiplexer 72 consists of a nine-to-one channel multiplexer (mux). Multiplexing is achieved using Siliconix quad analogue switches (SD5002). The SD5002 is a DMOS FET switch with switching times of the order of nanoseconds. The specification for the mux is that it must switch the nine channels with a 10 MHz clock rate. The mux was designed to have a 12 ns switching time and the timing of the A/D 78 was set to correspond to the middle of the switching period. To achieve the DMOS FET switching rate the gate drive must shift from plus 15 volts to minus 15 volts in less than 10 ns, which is achieved by means of a discrete transistor circuit. The gate control signals are ECL timing signals from the timing and control card 76. After multiplexing the single channel is passed through a 20 dB attenuator 80 before entering the Digital Switched Gain card (DSG) 82. This card is designed to compensate for the normal attenuation characteristics of acoustic waves in water. An analogue switched gain circuit based on log amplifier technology would have a dubious phase characteristic over the full swept gain range, so a digitally switched gain board is preferred.
This board uses a number of high performance operational amplifiers (OPA675 op-amps) and analogue switches in different gain stages at the appropriate time in the attenuation curve. This board keeps the actual gain within 6 dB of the theoretical gain required. As the output of this card feeds the A/D 78 converter, the DSG circuit ensures that the A/D maintains an acceptable dynamic range for the full target range. The timing for this card is
derived from the control card 76. The gain selection times are pre-programmed in a ROM.
The Analogue to Digital converter 78 is an 8-bit Datel 8303E, a development card. This card has an input analogue bandwidth of 40 MHz, a system requirement due to the slew rate specification at the output of the multiplexer. The output data is converted into ECL signals for output on the cable connecting the software processor.
The control and timing card 76 provides most of the system clocks and time references. This card also includes the transmitter EPROM which is pre-programmed with the windowed sine wave transmitter shape values. An 8-bit D/A circuit reads the output from the EPROM and this signal is sent to the projector power amplifier 80.
The digital storage is performed using a Scanbeam graphics processor board modified by a separate main controller board to act as a 10 MHz DMA.
5.3 Data Processing and Image Generation
5.3.1 Overview
5.3.1.1 Introduction
Fig 5 gives a general block diagram for the image generation computation. The processing in the individual blocks varies in detail depending on the specific application - sidescan, sector-scan, seismic processing, chirp source etc. but the framework remains the same. This schematic diagram does not exclude other possible processing
steps which may assist in specific instances, for example swath bathymetry.
It will be easiest to understand the block diagram if the conceptual steps are described from a conventional phased-array sonar Receiver (Fig 6) to the digital holographic scheme of the present invention.
5.3.1.2 Phased-Array Sonar
This system consists of a linear array containing K+1 transducer elements, {Tk}, k = 0...K, spaced at equal intervals of d along the array.
The array is steered to an angle, α, with respect to the array normal in order to point at the reflector, P, in the far-field. This point is sufficiently distant from the array for the curvature of the echo wave-front to be negligible across the array, so that the wave-front can be assumed to be plane.
Let U be the speed of sound in the medium (assumed constant). Then the echo from P reaches each transducer in turn, beginning with T0, with a time delay of t = d*sin(α)/U between adjacent transducers.
In order to image the point P, the signal received from the transducer, Tk, is delayed by a time (K-k)*t before being added in the sum amplifier. This specific set of time delays aligns the signals received from P, so that they constructively add together. The same is true for any far-field point subtending the same angle, α, with the array normal. However signals received from a point subtending a different angle destructively interfere with each other.
This form of processing creates a sonar receiver beam pointing in the direction, α. The width of the beam depends on the carrier frequency of the sonar signal which is echoed from P. The nominal value is usually given as λ/L radians, where λ is the wavelength of sound in the medium, and L is the length of the array.
The standard engineering for the phased-array sonar is to implement the delay-sum architecture using tapped delay lines or other analog, or hybrid digital/analog, circuitry. Processing is carried out in real-time as the echo is received, and is only carried out for one specific beam angle at any time. The sum amplifier is followed by some form of detector which is used to estimate the power in the echo, averaged over a time-interval. In an analog sonar, the averaging time is roughly equal to the length of the transmitted pulse-packet, although the signal may later be smoothed over an interval equivalent to the resolution of the chart recorder. A variety of detection techniques are employed, ranging from simple half-wave rectifiers to correlation detectors.
In order to image a point in the near-field correctly, account must be taken of the curvature of the wavefront. The time-delays between adjacent transducers are no longer constant. Additional circuitry in the delay-sum architecture can be provided to modify the set of time delays to achieve such "dynamic focus", but the complication is significant.
5.3.1.3 Digitised Sampled Data
Conceptually the same processing scheme can be employed with signals which are quantised both in magnitude and in time. The sampled signal from the transducer Tk must now be delayed by the number of samples which approximates in time to (K-k)*t as in the analog scheme. The time averaging of the composite signal is achieved by forming the sum of squares:

W = Σj (Sj²),

where the summation is taken over the number of samples in the required time-window, and Sj denotes the j'th composite sample formed by summing the delayed transducer samples. The rms power can be obtained directly from W if required.
For this simple scheme to work, very high sampling rates are required, as the following example demonstrates. Consider a sonar receiver consisting of 33 transducer elements, and designed to operate with a carrier frequency of 100 kHz. For effective imaging, the error in the time-delay applied to each transducer signal must not exceed a small fraction, say 1%, of the carrier period, which is here 10 microseconds. Since the possible time delay error with the simple scheme is 1/2 the sampling period, the sampling period itself should not exceed 200 ns, which implies a sampling rate of 5 MHz. This is in itself rather high, but the total sampling rate from the whole set of transducers amounts to 165 MHz.
The simple system has the disadvantages that the digitisation rate is so high that the required sampling circuitry is very expensive, and that a large number of sample values must be stored to enable the computation to be carried out.
5.3.1.4 Interpolation in the Time-Domain
In order to reduce the sampling rate, some method is required to interpolate between sample points in order to estimate the signal for any time delay which is not coincident with a sampling instant. The most obvious way to do this is by interpolation in the time-domain, for example by linear or polynomial interpolation between sample points. Such techniques can reduce the sampling rate by an order of magnitude, but the engineering still remains expensive unless the carrier frequency is low.
As discussed in 5.3.1.3 above, estimation of the rms power in a given pixel is obtained by forming a sum of squares of the composite samples within the time window. Time-domain interpolation is only accurate when the sampling rate is high in comparison with the carrier frequency, so that the number of samples within a typical pulse envelope is also large. Hence the computation required to sum the squares of such samples is also expensive.
5.3.1.5 Interpolation in the Frequency-Domain
This technique is the preferred method used in the present invention. To understand the principle, first consider an analog signal S(t) consisting of a single frequency, f:
S(t) = a cos(2πft) + b sin(2πft).

The signal, S'(t), is a copy of S(t) delayed by a time, h. Then:

S'(t) = a cos[2πf(t+h)] + b sin[2πf(t+h)]
      = a cos[2πft + u] + b sin[2πft + u]

where the phase-shift, u = 2πfh.

Further:

S'(t) = a' cos(2πft) + b' sin(2πft),

where a' = a cos(u) + b sin(u)
and   b' = -a sin(u) + b cos(u).
The above simple mathematics shows that the in-phase and quadrature components of a sinusoidal signal delayed by a time h can be computed directly from the Fourier components of the original signal using the phase-shift 2πfh, where f is the signal frequency. Now consider a more general signal which can be represented by the spectral decomposition:

S(t) = Σn [an cos(2πfnt) + bn sin(2πfnt)]

Then the delayed signal, S'(t), is given by

S'(t) = Σn [a'n cos(2πfnt) + b'n sin(2πfnt)]

where

a'n = an cos(un) + bn sin(un)
b'n = -an sin(un) + bn cos(un)

and un = 2πfnh.     (5.1)
The above analysis leads directly to the proposed method of delaying sampled data sets by first estimating their spectral content. Since there are a variety of methods for estimating the spectral content of sampled data, there is now no particular problem in achieving any specific small delay required. Moreover if the bandwidth of the signal is known to be small, the required spectral estimation can be carried out using both a small number of frequency components, and a small sampling frequency given by the Nyquist Theorem as twice the bandwidth.
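By way of illustration only, the phase-shift of Equation 5.1 amounts to a few arithmetic operations per coefficient pair. The following Python sketch assumes a NumPy environment; the function name and the example values are illustrative and form no part of the described equipment.

```python
import numpy as np

def shift_coefficients(a, b, freqs, delay):
    """Shift in-phase (a) and quadrature (b) coefficients by a time delay.

    a, b  : arrays of coefficients, one entry per frequency component
    freqs : the corresponding frequencies fn in Hz
    delay : the required time shift h in seconds
    """
    u = 2.0 * np.pi * freqs * delay             # phase shifts un = 2*pi*fn*h
    a_shifted = a * np.cos(u) + b * np.sin(u)   # a'n
    b_shifted = -a * np.sin(u) + b * np.cos(u)  # b'n
    return a_shifted, b_shifted

# A single 100 kHz component delayed by 1 microsecond (36 degrees of phase):
a_s, b_s = shift_coefficients(np.array([1.0]), np.array([0.0]),
                              np.array([100e3]), 1e-6)
```

Because only the stored coefficients are rotated, the cost per transducer and per frequency component is a handful of multiplications, independent of the sampling rate.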
5.3.1.6 Local Spectral Estimation
In the block diagram, Fig. 5, the signals received from the entire set of transducers have been multiplexed, sampled, and stored. There is some time offset between the sample sets from the different transducers, but this can be ignored for the purpose of the description. It is also possible that some processing can be carried out "on-the-fly" before storage, but this again does not change the concept.
The first stage of processing is to carry out a local spectral estimate for each transducer signal for each sample point in turn. This could be a full local Discrete Fourier Transform (DFT) , but it will be shown below that simpler processing is sufficient. In any case the processing consists of estimating the in-phase and quadrature frequency components of different subsets of the stored transducer samples for a discrete set of frequency values. These components are calculated by convolving the sample sequence
for each transducer with a set of FIR filters, one pair for each frequency required. One convolution kernel in each pair gives the in-phase component an for that frequency, fn, and the other kernel gives the quadrature component, bn. The results of this set of convolutions are held in the Frequency Component Store. Each pair of values in this store gives the in-phase and quadrature coefficients for a certain transducer for the set of samples contained in a time window centred on the corresponding sample in the Sample Store.
If M component frequencies are used in the spectral estimate, and N sample values are held in the Sample Store, then there are 2MN Frequency Components to be computed and stored, though not necessarily simultaneously.
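A minimal sketch of this stage, assuming a NumPy environment, is given below. The Hanning taper, window length and the organisation of the store as a four-dimensional array are illustrative choices rather than requirements of the method.

```python
import numpy as np

def frequency_component_store(samples, freqs, fs, win_len):
    """samples: (K, N) array of stored samples for K transducers.
    Returns store[k, m, n, 0] = a and store[k, m, n, 1] = b for frequency m,
    estimated in a window of win_len samples centred on sample n."""
    offsets = np.arange(win_len) - win_len // 2
    taper = np.hanning(win_len)                    # local window taper
    K, N = samples.shape
    store = np.empty((K, len(freqs), N, 2))
    for m, f in enumerate(freqs):
        k_i = taper * np.cos(2 * np.pi * f * offsets / fs)   # in-phase kernel
        k_q = taper * np.sin(2 * np.pi * f * offsets / fs)   # quadrature kernel
        for k in range(K):
            store[k, m, :, 0] = np.convolve(samples[k], k_i, mode='same')
            store[k, m, :, 1] = np.convolve(samples[k], k_q, mode='same')
    return store
```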
5.3.1.7 Time-Delay Look-Up Tables
Suppose we wish to generate an image consisting of a set of pixel points, where each pixel in the image corresponds to a point (X,Y) in the insonified field. We call such corresponding points "pixel points". For a given image size with a given pixel point spacing, it is possible to precompute the time delay from transmission of the outgoing pulse for its echo to reach each transducer from each pixel point. For example, if there are 33 transducers, and 512 x 512 pixels in the image, then there are approximately 8 x 10^6 delay values to be computed. These precomputed delay values can be stored in a Look-Up Table. In practice the size of this look-up table can be reduced considerably by the use of difference techniques.
Note that this pre-computed look-up table can take account of the true geometry of the insonified scene, and allow for curvature of the echo wavefront, i.e. dynamic focusing can be achieved by correctly computing the look-up table.
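For illustration, the explicit (unreduced) form of such a table can be computed directly from the scene geometry. The sketch below assumes a straight receiving array lying along y = 0 with the transmitter at its centre; the grid extent, element pitch and sound speed are example values only.

```python
import numpy as np

def delay_lookup_table(xs, ys, element_x, c=1500.0):
    """xs, ys: pixel coordinate axes in metres (array lies along y = 0);
    element_x: transducer x positions in metres; c: speed of sound.
    Returns delays[j, iy, ix], the two-way time of flight in seconds."""
    X, Y = np.meshgrid(xs, ys)
    r_tx = np.hypot(X, Y)                      # source (array centre) to pixel
    delays = np.empty((len(element_x), len(ys), len(xs)))
    for j, ex in enumerate(element_x):
        r_rx = np.hypot(X - ex, Y)             # pixel back to transducer j
        delays[j] = (r_tx + r_rx) / c          # exact near-field geometry
    return delays

# 33 elements at 5 mm pitch imaging a 20 m x 20 m scene on a 128 x 128 grid:
elements = (np.arange(33) - 16) * 5e-3
table = delay_lookup_table(np.linspace(-10.0, 10.0, 128),
                           np.linspace(0.5, 20.5, 128), elements)
```

Dynamic focus follows automatically because the exact near-field geometry is used; the difference techniques described later serve only to reduce the storage and regeneration cost.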
Suppose that tk is the stored value of the time delay for the wave-front to arrive at the transducer Tk from a pixel point, P. Then this value tk can be used to find the nearest sample to this time delay in the sample set, and hence the nearest set of local frequency components {ak,n}, {bk,n} of that transducer in the Frequency Component Store. In general the nearest sample will not have precisely the required time delay.
The residual time difference, e, is corrected by generating the time-shifted set of frequency components {a'k,n}, {b'k,n} using Equation 5.1, where un = 2πfne.
The above procedure gives the interpolated local spectral decomposition of each transducer signal for precisely the required time delay corresponding to a pixel point, P.
5.3.1.8 Image Intensity Estimation
The first stage of imaging is to estimate the spectral composition of the delay-sum signal. This can be carried out in the frequency domain by adding the in-phase and quadrature components over the set of transducers:
An = Σk (a'k,n),   Bn = Σk (b'k,n)     (5.2)
{An}, {Bn} gives the spectral decomposition of the required composite signal. If the outgoing pulse was transmitted by a
non-chirp source then the total local signal power due to the echo from P is given by:
WP = Σn (An² + Bn²).     (5.3)

The intensity of the pixel in the image corresponding to P is then made suitably dependent on WP (e.g. proportional to W, sqrt(W), log(W) etc., depending on the requirement). The imaging performance of the equipment, as indicated by the point-spread function for each pixel point, can be modified in detail by using the more general expressions for the composite signal coefficients:

An = Σk (hk a'k,n),   Bn = Σk (hk b'k,n)     (5.4)
where the {hk} are precomputed weights associated with each transducer in the receiver array.
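The per-pixel combination of Equations 5.1 to 5.4 can be summarised in the following sketch, in which the nearest-sample coefficients, the residual delays and the optional shading weights hk are assumed to be available already; the names and shapes are illustrative.

```python
import numpy as np

def pixel_intensity(a_k, b_k, residual_delays, freqs, weights=None):
    """a_k, b_k: (K, M) nearest-sample coefficients for K transducers and M
    frequencies; residual_delays: (K,) residual time errors e; freqs: (M,)."""
    K, M = a_k.shape
    h = np.ones(K) if weights is None else weights
    A = np.zeros(M)
    B = np.zeros(M)
    for k in range(K):
        u = 2.0 * np.pi * freqs * residual_delays[k]            # Eq. 5.1
        A += h[k] * (a_k[k] * np.cos(u) + b_k[k] * np.sin(u))   # An, Eq. 5.4
        B += h[k] * (-a_k[k] * np.sin(u) + b_k[k] * np.cos(u))  # Bn, Eq. 5.4
    W = np.sum(A**2 + B**2)                                     # Eq. 5.3
    return np.sqrt(W)                        # one possible intensity mapping
```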
5.3.1.9 Processing Steps
In order to compute the image intensity for all points in a given image, the processing can be subdivided into stages in many different ways. For example it is possible to carry out all processing for each frequency component in turn for all image points.
The contribution An² + Bn² to the power, W, at each image point (Eq. 5.3) is then added into the image register. This processing continues until all frequency components have been processed. Alternatively the processing can be carried out for each pixel in turn for all frequencies. The
processing can also be split between different processing units operating in parallel.
5.3.2 Local Spectral Estimation using a Convolution Kernel Set
5.3.2.1 Introduction
The operation "Estimate local spectral components" in the Block Schematic (Fig. 5) covers a range of possible operations. Each sampled transducer echo consists of a sequence of values, {sn}. The generation of the m'th spectral component, cr,m, centred on a sample, sr, is achieved by means of the convolution:

cr,m = Σj (km,j sr-j),   summed over j = -R ... +R

The length of the kernel {km,j} is 2R+1. Typically the entire set of component values for a given transducer can be generated by passing the sequence {sn} through a transversal filter loaded with the particular kernel values.
The optimum kernel set to be used depends on many parameters:
Source signature;
Carrier frequency(ies);
Nature of imaging task (e.g. survey, isolated target identification, statistical classification of seafloor);
Sampling rate;
Available processing capacity;
Maximum digitisation rate available.
Nevertheless a common processing technique can be used in which just the convolution kernels are modified.
In general higher sampling rates will improve image quality, and reduce artefacts due to random noise; however the processing and digitisation rate increases correspondingly. It is one of the aims of the invention to allow imaging using sampling rates down to the theoretical lower limit of twice the bandwidth of the signal component of the echo (Nyquist).
In this section we give some examples of the techniques for defining the convolution kernels.
5.3.2.2 Length of Convolution Kernels
It is helpful to define the term "local" in "local spectral estimation" more carefully. The local echo power estimate is normally required averaged over the length on the ground corresponding to the pixel separation in the range direction. Thus if the image size is 512 x 512 pixels, and this image covers a 100 m x 100 m field of view, then each pixel is 20 cm long in the range direction. The time window in each transducer signal corresponding to the pixel spacing in the image is the spatial distance between pixel points divided by the 2-way speed of sound, e.g. 750 m/sec in water. In the above example, the time window is 266 microseconds.
The number of samples in the time window depends on the sampling rate. Thus if the sampling rate is 50 kHz, we have 13 samples in each time window. These considerations determine the length of the kernels in the kernel sets which are used for spectral estimation.
5.3.2.3 Types of Signal
For illustration only, we shall consider four specific types of source signature. The echo from the insonified scene will have a frequency composition which is related to the frequency composition of the source, so that a narrow-band source will produce narrow-band echoes, and a wide-band source will produce wide-band echoes. The four source signatures are:
(a) minimum (or near-minimum) time-bandwidth modulated signals
(b) band-limited modulated signals
(c) seismic wavelets
(d) chirp (swept carrier frequency) sources.
The first two categories are normally used in sector- scan and side-scan sonar. The seismic wavelet would be employed for sub-bottom profiling and could be employed for swath bathymetry. Chirp (swept frequency) sources may be used in each of the above applications.
5.3.2.4. Minimum Time-Bandwidth Sources
A typical source signal has a single carrier frequency with an approximately Gaussian envelope (Fig. 7). Other pulse shapes with well rounded envelopes are also near minimum time-bandwidth. We assume that the length of the pulse is approximately the same as the pixel spacing in time (which is the usual situation). Then the time-bandwidth law asserts that the uncertainty in frequency of any spectral estimate made during that time window is equal to the bandwidth of the signal itself. Hence there is no point in attempting to measure more than one frequency component.
Spectral estimation now reduces to the problem of determining the amplitude and phase of the carrier in the local echo. The standard solution to this problem is to use a quadrature matched filter. A pair of kernels is created whose values are just the sampled source signature, and the sampled source signature with the carrier phase-shifted by 90 degrees.
This pair of kernels is now used to generate a single pair of Frequency Components in the Frequency Component Store.
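A minimal construction of such a kernel pair is sketched below for a Gaussian-enveloped carrier of the kind shown in Fig. 7; the carrier frequency, sampling rate and pulse length are assumed example values.

```python
import numpy as np

def matched_filter_kernels(fc, fs, n_samples):
    """Return the in-phase and quadrature kernels for a carrier fc (Hz),
    sampling rate fs (Hz) and a pulse of n_samples samples."""
    t = (np.arange(n_samples) - n_samples / 2) / fs
    sigma = n_samples / (6.0 * fs)                 # +/- 3 sigma fits the window
    envelope = np.exp(-0.5 * (t / sigma) ** 2)     # near-Gaussian envelope
    k_i = envelope * np.cos(2 * np.pi * fc * t)    # sampled source signature
    k_q = envelope * np.sin(2 * np.pi * fc * t)    # carrier shifted by 90 deg
    return k_i, k_q

k_i, k_q = matched_filter_kernels(fc=100e3, fs=500e3, n_samples=25)
```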
5.3.2.5 Band-Limited Sources
The standard technique for local spectral estimation used in seismic processing is to carry out a local DFT within the time-window. The sampled signal is first padded with zeros so that the length is 2^n, e.g. 512 or 1024. Then the signal is tapered to zero at each end so that it becomes approximately periodic, and the standard DFT algorithm is applied. The combined operations of taper followed by DFT are equivalent to a set of convolution operations.
In this instance the signal component of the echo is known to consist of a limited range of frequencies. Hence all frequencies outside this range are ignored. Careful choice of sampling frequency avoids aliasing problems. Alternatively there are mathematical modifications to the Fourier kernels which correct for aliasing to some extent (Corrected DFT Technique) .
An alternative approach is to use a set of quadrature bandpass filters to estimate the local spectrum. The number of such filters (quadrature-pairs) is equal to the time-bandwidth product of the signal to be estimated. This approach minimises the number of frequency components which are computed. However the DFT solution may have some merit if special hardware (DFT chips) can be employed in the computation.
5.3.2.6 Seismic Wavelets
A typical wavelet consists of a pair of sharp positive and negative peaks followed by a longer period ripple. The spectral composition may have a maximum around 3 kHz, but the spectrum will extend from around 1 kHz up to 10 kHz. Such a signal can be handled in precisely the same way as the band-limited signal discussed earlier, though here the minimum sampling frequency given by Nyquist is just twice the maximum frequency in the signal, and there is no special aliasing problem.
5.3.2.7 Chirp Sources
For spectral estimation, the chirp source can be treated as a band-limited source with a rather wide bandwidth, and either a DFT kernel set, the Corrected DFT technique, or the Quadrature Bandpass Filter technique can be used to generate the kernel set. The length of the time-window used in the kernel set should be at least the length of the chirped pulse, even if the pixel spacing is closer, if the pulse compression technique described below is to work successfully.
The final step in image generation (Eq. 5.3 in 5.3.1.8 above) should be modified to carry out pulse compression of the chirped echo. Pulse compression of a chirped echo is normally carried out by matched filtering of the echo with a copy of the chirped source signature. Using digital signal processing, both the echo and the source signature are sampled.
Matched filtering in the time-domain is equivalent to multiplication of the complex spectrum of the echo by the complex spectrum of the chirp source in the frequency domain. This is easily achieved with the given processing scheme, since the set {An}, {Bn} defined in Sec. 5.3.1.8 is just the complex spectrum of the echo. If the complex spectrum of the chirp source, {Pn}, {Qn}, is precomputed by the same technique as that used to estimate the local echo spectrum, then:
WP = Σn (An Pn - Bn Qn)     (5.4)
Equation 5.4 gives the local power of the pulse compressed signal.
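In code this pulse compression step reduces to a single weighted sum over the frequency components; the sketch below assumes the composite echo coefficients {An}, {Bn} and the precomputed source coefficients {Pn}, {Qn} are already held as arrays.

```python
import numpy as np

def compressed_power(A, B, P, Q):
    """A, B: composite echo coefficients per frequency (Eq. 5.2);
    P, Q: precomputed coefficients of the chirp source signature."""
    return np.sum(A * P - B * Q)               # WP as in Eq. 5.4
```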
5.3.3 Look-up Tables, Zoom Facility, Display Stabilization
5.3.3.1 Introduction
Much of the power in the holographic system is given by the proper design and use of look-up tables. In the basic two-dimensional sector scan sonar the correct computation of time delays not only provides a dynamic focus facility, but also eliminates the stage of polar-to-cartesian grid conversion which is required in standard sonar imaging systems. For side-scan sonar, the slant-range to true-range correction which takes account of the vertical separation of the sonar array above the sea-floor can also be built into the look-up table values.
Any change to the image size, for example generating a zoomed display of part of the insonified scene, can be done by substituting a new look-up table. Similarly image stabilization against movement of the sonar platform can be done by updating the look-up tables as the platform moves.
The explicit form of look-up table, in which times to each individual transducer from each individual pixel point are stored with the necessary precision, is not convenient in a number of respects:
(a) the table contains a large number of entries, and each value must be held with great precision
(b) the table takes a long time to compute, so is only convenient if the image is seldom changed (it can then be stored in ROM)
(c) it is inconvenient for stabilization or zoom.
As described in 5.3.1.7., the table stores time delays. In fact it is more convenient to store values as
(delay/sample period) but this is a detail. The following sections show how the look-up tables can be structured to reduce storage capacity and regeneration time at the expense of a slightly greater access time. The method of updating the tables to achieve image stabilization is also described.
Time Delay Look-up Tables for Holographic Imaging
1. Introduction
In a basic survey mode of operation, the look-up table is fixed for a given location of the imaged scene with respect to the transducer array. Hence the Delay Table could be precomputed for each standard mode of survey and stored in ROM. However, when a 'zoom' facility is used, the Delay Table for the zoomed window must be generated rapidly if the facility is to be of operational value. An even more serious problem arises if the imaged scene requires to be stabilized against movement of the transducer array, because the look-up table requires to be refreshed each ping.
We therefore have an operational requirement to regenerate a large accurate look-up table in a fraction of a second. Clearly a different kind of solution is needed. The present invention proposes a solution based on first and second order differences from a 'Base Table' .
2. Outline of Method
2.1 Basic Tables
Figure 8 shows a simple illustration of an insonified scene, with a superimposed image grid. The point, S, located at the centre of the transducer array is assumed to be the source of the transmitted acoustic signal. If it is fixed at some other point of the array, minor computational differences arise.
No assumption is made about the grid spacing for the array; it may be a suitable grid for surveying the whole scene, or a much smaller grid suitable for zoomed, stabilised display. In the latter case the basic grid may be allowed to cover more of the scene, and hence be bigger than the displayed window.
Given the grid, the (R,θ) coordinates of each grid point with respect to S only are computed and stored in a pair of basic tables. This (R,θ) table will be referred to as the 'Polar Base Table'. The R co-ordinates can be stored in whatever units are most convenient. In this document, distance in the water medium and the corresponding time delay at the speed of sound are used interchangeably.
2.2 Grid Stabilization against Movement of S
Suppose that we require to stabilize the grid against the movement of the array centre from S to S' (Fig. 9). Movement of an integral number of grid squares presents no problem other than relabelling the axes of the existing Base Table before access. Consider, therefore, a small vector movement of (d,φ) within a grid square. To a first order of approximation, the new values of R and θ are given by:

R' = R - d cos(φ - θ)          (1a)
θ' = θ - d sin(φ - θ)/R        (1b)
To stabilize the image against array movement through the water, it is necessary to generate a modified (R',θ) Table from the Polar Base Table each ping using Equation 1a. The change to θ from (1b) can almost certainly be ignored.
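A sketch of this per-ping update, applying Equation (1a) to the whole Polar Base Table at once, is given below; the array shapes are illustrative, and the bearing correction (1b) is neglected as suggested above.

```python
import numpy as np

def stabilise_ranges(R, theta, d, phi):
    """R, theta: Polar Base Table arrays for the displayed grid;
    d, phi: magnitude and bearing of the movement of the array centre S."""
    return R - d * np.cos(phi - theta)         # Eq. (1a); (1b) is neglected
```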
2.3 Delays for Complete Set of Transducers, Allowing for Yaw
Figure 10 shows the original image of Figure 8, with the array superimposed. At some reference time for image stabilization, the array pointed along the Y-axis of the grid. Starting from this reference time, it is required to stabilize the image against rotation of the array in the (X,Y) plane. Suppose the array makes an angle, α, with the Y axis after several pings in a stabilised mode of operation.
In this situation, the problem is to use the (R,θ) Table to compute the table of delays, rj, for the j'th individual transducer from the array centre. If there is no rotation of the array to consider, we have a special case of this problem with α = 0.
Consider the range, r(l), from some pixel point, P, in the image field to the transducer located at a distance +l from the array centre. From simple trigonometry, r(l) satisfies the equation:

r(l)² = R² + l² + 2Rl cos(π/2 - θ - α),

i.e.

r(l)² = R² + l² + 2Rl sin(θ + α)          (3)

where R is the range to the array centre, given by the Polar Base Table. The solution to this equation is required for each value of l corresponding to a transducer location on the array. If the transducer spacing is a constant equal to d0, and there are 2m+1 transducers mounted on the array, then l takes on the values l = jd0, for j = -m, ..., +m.
Take the derivative of Equation 3, and write θ' = θ + α, to get:

r dr = [l + R sin(θ')] dl          (4)

Provided d0 << r, the differential equation (4) can be replaced with a finite difference equation in which dl = d0. Let the range from P to the j'th transducer be rj, and write:

drj = rj+1 - rj          (5)

Then substituting in (4),

rj drj = [jd0 + R sin(θ')] d0          (6)

Starting from r0 = R, Equation 6 can be used to generate all the rj in the +ve direction. Changing the sign of d0 gives the rj in the -ve direction.
Note that in this difference equation, θ' remains fixed for all the transducers in the array, and R sin(θ') is also constant. Write

Hj = [R sin(θ') + jd0] d0

The difference equations now become

dHj = d0²          (7a)
drj = Hj/rj          (7b)

starting from the initial conditions:

H0 = d0 R sin(θ')
r0 = R
In many situations, it is acceptable to make the 'far-field' assumption that the change in r is negligible in Equation (7b). Under these circumstances

drj = H0/R = d0 sin(θ') = constant          (8a)

If dynamic focus is required, then the next approximation is to form the second difference equation:

d²rj = -Hj drj/rj² + dHj/rj
     ≈ -H0²/R³ + d0²/R
     = [1 - sin²(θ')] d0²/R
     = K(R,θ'), constant for all transducers          (8b)
Using this approximation, the difference equations become:

Initial conditions:

d²r0 = K(R,θ') = constant
dr0 = d0 sin(θ')
r0 = R

Increments:

drj = drj-1 + K          (9a)
rj+1 = rj + drj          (9b)
It depends on the speed of division whether Equations (9) are much faster to compute than Equation (7) . However Equations (9) are easier to implement in fixed-point arithmetic.
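The recurrence of Equations (9a) and (9b) is illustrated by the following sketch for one pixel point and one half of the array; repeating the call with -d0 gives the ranges on the other side. The function signature and example values are illustrative only.

```python
import numpy as np

def transducer_ranges(R, theta, alpha, d0, m):
    """Ranges r_j, j = 0..m, from one pixel point to the transducers on the
    positive half of the array; R, theta from the Polar Base Table, alpha the
    array yaw, d0 the element pitch."""
    theta_p = theta + alpha
    K = (1.0 - np.sin(theta_p) ** 2) * d0 ** 2 / R   # second difference (8b)
    r = [R]                                          # r_0 = R
    dr = d0 * np.sin(theta_p)                        # dr_0
    for _ in range(m):
        r.append(r[-1] + dr)                         # r_{j+1} = r_j + dr_j (9b)
        dr += K                                      # dr_j = dr_{j-1} + K  (9a)
    return np.array(r)

# Ranges to 20 elements on one side of an array of 10 mm pitch, yawed 5 degrees:
r_pos = transducer_ranges(R=30.0, theta=0.4, alpha=np.radians(5.0), d0=0.01, m=20)
```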
2.4 Creation of Base Table for Zoom Window
Here the task is to generate a new Polar Base Table for a window centred on an origin with Cartesian coordinates (x0, y0). The polar values (r0, θ0) are available for the origin from the existing Polar Base Table.
Remembering that we are looking for both r and θ for each grid point in the window, the simplest technique seems to be as follows. Let h be the grid spacing in both x and y directions. The procedure is to move out from the origin grid point by means of steps in x or y until each grid point is computed. One possibility is to move first in the ± x directions until the complete x axis of the window has been generated; then to move in ± y from each x-axis grid point, to cover the whole window. The computation required for each step is:

X-Step
Let the current point be (xn, yn, rn, θn). Then the next point is given by:

xn+1 = xn + h
yn+1 = yn
rn+1 = rn + h cos(θn)
θn+1 = arccos(xn+1/rn+1)
Y-Step
The next point is given by:
xn+1 = xn
yn+1 = yn + h
rn+1 = rn + h sin(θn)
θn+1 = arcsin(yn+1/rn+1)
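The X-step and Y-step recurrences can be combined into a short routine for a rectangular zoom window, as sketched below. For simplicity the sketch steps only in the +x and +y directions from the window origin, and the clipping guards are an implementation convenience rather than part of the method.

```python
import numpy as np

def zoom_base_table(x0, y0, r0, theta0, h, nx, ny):
    """Build (r, theta) tables for an nx x ny window of spacing h, whose
    origin has Cartesian coordinates (x0, y0) and polar values (r0, theta0)."""
    r = np.empty((ny, nx))
    th = np.empty((ny, nx))
    x, rr, tt = x0, r0, theta0
    for i in range(nx):                       # X-steps along the window x-axis
        r[0, i], th[0, i] = rr, tt
        x += h
        rr = rr + h * np.cos(tt)              # r_{n+1} = r_n + h cos(theta_n)
        tt = np.arccos(np.clip(x / rr, -1.0, 1.0))
    for i in range(nx):                       # Y-steps up each column
        y, rr, tt = y0, r[0, i], th[0, i]
        for j in range(1, ny):
            y += h
            rr = rr + h * np.sin(tt)          # r_{n+1} = r_n + h sin(theta_n)
            tt = np.arcsin(np.clip(y / rr, -1.0, 1.0))
            r[j, i], th[j, i] = rr, tt
    return r, th
```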
2.5 Combination of Zoom, Translation, Orientation and Tx Displacement
There is no difficulty in putting the results of 2.2, 2.3 and 2.4 together to generate or refresh a complete set of stabilized tables. Using the difference equation approximations in 2.3, there is no obvious virtue in storing values for individual transducer delays, which can be generated almost as rapidly as they can be accessed.
There is an additional virtue in working with difference values for the individual transducer delays. The absolute error in the delay values determines the precision with which the image is stabilized. The tolerable error is some fraction of the acoustic travel time corresponding to the image grid spacing, which is usually of the same magnitude as the pulse length. Error in computing the delay differences between transducers leads to phase errors which degrade the imaging process itself. The allowable difference error between transducers is therefore some fraction of the source signal carrier period. Hence delay differences require to be at least one order of magnitude more precise than the absolute delay values. The delay differences can be converted directly into phase-differences between
transducers when the spectral in-phase and quadrature components are calculated.
No attempt is made here to lay out the system computation in detail, as this depends on the particular modes of operation. It is envisaged that the Polar Base Table for the whole scene is precomputed without approximation, and perhaps held in ROM. The trig tables sin, cos, arcsin, arccos are also precomputed in advance. Working tables are then created for a zoomed window, and for display stabilization.
3. Computation Times
3.1 Stabilization Against Array Movement
Whatever the grid spacing, it is only necessary to compute the look-up tables for the pixel array that is actually displayed. Assume that this is a 512 x 512 image, with 1/4 million pixel points (in practice the image size may be somewhat smaller) . Equation (la) requires 2 add operations, 1 multiply, and a cosine table look-up for each pixel point. Assuming 50 nanoseconds per operation, the slowest execution time which can offer any kind of real-time capability, the computation time to stabilize for array translation is 50 ms.
3.2 Individual Transducer Delays
The next task is to generate the differential delays for each of the transducers in the array. Computation of initial conditions for the difference equations, (8) or (9) above, will also take 50-100 ms, assuming the necessary trigonometric function tables are available. Computation of
delay differences costs one or two additions per transducer per pixel point (test and branch instructions may be avoided using in-line code). The total time for 256 x 256 pixel points and 40 transducers is around 1 sec. This time is not likely to dominate the holographic computation.
3.3 Zoom Window
The computing cost is 2 additions, one division operation and 2 table look-ups per move. Assuming 50 ns per operation, the cost is 250 ns per grid point, and 62.5 ms for a 512 x 512 window. If the division operation is too expensive, the time can be reduced using a similar approximation to the one used to find the individual transducer delays, but it is probably not worth the trouble, particularly as zoom windows are likely to be smaller than 512 x 512.
4. Conclusion
Simple computational techniques have been demonstrated for generating the delay look-up tables required for holographic imaging. Most algorithms start from a derivative of the triangle relationship:
a² = b² + c² - 2bc cos(A)
The approximation is then made of replacing infinitesimals with finite differences.
5.3.4 Application to Swath Bathymetry
A particular application of the above general imaging scheme is swath bathymetry. The required sonar array configuration is similar to sector-scan, except that the array points vertically downwards, with the projected fan-shaped beam normal to the direction of motion of the sonar platform. As the platform moves, a succession of images is obtained of the sea floor along a line below the array, approximately at right angles to the ship's track. The array itself may be a conventional linear array, or bent in an arc around the hull of the ship or towed fish. As no penetration is required, high frequencies can be employed as for short-range sector-scan sonar.
The imaging procedure is essentially the same as described above, except that the image is only required in the neighbourhood of the expected location of the sea-floor, hence the following procedure is adopted. After each ping, the image is reconstructed, and a suitable image processing algorithm is used to locate the sea-bottom horizon in the image. This horizon should be corrected to true depth and horizontal offset allowing for the attitude of the sonar (yaw, pitch and roll) .
The imaging area for the next ping is controlled by the horizon found in the previous ping or pings, using any prediction techniques which may be appropriate. The aim is to reduce the number of pixels which are imaged, in order to speed up image reconstruction time, and hence increase the possible speed of the survey vehicle.
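A very simple form of this windowing step is sketched below, in which the previous ping's horizon is held as one depth value per across-track column and only pixels within a fixed margin of that horizon are reconstructed on the next ping; this representation and the margin value are assumptions for illustration.

```python
import numpy as np

def next_ping_window(prev_horizon_depth, n_range_pixels, pixel_size, margin=2.0):
    """prev_horizon_depth: (n_across,) horizon depth per across-track column (m);
    returns a boolean mask (n_range_pixels, n_across) of pixels to reconstruct."""
    depths = np.arange(n_range_pixels)[:, None] * pixel_size   # pixel depth axis
    lo = prev_horizon_depth[None, :] - margin
    hi = prev_horizon_depth[None, :] + margin
    return (depths >= lo) & (depths <= hi)
```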
The embodiments of the invention described herein relate generally to systems having linear transducer arrays, from which images of two-dimensional sections of the target area can be generated; however, the invention is equally and directly applicable to planar (n x m element) arrays, T-
shaped arrays and the like which may be used to produce three-dimensional images.
Claims
1. An acoustic imaging method for marine survey and other purposes wherein a target area is insonified by periodic acoustic pulses, reflections of said pulses are detected by an array of at least two transducers which generate output signals in response thereto, and said output signals are processed so as to produce an image of all or part of the insonified target area, said image comprising an array of pixels each corresponding to a point in the insonified area and the intensity of each pixel representing the strength of the pulse reflected from said corresponding point, comprising the steps of:
(a) sampling the output signals from each of said transducers at a predetermined rate during the period between said transmitted pulses;
(b) digitizing and storing said samples;
(c) for each pixel of the image,
(i) selecting a corresponding set of digitized samples from each transducer, the selection of the sample set being determined by the time of flight of a transmitted pulse to the point in the insonified area corresponding to the pixel and back to the particular transducer in the array;
(ii) correcting each selected sample set by interpolation so that it is precisely aligned with the required time of flight by: estimating the in-phase and quadrature coefficients for one or more frequency components present in each sample set by convolution of the sample set with one or more pairs of sets of predetermined coefficients; and computing the frequency coefficients for the required time-shifted sample set by phase-shifting the estimated pairs of coefficients for each frequency component; (iii) combining the phase-shifted coefficients for each frequency component from all transducers to generate the corresponding coefficients for a composite signal; (iv) deriving the strength of the reflected pulse and hence intensity of the pixel from the frequency coefficients of the composite signal.
2. The method of claim 1, wherein said in-phase and quadrature coefficients for said one or more frequency components are estimated using quadrature matched filtering.
3. The method of claim 1, wherein said in-phase and quadrature coefficients for said one or more frequency components are estimated using Discrete Fourier Transform computation.
4. The method of any of claims 1, 2 or 3, wherein step (iv) of claim 1 includes adding the cross-products of the in-phase and quadrature frequency components of the composite signal with the corresponding predetermined frequency component of the transmitted pulse so as to effect pulse-compression of frequency modulated transmitted pulses.
5. The method of any preceding claim, wherein the times of flight of a transmitted pulse to each point in the insonified area corresponding to each pixel are obtained from values stored in a look-up table.
6. The method of claim 5, wherein said stored values are wholly predetermined.
7. The method of claim 6, wherein said stored values are partly predetermined.
8. The method of claim 5, claim 6, or claim 7, wherein said stored values are computed to provide geometric correction of the time of flight for pixels corresponding to points in the near field of the insonified area.
9. The method of any of claims 5 to 8, wherein said stored values are computed to provide geometric correction of slant- range to true-range for side-scan sonar applications.
10. The method of any of claims 5 to 9, wherein said stored values are computed to provide imaging in a selected frame of reference.
11. The method of any of claims 5 to 10, wherein said stored values are recomputed to provide an enlarged display of a selected part of a previously generated display.
12. The method of any of claims 5 to 11, wherein said look-up tables are generated or updated on the basis of the time delay from a given pixel point to a selected reference element on the array plus the differential delays to each other transducer of the array.
13. The method of claim 12, wherein the generation and updating of said tables is effected by the use of finite difference equations allowing for appropriate angular relationships in the insonified area to be imaged.
14. The method of any preceding claim, adapted for swath bathymetry applications wherein said transducer array includes an acoustic transmitter array and an acoustic receiver array mounted on a vessel or other survey vehicle, further comprising the steps of:
(a) orienting the transmitter array in such a manner that a narrow strip of the sea-floor is insonified at right angles to the direction of motion of the vessel or other vehicle;
(b) orienting the receiver array in such a manner as to image the sector below the array including the insonified sea- floor intercept;
(c) after each pulse transmission, imaging a selected area of the scene which includes the sea-floor intercept using the method of any preceding claim;
(d) locating the pixels in the image corresponding to the sea- floor intercept by means of standard signal or image processing techniques; and
(e) using an horizon intercept determined from one or more previous insonifications to select the area for which the image needs to be computed after the next insonification.
15. An acoustic imaging apparatus for marine survey and other purposes comprising transmitter means for transmitting periodic acoustic pulses, an array of at least two transducers for detecting reflections of said pulses from a target area and for generating output signals in response thereto, and means for processing said output signals so as to produce an image of at least a part of said target area, said image comprising an array of pixels each corresponding to a point in said target area and the intensity of each pixel representing the strength of the reflected pulse from said corresponding points, and further comprising: data acquisition and storage means for sampling the output signals from each of said transducers at a predetermined rate during the period between successive transmitted pulses and for digitising and storing said samples; and wherein said data processing and image generation means is adapted to select a set of digitised samples from each transducer for each pixel of the image, said selection being determined by the time of flight of a pulse from the transmitter means to the corresponding point in the target area and back to the array, to correct each selected sample set by interpolation so that it is precisely aligned with the required time of flight by estimating the in-phase and quadrature coefficients for one or more frequency components present in each sample set by convolution of the sample set with one or more pairs of sets of predetermined coefficients, and computing the frequency coefficients for the required time-shifted sample set by phase-shifting the estimated pairs of coefficients for each frequency component, and combining the phase-shifted coefficients for each frequency component from all transducers to generate corresponding coefficients for a composite signal, in order to derive the strength of the reflected pulse and hence intensity of the pixel from the frequency coefficients of the composite signal.
16. An acoustic imaging apparatus as claimed in claim 15, wherein the data acquisition and storage means includes means for amplifying the analogue signals received from the transducers of the array, analogue to digital conversion means for sampling the analogue signals, and means for multiplexing the signals from each transducer channel.
17. An acoustic imaging apparatus as claimed in claim 16, wherein the analogue output signals from each transducer channel are amplified and digitised in parallel, and the parallel, digital signals are digitally multiplexed prior to storage.
18. An acoustic imaging apparatus as claimed in claim 16, wherein the analogue output signals from the transducers are amplified separately in a first amplification stage, the parallel, amplified analogue signals are analogue multiplexed, and the multiplexed, analogue signal is further amplified prior to digitisation and storage.
19. An acoustic imaging apparatus as claimed in claim 18, wherein the further amplification stage is a digital switched gain stage wherein the gain is adjusted to compensate for attenuation of the reflected signals.
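The digital switched gain stage of claims 18 and 19 raises the gain with time so that the attenuation of later, more distant echoes is compensated. A minimal sketch of such a gain law is given below, assuming spherical spreading plus absorption and a stage that can only switch between discrete gain steps; the coefficients and the function name are assumptions, not values taken from the specification.

```python
import math

def switched_gain_db(t, c=1500.0, alpha_db_per_m=0.05, spreading_db=40.0, step_db=6.0):
    """Illustrative gain setting for a digital switched-gain stage.

    t              : time since the pulse was transmitted, s
    c              : assumed speed of sound in water, m/s
    alpha_db_per_m : assumed absorption coefficient of sea water
    spreading_db   : 40 log10(R) two-way spherical spreading law
    step_db        : size of the discrete gain steps the stage can select
    """
    r = max(c * t / 2.0, 1.0)                         # one-way range, metres
    ideal = spreading_db * math.log10(r) + 2.0 * alpha_db_per_m * r
    return step_db * round(ideal / step_db)           # nearest switchable step
```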
20. An acoustic imaging apparatus as claimed in claim 19, wherein the pre-amplification and digital switched gain stages of each channel prior to digitisation are similar to one another.
21. An acoustic imaging apparatus as claimed in any of claims 15 to 20, wherein said data acquisition and storage means includes timing and control means to control the multiplexing and digitising means.
22. An acoustic imaging apparatus as claimed in claim 21 when dependent upon claim 19 or 20, wherein said timing and control means further controls said digital switched gain stages.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9014544.2 | 1990-06-29 | ||
GB909014544A GB9014544D0 (en) | 1990-06-29 | 1990-06-29 | Methods and apparatus for acoustic holographic imaging in marine and other acoustic remote sensing equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1992000584A1 (en) | 1992-01-09 |
Family ID=10678454
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1991/001058 WO1992000584A1 (en) | 1990-06-29 | 1991-06-28 | Method and apparatus for acoustic holographic imaging in marine and other acoustic remote sensing equipment |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB9014544D0 (en) |
WO (1) | WO1992000584A1 (en) |
- 1990-06-29 GB GB909014544A patent/GB9014544D0/en active Pending
- 1991-06-28 WO PCT/GB1991/001058 patent/WO1992000584A1/en unknown
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0109869A1 (en) * | 1982-10-22 | 1984-05-30 | Thomson-Csf | Digital sonar beam-forming apparatus |
WO1985000889A1 (en) * | 1983-08-05 | 1985-02-28 | Luthra Ajay K | Body imaging using vectorial addition of acoustic reflections to achieve effect of scanning beam continuously focused in range |
GB2192061A (en) * | 1986-06-27 | 1987-12-31 | Plessey Co Plc | A phased array sonar system |
Non-Patent Citations (2)
Title |
---|
JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, vol. 76, no. 4, October 1984, New York, US, pages 1132-1144; M.E. WEBER ET AL.: 'A frequency-domain beamforming algorithm for wideband coherent signal processing'; see paragraph I.A *
ULTRASONICS, vol. 15, no. 2, March 1977, NY, US, pages 83-88; F. DUCK ET AL.: 'Digital image focussing in the near field of a sampled acoustic aperture'; cited in the application; see paragraph "Theory"; see figure 1 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5793703A (en) * | 1994-03-07 | 1998-08-11 | Bofors Underwater Systems Ab | Digital time-delay acoustic imaging |
DE10334902B3 (en) * | 2003-07-29 | 2004-12-09 | Nutronik Gmbh | Signal processing for non-destructive object testing involves storing digitized reflected ultrasonic signals and phase-locked addition of stored amplitude values with equal transition times |
US7581444B2 (en) | 2003-07-29 | 2009-09-01 | Ge Inspection Technologies Gmbh | Method and circuit arrangement for disturbance-free examination of objects by means of ultrasonic waves |
US8609630B2 (en) | 2005-09-07 | 2013-12-17 | Bebaas, Inc. | Vitamin B12 compositions |
WO2014117767A1 (en) * | 2013-01-29 | 2014-08-07 | Atlas Elektronik Gmbh | Underwater sound signal, underwater transmitter or underwater receiver, underwater sonar, underwater vehicle and retrofitting kit |
WO2019014771A1 (en) * | 2017-07-20 | 2019-01-24 | UNIVERSITé LAVAL | Second-order detection method and system for ranging applications |
US11460558B2 (en) | 2017-07-20 | 2022-10-04 | UNIVERSITé LAVAL | Second-order detection method and system for optical ranging applications |
CN107796473A (en) * | 2017-11-09 | 2018-03-13 | 广东美的环境电器制造有限公司 | A kind of method, apparatus and toilet seat for monitoring excretion data |
CN107796473B (en) * | 2017-11-09 | 2024-03-15 | 广东美的环境电器制造有限公司 | Method and device for monitoring excretion data and toilet bowl |
Also Published As
Publication number | Publication date |
---|---|
GB9014544D0 (en) | 1990-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4237737A (en) | Ultrasonic imaging system | |
US6130641A (en) | Imaging methods and apparatus using model-based array signal processing | |
US10451758B2 (en) | Multi-function broadband phased-array software defined sonar system and method | |
JP4302718B2 (en) | Coherent imaging device | |
EP0155280B1 (en) | Body imaging using vectorial addition of acoustic reflections to achieve effect of scanning beam continuously focused in range | |
US6056693A (en) | Ultrasound imaging with synthetic transmit focusing | |
US5793701A (en) | Method and apparatus for coherent image formation | |
US4207620A (en) | Oceanographic mapping system | |
US5793703A (en) | Digital time-delay acoustic imaging | |
EP0916966B1 (en) | Ultrasonic signal focusing method and apparatus for ultrasonic imaging system | |
US5142649A (en) | Ultrasonic imaging system with multiple, dynamically focused transmit beams | |
EP2063292B1 (en) | Calibrating a multibeam sonar apparatus | |
US4119940A (en) | Underwater viewing system | |
US4815045A (en) | Seabed surveying apparatus for superimposed mapping of topographic and contour-line data | |
EP0179073B1 (en) | Hybrid non-invasive ultrasonic imaging system | |
US5706818A (en) | Ultrasonic diagnosing apparatus | |
Gehlbach et al. | Digital ultrasound imaging techniques using vector sampling and raster line reconstruction | |
US4958330A (en) | Wide angular diversity synthetic aperture sonar | |
JPH07506519A (en) | Barrier filter using circular convolution for color flow imaging systems | |
EP0139242B1 (en) | Ultrasonic imaging device | |
US4688430A (en) | Device for imaging three dimensions with a single pulse transmission | |
US5548561A (en) | Ultrasound image enhancement using beam-nulling | |
US5476098A (en) | Partially coherent imaging for large-aperture phased arrays | |
US5029144A (en) | Synthetic aperture active underwater imaging system | |
WO1992000584A1 (en) | Method and apparatus for acoustic holographic imaging in marine and other acoustic remote sensing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE |
|
NENP | Non-entry into the national phase |
Ref country code: CA |