A speckle denoising method for OCT images based on a conditional generative adversarial network
Technical field
The invention belongs to the technical field of retinal image denoising, and specifically relates to a speckle denoising method for OCT images based on a conditional generative adversarial network.
Background art
Optical coherence tomography (OCT) is a broadband optical scanning tomographic imaging technique developed in recent years. It exploits the low coherence of a broadband light source to realize high-resolution, non-invasive optical tomographic imaging; at present the resolution of OCT images generally reaches more than ten micrometres and can be as high as several micrometres.
Optical coherence tomography can quickly acquire cross-sectional images of ocular biological tissue with micrometre resolution, and it has become an important tool for retinal imaging, helping clinical ophthalmologists diagnose and treat disease. Speckle noise, caused by multiple forward and backward scattering of light waves, is the main factor degrading OCT image quality. Speckle noise often obscures subtle but important morphological details, which is unfavourable for observing retinopathy and also degrades the performance of objective, accurately quantified automatic analysis methods. Although OCT imaging resolution, speed and depth have improved substantially over the past 20 years, speckle noise, an inherent problem of the imaging technique, has not yet been well solved.
The patent with application No. 201210242543.5 discloses an OCT image speckle-noise reduction algorithm based on adaptive bilateral filtering. It builds a speckle-noise model of the original OCT image, constructs, according to the Rayleigh criterion, a spatial function that takes this speckle-noise model as its variable, and, by analysing the characteristics of the spatial function, derives a formula by which the spatial function F adaptively corrects the filter weighting coefficients. The method can reduce speckle noise in OCT images, reduce the image mean square error and improve the peak signal-to-noise ratio, while largely preserving image edge information, improving edge contrast and yielding clearer edge details. However, current speckle-denoising algorithms for retinal OCT images have the following defects: (1) general image-denoising algorithms are hard to target effectively at the characteristics of speckle noise; (2) some traditional image-denoising algorithms cause a certain degree of edge distortion and contrast decline; (3) most image-denoising algorithms can hardly preserve image detail information while removing speckle noise, and easily over-smooth the image; (4) some methods have excessively high implementation complexity and time cost, and can hardly adapt to images acquired by different types of OCT scanner.
Summary of the invention
In response to the problems existing in the prior art, the purpose of the present invention is to provide a speckle denoising method for OCT images based on a conditional generative adversarial network. The present invention uses the conditional generative adversarial network (cGAN) framework to train a mapping model from OCT images containing speckle noise to noise-free OCT images, and then uses the mapping model to remove the speckle noise from retinal OCT images.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A speckle denoising method for OCT images based on a conditional generative adversarial network, comprising the following steps:
S1, acquisition of training images: three-dimensional images containing multiple B-scan images are acquired repeatedly from the same eye;
S2, pre-processing of training images: the B-scan images at nearby positions in the three-dimensional images are registered, the registered images are averaged and contrast-stretched to obtain a noise-free OCT image, and the noise-free OCT image is paired with the original B-scan image containing speckle noise at the corresponding position to form a training image pair;
S3, data augmentation: the pre-processed training image pairs are augmented by random scaling, horizontal flipping, rotation and non-rigid transformation to obtain the final training data set;
S4, model training: using the training data set, a conditional generative adversarial network architecture into which a constraint that preserves edge details has been introduced is trained end to end to obtain an OCT image speckle-denoising model that is sensitive to edge information;
S5, model use: an OCT image containing speckle noise is fed into the trained OCT image speckle-denoising model for computation, and a noise-free OCT image is obtained.
Specifically, in step S2, registering the B-scan images at nearby positions in the three-dimensional images comprises the following steps:
S21, one of the multiple three-dimensional images is selected at random as the target image;
S22, taking the i-th B-scan image of the target image as the reference, the B-scan images in all three-dimensional images whose positions are close to that of the i-th B-scan image are placed into one set;
S23, using an affine transformation, all B-scan images in the set other than the i-th B-scan image are registered to the i-th B-scan image.
Further, in step S2, averaging the registered images and performing contrast stretching comprises the following steps:
S24, from the registered images, the images with the highest average structural similarity index are selected and averaged together with the i-th B-scan image to obtain the reference denoised image corresponding to the i-th B-scan image;
S25, a piecewise-linear gray-stretch transform is applied to the reference denoised image to obtain the contrast-enhanced standard denoised image: gray values below the mean of the background region are mapped to 0, and the remaining gray values are linearly stretched to [0, 255].
Further, in step S24, the average structural similarity index is obtained by the following formula:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

where x and y are two windows of size W × W at corresponding positions in the two images, μ_x and μ_y are the means of the pixel gray levels in the two windows, σ_x² and σ_y² are the variances of the pixel gray levels in the two windows, and σ_xy is the covariance of x and y; the constants are C1 = 2.55 and C2 = 7.65. The average structural similarity index between two images is the mean of SSIM(x, y) over all window positions.
Specifically, in step S3:
the random scaling uses different zoom factors to simulate images acquired by OCT instruments of different resolutions, so that the model trained on the augmented data set can be tested on images acquired by different types of OCT scanner;
the horizontal flipping simulates the symmetry between the right eye and the left eye, ensuring that the model trained on the augmented data set is suitable for both eyes;
the rotation simulates the different tilts of the retina in OCT images, with rotation angles ranging from −30° to 30°, improving the robustness of the model trained on the augmented data set to retinal OCT images with different degrees of tilt;
the non-rigid transformation simulates the uneven deformation caused by different pathologies, so that the model trained on the augmented data set can handle OCT images with different pathologies.
Specifically, in step S4, the conditional generative adversarial network comprises a generator and a discriminator;
the conditional generative adversarial network uses the input image as a condition to constrain the generated image;
the generator learns through training to generate images that the discriminator finds hard to distinguish, and the discriminator learns through training to improve its own discriminating ability.
Further, the objective function of the conditional generative adversarial network is:

$$L_{cGAN}(G,D)=\mathbb{E}_{(x,y)\sim P_{data}(x,y)}[\log D(x,y)]+\mathbb{E}_{x\sim P_{data}(x),\,z\sim P_z(z)}[\log(1-D(x,G(x,z)))]$$

where P_data(x, y) is the joint probability density function of x and y, P_data(x) is the probability density function of x, and P_z(z) is the probability density function of z; G is the generator and D is the discriminator. The input of the generator is a B-scan image x from the target image together with a random noise vector z, and its output is the corresponding generated image G(x, z). The input of the discriminator is either the real data pair (x, y) formed by the B-scan image x from the target image and its corresponding gold standard y, or the generated data pair (x, G(x, z)) formed by the B-scan image x and the generated image G(x, z); its output is the probability that the input pair is judged to be real.
During training, the goal of the discriminator is to maximize this objective function and the goal of the generator is to minimize it, so the optimized objective is:

$$G^{*}=\arg\min_{G}\max_{D}\,L_{cGAN}(G,D)$$

To make the generated image closer to the gold standard, an L1 distance constraint is introduced into the objective function:

$$L_{L1}(G)=\mathbb{E}_{(x,y)\sim P_{data}(x,y),\,z\sim P_z(z)}\big[\lVert y-G(x,z)\rVert_{1}\big]$$

To address the difficulty of clearly preserving edges while removing speckle noise, an edge loss L_edge(G) that is sensitive to edge information is further introduced into the objective function, where i and j denote the vertical and horizontal pixel coordinates in the image.
The final optimization objective of the conditional generative adversarial network is:

$$G^{*}=\arg\min_{G}\max_{D}\,L_{cGAN}(G,D)+\lambda_{1}L_{L1}(G)+\lambda_{2}L_{edge}(G)$$

where λ1 and λ2 are the weighting coefficients of the L1 distance and the edge loss, respectively.
Compared with the prior art, the beneficial effects of the present invention are: (1) by repeatedly acquiring three-dimensional images containing multiple B-scan images of the same eye, registering the B-scan images at nearby positions, averaging them and stretching their contrast, the present invention obtains training images of higher quality; (2) in the augmentation of the training data, random scaling lets the model trained on the augmented data set be tested on images acquired by different types of OCT scanner, horizontal flipping ensures that the trained model is suitable for both the left and the right eye, rotation improves the robustness of the trained model to retinal OCT images with different degrees of tilt, and non-rigid transformation lets the trained model handle OCT images with different pathologies; (3) a constraint that preserves edge details is introduced into the conditional generative adversarial network architecture during training, yielding an OCT image speckle-denoising model that is sensitive to edge information, so that the denoising model of the invention effectively removes speckle noise while still preserving image detail information.
Description of the drawings
Fig. 1 is a flowchart of the speckle denoising method for OCT images based on a conditional generative adversarial network according to the present invention;
Fig. 2a is one B-scan image of the target image in Embodiment 1;
Fig. 2b is the reference denoised image in Embodiment 1 corresponding to the original B-scan image, obtained after registration and averaging;
Fig. 2c is the contrast-enhanced standard denoised image in Embodiment 1 corresponding to the original B-scan image;
Fig. 3 is a schematic diagram of the U-Net structure of the generator in Embodiment 2;
Fig. 4 is a schematic diagram of the PatchGAN model structure of the discriminator in Embodiment 2;
Fig. 5 shows the manually delimited background region and three signal regions in Embodiment 3;
Fig. 6a is a comparison of an OCT image before and after denoising with the denoising model in Embodiment 3;
Fig. 6b is a comparison of an OCT image before and after denoising with the denoising model in Embodiment 3;
Fig. 6c is a comparison of an OCT image before and after denoising with the denoising model in Embodiment 3;
Fig. 6d is a comparison of an OCT image before and after denoising with the denoising model in Embodiment 3.
Specific embodiment
The technical solution of the present invention is described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Embodiment 1
As shown in Fig. 1, this embodiment provides a speckle denoising method for OCT images based on a conditional generative adversarial network, comprising the following steps:
S1, acquisition of training images: K three-dimensional OCT images are acquired repeatedly from the same normal eye, avoiding eye movement as much as possible during acquisition;
S2, pre-processing of training images: the B-scan images at nearby positions in the three-dimensional images are registered, the registered images are averaged and contrast-stretched to obtain a noise-free OCT image, and the noise-free OCT image is paired with the original B-scan image containing speckle noise at the corresponding position to form a training image pair;
S3, data augmentation: the pre-processed training image pairs are augmented by random scaling, horizontal flipping, rotation and non-rigid transformation to obtain the final training data set;
S4, model training: using the training data set, a conditional generative adversarial network architecture into which a constraint that preserves edge details has been introduced is trained end to end to obtain an OCT image speckle-denoising model that is sensitive to edge information;
S5, model use: an OCT image containing speckle noise is fed into the trained OCT image speckle-denoising model for computation, and a noise-free OCT image is obtained.
Specifically, in step S2, registering the B-scan images at nearby positions in the three-dimensional images comprises the following steps:
S21, one of the K three-dimensional images is selected at random as the target image and denoted V1, and the other K−1 three-dimensional images are denoted V2, …, VK; the j-th B-scan image of Vm is denoted here B_j^m;
S22, taking the i-th B-scan image B_i^1 of the target image as the reference, the 2P+1 B-scan images of each of the K three-dimensional images whose indices are close to i (indices i−P to i+P) are placed into one set;
S23, using an affine transformation, all B-scan images in the set other than B_i^1 are registered to B_i^1; a registration sketch is given below.
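As an illustration of step S23, the following minimal sketch registers one B-scan of the set to the reference B-scan with an affine transform; the use of OpenCV's ECC alignment and the iteration settings are assumptions of this sketch, since the embodiment does not prescribe a particular affine-registration algorithm.

```python
import cv2
import numpy as np

def register_affine(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate an affine warp that aligns `moving` to `reference` and apply it."""
    ref32 = reference.astype(np.float32)
    mov32 = moving.astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)  # initial affine matrix (identity)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(ref32, mov32, warp, cv2.MOTION_AFFINE, criteria)
    h, w = reference.shape
    # ECC estimates the warp from moving to reference coordinates, hence WARP_INVERSE_MAP.
    return cv2.warpAffine(mov32, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

# Usage: every B-scan in the 2P+1 neighbourhood, taken from all K volumes, is registered
# to the reference B-scan B_i^1 of the target volume before averaging (step S24).
```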
Further, in step S2, averaging the registered images and performing contrast stretching comprises the following steps:
S24, from the (2P+1)K − 1 registered images, the Q images with the highest average structural similarity index to B_i^1 are selected and averaged together with B_i^1 to obtain the reference denoised image corresponding to B_i^1; repeating this operation for all B-scan images in the target image yields a full set of reference denoised images corresponding to the B-scan images at the different retinal positions. The original B-scan image is shown in Fig. 2a; it was acquired with a Topcon DRI-1 scanner and is a normal retinal image centred on the macula. The resulting reference denoised image is shown in Fig. 2b;
S25, a piecewise-linear gray-stretch transform is applied to the reference denoised image to obtain the contrast-enhanced standard denoised image: gray values below the mean of the background region are mapped to 0, and the remaining gray values are linearly stretched to [0, 255]. The standard denoised image is shown in Fig. 2c.
Further, in step S24, the average structural similarity index is obtained by the following formula:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+C_1)(2\sigma_{xy}+C_2)}{(\mu_x^2+\mu_y^2+C_1)(\sigma_x^2+\sigma_y^2+C_2)}$$

where x and y are two windows of size W × W at corresponding positions in the two images, μ_x and μ_y are the means of the pixel gray levels in the two windows, σ_x² and σ_y² are the variances of the pixel gray levels in the two windows, and σ_xy is the covariance of x and y; the constants are C1 = 2.55 and C2 = 7.65. The average structural similarity index between two images is the mean of SSIM(x, y) over all window positions.
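A minimal NumPy sketch of the window-based similarity defined above; the formula and the constants C1 = 2.55 and C2 = 7.65 follow the text, while the use of non-overlapping windows is an assumption of the sketch.

```python
import numpy as np

def mean_ssim(img_a: np.ndarray, img_b: np.ndarray, w: int = 5,
              c1: float = 2.55, c2: float = 7.65) -> float:
    """Average SSIM over corresponding W x W windows of two same-sized grayscale images."""
    h, width = img_a.shape
    scores = []
    for r in range(0, h - w + 1, w):
        for c in range(0, width - w + 1, w):
            x = img_a[r:r + w, c:c + w].astype(np.float64)
            y = img_b[r:r + w, c:c + w].astype(np.float64)
            mu_x, mu_y = x.mean(), y.mean()
            var_x, var_y = x.var(), y.var()
            cov_xy = ((x - mu_x) * (y - mu_y)).mean()
            ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
                   ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
            scores.append(ssim)
    return float(np.mean(scores))
```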
Further, in this embodiment, K = 10–20, P = 3–5, Q = 20–70, and W = 3 or 5.
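Putting steps S24 and S25 together with the parameters above, the following sketch builds one reference denoised image and applies the piecewise-linear gray stretch; mean_ssim refers to the similarity sketch given above, and the background mask used to estimate the background mean is an assumed input.

```python
import numpy as np

def build_reference(ref_bscan, registered, q, background_mask):
    """registered: list of B-scans already affinely registered to ref_bscan."""
    # Keep the Q registered images with the highest average structural similarity
    # to the reference B-scan, then average them together with the reference.
    scored = sorted(registered, key=lambda im: mean_ssim(ref_bscan, im), reverse=True)
    stack = np.stack([ref_bscan] + scored[:q]).astype(np.float64)
    averaged = stack.mean(axis=0)
    # Piecewise-linear gray stretch: gray values below the background mean map to 0,
    # the remaining range is stretched linearly to [0, 255].
    bg_mean = averaged[background_mask].mean()
    stretched = np.clip(averaged - bg_mean, 0, None)
    stretched = stretched / max(stretched.max(), 1e-6) * 255.0
    return stretched.astype(np.uint8)

# Usage (values within the stated ranges): build_reference(b_i, registered_images,
#                                                          q=50, background_mask=bg_mask)
```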
Specifically, in step S3:
the random scaling uses different zoom factors to simulate images acquired by OCT instruments of different resolutions, so that the model trained on the augmented data set can be tested on images acquired by different types of OCT scanner;
the horizontal flipping simulates the symmetry between the right eye and the left eye, ensuring that the model trained on the augmented data set is suitable for both eyes;
the rotation simulates the different tilts of the retina in OCT images, with rotation angles ranging from −30° to 30°, improving the robustness of the model trained on the augmented data set to retinal OCT images with different degrees of tilt;
the non-rigid transformation simulates the uneven deformation caused by different pathologies, so that the model trained on the augmented data set can handle OCT images with different pathologies; these four augmentations are sketched below.
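A minimal NumPy/SciPy sketch of the four augmentations, applied identically to a noisy/clean training pair; only the ±30° rotation range is taken from the text, while the scale range and the elastic-deformation parameters (alpha, sigma) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, zoom, gaussian_filter, map_coordinates

def augment(noisy: np.ndarray, clean: np.ndarray, rng: np.random.Generator):
    """Apply the same random transform to a noisy/clean training pair."""
    # Random scaling: mimics OCT scanners of different resolutions.
    s = rng.uniform(0.8, 1.2)
    noisy, clean = zoom(noisy, s, order=1), zoom(clean, s, order=1)
    # Horizontal flip: mimics left-eye / right-eye symmetry.
    if rng.random() < 0.5:
        noisy, clean = noisy[:, ::-1], clean[:, ::-1]
    # Rotation in [-30, 30] degrees: mimics different retinal tilts.
    a = rng.uniform(-30.0, 30.0)
    noisy = rotate(noisy, a, reshape=False, order=1, mode='nearest')
    clean = rotate(clean, a, reshape=False, order=1, mode='nearest')
    # Non-rigid (elastic) deformation: mimics pathology-induced distortion.
    alpha, sigma = 30.0, 6.0
    dx = gaussian_filter(rng.uniform(-1, 1, noisy.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, noisy.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(noisy.shape[0]), np.arange(noisy.shape[1]), indexing='ij')
    coords = [ys + dy, xs + dx]
    noisy = map_coordinates(noisy, coords, order=1, mode='reflect')
    clean = map_coordinates(clean, coords, order=1, mode='reflect')
    return noisy, clean
```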
Specifically, in step S4, the conditional generative adversarial network comprises a generator (G) and a discriminator (D). The goal of the generator is to generate images that are as realistic as possible, while the goal of the discriminator is to judge as accurately as possible whether an input image is real or produced by the generator; the process of training the model is a game between the generator and the discriminator. The generator learns through training to generate images that the discriminator finds hard to distinguish, and the discriminator learns through training to improve its own discriminating ability. Unlike an ordinary generative adversarial network (GAN), the conditional generative adversarial network in this embodiment uses the input image as a condition to constrain the generated image.
Further, the objective function of the conditional generative adversarial network is:

$$L_{cGAN}(G,D)=\mathbb{E}_{(x,y)\sim P_{data}(x,y)}[\log D(x,y)]+\mathbb{E}_{x\sim P_{data}(x),\,z\sim P_z(z)}[\log(1-D(x,G(x,z)))]$$

where P_data(x, y) is the joint probability density function of x and y, P_data(x) is the probability density function of x, and P_z(z) is the probability density function of z. The input of the generator is a B-scan image x from the target image together with a random noise vector z, and its output is the corresponding generated image G(x, z). The input of the discriminator is either the real data pair (x, y) formed by the B-scan image x from the target image and its corresponding gold standard y, or the generated data pair (x, G(x, z)) formed by the B-scan image x and the generated image G(x, z); its output is the probability that the input pair is judged to be real.
During training, the goal of the discriminator is to maximize this objective function and the goal of the generator is to minimize it, so the optimized objective is:

$$G^{*}=\arg\min_{G}\max_{D}\,L_{cGAN}(G,D)$$

To make the generated image closer to the gold standard, an L1 distance constraint is introduced into the objective function:

$$L_{L1}(G)=\mathbb{E}_{(x,y)\sim P_{data}(x,y),\,z\sim P_z(z)}\big[\lVert y-G(x,z)\rVert_{1}\big]$$

To address the difficulty of clearly preserving edges while removing speckle noise, an edge loss L_edge(G) that is sensitive to edge information is further introduced into the objective function, where i and j denote the vertical and horizontal pixel coordinates in the image.
The final optimization objective of the conditional generative adversarial network is:

$$G^{*}=\arg\min_{G}\max_{D}\,L_{cGAN}(G,D)+\lambda_{1}L_{L1}(G)+\lambda_{2}L_{edge}(G)$$

where λ1 and λ2 are the weighting coefficients of the L1 distance and the edge loss, respectively. It was verified experimentally that, in this embodiment, λ1 takes values in the range 80–120 and λ2 in the range 0.8–1.2, so that the L1 distance and the edge loss have the same order of magnitude and the optimization process remains stable and convergent; the combined objective is sketched below.
Embodiment 2
As shown in Figs. 3 and 4, this embodiment provides a conditional generative adversarial network for speckle denoising in OCT images; the conditional generative adversarial network comprises a generator and a discriminator. The generator uses a U-Net convolutional neural network to generate images with better detail; it is an encoder–decoder structure with symmetric skip connections, which preserves the detail information of the feature maps at different resolutions in the encoder so that the decoder can better restore target details and the generated image is closer to the gold standard. The discriminator uses a PatchGAN model to judge whether the generated image is real or fake: it decides for each N × N patch of the image whether the patch is real or fake, treating the image as a Markov random field in which pixels belonging to different patches are assumed to be mutually independent. Experiments showed that setting the patch size N to 70 gives the discriminator fewer parameters and a faster running speed while still producing high-quality results.
Specifically, as shown in Fig. 3, in the generator all convolutional and deconvolutional layers use a sliding stride of 2 and 4 × 4 convolution kernels; except for the first convolutional layer of the encoder, every layer uses batch normalization. All activation functions in the encoder are leaky ReLU with a slope of 0.2, while the activation functions in the decoder are ReLU. A dropout rate of 0.5 is applied in the first three layers of the decoder; it serves as a form of the random noise vector z during training and also effectively prevents over-fitting. The hyperbolic tangent function is used as the activation function of the last layer of the decoder.
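A compact PyTorch sketch of a generator following the layer pattern described above (4 × 4 kernels, stride 2, batch normalization except in the first encoder layer, leaky ReLU 0.2 in the encoder, ReLU in the decoder, dropout 0.5 in the first three decoder layers, tanh output, symmetric skip connections); the number of levels and the channel widths are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

def down(cin, cout, norm=True):
    layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)

def up(cin, cout, dropout=False):
    layers = [nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.BatchNorm2d(cout)]
    if dropout:
        layers.append(nn.Dropout(0.5))      # acts as the random noise z during training
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class UNetGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        widths = [64, 128, 256, 512, 512, 512]          # assumed channel widths
        self.enc = nn.ModuleList()
        prev = in_ch
        for i, w in enumerate(widths):
            self.enc.append(down(prev, w, norm=(i > 0)))  # no batch norm on the first layer
            prev = w
        self.dec = nn.ModuleList()
        for i, w in enumerate(reversed(widths[:-1])):
            cin = prev if i == 0 else prev * 2            # skip connection doubles the channels
            self.dec.append(up(cin, w, dropout=(i < 3)))  # dropout in the first three decoder layers
            prev = w
        self.last = nn.Sequential(
            nn.ConvTranspose2d(prev * 2, out_ch, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, x):
        skips = []
        for layer in self.enc:
            x = layer(x)
            skips.append(x)
        skips = skips[:-1][::-1]                          # mirror the skips, excluding the bottleneck
        for i, layer in enumerate(self.dec):
            x = layer(x if i == 0 else torch.cat([x, skips[i - 1]], dim=1))
        return self.last(torch.cat([x, skips[-1]], dim=1))
```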
Specifically, as shown in Fig. 4, in the discriminator the PatchGAN takes a real data pair or a generated data pair as input and produces the corresponding output. It has 5 convolutional layers: the first three use a sliding stride of 2 and 4 × 4 convolution kernels, and the last two use a sliding stride of 1 and 4 × 4 convolution kernels; the middle three layers use batch normalization. All activation functions in the first four layers are leaky ReLU with a slope of 0.2, and the last layer uses a Sigmoid function to perform the discrimination. In the final 62 × 62 output map, each pixel indicates the probability that the corresponding 70 × 70 patch of the input is judged to be real.
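A compact PyTorch sketch of the PatchGAN discriminator as described: five 4 × 4 convolutions with strides 2, 2, 2, 1, 1, batch normalization on the middle three layers, leaky ReLU 0.2 after the first four, and a sigmoid output; the channel widths and the two-channel input (noisy B-scan stacked with the candidate denoised image) are assumptions. With this kernel/stride pattern the receptive field of each output value is 70 × 70 pixels, and a 512 × 512 input yields the 62 × 62 output map mentioned above.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=2):     # noisy B-scan and candidate denoised image, stacked on channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, stride=1, padding=1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, stride=1, padding=1), nn.Sigmoid(),
        )

    def forward(self, noisy, candidate):
        # Each value of the output map is the probability that the corresponding 70x70 patch is real.
        return self.net(torch.cat([noisy, candidate], dim=1))
```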
Embodiment 3
This embodiment presents the experimental results of the speckle denoising method for OCT images based on a conditional generative adversarial network. During model training in this embodiment, 512 prepared data pairs were used as the training set, and the generator and discriminator were optimized alternately with the Adam algorithm using an initial learning rate of 2e-4 and a momentum of 0.5. The number of images fed into the neural network per batch was set to 1 and the number of training epochs to 100. After training, only the trained generator is used to test OCT images from which speckle noise is to be removed. The 9 groups of OCT images used for testing were acquired by four different types of OCT scanner and include images of both normal and diseased eyes, as shown in Table 1; a sketch of the training procedure is given after the table.
Table 1. OCT scanners used to acquire the test OCT images.
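A minimal PyTorch sketch of the training procedure just described: Adam with an initial learning rate of 2e-4 and momentum (β1) of 0.5, batch size 1, 100 epochs, alternating discriminator and generator updates. The data loader and the loss helpers (for example the generator_loss and discriminator_loss sketches above) are assumed to be available.

```python
import torch

def train(generator, discriminator, loader, epochs=100, device="cuda"):
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    for epoch in range(epochs):
        for noisy, clean in loader:                       # batch size 1
            noisy, clean = noisy.to(device), clean.to(device)
            fake = generator(noisy)
            # Discriminator step: real pairs vs. generated pairs.
            d_opt.zero_grad()
            d_loss = discriminator_loss(discriminator(noisy, clean),
                                        discriminator(noisy, fake.detach()))
            d_loss.backward()
            d_opt.step()
            # Generator step: fool the discriminator while matching the gold standard.
            g_opt.zero_grad()
            g_loss = generator_loss(discriminator(noisy, fake), fake, clean)
            g_loss.backward()
            g_opt.step()
```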
For the speckle denoising of retinal OCT images, the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and edge preservation index (EPI) are used as objective evaluation indicators. To compute these indicators, regions of interest (ROIs) and layer boundaries were delimited manually on the image; as shown in Fig. 5, this embodiment manually delimits one background region, three signal regions (located in the retinal nerve fibre layer (RNFL), the inner retina and the retinal pigment epithelium (RPE) complex) and three boundaries (from top to bottom, the upper boundary of the RNFL, the inner/outer retina boundary and the lower boundary of the RPE, each used as a position for computing the EPI). The image was acquired with a Topcon DRI-1 scanner and is a normal retinal image centred on the macula. The performance indicators are described below.
(a) Signal-to-noise ratio (SNR)
SNR is a standard measure of the noise level in an image and is defined as:

$$\mathrm{SNR}=10\log_{10}\frac{\max(I)^{2}}{\sigma_b^{2}}$$

where max(I) denotes the maximum gray value of image I and σ_b is the standard deviation of the background region.
(b) Contrast-to-noise ratio (CNR)

$$\mathrm{CNR}_i=10\log_{10}\frac{\mu_i-\mu_b}{\sqrt{\sigma_i^{2}+\sigma_b^{2}}}$$

where μ_i and σ_i denote the mean and standard deviation of the i-th signal region in the image, and μ_b and σ_b denote the mean and standard deviation of the background region.
In this embodiment, the average CNR is computed over the 3 signal ROIs.
(c) Equivalent number of looks (ENL)
ENL is commonly used to measure the smoothness of a homogeneous region in an image. The ENL of the i-th ROI in an image can be computed as:

$$\mathrm{ENL}_i=\frac{\mu_i^{2}}{\sigma_i^{2}}$$

where μ_i and σ_i denote the mean and standard deviation of the i-th signal ROI in the image.
In this embodiment, the average ENL is computed over the 3 signal ROIs.
(d) Edge preservation index (EPI)
EPI is a measure of the degree to which image edge details are kept after denoising. The longitudinal EPI is defined as:

$$\mathrm{EPI}=\frac{\sum_{i,j}\lvert I_d(i+1,j)-I_d(i,j)\rvert}{\sum_{i,j}\lvert I_o(i+1,j)-I_o(i,j)\rvert}$$

where I_o and I_d denote the noisy image and the denoised image, and i and j denote the vertical and horizontal coordinates in the image. If computed over the entire image, this index may not accurately reflect edge preservation, because the gradients in homogeneous regions become smaller after denoising. Therefore, it is computed within a neighbourhood of the image boundaries. In our experiment, the image boundary neighbourhood is set to a band 7 pixels high centred on the boundary, as shown in Fig. 5.
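A minimal NumPy sketch of the four indicators; the exact dB forms of SNR and CNR and the gradient-ratio form of the longitudinal EPI are assumed instantiations of the definitions above, and the region masks are assumed inputs.

```python
import numpy as np

def snr(img, bg_mask):
    return 10 * np.log10(img.max() ** 2 / img[bg_mask].var())

def cnr(img, roi_masks, bg_mask):
    mu_b, sd_b = img[bg_mask].mean(), img[bg_mask].std()
    vals = [10 * np.log10(abs(img[m].mean() - mu_b) / np.sqrt(img[m].var() + sd_b ** 2))
            for m in roi_masks]
    return float(np.mean(vals))          # averaged over the 3 signal ROIs

def enl(img, roi_masks):
    vals = [img[m].mean() ** 2 / img[m].var() for m in roi_masks]
    return float(np.mean(vals))          # averaged over the 3 signal ROIs

def epi(noisy, denoised, boundary_mask):
    # Longitudinal EPI: ratio of vertical gradient magnitudes, restricted to the boundary neighbourhood.
    gd = np.abs(np.diff(denoised.astype(np.float64), axis=0))[boundary_mask[:-1]]
    gn = np.abs(np.diff(noisy.astype(np.float64), axis=0))[boundary_mask[:-1]]
    return gd.sum() / gn.sum()
```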
As shown in Table 2, the average performance indicators of the images are greatly improved after the original B-scan images are processed by the denoising model.
Table 2. Comparison of average performance indicators before and after speckle denoising of OCT images with the denoising model of this embodiment.
As can be seen from Table 2, after speckle denoising of OCT images with the denoising model of this embodiment, all four indicators are considerably improved. As shown in Figs. 6a, 6b, 6c and 6d, the denoising model of this embodiment removes speckle noise from OCT images while retaining edge details to the greatest possible extent, and it denoises images acquired by different types of OCT scanner well. Fig. 6a is a normal retinal image centred on the optic disc, acquired with a Topcon 2000 scanner; Fig. 6b is a central serous chorioretinopathy lesion image centred on the macula, acquired with a Topcon DRI-1 scanner; Fig. 6c is a normal retinal image centred on the macula, acquired with a Topcon DRI-1 scanner; Fig. 6d is a pathological-myopia lesion image centred on the macula, acquired with a Zeiss 4000 scanner.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, replacements and variations may be made to these embodiments without departing from the principles and spirit of the present invention, and that the scope of the present invention is defined by the appended claims and their equivalents.