
CN118057444A - High-resolution reconstruction method, device and equipment for CT image of oil and gas reservoir - Google Patents


Info

Publication number
CN118057444A
CN118057444A
Authority
CN
China
Prior art keywords
image
resolution
low
features
deep
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211423312.4A
Other languages
Chinese (zh)
Inventor
刘合
刘茜
林盛斓
梁佳
蒋丽维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petrochina Co Ltd
Original Assignee
Petrochina Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petrochina Co Ltd filed Critical Petrochina Co Ltd
Priority to CN202211423312.4A
Publication of CN118057444A
Legal status: Pending

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a high-resolution reconstruction method, device and equipment for CT images of an oil and gas reservoir, wherein the method may comprise the following steps: acquiring a low-quality blurred image of a sample; inputting the low-quality blurred image into a pre-trained SwinIR deep neural network model, in which a convolution layer in a shallow feature extraction module outputs shallow features of the low-quality blurred image and inputs them to an image reconstruction module; a deep feature extraction module, comprising several Swin Transformer layers and a residually connected convolution layer, outputs deep features of the low-quality blurred image so as to recover its high-frequency information, and inputs the deep features to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-definition image. The method provides a new idea and a new approach for solving the problems of insufficient acquisition resolution of existing CT images and insufficient definition of existing image data.

Description

High-resolution reconstruction method, device and equipment for CT image of oil and gas reservoir
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and equipment for reconstructing CT images of an oil and gas reservoir with high resolution.
Background
CT images are one of the important means by which scientific researchers detect, analyze and study samples to improve working efficiency. However, during CT image acquisition, the resolution of the image acquisition device is limited because it is difficult to improve in hardware, and the resolution of the obtained image is further constrained by the technical conditions of image storage after imaging, so the requirements of scientific research technicians cannot be fully met. Image super-resolution reconstruction is an image post-processing technology that obtains clear high-resolution CT images through advanced computer algorithms, providing more effective information for subsequent classification, recognition and analysis of the images. Such reconstruction methods can greatly improve the detection efficiency and analysis effect for a sample and support the development of the industry.
In 1984, Tsai and Huang proposed the concept of image super-resolution; to date, image super-resolution reconstruction methods are largely divided into three categories, based respectively on interpolation, reconstruction and learning. Common interpolation algorithms include nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation; these algorithms have low computational complexity and very high reconstruction efficiency in practical applications, but they neither consider the statistical characteristics of the image nor train a weight function, so the reconstruction effect is poor. Compared with interpolation, reconstruction-based algorithms utilize some statistical properties of the image; common algorithms include the iterative back-projection algorithm, the projection-onto-convex-sets algorithm and the like. However, reconstruction-based methods depend excessively on prior knowledge of the high-resolution image, have high computational complexity and slow convergence, and suffer from problems such as the non-uniqueness of the target solution. Recently, learning-based algorithms have developed rapidly, mainly comprising sparse-representation, neighbor-embedding and deep-learning algorithms. Deep-learning-based methods have become the mainstream in recent years; their basic idea is to learn the mapping between low-quality and high-quality images through labeled iterative training, thereby magnifying the resolution of a low-quality image to obtain a clear high-quality image.
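As a concrete illustration of the interpolation-based category discussed above, the following is a minimal numpy sketch of bilinear upscaling; the function name and half-pixel coordinate convention are the author's own choices, not from the patent. It shows why such methods are cheap but learn nothing from data: every output pixel is a fixed weighted average of its neighbors.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale a 2-D image by an integer factor with bilinear interpolation."""
    h, w = img.shape
    hh, ww = h * scale, w * scale
    # Source coordinates for each target pixel (half-pixel-centre convention).
    ys = (np.arange(hh) + 0.5) / scale - 0.5
    xs = (np.arange(ww) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    # Blend the four surrounding pixels with fixed, data-independent weights.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the weights depend only on pixel position, the method cannot recover high-frequency detail, which is exactly the limitation the learning-based approaches below address.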
At present, convolutional Neural Networks (CNNs) are most widely applied in high-resolution reconstruction deep learning tasks, common models are RCAN, HAN, IGNN, NLSA, and most of the models focus on subtle architecture designs, such as residual learning and dense connection. Although the performance is improved a lot over the conventional model-based approach, there are still two basic problems that originate from the basic convolution layer. First, the interaction between the image and the convolution kernel is content independent, and using the same convolution kernel to recover different regions of the image is not a best choice; second, convolution is ineffective for modeling long-distance dependencies under the principle of local processing.
As an alternative to CNNs, the Transformer introduces a self-attention mechanism to capture global interactions between contents, and has achieved good results on a variety of visual tasks. However, vision Transformers for image reconstruction typically cut the input image into fixed-size blocks (patches) and process each block independently, a strategy that inevitably introduces two kinds of defects. First, boundary pixels cannot use neighboring pixels outside the block for image reconstruction; second, boundary artifacts are easily introduced around the blocks in the reconstructed image. While this problem can be alleviated by overlapping the blocks, that introduces additional computational burden.
Disclosure of Invention
The inventors have found that the recent Swin Transformer integrates the advantages of CNNs and Transformers and is highly promising. On the one hand, thanks to its local attention mechanism, the Swin Transformer has the CNN's advantage in processing large-size images. On the other hand, thanks to its shifted-window mechanism, it also has the Transformer's ability to model long-range dependencies. The present invention has been made in view of the above problems, and its object is to provide a high-resolution reconstruction method, apparatus and device for CT images of oil and gas reservoirs which overcomes, or at least partially solves, the above problems.
Compared with the popular CNN-based image reconstruction models, this model realizes long-range dependency modeling through a shifted-window mechanism, and achieves better performance with fewer parameters.
In a first aspect, an embodiment of the present invention provides a high-resolution reconstruction method for a CT image of an oil and gas reservoir, which may include:
acquiring a low-resolution blurred image of a sample;
Inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain its low-frequency information, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover its high-frequency information, the deep features being input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image.
Optionally, the SwinIR deep neural network model is pre-trained by:
acquiring a training sample set, wherein each sample of the training sample set comprises a paired low-resolution blurred image and high-resolution image;
Training the SwinIR deep neural network model with the samples in the training sample set, so as to perform parameter estimation of the SwinIR deep neural network model based on a loss function between the reconstructed high-resolution image and the high-resolution image in the sample;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
Optionally, the acquiring a training sample set may include:
Scanning small-size and large-size core plunger samples by using a CT scanner to obtain a high-resolution image and a low-resolution image;
Analyzing the sharpness difference of the low-resolution image and the high-resolution image;
Based on the sharpness difference, performing blurring processing on other high-resolution images through Gaussian blurring and/or a resize function to obtain low-resolution images corresponding to those high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
In a second aspect, an embodiment of the present invention provides a training method for a machine learning model, which may include:
Obtaining a training sample set, wherein each sample of the training sample set comprises a paired low-resolution blurred image and high-resolution image of a sample;
Training the SwinIR deep neural network model with the samples in the training sample set, so as to perform parameter estimation of the SwinIR deep neural network model based on a loss function between the reconstructed high-resolution image and the high-resolution image in the sample; the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs them to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image, which are input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
Optionally, the acquiring a training sample set may include:
Scanning small-size and large-size core plunger samples by using a CT scanner to obtain a high-resolution image and a low-resolution image;
Analyzing the sharpness difference of the low-resolution image and the high-resolution image;
Based on the sharpness difference, performing blurring processing on other high-resolution images through Gaussian blurring and/or a resize function to obtain low-resolution images corresponding to those high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
Optionally, the method may further include: performing image segmentation and labeling on the low-resolution blurred images and the high-resolution images in the training sample set.
In a third aspect, an embodiment of the present invention provides a high-resolution reconstruction device for a CT image of an oil and gas reservoir, which may include:
the first acquisition module is used for acquiring a low-resolution blurred image of the sample;
The reconstruction module is used for inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain its low-frequency information, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover its high-frequency information, the deep features being input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image.
In a fourth aspect, an embodiment of the present invention provides a training apparatus for a machine learning model, which may include:
The second acquisition module is used for acquiring a training sample set, wherein each sample of the training sample set comprises a paired low-resolution blurred image and high-resolution image of a sample;
The training module is used for training the SwinIR deep neural network model with the samples in the training sample set, so as to perform parameter estimation of the SwinIR deep neural network model based on a loss function between the reconstructed high-resolution image and the high-resolution image in the sample; the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs them to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image, which are input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the high-resolution reconstruction method for a CT image of an oil and gas reservoir according to the first aspect, or implements the training method for a machine learning model according to the second aspect;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
In a sixth aspect, an embodiment of the present invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the high-resolution reconstruction method for a CT image of an oil and gas reservoir according to the first aspect, or implements the training method for a machine learning model according to the second aspect.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
The embodiment of the invention provides a high-resolution reconstruction method, device and equipment for CT images of an oil and gas reservoir, which may comprise the following steps: acquiring a low-resolution blurred image of a sample; inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain its low-frequency information, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover its high-frequency information, the deep features being input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image.
The method realizes end-to-end high-resolution image generation, and provides a new idea and a new approach for solving the problems of insufficient resolution of existing CT image acquisition instruments and insufficient definition of existing image data. The method realizes the reconstruction and sharpening of CT images under different imaging conditions. Its main technical means is to apply artificial intelligence technology centered on computer vision to the field of image analysis: with the new algorithm model, the image information is fully utilized, the high-frequency detail information lost for various reasons is restored, and the features of small blurred targets are enhanced so that the target features become clearer. This achieves the goal of improving image resolution, helps scientific research technicians and others to analyze samples more accurately, and improves working efficiency. Compared with improving the hardware performance of CT scanning equipment, the method is low-cost and easy to popularize, and can be applied to sample images acquired by most CT equipment. Meanwhile, the technology can eliminate the influence of the environment and human factors on the CT equipment as well as the influence of the sample picture storage process, obtain clearer high-resolution images that provide more detail information, fully display the basic detail information of a sample, provide effective data assurance for scientific research technicians, and greatly improve working efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a flowchart of a training method of a machine learning model according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the SwinIR deep neural network model provided in the first embodiment of the present invention;
FIG. 3 is a schematic diagram of a moving window strategy according to a first embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a training device for a machine learning model according to a first embodiment of the present invention;
FIG. 5 is a flow chart of a method for reconstructing CT images of an oil and gas reservoir according to a second embodiment of the present invention;
FIG. 6 is a first example of the comparison of effects before and after image reconstruction provided in the second embodiment of the present invention;
FIG. 7 is a second example of the comparison of effects before and after image reconstruction provided in the second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a high-resolution reconstruction device for a CT image of an oil and gas reservoir according to a second embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
In a first embodiment of the present invention, a training method for a machine learning model is provided, and referring to fig. 1, the method may include the following steps:
Step S11, a training sample set is obtained, where each sample of the training sample set comprises a paired low-resolution blurred image and high-resolution image of a sample.
The images in the training sample set in the embodiment of the invention can be acquired as follows: first, a CT scanner is used to scan small-size and large-size core plunger samples to obtain a high-resolution image and a low-resolution image; then, the sharpness difference between the low-resolution image and the high-resolution image is analyzed; next, based on this sharpness difference, other high-resolution images are blurred by Gaussian blurring and/or a resize function to obtain the low-resolution images corresponding to those high-resolution images; finally, a training sample set is constructed based on the high-resolution images and the low-resolution images.
It should be noted that the high-resolution and low-resolution images obtained by scanning small-size and large-size core plunger samples with a CT scanner may also be used directly as samples in the training sample set, but it is difficult to adjust a CT scanner to acquire images of different resolutions over the same field of view. Therefore, in embodiments of the present invention, the low-resolution images included in the data pairs of the training sample set are created from the high-resolution images.
In a specific implementation, CT images of a small number of samples at different scales are acquired, and the sharpness difference (for example, a 4x resolution difference) between a low-resolution CT image scanned from a large-scale sample and a high-resolution CT image scanned from a small-scale sample is analyzed. The high-resolution image is then blurred and resampled so that its resolution and sharpness are adjusted to match those of the low-resolution image; that is, after the high-resolution CT image is resampled to the resolution of the corresponding low-resolution CT image by image processing such as downsampling and Gaussian blurring, its sharpness should match that of the low-resolution CT image as closely as possible. For example, after analyzing images acquired by a certain CT scanning device, the mapping from the high-resolution image to the low-resolution image can be implemented by downsampling by a factor of 12-20 and then upsampling to one quarter of the resolution of the original image. A training sample set is constructed from the low-resolution blurred images and the high-resolution images obtained in this way.
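The blur-downsample-upsample degradation described in this paragraph can be sketched in numpy as follows. The kernel size, sigma, and the concrete factors 16 (downsample) and 4 (final scale) are illustrative assumptions; the patent only specifies a 12-20x downsample followed by upsampling back to one quarter of the original resolution.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Normalized 1-D Gaussian kernel (separable filter)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.5):
    """Blur a 2-D image with a separable Gaussian, edge-padded to keep shape."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    # Horizontal pass, then vertical pass of the separable filter.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def degrade(hr, down=16, up_to=4):
    """Map an HR image to its paired LR image: Gaussian blur, heavy
    nearest-neighbour downsampling, then upsampling to 1/`up_to` scale."""
    lr = gaussian_blur(hr)[::down, ::down]       # e.g. 12-20x downsample
    rep = down // up_to                          # back up to H/4 x W/4
    return np.repeat(np.repeat(lr, rep, axis=0), rep, axis=1)
```

For a 64x64 high-resolution crop, `degrade(hr)` yields a 16x16 blurred image, i.e. the one-quarter-resolution counterpart used as the network input.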
The samples in this embodiment are CT images of an oil and gas reservoir, that is, display images of a core; reconstructing these images with high definition has guiding significance for oil and gas migration, storage and distribution.
In an alternative embodiment, after the paired low-resolution blurred image and high-resolution image samples are acquired, the low-resolution blurred images and high-resolution images in the training sample set further need to be segmented and labeled. For example, a low-resolution blurred image whose resolution is too large is cut into image blocks of a suitable size (a×b), and the corresponding high-resolution image is cut into blocks of size (4a×4b), where each high-resolution image block is the label of the corresponding low-resolution blurred image block. 80% of the low-resolution blurred image blocks together with their corresponding high-resolution label blocks are randomly selected to form the training set, 10% are randomly selected as the validation set, and the rest serve as the test set.
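The patch cutting and 80/10/10 split described above can be sketched as follows; the patch size 48 and the random seed are hypothetical choices for illustration only.

```python
import random
import numpy as np

def cut_patches(img, a, b):
    """Cut a 2-D image into non-overlapping a x b blocks (remainder discarded)."""
    rows, cols = img.shape[0] // a, img.shape[1] // b
    return [img[i*a:(i+1)*a, j*b:(j+1)*b] for i in range(rows) for j in range(cols)]

def split_pairs(lr_img, hr_img, a=48, b=48, seed=0):
    """Pair each a x b LR block with its 4a x 4b HR label block, then split the
    pairs at random into 80% train / 10% validation / 10% test."""
    pairs = list(zip(cut_patches(lr_img, a, b), cut_patches(hr_img, 4 * a, 4 * b)))
    random.Random(seed).shuffle(pairs)
    n_tr, n_va = int(0.8 * len(pairs)), int(0.1 * len(pairs))
    return pairs[:n_tr], pairs[n_tr:n_tr + n_va], pairs[n_tr + n_va:]
```

Each returned pair keeps the 4x geometric correspondence between the LR block and its HR label, as required for supervised training.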
Step S12, the SwinIR deep neural network model is trained with the samples in the training sample set, so as to perform parameter estimation of the SwinIR deep neural network model based on a loss function between the reconstructed high-resolution image and the high-resolution image in the sample. The SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; a convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs them to the image reconstruction module; the deep feature extraction module comprises several Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image, which are input to the image reconstruction module; and the image reconstruction module fuses the shallow features and the deep features so as to reconstruct a high-resolution image;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
In this embodiment, the parameter estimation is to optimize and adjust the parameters of the model in the training process, so that the training result meets the preset condition.
Referring to fig. 2, the SwinIR deep neural network model includes a shallow feature extraction module, a deep feature extraction module, and an image reconstruction module.
The shallow feature extraction module extracts shallow features with a convolution layer and passes them directly to the reconstruction module so as to retain the low-frequency information; that is, the shallow feature extraction module captures shallow detail information such as edges, colors and brightness. The specific operation is as follows: the low-resolution blurred image I_LQ ∈ R^(H×W×C_in) is taken as input, where H, W and C_in respectively denote the height, width and number of input channels of the image; shallow features F_0 ∈ R^(H×W×C) are extracted by a convolution of size 3×3, where C is the number of output feature channels.
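The shallow feature extraction step F_0 = Conv3x3(I_LQ) can be sketched in numpy as follows. The weights here are arbitrary untrained placeholders, and, as in deep learning frameworks, the operation is strictly a cross-correlation.

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution: x is (H, W, C_in), w is (3, 3, C_in, C_out)."""
    h, wd, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, w.shape[-1]))
    for i in range(3):
        for j in range(3):
            # Accumulate each kernel tap over all input channels.
            out += np.einsum('hwc,co->hwo', xp[i:i + h, j:j + wd], w[i, j])
    return out
```

Applied to I_LQ of shape H x W x C_in with a (3, 3, C_in, C) weight tensor, it yields shallow features F_0 of shape H x W x C, matching the formula above.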
The deep feature extraction module consists essentially of Residual Swin Transformer Blocks (RSTB), each of which uses several Swin Transformer layers for local attention and cross-window interaction. In addition, a convolution layer is added at the end of the module for feature enhancement, and a residual connection provides a shortcut for feature aggregation. The deep feature extraction module follows the shallow feature extraction module and consists of K Residual Swin Transformer Blocks (RSTB) and a 3×3 convolution layer used for feature enhancement, yielding the deep features F_DF ∈ R^(H×W×C). The deep features F_DF are computed as follows:
F_i = H_RSTBi(F_{i-1}), i = 1, 2, ..., K
F_DF = H_CONV(F_K)
where H_RSTBi(·) denotes the i-th RSTB and H_CONV(·) denotes the last convolution layer. The purpose of adding a convolution layer at the end of feature extraction is to introduce the inductive bias of the convolution operation into the Transformer-based model, laying a good foundation for the subsequent fusion of the shallow and deep features.
As shown in fig. 2 (a), the RSTB module consists of several Swin Transformer Layers (STL), a convolution layer (Conv) and a residual connection. For the i-th RSTB, the L Swin Transformer layers extract the intermediate features F_{i,1}, F_{i,2}, ..., F_{i,L} as follows:
F_{i,j} = H_STL_{i,j}(F_{i,j-1}), j = 1, 2, ..., L
where H_STL_{i,j}(·) is the j-th Swin Transformer layer in the i-th RSTB. A convolution layer is then added before the residual connection, and the output of the RSTB is:
F_{i,out} = H_CONV_i(F_{i,L}) + F_{i,0}
where H_CONV_i(·) is the convolution layer in the i-th RSTB; its spatially invariant property helps enhance the translational equivariance of SwinIR. The residual connection then stabilizes training while aggregating features of different levels (this layer and the preceding one).
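The data flow of the RSTB equations (the STL chain inside one block, then the RSTB chain of the deep module) can be sketched with placeholder layer callables; this shows only the residual wiring, not the attention arithmetic, and the function names are illustrative.

```python
import numpy as np

def rstb(x, stls, conv):
    """One Residual Swin Transformer Block:
    F_out = Conv(STL_L(...STL_1(F_0))) + F_0."""
    out = x
    for stl in stls:          # F_{i,j} = STL_{i,j}(F_{i,j-1})
        out = stl(out)
    return conv(out) + x      # convolution, then the residual connection

def deep_features(f0, blocks, final_conv):
    """Deep module: F_i = RSTB_i(F_{i-1}) for i = 1..K, then F_DF = Conv(F_K)."""
    f = f0
    for stls, conv in blocks:
        f = rstb(f, stls, conv)
    return final_conv(f)
```

With identity placeholders each block reduces to x + x, which makes the residual structure easy to verify in isolation before real layers are substituted.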
The Swin Transformer layer is shown in fig. 2 (b) and comprises LayerNorm, W/SW-MSA (window / shifted-window based multi-head self-attention), a multi-layer perceptron (MLP) and skip connections. The MLP is used for further feature transformation and consists of two fully connected layers with a GELU nonlinear activation. The Swin Transformer evolved from the standard multi-head self-attention of the original Transformer. Because visual tasks are inherently local, with adjacent regions usually more strongly correlated than distant ones, the Swin Transformer uses fine-grained local self-attention to explore the interactions between different regions within a window; this greatly reduces the computational burden and avoids the significant resource consumption caused by the quadratic computational complexity of self-attention on high-resolution images. It specifically comprises the W-MSA and SW-MSA steps. W-MSA: given an input feature map of size H×W×C, the Swin Transformer divides it into HW/M^2 non-overlapping M×M local windows and reshapes it to features of size (HW/M^2) × M^2 × C, where HW/M^2 is the total number of windows. Standard self-attention is then computed separately within each M×M window, and the multi-head results are concatenated to implement the multi-head self-attention operation. SW-MSA introduces a moving-window strategy on top of W-MSA, implemented by cyclically shifting the features by (⌊M/2⌋, ⌊M/2⌋) before window partitioning, and is used to extract cross-window information, as shown in fig. 3. Global feature extraction is achieved by alternately stacking W-MSA and SW-MSA.
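The W-MSA window partition and the SW-MSA cyclic shift described above can be sketched as follows (the attention computation inside each window is omitted; M = 4 in the test is an illustrative choice):

```python
import numpy as np

def window_partition(x, m):
    """Split an (H, W, C) feature map into H*W/m^2 windows of shape (m*m, C)."""
    h, w, c = x.shape
    x = x.reshape(h // m, m, w // m, m, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, m * m, c)

def shifted_partition(x, m):
    """SW-MSA variant: cyclically shift by (-m//2, -m//2) before partitioning,
    so the new windows straddle the boundaries of the previous W-MSA windows."""
    return window_partition(np.roll(x, (-(m // 2), -(m // 2)), axis=(0, 1)), m)
```

Alternating the two partitions is what lets information propagate across window boundaries without computing global attention.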
Finally, the image reconstruction module fuses the shallow and deep features and performs high-resolution image reconstruction. In the embodiment of the invention, the image reconstruction module aggregates the shallow and deep features and upsamples them with a sub-pixel convolution layer, computed as follows:
IRHQ=HREC(F0+FDF)
Wherein H REC represents the reconstruction module; the shallow feature F0 mainly contains the low-frequency information of the image, while the deep feature F DF focuses on recovering the lost high-frequency information.
The module passes the shallow features directly into the reconstruction module through a skip connection, which preserves the low-frequency information and realizes the fusion of deep and shallow features. Meanwhile, the deep feature extraction module can concentrate on extracting high-frequency information, and training is more stable.
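The fusion-and-upsampling computation above can be sketched as follows (a PyTorch sketch; the single-channel output and 2× scale are illustrative assumptions):

```python
import torch
import torch.nn as nn

class Reconstruction(nn.Module):
    """Aggregate shallow (low-frequency) and deep (high-frequency)
    features, then upsample with a sub-pixel convolution layer:
    I_RHQ = H_REC(F_0 + F_DF)."""
    def __init__(self, dim, out_ch=1, scale=2):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(dim, out_ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))  # rearranges channels into space

    def forward(self, shallow, deep):
        # the skip link (shallow + deep) keeps the low frequencies
        return self.up(shallow + deep)
```

`nn.PixelShuffle` implements the sub-pixel convolution: a convolution produces `scale²` times as many channels, which are then rearranged into a `scale`-times-larger spatial grid.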
Training the model with only the L 1 pixel loss tends to over-smooth the high-frequency detail of the reconstructed image. To improve the visual quality of the reconstructed image, in the embodiment of the invention the parameters of the Swin Transformer-based super-resolution image reconstruction model are optimized with one or more of the L 1 pixel loss function, the GAN loss function, and the perceptual loss function, using the model as the generator and a UNet network as the discriminator. Meanwhile, the BSRGAN degradation model is introduced to improve the practicality of the super-resolution reconstruction model.
The L 1 pixel loss function is:
L=||IRHQ-IHQ||1
I RHQ is the output obtained with I LQ as the model input, and I HQ is the high-definition super-resolution image corresponding to I LQ. This function evaluates the pixel-wise difference between the model-generated image and the real high-definition image.
The GAN loss function is implemented with BCEWithLogitsLoss and is mainly used to promote the realism of the model-reconstructed image so as to recover more texture detail.
The perceptual loss function computes the reconstruction loss on feature maps taken before the activation layers of a pre-trained VGG network; this loss also uses the L 1 distance and improves the visual quality of the image.
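The combination of the three losses can be sketched as follows (a PyTorch sketch; the loss weights and the `feat_extractor` argument standing in for the pre-trained VGG feature network are illustrative assumptions):

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()              # pixel loss ||I_RHQ - I_HQ||_1
adv = nn.BCEWithLogitsLoss()  # GAN loss on raw discriminator logits

def perceptual_loss(feat_extractor, sr, hq):
    """L1 distance between feature maps (features before activation of a
    pre-trained VGG in the original method; feat_extractor is a stand-in)."""
    return l1(feat_extractor(sr), feat_extractor(hq))

def generator_loss(sr, hq, disc_logits_on_sr, feat_extractor,
                   w_pix=1.0, w_gan=0.1, w_per=1.0):
    # the generator wants the discriminator to call SR images "real" (= 1)
    real_labels = torch.ones_like(disc_logits_on_sr)
    return (w_pix * l1(sr, hq)
            + w_gan * adv(disc_logits_on_sr, real_labels)
            + w_per * perceptual_loss(feat_extractor, sr, hq))
```

In a full setup the discriminator (a UNet in the embodiment) is trained in alternation with the generator using the same `BCEWithLogitsLoss` on real and fake batches.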
Transfer learning can discover similarities between existing knowledge and new knowledge and learn more general features, so that a better training result can be obtained on a small-scale data set, the convergence of network training is accelerated, and time is saved. In the embodiment of the invention, the model is trained with the data sets DIV2K (800 training images), Flickr2K (2650 images), and OST (10324 images of sky, water, grass, mountains, buildings, plants, and animals) as training data. At the same time, 10000 core CT scan images of carbonate reservoirs and tight sandstone reservoirs from different blocks of a certain oil field are collected, and image super-resolution reconstruction and fine-tuning of the model are performed on this CT core scan image data set through transfer learning.
According to the embodiment of the invention, low-resolution blurred image samples are input into the super-resolution high-definition image reconstruction network model, so that the model can effectively extract and exploit the information in a low-quality image, form a mapping to a high-resolution image, and generate a high-resolution, high-definition image rich in detail and sharpness.
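The end-to-end mapping from a low-quality input to a high-resolution output can be sketched as follows (a minimal PyTorch sketch; plain convolutions stand in for the Swin Transformer body, and all layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class SuperResNet(nn.Module):
    """Minimal SwinIR-style wiring sketch: shallow conv -> deep body with
    a global residual -> shallow/deep fusion + sub-pixel upsampling.
    Convolutions stand in here for the RSTB stack."""
    def __init__(self, dim=16, scale=2):
        super().__init__()
        self.shallow = nn.Conv2d(1, dim, 3, padding=1)
        self.body = nn.Sequential(               # stand-in for RSTBs + conv
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1))
        self.rec = nn.Sequential(
            nn.Conv2d(dim, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):          # lr: (B, 1, H, W) grayscale CT slice
        f0 = self.shallow(lr)       # shallow, low-frequency features
        fdf = self.body(f0)         # deep features for high frequencies
        return self.rec(f0 + fdf)   # fuse and upsample
```

The essential property is that the output spatial size is `scale` times the input in each dimension, i.e. the model is trained as a direct LR-to-HR mapping.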
The method provided by the embodiment of the invention mainly aims to reconstruct and sharpen blurred images acquired under different imaging conditions. Its main technical feature is to apply artificial intelligence technology centered on computer vision to the field of image analysis: with a new algorithmic model, the image information is fully exploited, the high-frequency detail lost for various reasons is restored, and the features of small blurred targets are enhanced so that target characteristics become clearer, thereby improving image resolution, helping researchers and technicians analyze samples more accurately, and improving work efficiency. Compared with upgrading the hardware of CT imaging equipment, the method is low-cost and easy to popularize, and is applicable to sample images acquired by most CT devices. Meanwhile, the technique can eliminate the influence of environmental and human factors on CT equipment, as well as degradation introduced while sample images are stored, and obtain clearer high-resolution images that provide more detail, fully display the fundamental detail of a sample, provide an effective data guarantee for researchers and technicians, and greatly improve work efficiency.
Based on the same inventive concept, the embodiment of the invention further provides a training device of a machine learning model, and referring to fig. 4, the device may include: the second acquisition module 11 and the training module 12 operate according to the following principles:
The second acquisition module 11 is configured to acquire a training sample set, where each sample in the training sample set includes a paired low-resolution blurred image and high-resolution image.
The training module 12 is configured to train the SwinIR deep neural network model using the samples in the training sample set to perform parameter estimation of the SwinIR deep neural network model based on the reconstructed high-resolution image and a loss function of the high-resolution image in the samples; the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs the shallow features to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image and inputs the deep features to the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
In an alternative embodiment, the second acquisition module 11 is specifically configured to:
Scanning small-size and large-size core plug samples with a CT scanner to obtain high-resolution images and low-resolution images;
Analyzing the sharpness difference between the large-size low-resolution images and the small-size high-resolution images;
Blurring other high-resolution images with Gaussian blur and/or a resize function based on the sharpness difference, to obtain the low-resolution images corresponding to the high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
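The blurring-and-downsampling step used to synthesize paired low-resolution images can be sketched as follows (a numpy sketch; the kernel size, sigma, and stride-based downsampling are illustrative assumptions, not the exact degradation pipeline of the embodiment):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, scale=2, sigma=1.0):
    """Synthesize a paired LR image from an HR image: Gaussian blur,
    then downsample by `scale` via strided sampling."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode='edge')
    blurred = np.zeros_like(hr, dtype=float)
    H, W = hr.shape
    for i in range(H):           # direct (slow but explicit) convolution
        for j in range(W):
            blurred[i, j] = (padded[i:i+2*pad+1, j:j+2*pad+1] * k).sum()
    return blurred[::scale, ::scale]
```

Each HR/LR pair produced this way supplies one training sample; the sharpness gap measured between real scans would guide the choice of `sigma` and `scale`.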
In another alternative embodiment, the second acquisition module 11 performs image segmentation and labeling on the low resolution blurred image and the high resolution image in the training sample set.
Based on the same inventive concept, there is also provided in an embodiment of the present invention a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a training method of a machine learning model as described above.
Based on the same inventive concept, the embodiment of the invention also provides a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the training method of the machine learning model when executing the program.
The principles by which the device, client, medium and related equipment in the embodiment of the invention solve the problems are similar to those of the method; therefore, for their implementation, reference may be made to the implementation of the method, and repetition is omitted.
Example two
In a second embodiment of the present invention, a method for reconstructing a CT image of an oil and gas reservoir with high resolution is provided, and referring to fig. 5, the method may include the following steps:
step S51, obtaining a low-resolution blurred image of the sample.
Step S52, inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain low-frequency information of the low-resolution blurred image, and the shallow features are input into the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover high-frequency information of the low-resolution blurred image, and the deep features are input into the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct the high-resolution image.
The SwinIR deep neural network model in the embodiment of the present invention may be trained in advance by the training method of the machine learning model in the first embodiment.
With the increasing demand for fine-grained and quantitative image analysis, image super-resolution reconstruction and sharpening have become a research focus in the field of image analysis. The invention provides a high-resolution reconstruction method for CT images of oil and gas reservoirs, which realizes end-to-end high-resolution image generation and provides a new idea and a new method for overcoming the insufficient resolution of existing image acquisition instruments and the insufficient sharpness of existing image data.
The embodiment of the invention achieves state-of-the-art performance on the image super-resolution reconstruction task; the standard image-quality metrics Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are used as evaluation indices in comparative tests. PSNR measures the degree of image distortion: the image under evaluation is regarded as the superposition of an original image and noise, and the metric is computed as their ratio. SSIM evaluates the degree of similarity of two images in terms of luminance, contrast, and structure; specifically, local SSIM is computed with a sliding window and the average is taken as the global evaluation result, where luminance is estimated by the mean, contrast by the standard deviation, and structural similarity is measured by the covariance.
PSNR calculation formula:

MSE = (1/(m·n)) Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} [I(i,j) − K(i,j)]²,  PSNR = 10·log10(MAX_I² / MSE)

Where I is a clean image of size m×n, K is its noisy counterpart, and MAX_I is the maximum possible pixel value.
SSIM calculation formula:

SSIM(x, y) = [(2μ_x μ_y + c1)(2σ_xy + c2)] / [(μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2)]

Where x and y are the two image samples, μ denotes the mean, σ² the variance, σ_xy the covariance, and c1 = (0.01 L)², c2 = (0.03 L)², with L the dynamic range of the pixel values.
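Both metrics can be sketched in a few lines of numpy (a sketch: `ssim_global` computes SSIM over the whole image rather than averaging sliding windows, and `max_val` is assumed to be the 8-bit range):

```python
import numpy as np

def psnr(clean, noisy, max_val=255.0):
    """PSNR = 10 * log10(MAX^2 / MSE) in decibels."""
    mse = np.mean((clean.astype(float) - noisy.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(max_val**2 / mse)

def ssim_global(x, y, L=255.0):
    """Single-window SSIM; the full metric averages this over
    sliding windows across the image."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx**2 + my**2 + c1) * (vx + vy + c2)))
```

Identical images give infinite PSNR and SSIM equal to 1, the upper bounds of both metrics.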
Table 1 shows a quantitative comparison between the embodiment of the invention and state-of-the-art methods; when tested on the public training set DIV2K, the invention achieves the best performance at all magnification factors on all 3 benchmark data sets. Among the baselines, RCAN and HAN introduce channel and spatial attention, IGNN proposes adaptive patch feature aggregation, and NLSA is based on a non-local attention mechanism. None of these CNN-based attention mechanisms performs as well as the proposed new Transformer-based model, which demonstrates the effectiveness of the proposed model.
Table 1 model verification quantitative comparison Table
In an alternative embodiment, the SwinIR deep neural network model is pre-trained by:
Acquiring a training sample set, wherein each sample of the training sample set comprises a low-resolution blurred image and a high-resolution image which are paired by a sample;
Training the SwinIR deep neural network model with samples in the training sample set to perform parameter estimation of the SwinIR deep neural network model based on the reconstructed high-resolution image and a loss function of the high-resolution image in the samples;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
In another alternative embodiment, obtaining the training sample set includes:
Scanning small-size and large-size core plug samples with a CT scanner to obtain high-resolution images and low-resolution images;
Analyzing the sharpness difference of the low-resolution image and the high-resolution image;
Blurring other high-resolution images with Gaussian blur and/or a resize function based on the sharpness difference, to obtain the low-resolution images corresponding to the high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
In a specific example, referring to fig. 6, sample 1 is a CT scan image of a tight sandstone reservoir core; the original image is 3.83M in size, and the image after high-resolution reconstruction is 55M. Meanwhile, after 4× magnification, the comparison shows that the original image has low resolution and unclear grain boundaries, whereas the reconstructed image is brighter overall, markedly sharper, and visually better.
In another specific example, referring to fig. 7, a CT scan image of the core of sample 2 with an original size of 3.9M is reconstructed and restored by the new technique, yielding a new image of 49.71M. Meanwhile, after magnification by the corresponding factor, comparing information-rich cropped regions shows that the overall sharpness of the reconstructed image is markedly improved, the visual effect is better, the texture features and edge structures are clearer and more visible, and the details are fuller.
Based on the same inventive concept, an embodiment of the present invention provides a device for reconstructing a CT image of an oil and gas reservoir, and referring to fig. 8, the device may include: the first acquisition module 51 and the reconstruction module 52 operate according to the following principle:
The first acquisition module 51 is used for acquiring a low-resolution blurred image of the sample;
The reconstruction module 52 is configured to input the low-resolution blurred image into a pre-trained SwinIR deep neural network model, where the SwinIR deep neural network model includes a shallow feature extraction module, a deep feature extraction module, and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain low-frequency information of the low-resolution blurred image, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover high-frequency information of the low-resolution blurred image, and the deep features are input into the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image.
Based on the same inventive concept, the embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the above-mentioned high-resolution reconstruction method of the CT image of the oil and gas reservoir.
Based on the same inventive concept, the embodiment of the invention also provides a computer device which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the high-resolution reconstruction method of the CT image of the oil and gas reservoir when executing the program.
The principles by which the device, client, medium and related equipment in the embodiment of the invention solve the problems are similar to those of the method; therefore, for their implementation, reference may be made to the implementation of the method, and repetition is omitted.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The high-resolution reconstruction method of the CT image of the oil and gas reservoir is characterized by comprising the following steps of:
acquiring a low-resolution blurred image of a sample;
Inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain low-frequency information of the low-resolution blurred image, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover high-frequency information of the low-resolution blurred image, and the deep features are input into the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image.
2. The method of claim 1, wherein the SwinIR deep neural network model is pre-trained by:
acquiring a training sample set, wherein each sample of the training sample set comprises a pair of low-resolution blurred images and high-resolution images;
Training the SwinIR deep neural network model with the samples in the training sample set to perform parameter estimation of the SwinIR deep neural network model based on the reconstructed high-resolution image and a loss function of the high-resolution image in the samples;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
3. The method of claim 1, wherein the acquiring a training sample set comprises:
Scanning small-size and large-size core plug samples with a CT scanner to obtain high-resolution images and low-resolution images;
Analyzing the sharpness difference of the low-resolution image and the high-resolution image;
Blurring other high-resolution images with Gaussian blur and/or a resize function based on the sharpness difference, to obtain the low-resolution images corresponding to the high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
4. A method of training a machine learning model, comprising:
Obtaining a training sample set, wherein each sample of the training sample set comprises a low-resolution blurred image and a high-resolution image which are paired by a sample;
Training the SwinIR deep neural network model by using the samples in the training sample set to perform parameter estimation of the SwinIR deep neural network model based on the reconstructed high-resolution image and a loss function of the high-resolution image in the samples; the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs the shallow features to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image and inputs the deep features to the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
5. The method of claim 4, wherein the obtaining a training sample set comprises:
Scanning small-size and large-size core plug samples with a CT scanner to obtain high-resolution images and low-resolution images;
Analyzing the sharpness difference of the low-resolution image and the high-resolution image;
Blurring other high-resolution images with Gaussian blur and/or a resize function based on the sharpness difference, to obtain the low-resolution images corresponding to the high-resolution images;
A training sample set is constructed based on the high-resolution images and the low-resolution images.
6. The method as recited in claim 5, further comprising: and performing image segmentation and marking on the low-resolution blurred image and the high-resolution image in the training sample set.
7. A high resolution reconstruction device for a CT image of an oil and gas reservoir, comprising:
the first acquisition module is used for acquiring a low-resolution blurred image of the sample;
The reconstruction module is used for inputting the low-resolution blurred image into a pre-trained SwinIR deep neural network model, wherein the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image so as to retain low-frequency information of the low-resolution blurred image, and the shallow features are input to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image so as to recover high-frequency information of the low-resolution blurred image, and the deep features are input into the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image.
8. A training apparatus for a machine learning model, comprising:
The second acquisition module is used for acquiring a training sample set, wherein each sample of the training sample set comprises a low-resolution blurred image and a high-resolution image which are paired by the sample;
The training module is used for training the SwinIR deep neural network model by using the samples in the training sample set to perform parameter estimation of the SwinIR deep neural network model based on the reconstructed high-resolution image and a loss function of the high-resolution image in the samples; the SwinIR deep neural network model comprises a shallow feature extraction module, a deep feature extraction module and an image reconstruction module; the convolution layer in the shallow feature extraction module outputs shallow features of the low-resolution blurred image and inputs the shallow features to the image reconstruction module; the deep feature extraction module comprises a plurality of Swin Transformer layers and a residually connected convolution layer, and outputs deep features of the low-resolution blurred image and inputs the deep features to the image reconstruction module; the image reconstruction module is used for fusing the shallow features and the deep features so as to reconstruct a high-resolution image;
Wherein the loss function comprises at least one or a combination of: an L1 pixel loss function, a GAN loss function, and a perceptual loss function.
9. A computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the method for high resolution reconstruction of a CT image of an hydrocarbon reservoir as claimed in any one of claims 1 to 3 or implements the method for training a machine learning model as claimed in any one of claims 4 to 6.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of high resolution reconstruction of a CT image of a hydrocarbon reservoir as claimed in any one of claims 1 to 3 or implements the method of training a machine learning model as claimed in any one of claims 4 to 6.
CN202211423312.4A 2022-11-15 2022-11-15 High-resolution reconstruction method, device and equipment for CT image of oil and gas reservoir Pending CN118057444A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211423312.4A CN118057444A (en) 2022-11-15 2022-11-15 High-resolution reconstruction method, device and equipment for CT image of oil and gas reservoir

Publications (1)

Publication Number Publication Date
CN118057444A true CN118057444A (en) 2024-05-21

Family

ID=91068460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211423312.4A Pending CN118057444A (en) 2022-11-15 2022-11-15 High-resolution reconstruction method, device and equipment for CT image of oil and gas reservoir

Country Status (1)

Country Link
CN (1) CN118057444A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119251346A (en) * 2024-12-04 2025-01-03 南京信息工程大学 Transformer-based multimodal MRI reconstruction method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination