
CN112927132B - PET image reconstruction method for improving spatial resolution uniformity of PET system - Google Patents


Info

Publication number
CN112927132B
CN112927132B (application CN202110109430.7A)
Authority
CN
China
Prior art keywords
pet
projection data
network model
operator
image reconstruction
Prior art date
Legal status
Active
Application number
CN202110109430.7A
Other languages
Chinese (zh)
Other versions
CN112927132A (en)
Inventor
刘华锋
林菁
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University (ZJU)
Priority to CN202110109430.7A
Publication of CN112927132A
Application granted
Publication of CN112927132B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G06T11/003 — Reconstruction from projections, e.g. tomography
    • G06T11/005 — Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine (AREA)

Abstract

The invention discloses a deep-learning-based PET image reconstruction method that improves the spatial resolution uniformity of a PET system. The method exploits the inherent non-uniformity of radial resolution within the field of view (FOV) of a PET system to counteract the depth-of-interaction effect: the high-resolution concentration distribution map reconstructed from projection data acquired with the phantom at the center of the FOV serves as the label, and neural network training is used to improve the resolution of the reconstructed concentration distribution map of the same phantom placed near the edge of the FOV. No novel detector and no additional information, such as DOI or PSF information, are required. The invention thus replaces the complex hardware methods in current use with a software approach and addresses the non-uniform spatial resolution of PET systems.

Description

A PET Image Reconstruction Method for Improving the Spatial Resolution Uniformity of a PET System

Technical Field

The invention belongs to the technical field of biomedical image analysis, and in particular relates to a deep-learning-based PET image reconstruction method that improves the spatial resolution uniformity of a PET system.

Background Art

Positron emission tomography (PET) is a non-invasive functional imaging technique and one of the most important imaging methods in nuclear medicine and molecular imaging. In the early stages of disease, biochemical changes often precede anatomical changes. Because PET can use glucose, proteins and other substances labeled with radionuclides such as 15O and 18F as tracers that participate in normal physiological metabolism, it can dynamically and quantitatively reflect pathophysiological changes and metabolic processes in the body at the molecular level, and it therefore plays an irreplaceable role in the early diagnosis and treatment of heart disease, brain disease and malignant tumors. When these radionuclides decay in the body, the emitted positrons collide with the surrounding tissue, lose their kinetic energy almost immediately, and annihilate with electrons, radiating a pair of 511 keV gamma photons traveling in opposite directions. Based on the coincidence detection principle, the external PET detectors can locate the annihilation sites of the gamma photons in the body and acquire the raw projection data (sinogram); processing the sinogram with a reconstruction algorithm then yields the radionuclide concentration distribution map inside the organism.

Undoubtedly, the resolution of the reconstructed PET concentration distribution map strongly affects the accuracy of a physician's diagnosis, so uniform, high-resolution image reconstruction is particularly important in a PET system. However, because the depth at which an annihilation photon interacts with a detector crystal is arbitrary, and the photon may even penetrate the crystal, a severe radial depth-of-interaction (DOI) effect, also called radial parallax error, exists within the transverse field of view (FOV) of the PET system. As a result, the resolution of the reconstructed image is non-uniform across radial positions: in particular, when an object approaches the edge of the FOV, the resolution drops significantly and a severe "smearing" effect appears.

Different PET detector geometries, such as ring PET and flat-panel PET, all exhibit the radial DOI effect. At present, hardware approaches are regarded as the main means of solving the radial DOI problem. They design new detectors based on different principles so as to obtain accurate DOI information in the crystal: some change the system diameter and the size of the individual detectors to influence the severity of the DOI problem; some use a phosphor sandwich or a discrete multi-layer detector composed of offset multi-layer scintillation crystal arrays; and some couple a photoelectric conversion device to each end of the crystal array and compare the signals read out at the two ends to determine where the gamma photon interacted with the crystal, thereby obtaining continuous DOI information. Clearly, these methods require more complex detector structures and signal processing. Replacing a hardware DOI detector with software that achieves comparable accuracy, thereby solving the non-uniform spatial resolution of the PET system while lowering the hardware requirements, is a direction worth studying.

With the emergence and popularization of machine learning, researchers have begun to study machine learning as a way to improve the uniformity of PET spatial resolution. For example, the article "Convolutional Neural Network for Crystal Identification and Gamma Ray Localization in PET" introduces deep learning on top of DOI-encoder hardware to localize the depth of interaction between gamma photons and crystals more precisely. This technique relaxes the requirements on the structural parameters of the PET system to some extent, but still requires a complex PET structure.

Summary of the Invention

In view of the above, the present invention provides a deep-learning-based PET image reconstruction method that improves the spatial resolution uniformity of a PET system. It splits the problem of reconstructing projection data acquired at different radial positions into two sub-problems, solved by filtered back-projection and by a neural network, respectively.

A deep-learning-based PET image reconstruction method for improving the spatial resolution uniformity of a PET system comprises the following steps:

(1) Inject a PET radioactive tracer into a phantom, place the phantom at different radial distances from the center of the field of view (FOV) of the PET system, scan it, and detect and count coincidence photons to obtain the projection data corresponding to each radial position.

(2) According to the PET measurement equation, split the reconstruction of the projection data at different radial positions into two sub-problems: sub-problem 1 is the reconstruction of the projection data at position i = 0, and sub-problem 2 is the reconstruction of the projection data at positions i > 0, where i denotes the radial distance from the center of the FOV.

(3) Reconstruct sub-problem 1 with the filtered back-projection (FBP) algorithm, and reconstruct sub-problem 2 with a deep-learning method.

(4) Build an ISTA-Net network model, use the projection data scanned at the radial positions i > 0 as the model input, use the PET image reconstructed in sub-problem 1 as the ground-truth label of the corresponding model output, and obtain the PET image reconstruction model by iterative training, thereby realizing the reconstruction of sub-problem 2.
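The steps above route sinograms by radial offset before anything is reconstructed; a minimal sketch of that split (the function and data names are illustrative, not taken from the patent):

```python
def split_subproblems(sinograms):
    """Split (radial_offset, sinogram) pairs into the two sub-problems:
    sub-problem 1 (i == 0, reconstructed directly by FBP) and
    sub-problem 2 (i > 0, reconstructed by the trained network)."""
    central = [s for i, s in sinograms if i == 0]          # sub-problem 1
    off_center = [(i, s) for i, s in sinograms if i > 0]   # sub-problem 2
    return central, off_center
```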

Further, the scanning in step (1) may be either a static scan or a dynamic scan.

Further, the filtered back-projection algorithm in step (3) consists of two steps: frequency-domain filtering and back-projection.
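As a concrete illustration of these two steps, here is a minimal NumPy sketch of filtered back-projection: a ramp filter applied to each projection in the frequency domain, followed by back-projection onto the image grid. The array layout and the linear interpolation are illustrative assumptions, not details from the patent:

```python
import numpy as np

def ramp_filter(sino):
    """Frequency-domain filtering: multiply each projection (rows: angles,
    columns: detector bins) by the ramp filter |f|."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def backproject(filtered, angles, size):
    """Back-projection: smear each filtered projection across the image
    along its acquisition angle, with linear interpolation."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    xx, yy = xx - c, yy - c
    for proj, theta in zip(filtered, angles):
        t = xx * np.cos(theta) + yy * np.sin(theta) + c  # detector coordinate
        t0 = np.clip(np.floor(t).astype(int), 0, size - 2)
        w = np.clip(t - t0, 0.0, 1.0)
        recon += (1 - w) * proj[t0] + w * proj[t0 + 1]
    return recon * np.pi / len(angles)
```

A point source at the FOV center produces a sinogram that is a delta at the central detector bin for every angle; FBP then recovers a peak at the center pixel.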

Further, the ISTA-Net network model consists of several phases connected in sequence. In each phase, an operator F^(k) is followed by a soft-thresholding algorithm and then by an operator F̃^(k). The operator F consists, from input to output, of convolutional layer A1, convolutional layer A2, a ReLU function, and convolutional layer A3 connected in sequence. The structure of the operator F̃ mirrors that of F: from input to output it consists of convolutional layer A3, a ReLU function, convolutional layer A2, and convolutional layer A1 connected in sequence, with a batch normalization (BN) layer inserted between convolutional layers A1 and A2. The output of the F operator and the input of the F̃ operator are both batch-normalized, and the input of the F operator is linearly superimposed on the output of the F̃ operator, so that each phase produces its final output in residual form.

Further, the outputs of the convolutional layers A1-A3 are all batch-normalized, and each convolutional layer has 32 filters with a 3×3 kernel and a stride of 1.

Further, the specific process of training the ISTA-Net network model in step (4) is as follows: first, initialize the model parameters and divide the projection data scanned at all radial positions i > 0 as samples into a training set and a test set; input the training-set samples into the network model one by one and compute the output of the network model by forward propagation; then compute the loss function L between each model output and the corresponding ground-truth label, and iteratively optimize the parameters of the network model with the Adam algorithm according to the partial derivatives of L until L converges. The PET image reconstruction model is obtained when training is complete.

Further, the loss function L is expressed as follows:

$$L=\frac{1}{AB}\sum_{i=1}^{B}\left\|\hat{x}_i^{(n)}-x_i\right\|_2^2+\frac{\gamma}{B}\sum_{i=1}^{B}\sum_{k=1}^{n}\left\|\tilde{\mathcal{F}}^{(k)}\big(\mathcal{F}^{(k)}(\hat{x}_i^{(k)})\big)-\hat{x}_i^{(k)}\right\|_2^2$$

where x_i is the ground-truth label of the i-th sample, x̂_i^(n) is the output of the n-th phase when the i-th sample is input into the network model, A is the pixel size of x_i, n is the number of phases in the ISTA-Net network model, F^(k) and F̃^(k) are the F and F̃ operators of the k-th phase of the network model, ‖·‖₂ denotes the L2 norm, x̂_i^(k) is the output of the k-th phase when the i-th sample is input into the network model, B is the number of samples in the training set, and γ weights the symmetry-constraint term.

Further, the input of the ISTA-Net network model is 2D PET scan data. If the acquired projection data are in 3D form, they must be converted into 2D PET scan data with a rebinning method such as SSRB (single-slice rebinning) or FORE (Fourier rebinning); different rebinning methods affect the radial resolution of the same cross-section differently.
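Of these, SSRB is the simplest: each oblique sinogram is added to the direct plane at the mean axial position of its ring pair. A minimal sketch (the array layout and indexing convention are illustrative assumptions):

```python
import numpy as np

def ssrb(oblique_sinos, ring_pairs, n_rings):
    """Single-slice rebinning: rebin 3D data into 2 * n_rings - 1 direct
    planes; the sinogram of ring pair (z1, z2) is added to plane z1 + z2."""
    planes = np.zeros((2 * n_rings - 1,) + oblique_sinos.shape[1:])
    for sino, (z1, z2) in zip(oblique_sinos, ring_pairs):
        planes[z1 + z2] += sino
    return planes
```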

Further, before the ISTA-Net network model is trained, the input projection data and the corresponding ground-truth labels are normalized frame by frame, according to:

$$X_{\mathrm{norm}}=\frac{X-X_{\min}}{X_{\max}-X_{\min}}$$

where X is any data value of the projection data, or of its corresponding ground-truth label, before normalization, X_norm is the corresponding normalized value, and X_max and X_min are the maximum and minimum values of the projection data or of its corresponding ground-truth label.
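This per-frame min-max normalization is a one-liner in NumPy (a minimal sketch; it assumes the frame is not constant, so the denominator is non-zero):

```python
import numpy as np

def normalize_frame(frame):
    """Min-max normalize a single sinogram or label frame to [0, 1]:
    X_norm = (X - X_min) / (X_max - X_min)."""
    x_min, x_max = frame.min(), frame.max()
    return (frame - x_min) / (x_max - x_min)
```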

The present invention makes full use of the inherent non-uniformity of radial resolution within the FOV of the PET system to counteract the depth-of-interaction effect, and proposes a deep-learning-based software method to improve the resolution uniformity of the PET system. The high-resolution concentration distribution map reconstructed from the projection data acquired with the phantom at the center of the FOV is used as the label, and neural network training improves the resolution of the reconstructed concentration distribution map of the same phantom placed near the edge of the FOV, without any novel detector or any additional information such as DOI or PSF information. The invention replaces the complex hardware methods in current use with software and solves the non-uniform spatial resolution of the PET system.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of the PET image reconstruction method of the present invention.

FIG. 2 is a schematic diagram of the structure of the ISTA-Net network model of the present invention.

FIG. 3 compares the PET reconstruction results of a Derenzo phantom (rod diameters 0.70 mm, 0.95 mm, 1.40 mm, 1.95 mm, 2.40 mm, 2.80 mm) at different radial positions. From left to right: the reconstruction of the phantom 75 mm from the center of the field of view with a 4-layer DOI detector; the reconstruction of the phantom at the center of the field of view without DOI information; the reconstruction of the phantom 75 mm from the center of the field of view without DOI information; and the reconstruction obtained with the method of the present invention.

Detailed Description

To describe the present invention more specifically, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

As shown in FIG. 1, the image reconstruction method of the present invention for improving the spatial resolution uniformity of a PET system by deep learning comprises the following steps:

(1) Data acquisition. A PET radioactive tracer is injected into a phantom, and the phantom is placed at different radial distances from the center of the field of view (FOV) of the PET scanner and scanned; coincidence photons are detected and counted, yielding the raw projection data matrix Y_i corresponding to each radial position i.

(2) According to the PET imaging principle, the measurement equation model is established:

Y = GX + R + S

where G is the system matrix, X is the true tracer concentration distribution map, R is the number of random photons during the measurement, and S is the number of scattered photons during the measurement.
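The measurement equation can be simulated directly in a few lines; the sizes and the way G is generated below are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_pixels = 6, 4
G = rng.random((n_bins, n_pixels))   # system matrix (detection probabilities)
X = rng.random(n_pixels)             # true tracer concentration distribution
R = np.full(n_bins, 0.10)            # mean random coincidences per bin
S = np.full(n_bins, 0.05)            # mean scattered coincidences per bin
Y = G @ X + R + S                    # expected (noise-free) sinogram counts
```

In a real acquisition Y would additionally be Poisson-distributed around this expectation.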

Because the PET system suffers from the depth-of-interaction effect, its spatial resolution is non-uniform: within a given cross-section, the image resolution is high at the center of the field of view and drops significantly as the radial position approaches the edge of the FOV. Therefore, the problem of reconstructing the projection data Y_i acquired at different radial positions is split into two sub-problems: sub-problem 1 handles the raw projection data Y_0 at the center of the field of view, and sub-problem 2 handles the raw projection data matrices Y_i at the other radial positions i (i ≠ 0).

Sub-problem 1 is solved directly by filtered back-projection (FBP) image reconstruction. Sub-problem 2 builds on the image reconstructed in sub-problem 1 and is solved by deep learning (ISTA-Net).

The network model adopted by the present invention is shown in FIG. 2. It consists of several identical phases connected in sequence; from input to output, each phase consists of an operator F, followed by a soft-thresholding algorithm, followed by its symmetric operator F̃. The operator F consists of two convolutional layers, a ReLU function and another convolutional layer connected in sequence, and the output of every convolutional layer in the network is batch-normalized. The output of F passes through the soft-thresholding algorithm, while F̃ has a structure fully symmetric to F: the data are first batch-normalized and then pass through one convolutional layer, a ReLU function and two further convolutional layers, with a batch normalization (BN) layer inserted between those last two convolutional layers. Finally, the output of F̃ is linearly superimposed on the input projection data and emitted in residual form. In this embodiment, the network contains 9 phases in total; every convolutional layer in each phase has 32 filters, each with a 3×3 kernel and a stride of 1.

(3) Training stage.

First, both the input data (sinograms) and the corresponding labels are normalized frame by frame:

$$X_{\mathrm{norm}}=\frac{X-X_{\min}}{X_{\max}-X_{\min}}$$

where X_min and X_max are the minimum and maximum values of the single frame, respectively.

Then the parameters of ISTA-Net are initialized, the raw projection data matrices Y_i at the different radial positions are divided into a training set and a test set, and the sinograms of the training set are fed into ISTA-Net. The output of each layer is computed by the forward-propagation formulas to obtain the final output of ISTA-Net, and the loss function between the output of ISTA-Net and the labels is computed:

$$L=\frac{1}{AB}\sum_{i=1}^{B}\left\|\hat{x}_i^{(n)}-x_i\right\|_2^2+\frac{\gamma}{B}\sum_{i=1}^{B}\sum_{k=1}^{n}\left\|\tilde{\mathcal{F}}^{(k)}\big(\mathcal{F}^{(k)}(\hat{x}_i^{(k)})\big)-\hat{x}_i^{(k)}\right\|_2^2$$

where x_i is the label of the i-th sample, n is the total number of phases of ISTA-Net, B is the number of training samples processed by ISTA-Net, and A is the pixel size of x_i.

Finally, the partial derivatives of the loss function are computed and the learnable parameters of ISTA-Net are updated with the Adam algorithm; forward propagation and backward differentiation are repeated until the value of the loss function is sufficiently small.
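A minimal NumPy sketch of the Adam update used here, demonstrated on a scalar quadratic objective (the hyperparameters are the common defaults, an illustrative choice rather than values from the patent):

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Run Adam on a scalar objective, given its gradient function."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g       # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g   # second-moment estimate
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

For f(x) = (x - 3)^2, the gradient is 2(x - 3), and the iterate converges toward the minimizer x = 3.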

(4) Estimation stage.

In the estimation stage, the projection data radially closest to the edge of the FOV are fed into the trained ISTA-Net, which directly yields a relatively high-quality reconstruction, greatly weakening the influence of the depth-of-interaction effect and alleviating the non-uniform spatial resolution.

Experiments based on Monte Carlo simulation data were carried out to verify the effectiveness of this embodiment. The tracer of the Monte Carlo simulation is 18F-FDG, and the phantom is a Derenzo phantom (rod diameters 0.70 mm, 0.95 mm, 1.40 mm, 1.95 mm, 2.40 mm, 2.80 mm). The simulated scanners are a 4-layer DOI detector with a DOI accuracy of 5 mm and a single-layer detector without DOI information. The data obtained with the single-layer detector without DOI information, with the phantom at different radial distances from the center of the field of view, were randomly divided into a training set (1080 sinograms) and a test set (360 sinograms). The training set is used to learn the parameters of ISTA-Net, and the test set is used to evaluate the performance of the trained ISTA-Net.

FIG. 3 compares the reconstruction results of the present invention and of the 4-layer DOI detector with the Derenzo phantom at the edge of the FOV. From left to right, the columns are: the reconstruction of the phantom 75 mm from the center of the field of view with the 4-layer DOI detector; the reconstruction of the phantom at the center of the field of view without DOI information; the reconstruction of the phantom 75 mm from the center of the field of view without DOI information; and the reconstruction obtained with the method of the present invention. FIG. 3 shows that even without DOI information, the concentration distribution map of the phantom at the center has high resolution, whereas the concentration distribution map of the phantom at the edge reconstructed by FBP without DOI information has very low resolution: even the 2.8 mm sources cannot be distinguished. With DOI information, the image resolution at the edge becomes slightly higher than that of the central reconstruction without DOI information. Reconstructing the edge-position sinograms with the present invention improves the resolution significantly, approaching the effect of acquiring 4-layer DOI information, so that the resolution at the edge and at the center of the FOV becomes nearly identical and the spatial resolution uniformity of the PET system is improved.

The above description of the embodiments is intended to help those of ordinary skill in the art understand and apply the present invention. Those skilled in the art can obviously make various modifications to the above embodiments and apply the general principles described herein to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made to the present invention by those skilled in the art based on this disclosure shall all fall within the protection scope of the present invention.

Claims (7)

1. A PET image reconstruction method for improving the spatial resolution uniformity of a PET system based on deep learning comprises the following steps:
(1) injecting a PET radioactive tracer into the body membrane, respectively placing the body membrane at different positions of the PET system in the radial direction from the center of the visual field for scanning, detecting coincident photons and counting to obtain corresponding projection data at different radial positions;
(2) the reconstruction problem of the projection data corresponding to different radial positions is split into two sub-problems according to a PET measurement equation: sub-problem 1 is projection data reconstruction corresponding to a position where i is 0, sub-problem 2 is projection data reconstruction corresponding to a position where i > 0, and i represents a radial distance from the center of the field of view;
(3) reconstructing the subproblem 1 by adopting a filtering back projection algorithm, and reconstructing the subproblem 2 by adopting a deep learning method;
(4) an ISTA-Net network model is built, projection data obtained by scanning different radial positions with i being larger than 0 are used as input of the model, a PET image obtained by reconstruction of a subproblem 1 is used as a truth label of corresponding output of the model, a PET image reconstruction model is obtained through iterative training to realize the reconstruction process of a subproblem 2, and the specific training process is as follows: firstly, initializing model parameters, taking projection data obtained by scanning different radial positions with i larger than 0 as samples to be divided into a training set and a testing set, inputting the training set samples into a network model one by one, and obtaining an output result of the network model through forward propagation calculation; calculating a loss function L between each output result of the model and the corresponding truth label, continuously performing iterative optimization on parameters in the network model through an Adam algorithm according to the partial derivative of the loss function L until the loss function L is converged, and finally obtaining a PET image reconstruction model after training is completed;
L = (1/B) Σ_{i=1}^{B} ‖ x̂_i^(N) − x_i ‖₂² + γ Σ_{i=1}^{B} Σ_{k=1}^{N} ‖ F̃^(k)( F^(k)( x̂_i^(k) ) ) − x̂_i^(k) ‖₂²

wherein: x_i is the truth label of the i-th sample; x̂_i^(N) is the output of the N-th phase after the i-th sample is input into the network model; N is the number of phases in the ISTA-Net network model; F^(k) is the F operator in the k-th phase of the network model; F̃^(k) is the F̃ operator in the k-th phase of the network model; ‖·‖₂ denotes the L2 norm; x̂_i^(k) is the output of the k-th phase after the i-th sample is input into the network model; B is the number of samples in the training set; and γ is a weight coefficient balancing the data-fidelity term against the symmetry constraint F̃^(k)(F^(k)(·)) ≈ identity.
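Read as a formula, the training objective above combines a data-fidelity term on the final phase output with a symmetry constraint on every phase. A rough numerical sketch, not the patented implementation — the function name, the `phase_pairs` layout and the value of `gamma` are all assumptions:

```python
import numpy as np

def ista_net_loss(x_true, x_final, phase_pairs, gamma=0.01):
    """Two-term ISTA-Net-style training loss.

    x_true      : (B, H, W) truth labels x_i
    x_final     : (B, H, W) outputs of the last (N-th) phase
    phase_pairs : list of (x_k, x_k_roundtrip) per phase, where
                  x_k_roundtrip = F_tilde_k(F_k(x_k)) should recover x_k
    gamma       : assumed weight of the symmetry-constraint term
    """
    B = x_true.shape[0]
    # data-fidelity term: || x_hat_i^(N) - x_i ||_2^2, averaged over the batch
    fidelity = np.sum((x_final - x_true) ** 2) / B
    # symmetry constraint: F_tilde(F(.)) should act as the identity
    constraint = sum(np.sum((rt - xk) ** 2) for xk, rt in phase_pairs) / B
    return fidelity + gamma * constraint
```

In an actual training loop the two terms would be computed on framework tensors so that Adam can back-propagate through both.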
2. The PET image reconstruction method according to claim 1, characterized in that: the scanning mode in the step (1) may be static scanning or dynamic scanning.
3. The PET image reconstruction method according to claim 1, characterized in that: the filtered back projection algorithm in step (3) comprises two steps, frequency-domain filtering and back projection.
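The two steps named in claim 3 — ramp filtering of each view in the frequency domain, then back projection over all angles — can be sketched generically. This is a textbook FBP, not the patented code, and the sinogram layout (angles × detector bins) is an assumption:

```python
import numpy as np

def fbp_2d(sinogram, thetas_deg):
    """Minimal filtered back projection: ramp-filter each view in the
    frequency domain, then smear the filtered views back over the grid."""
    n_views, n_bins = sinogram.shape
    # step 1: frequency-domain ramp filter, applied view by view
    ramp = np.abs(np.fft.fftfreq(n_bins))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # step 2: back projection onto an n_bins x n_bins image grid
    mid = (n_bins - 1) / 2.0
    ys, xs = np.mgrid[0:n_bins, 0:n_bins] - mid
    image = np.zeros((n_bins, n_bins))
    for view, theta in zip(filtered, np.deg2rad(thetas_deg)):
        # detector coordinate of every pixel for this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + mid
        image += np.interp(t.ravel(), np.arange(n_bins), view,
                           left=0.0, right=0.0).reshape(n_bins, n_bins)
    return image * np.pi / len(thetas_deg)
```

A point source reconstructed this way peaks at its true location, which is the behavior sub-problem 1 relies on at i = 0.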
4. The PET image reconstruction method according to claim 1, characterized in that: the ISTA-Net network model is formed by connecting a plurality of phases in sequence, each phase consisting of an F operator, followed by a soft-threshold operation, followed by an F̃ operator; the F operator is formed by sequentially connecting, from input to output, a convolutional layer A1, a convolutional layer A2, a ReLU function and a convolutional layer A3; the F̃ operator is mirror-symmetric in structure to the F operator, being formed by sequentially connecting, from input to output, a convolutional layer A3, a ReLU function, a convolutional layer A2 and a convolutional layer A1, wherein a batch normalization layer is inserted between the convolutional layers A1 and A2; the output of the F operator and the input of the F̃ operator are both processed by batch normalization; and the input of the F operator and the output of the F̃ operator are linearly superposed, in the form of a residual connection, to give the final output of the phase.
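The soft-threshold operation sandwiched between the F and F̃ operators in each phase is the standard shrinkage operator from iterative soft-thresholding. A minimal element-wise version — in ISTA-Net the threshold is a learned per-phase parameter, fixed here only for illustration:

```python
import numpy as np

def soft_threshold(x, theta):
    """soft(x, theta) = sign(x) * max(|x| - theta, 0): shrinks every
    coefficient toward zero and zeroes those with magnitude below theta."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)
```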
5. The PET image reconstruction method according to claim 4, characterized in that: the outputs of the convolutional layers A1 to A3 are all processed by batch normalization, and each convolutional layer has 32 filters with a convolution kernel of size 3×3 and a stride of 1.
6. The PET image reconstruction method according to claim 1, characterized in that: the input of the ISTA-Net network model is 2D PET scan data; if the acquired projection data is in 3D form, it needs to be converted into 2D PET scan data by the SSRB or FORE method, and the different methods have different influences on the radial resolution of the same cross section.
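Of the two rebinning methods named in claim 6, single-slice rebinning (SSRB) is the simpler: each oblique sinogram is assigned to the direct plane midway between its two detector rings. A toy sketch — the array layout and averaging are assumptions, not the patented conversion:

```python
import numpy as np

def ssrb(sino3d):
    """Single-slice rebinning of a 3D sinogram set.

    sino3d : (n_rings, n_rings, n_angles, n_bins) sinograms indexed by the
             two detector rings of each line of response (LOR).
    Returns (2*n_rings - 1, n_angles, n_bins) direct-plane sinograms: the
    LOR between rings r1 and r2 is rebinned into slice r1 + r2, i.e. the
    axial mid-plane between the two rings.
    """
    n_rings = sino3d.shape[0]
    out = np.zeros((2 * n_rings - 1,) + sino3d.shape[2:])
    counts = np.zeros(2 * n_rings - 1)
    for r1 in range(n_rings):
        for r2 in range(n_rings):
            out[r1 + r2] += sino3d[r1, r2]
            counts[r1 + r2] += 1
    return out / counts[:, None, None]
```

FORE instead rebins in the Fourier domain of each sinogram, which is why the two methods affect the radial resolution of the same cross section differently.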
7. The PET image reconstruction method according to claim 1, characterized in that: before the ISTA-Net network model is trained, the input projection data and the corresponding truth labels need to be normalized on a single-frame basis, the specific calculation formula being:
X_norm = (X − X_min) / (X_max − X_min)

wherein: X is any data value, before normalization, in the projection data or its corresponding truth label; X_norm is the normalized data value corresponding to X; X_max is the maximum value in the projection data or its corresponding truth label; and X_min is the minimum value in the projection data or its corresponding truth label.
CN202110109430.7A 2021-01-25 2021-01-25 PET image reconstruction method for improving spatial resolution uniformity of PET system Active CN112927132B (en)

Publications (2)

Publication Number Publication Date
CN112927132A CN112927132A (en) 2021-06-08
CN112927132B true CN112927132B (en) 2022-07-19