Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, there may be one or more of that element. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
As described above, a practitioner may utilize computed tomography to perform CT perfusion (CTP) imaging to obtain blood flow perfusion information of a specific tissue of a patient in order to assess the patient's blood flow function. In some scenarios, however, the patient does not meet the conditions for a CTP perfusion imaging examination: for example, the patient's physical condition does not allow injection of large doses of contrast media, the patient's condition is urgent and cannot wait for the outcome of perfusion imaging, or the patient's regional medical facility lacks the equipment required for CTP perfusion imaging. Such situations occur commonly, and in them a professional can only judge the vascular condition of the patient from the structural images obtained by CT vascular imaging, and therefore cannot promptly and accurately judge the patient's vascular lesions.
Based on this, the present disclosure provides a method of simulating blood flow perfusion, an apparatus for simulating blood flow perfusion, and an electronic device.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a flow chart of a method 100 of simulating blood flow perfusion according to an embodiment of the present disclosure.
Referring to Fig. 1, the method 100 of simulating blood flow perfusion includes:
Step S110, acquiring multi-stage CT vessel imaging data of a target tissue, and extracting a plurality of time-series vessel images;
Step S120, extracting vascular structure information of the target tissue based on the plurality of time-series vessel images, and determining time-series CT values of the target tissue; and
Step S130, inputting the vascular structure information and the time-series CT values into a pre-trained blood flow perfusion prediction model to generate a corresponding blood flow perfusion image and perfusion parameters.
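For orientation, the following is a minimal sketch of this three-step pipeline. It assumes the multi-phase volumes have already been loaded as NumPy arrays keyed by hypothetical period names, that a vessel region-of-interest mask is available, and that `model` is a callable wrapping the pre-trained prediction model of step S130; the temporal maximum-intensity projection used here is only one possible realization of step S120.

```python
import numpy as np

def simulate_perfusion(phase_volumes: dict, roi_mask: np.ndarray, model) -> dict:
    """Sketch of steps S110-S130 on pre-loaded multi-phase CTA volumes."""
    # Step S110: order the time-series vessel images by acquisition period.
    order = ["arterial_peak", "venous_peak", "late_venous"]     # assumed keys
    series = np.stack([phase_volumes[p] for p in order])        # (T, Z, Y, X)

    # Step S120: derive vascular structure information (here, a temporal
    # maximum-intensity projection, one of the fusion options discussed
    # below) and the time-series CT values of the target tissue.
    structure = series.max(axis=0)                              # (Z, Y, X)
    ts_ct = np.array([vol[roi_mask].mean() for vol in series])  # mean HU per period

    # Step S130: feed both inputs to the pre-trained perfusion model.
    perfusion_image, perfusion_params = model(structure, ts_ct)
    return {"image": perfusion_image, "parameters": perfusion_params}
```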
In the example of step S110, multi-stage CT vessel imaging is also referred to as multi-phase CT vessel imaging. In contrast to the common single-phase CT vessel imaging technique, this technique acquires CT vessel imaging data at a plurality of time points (periods) rather than at a single period. Multi-stage CT vascular imaging therefore has time resolution: it can more comprehensively display the filling state of the vessels and the hemodynamic changes, so that the collateral circulation state can be better dynamically estimated. In acute ischemic stroke and similar diseases in particular, the compensation level of the collateral circulation can be reflected more accurately.
Furthermore, in the method of simulating blood flow perfusion of the embodiments of the present disclosure, the different time points at which the CT vessel imaging data is acquired by multi-phase CT vessel imaging may include at least an arterial peak period, a venous peak period, and a late venous period. The arterial peak period is the time point when the contrast agent reaches its maximum density in the arterial system of the target tissue; the arterial system is most clearly opacified at this time, so the anatomical structure and blood flow state of the arteries can be clearly displayed. The venous peak period is the time point when the contrast agent reaches its maximum density in the venous system of the target tissue; the venous system is most clearly opacified, so the hemodynamic state of the veins and the venous outflow can be well estimated during this period. The late venous period is the stable period after the contrast agent density in the venous system begins to decrease; opacification of the venous system gradually weakens during this period following the venous peak, but delayed opacification of the venous system can still be observed, and the state of venous collateral circulation can be estimated. When the CT vessel imaging data include data at all three of these time points, the vascular state can be comprehensively estimated and the dynamic change of blood flow can be captured to a certain extent.
It should be noted that whether the contrast agent has reached its maximum density in the arterial or venous system, or has remained stable, is determined from the change in CT value of a specific region of the arterial or venous system. The specific region can be obtained by automatically identifying the region of the arterial/venous system in the target tissue, or it may be a region of interest (ROI) corresponding to the arterial/venous system of the target tissue specified by a professional. After the specific region is determined, a time-density curve of the region can be obtained by scanning and recording the change of its CT value over time. When the CT value on the time-density curve reaches its peak, the contrast agent can be judged to have reached maximum density; when the CT value has decreased to the point of being almost unchanged, the contrast agent density can be judged to have stabilized. An arterial density peak threshold, a venous density peak threshold, and a venous density stability threshold can be preset, and the arterial peak period, the venous peak period, and the late venous period can be identified when the acquired CT values respectively reach these thresholds. Alternatively, based on clinical data or experience, the arrival of the arterial peak period, the venous peak period, and the late venous period may be judged after certain time intervals following injection of the contrast agent into the patient, and the corresponding CT vessel imaging data acquired accordingly.
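As one illustration of the threshold-based judgment described above, the following sketch locates the density peak and the subsequent stable period on a time-density curve; the threshold and tolerance values are assumptions, not values fixed by the disclosure.

```python
import numpy as np

def find_peak_and_stable(times, roi_ct, peak_threshold, stable_delta=2.0):
    """Locate the density peak and the subsequent stable period on a
    time-density curve sampled from a vascular ROI.

    times          : sampling times in seconds
    roi_ct         : mean ROI CT values (HU) at those times
    peak_threshold : preset density peak threshold in HU (assumed value)
    stable_delta   : assumed tolerance in HU for "almost unchanged"
    """
    times, roi_ct = np.asarray(times, float), np.asarray(roi_ct, float)

    # Peak: maximum of the curve, accepted only if it reaches the threshold.
    peak_idx = int(np.argmax(roi_ct))
    peak_time = float(times[peak_idx]) if roi_ct[peak_idx] >= peak_threshold else None

    # Stable period: first time after the peak at which consecutive CT
    # values differ by less than the tolerance.
    stable_time = None
    for i in range(peak_idx + 1, len(roi_ct) - 1):
        if abs(roi_ct[i + 1] - roi_ct[i]) < stable_delta:
            stable_time = float(times[i])
            break
    return peak_time, stable_time
```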
In the example of step S110, the target tissue refers to the tissue of the patient to be examined. Since multi-stage CT vascular imaging has time resolution as described above, it is of great value in diagnosis and treatment decisions for acute ischemic stroke, cerebrovascular disease, and the like, and thus the target tissue may typically be the brain tissue of the patient. It should be understood that the method of simulating blood flow perfusion of embodiments of the present disclosure is not limited to simulating blood flow perfusion of brain tissue; it may also simulate blood flow perfusion information of tissues such as the heart and the liver to aid in the diagnosis and treatment of related diseases.
Based on this, a plurality of time-series vessel images can be extracted when acquiring multi-phase CT vessel imaging data of a target tissue, where each acquisition time point (period) contributes at least one corresponding vessel image.
In the example of step S120, vascular structure information of the target tissue is extracted from the plurality of time-series vascular images, and a time-series CT value of the target tissue is determined.
In this example, according to the characteristics of the CT angiography technique, the time-series CT values of the target tissue contain CT values only at the acquired time points (periods); no acquisition is performed for other time points (periods). The extracted vascular structure information may be a fusion image generated from the plurality of time-series vessel images, or may be the image, selected from the plurality of time-series vessel images, that best reflects the vascular structure of the target tissue. Furthermore, the vascular structure information may also be presented in other forms, for example as a set of CT values or linear attenuation coefficients.
In the example of step S130, vascular structure information and time-series CT values are input to a pre-trained blood flow perfusion prediction model to generate corresponding blood flow perfusion images and perfusion parameters.
In an example, the pre-trained blood flow perfusion prediction model employed is not particularly limited. For example, a model based on a convolutional neural network (CNN), a long short-term memory network (LSTM), a gated recurrent unit (GRU), or a graph convolutional network (GCN), or a hybrid model, may be employed.
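As one illustration only (the disclosure does not mandate any particular architecture), a minimal CNN + GRU hybrid in PyTorch might combine a convolutional encoder for the vascular structure image with a recurrent branch for the time-series CT values; all layer sizes below are assumptions. The fused features feed two heads, one producing the perfusion image and one regressing the perfusion parameters.

```python
import torch
import torch.nn as nn

class PerfusionNet(nn.Module):
    """Hypothetical CNN + GRU hybrid: a convolutional encoder for the
    vascular structure image and a GRU over the time-series CT values."""

    def __init__(self, n_params: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
        self.map_head = nn.Conv2d(64, 1, 1)        # fused features -> perfusion image
        self.param_head = nn.Linear(64, n_params)  # fused features -> (BV, BF, MTT)

    def forward(self, structure, ts_ct):
        # structure: (B, 1, H, W) image; ts_ct: (B, T) CT values, one per period
        feat = self.cnn(structure)                  # (B, 32, H, W)
        _, h = self.gru(ts_ct.unsqueeze(-1))        # h: (1, B, 32)
        h = h[-1]                                   # (B, 32)
        # Broadcast the temporal features over the spatial grid and fuse.
        h_map = h[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        fused = torch.cat([feat, h_map], dim=1)     # (B, 64, H, W)
        perfusion_image = self.map_head(fused)      # (B, 1, H, W)
        params = self.param_head(fused.mean(dim=(2, 3)))  # (B, n_params)
        return perfusion_image, params
```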
Further, in the example of step S130, the generated blood flow perfusion image may be a dynamic or static image generated from the vascular structure information extracted from the plurality of time-series vessel images and the perfusion parameters predicted by the blood flow perfusion prediction model. When the target tissue is the heart, a bullseye chart that effectively reflects myocardial perfusion may also be generated.
The perfusion parameters may include blood volume (BV), i.e., the amount of blood contained in a unit mass of tissue, in mL/100g; blood flow (BF), i.e., the amount of blood flowing through a unit mass of tissue per unit time, in mL/min/100g; and mean transit time (MTT), i.e., the average time for blood to pass through the capillary bed from the arterial side to the venous side, in seconds (s) or minutes (min). When the target tissue is brain tissue, the perfusion parameters acquired are cerebral blood volume, cerebral blood flow, and mean transit time.
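For reference, these three parameters are linked by the central volume principle (BF = BV / MTT), a standard relation in perfusion imaging that can serve as a consistency check on predicted values; it is not a relation claimed by the disclosure itself. The factor of 60 below converts an MTT given in seconds to the conventional mL/min/100g.

```python
def blood_flow_from_bv_and_mtt(bv_ml_per_100g: float, mtt_s: float) -> float:
    """Central volume principle: BF = BV / MTT.
    With BV in mL/100g and MTT in seconds, multiplying by 60
    yields BF in the conventional unit mL/min/100g."""
    return 60.0 * bv_ml_per_100g / mtt_s

# Example with typical grey-matter values: BV ~ 4 mL/100g, MTT ~ 4 s
print(blood_flow_from_bv_and_mtt(4.0, 4.0))  # -> 60.0 mL/min/100g
```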
Compared to the traditional CTP perfusion imaging method, which needs to acquire a time-density curve of the target tissue over the whole period, the method of the embodiments of the present disclosure only needs to acquire data at a plurality of predetermined time points (periods). The dose of contrast agent required can thus be effectively reduced, radiation exposure is reduced, and the time required for data acquisition and post-processing is greatly shortened, improving imaging efficiency. In addition, traditional CTP perfusion imaging requires dedicated, costly equipment, so it is expensive and not widely available. The method of the embodiments of the present disclosure relies only on the common equipment used for CT vessel imaging, thereby effectively reducing the cost of acquiring blood flow perfusion information.
Fig. 2 shows a flow chart of a method 200 of training a blood flow perfusion prediction model. As shown in Fig. 2, according to some embodiments, the method 200 may include:
Step S210, initializing a blood flow perfusion prediction model;
Step S220, acquiring a plurality of training sample data, where each training sample data includes first image data acquired through multi-stage CT vascular imaging, the first image data including vascular structure information and time-series CT values, and second image data acquired through corresponding CTP perfusion imaging, the second image data including a CTP perfusion image and perfusion parameters calculated based on the CTP perfusion image, and where the first image data and the second image data share the same acquisition time and acquisition position; and
Step S240, training the blood flow perfusion prediction model by using the first image data as input so as to optimize model parameters of the blood flow perfusion prediction model.
In the example of step S210, initializing the blood flow perfusion prediction model may include initializing model parameters, for example, initializing all model parameters to zero using a zero initialization method, or randomly initializing the model parameters using a random distribution such as a uniform or Gaussian distribution. It should be noted that the method of initialization is not limited herein, and the initialization strategy may be selected according to the actual situation.
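A brief sketch of such initialization, using PyTorch's built-in initializers; the standard deviation and range below are assumed values, not prescribed by the disclosure.

```python
import torch.nn as nn

def init_weights(module: nn.Module, scheme: str = "gaussian") -> None:
    """Step S210 as described: zero, uniform, or Gaussian initialization."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        if scheme == "zero":
            nn.init.zeros_(module.weight)
        elif scheme == "uniform":
            nn.init.uniform_(module.weight, -0.05, 0.05)  # assumed range
        else:
            nn.init.normal_(module.weight, mean=0.0, std=0.02)  # assumed std
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# usage: model.apply(init_weights)
```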
In the example of step S220, each training sample contains an image pair, specifically first image data acquired by multi-phase CT vascular imaging and second image data acquired by corresponding CTP perfusion imaging. To ensure that the data within an image pair correspond one-to-one, the first image data and the second image data in each pair need to share the same acquisition position and acquisition time.
In the example of step S240, the blood flow perfusion prediction model is trained using the first image data in the plurality of training samples as input, thereby optimizing the initialized model parameters.
As described above, multi-stage CT vascular imaging has time resolution and can capture dynamic changes of blood flow, matching the dynamic characteristics of blood flow perfusion information. Therefore, training the blood flow perfusion prediction model with first image data acquired through multi-stage CT vascular imaging as input yields a blood flow perfusion prediction model with more accurate prediction capability.
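A minimal supervised training loop under these assumptions might look as follows, with each batch holding one image pair: the first image data (structure image and time-series CT values) as input and the second image data (CTP perfusion image and parameters) as target. The mean-squared-error losses and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 50, lr: float = 1e-4) -> nn.Module:
    """Sketch of step S240: optimize model parameters against CTP ground truth."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    img_loss, param_loss = nn.MSELoss(), nn.MSELoss()
    for _ in range(epochs):
        for structure, ts_ct, ctp_img, ctp_params in loader:
            pred_img, pred_params = model(structure, ts_ct)
            # Joint loss over the perfusion image and the perfusion parameters.
            loss = img_loss(pred_img, ctp_img) + param_loss(pred_params, ctp_params)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```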
With continued reference to fig. 2, the method 200 of training a blood flow perfusion prediction model may further include:
Step S250, inputting the first image data into the blood flow perfusion prediction model, so that the blood flow perfusion prediction model generates a corresponding blood flow perfusion image and simulated perfusion parameters; and
Step S260, verifying the blood flow perfusion prediction model by comparing the generated blood flow perfusion image and the simulated perfusion parameters with the second image data.
In the examples of step S250 and step S260, in addition to being used to optimize the model parameters, the first image data may be used as input to generate a corresponding blood flow perfusion image and simulated perfusion parameters, and the blood flow perfusion prediction model may then be validated by comparing the generated blood flow perfusion image and the simulated perfusion parameters with the second image data. The advantage of this is that, since the second image data corresponds to the first image data, it can serve as a reference for evaluating the accuracy of the model's predictions and for further optimizing the model parameters.
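A sketch of such a validation pass, using mean absolute error as an assumed comparison metric (the disclosure does not fix the metric):

```python
import torch

def validate(model, loader):
    """Steps S250-S260: compare generated perfusion images and simulated
    parameters against the CTP ground truth in the second image data."""
    model.eval()
    img_err, param_err, n = 0.0, 0.0, 0
    with torch.no_grad():
        for structure, ts_ct, ctp_img, ctp_params in loader:
            pred_img, pred_params = model(structure, ts_ct)
            img_err += (pred_img - ctp_img).abs().mean().item()
            param_err += (pred_params - ctp_params).abs().mean().item()
            n += 1
    return img_err / n, param_err / n  # mean absolute errors
```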
With continued reference to Fig. 2, prior to step S240, the method 200 of training a blood flow perfusion prediction model may further include:
Step S230, preprocessing the first image data, where the preprocessing includes image alignment, image noise reduction, and/or normalization.
In the example of step S230, preprocessing the first image data can enhance the training effect of the prediction model and improve the accuracy of the generated blood flow perfusion images and simulated perfusion parameters.
The image alignment may include registering the first image data acquired at different acquisition time points so that corresponding points of the plurality of first image data coincide precisely in the same coordinate system; the image alignment may also include aligning the first image data with a predetermined standard image. Image noise reduction is the process of reducing image noise by a specific algorithm or technique, with the aim of improving image quality so that the image is clearer and easier to analyze. Normalization is the process of converting a plurality of first image data to a uniform scale or range in order to ensure comparability and consistency between different images. For example, mean normalization may be achieved by subtracting the mean from each pixel value of the first image data and then dividing by the standard deviation, so that the pixel value distribution of the image is more uniform.
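A minimal sketch of the noise reduction and mean normalization just described; image alignment is omitted here, and the Gaussian sigma is an assumed value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image: np.ndarray) -> np.ndarray:
    """Step S230 sketch: Gaussian noise reduction followed by the mean
    normalization described above (subtract mean, divide by std)."""
    denoised = gaussian_filter(image, sigma=1.0)  # assumed smoothing strength
    std = denoised.std()
    return (denoised - denoised.mean()) / (std if std > 0 else 1.0)
```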
In addition, in some embodiments of the present disclosure, for a blood flow perfusion prediction model that has been trained in advance, the model parameters may also be automatically adjusted based on the pathology data of a patient, so that the physical condition of each individual patient can be comprehensively considered and the patient's blood flow perfusion information can be predicted more accurately.
According to another aspect of the present disclosure, an apparatus for simulating blood flow perfusion is provided.
Fig. 3 shows a block diagram of an apparatus 300 for simulating blood flow perfusion according to an embodiment of the present disclosure. As shown in Fig. 3, the apparatus 300 for simulating blood flow perfusion includes:
a first unit 310 configured to acquire multi-phase CT vessel imaging data of a target tissue and extract a plurality of time-series vessel images;
a second unit 320 configured to extract vascular structure information of the target tissue based on the plurality of time-series vascular images and determine time-series CT values of the target tissue, and
A third unit 330 configured to input vascular structure information and time-series CT values to a pre-trained blood flow perfusion prediction model to generate corresponding blood flow perfusion images and perfusion parameters.
It should be appreciated that the various elements of the apparatus 300 shown in Fig. 3 may correspond to steps S110-S130 in the method 100 described with reference to Fig. 1. Thus, the operations, features and advantages described above with respect to method 100 apply equally to apparatus 300 and the units it comprises. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific units, it should be noted that the functions of the various units discussed herein may be divided among multiple units, and/or at least some functions of multiple units may be combined into a single unit. A particular unit performing an action herein includes the particular unit itself performing the action, or the particular unit invoking or otherwise accessing another component or unit that performs the action (or that performs the action in conjunction with the particular unit).
It should also be appreciated that various techniques may be described herein in the general context of software and hardware elements or program modules. The various units described above with respect to Fig. 3 may be implemented in hardware or in hardware combined with software and/or firmware. For example, the units may be implemented as computer program code/instructions configured to be executed by one or more processors and stored in a computer-readable storage medium. Alternatively, these units may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the first unit 310, the second unit 320, and the third unit 330 may be implemented together in a system on chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a central processing unit (CPU), a microcontroller, a microprocessor, a digital signal processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
According to another aspect of the present disclosure, there is provided an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of simulating blood flow perfusion described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program for causing a computer to perform the method of simulating blood flow perfusion described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of simulating blood flow perfusion described above.
Fig. 4 is a block diagram illustrating an example of an electronic device 400 according to an example embodiment of the present disclosure. It should be noted that the structure shown in fig. 4 is only an example, and the electronic device of the present disclosure may include only one or more of the components shown in fig. 4 according to a specific implementation.
The electronic device 400 may be, for example, a general-purpose computer (e.g., a laptop computer, a tablet computer, or another of various computers), a mobile phone, a personal digital assistant, or the like. According to some embodiments, the electronic device 400 may be a cloud computing device or a smart device. According to some embodiments, the electronic device 400 may be an X-ray imaging device, such as a computed tomography (CT) device.
According to some embodiments, the electronic device 400 may be configured to process at least one of an image, text, and audio, and transmit the processing results to an output device for provision to a user. The output device may be, for example, a display screen, a device including a display screen, or a sound output device such as a headphone, a speaker, or an oscillator. For example, the electronic device 400 may be configured to perform object detection on an image, transmit the object detection result to a display device for display, and the electronic device 400 may be further configured to perform enhancement processing on the image and transmit the enhancement result to the display device for display. The electronic device 400 may also be configured to recognize text in an image and transmit the recognition result to a display device for display and/or convert the recognition result into sound data and transmit to a sound output device for playback. The electronic device 400 may also be configured to recognize and process audio and transmit the recognition results to a display device for display and/or convert the processing results to sound data and transmit to a sound output device for playback.
The electronic device 400 may include an image processing circuit 403, and the image processing circuit 403 may be configured to perform various image processing on an image. For example, the image processing circuit 403 may be configured to perform at least one of the following on an image: noise reduction, normalization, registration, geometric correction, feature extraction, detection and/or recognition of objects, image enhancement, and detection and/or recognition of text contained in the image.
The electronic device 400 may also include a text recognition circuit 404, the text recognition circuit 404 being configured to perform text detection and/or recognition (e.g., OCR processing) on text regions in the image to obtain text data. The text recognition circuit 404 may be implemented, for example, by a dedicated chip. The electronic device 400 may further comprise a sound conversion circuit 405, the sound conversion circuit 405 being configured to convert the text data into sound data. The sound conversion circuit 405 may be implemented, for example, by a dedicated chip.
The electronic device 400 may also include an audio processing circuit 406, the audio processing circuit 406 being configured to convert audio into text, thereby obtaining text data corresponding to the audio. The audio processing circuit 406 may also be configured to process the audio-corresponding text data, which may include, for example, keyword extraction, intent recognition, intelligent recommendation, intelligent question-answering, and the like. The audio processing circuit 406 may be implemented, for example, by a dedicated chip. The sound conversion circuit 405 may be further configured to convert the audio processing result into sound data for application scenarios such as a voice assistant or virtual customer service.
One or more of the various circuits described above (e.g., the image processing circuit 403, the text recognition circuit 404, the sound conversion circuit 405, and the audio processing circuit 406) may be implemented using custom hardware, and/or may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, one or more of the circuits described above may be implemented by programming hardware (e.g., programmable logic circuits including field programmable gate arrays (FPGAs) and/or programmable logic arrays (PLAs)) in an assembly language or a hardware programming language such as Verilog or VHDL, or in C++, using logic and algorithms according to the present disclosure.
According to some embodiments, electronic device 400 may also include an output device 407, which output device 407 may be any type of device for presenting information, and may include, but is not limited to, a display screen, a terminal with display functionality, headphones, speakers, a vibrator, and/or a printer, among others.
According to some embodiments, electronic device 400 may also include an input device 408, which input device 408 may be any type of device for inputting information to electronic device 400, and may include, but is not limited to, various sensors, mice, keyboards, touch screens, buttons, levers, microphones, and/or remote controls, and the like.
According to some embodiments, electronic device 400 may also include a communication device 409, which communication device 409 may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication devices, and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
According to some implementations, the electronic device 400 may also include a processor 401. The processor 401 may be any type of processor and may include, but is not limited to, one or more general purpose processors and/or one or more special purpose processors (e.g., special processing chips). The processor 401 may be, for example, but not limited to, a central processing unit CPU, a graphics processor GPU, or various dedicated Artificial Intelligence (AI) computing chips, etc.
The electronic device 400 may also include a working memory 402 and a storage device 411. The processor 401 may be configured to obtain and execute computer readable instructions stored in the working memory 402, the storage device 411, or other computer readable medium, such as program code of the operating system 402a, program code of the application program 402b, and the like. The working memory 402 and the storage device 411 are examples of computer-readable storage media for storing instructions that can be executed by the processor 401 to implement the various functions described previously. Working memory 402 may include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). Storage 411 may include hard disk drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CDs, DVDs), storage arrays, network attached storage, storage area networks, and the like. Both working memory 402 and storage 411 may be collectively referred to herein as memory or computer-readable storage medium, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 401 as a particular machine configured to implement the operations and functions described in the examples herein.
According to some embodiments, the processor 401 may control and schedule at least one of the image processing circuit 403, the text recognition circuit 404, the sound conversion circuit 405, the audio processing circuit 406, and other various devices and circuits included in the electronic apparatus 400. According to some embodiments, at least some of the various components described in fig. 4 may be interconnected and/or in communication by a bus 410.
Software elements (programs) may reside in the working memory 402 including, but not limited to, an operating system 402a, one or more application programs 402b, drivers, and/or other data and code.
According to some embodiments, instructions for performing the foregoing control and scheduling may be included in the operating system 402a or in one or more application programs 402b.
According to some embodiments, instructions to perform the method steps described in the present disclosure may be included in one or more applications 402b, and the various modules of the electronic device 400 described above may be implemented by the instructions of one or more applications 402b being read and executed by the processor 401. In other words, electronic device 400 may include a processor 401 and memory (e.g., working memory 402 and/or storage device 411) storing a program comprising instructions that, when executed by the processor 401, cause the processor 401 to perform the methods as described in various embodiments of the disclosure.
According to some embodiments, some or all of the operations performed by at least one of the image processing circuit 403, the text recognition circuit 404, the sound conversion circuit 405, and the audio processing circuit 406 may be implemented by the processor 401 reading and executing instructions of one or more application programs 402b.
Executable code or source code of instructions of software elements (programs) may be stored in a non-transitory computer-readable storage medium (such as the storage device 411) and may, when executed, be stored in the working memory 402 (possibly after being compiled and/or installed). Accordingly, the present disclosure provides a computer-readable storage medium storing a program comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform a method as described in the various embodiments of the present disclosure. According to another embodiment, the executable code or source code of the instructions of the software elements (programs) may also be downloaded from a remote location.
It should also be understood that various modifications may be made according to specific requirements. For example, custom hardware may also be used, and/or individual circuits, units, modules or elements may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the circuits, units, modules, or elements contained in the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuits including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or hardware programming language such as VERILOG, VHDL, c++ using logic and algorithms according to the present disclosure.
According to some implementations, the processor 401 in the electronic device 400 may be distributed over a network. For example, some processes may be performed using one processor while other processes may be performed by another processor remote from the one processor. Other modules of the electronic device 400 may also be similarly distributed. As such, electronic device 400 may be interpreted as a distributed computing system that performs processing in multiple locations. The processor 401 of the electronic device 400 may also be a processor of a cloud computing system or a processor that incorporates a blockchain.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements appearing after the present disclosure.