WO2018161123A1 - Guided radiation therapy - Google Patents
Guided radiation therapy
- Publication number
- WO2018161123A1 (PCT/AU2018/050212)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- motion
- axis
- rotational
- projection
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1048—Monitoring, verifying, controlling systems and methods
- A61N5/1049—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1121—Determining geometric values, e.g. centre of rotation or angular range of movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1127—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using markers
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4836—Diagnosis combined with treatment in closed-loop systems or methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7285—Specific aspects of physiological measurement analysis for synchronizing or triggering a physiological measurement or image acquisition with a physiological event or waveform, e.g. an ECG signal
- A61B5/7289—Retrospective gating, i.e. associating measured signals or images with a physiological event after the actual measurement or image acquisition, e.g. by simultaneously recording an additional physiological signal during the measurement or image acquisition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1048—Monitoring, verifying, controlling systems and methods
- A61N5/1049—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
- A61N2005/1061—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using an x-ray imaging system having a separate imaging source
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1077—Beam delivery systems
- A61N5/1081—Rotating beam systems with a specific mechanical construction, e.g. gantries
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20068—Projection on vertical or horizontal image axis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present disclosure relates to systems and methods for use in relation to guided radiation therapy systems.
- In one form there is disclosed a system and method for use in motion tracking of a target in a guided radiation therapy system.
- Radiation therapy is a treatment modality used to treat localised tumours. It generally involves delivering high-energy megavoltage (MV), conformal x-ray beams to the target (tumour) using a medical linear accelerator. The radiation interacts with the tissues to create double-strand DNA breaks that kill tumour cells. Radiation therapy requires high precision to deliver the dose to the tumour and spare healthy tissue, particularly that of organs surrounding the tumour. Each treatment is tailored on a patient-by-patient basis.
- tumour motion corrections should be applied for both tumour translations and tumour rotations.
- Retrospective post-treatment calculations of tumour rotations have shown that the rotations can be significant for both prostate and lung tumours.
- Tumour motion can occur in six degrees of freedom (6DoF); that is, rotational and translational movements can occur about and along three axes.
- Tumour motion during treatment can cause large radiation doses to be delivered to critical structures and healthy tissue, leading to suboptimal dosimetry (dose coverage outside the tumour).
- dosimetrically, uncorrected prostate rotations of 15° can result in a 12% underdose to the tumour.
- Motion management in radiation therapy has thus become vital in delivering accurate dose coverage and limiting toxicities to healthy tissue.
- KIM (Kilovoltage Intrafraction Monitoring) is a real-time image guidance technique that utilises existing radiotherapy technologies found in cancer care centres (i.e. on-board x-ray imaging).
- KIM exploits fiducial markers implanted inside the tumour (organ) and reconstructs their location by acquiring multiple images of the target with the on-board kilovoltage (kV) beam of a low-energy x-ray imager, determining any motion in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions.
- KIM Tracking has also been developed, which dynamically modifies the multi-leaf collimator (MLC) position while delivering the treatment dose, based on the tumour position reconstructed by KIM.
- KIM-gated radiation therapy is currently used to treat prostate cancer patients at multiple cancer centres and could also be expanded to treat lung cancer patients in the near future.
- In KIM, tumour motion is monitored in real time while the MV beam is delivering the treatment dose and the kV beam is imaging the tumour target. If significant motion away from the treatment beam occurs, the treatment is paused and the patient is repositioned before the treatment is continued.
- the concept of determining or estimating the motion of a target refers to determining an offset in the position and rotation of the target from a reference position and rotation.
- in the example below, the reference position is labelled Mref.
- the appropriate reference position and rotation can be determined in a number of ways as will be described below.
- a method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis.
- the method can comprise:
- the method preferably includes computing a motion based on a correlation between movement in the y axis and movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions.
- a method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis.
- the method comprises: identifying the target in said projection; determining a projected position of a reference point within the target in said projection; estimating the position along the y axis of the reference point of the target, and initial coefficient vectors correlating the motion of the target and motion along the y axis; generating a refined estimate of the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on the previously determined y-axis position of the reference point and the previous estimates.
- the method can further include
- the method can further include:
- Steps (a) and (b) can be repeated on the basis of the determined refined coefficient vectors from step (d).
- a method of guided radiation therapy in which at least one treatment beam of radiation is directed at a target, said method including: estimating the motion of the target using a method as claimed in any one of the preceding claims, and directing the treatment beam based on the estimated position.
- a system for guided radiation therapy including: A radiation source for emitting at least one treatment beam of radiation;
- An imaging system arranged to generate a succession of images comprising a two dimensional projection of a field of view and in which the location of the target may be identified;
- a control system to direct the at least one treatment beam at the target wherein said beam control system is configured to: receive images from the imaging system and estimate the motion of the target based on said images using a method as claimed in any one of the preceding claims; and adjust the system to direct the at least one beam at the target.
- the control system can adjust the system by controlling one or more of: at least one geometrical property of said at least one emitted beam; a position of the target relative to the beam; a time of emission of the beam; an angle of emission of the beam relative to the target about the system rotational axis.
- Figure 1 illustrates a schematic representation of a system configured to implement an embodiment of the present invention.
- Figure 2 is a flowchart of a guided radiation therapy process according to an embodiment of the present invention.
- Figure 3 sets out a process for testing the tracking method described herein by simulation.
- Figure 4A and Figure 4B illustrate histograms of six degrees of freedom motion in the ground-truth data, across 81 traces from 19 patients and 53736 frames.
- Figure 4A shows translational motion
- Figure 4B shows rotational motion.
- Figure 5 plots the six degree of freedom target motion successfully estimated with the motion tracking method according to an embodiment of the present invention compared to the ground truth.
- Figures 6 and 7 are boxplots showing: the distributions of error in 6DoF between the motion tracking method according to an embodiment of the present invention and the ground-truth across 81 liver tumour traces from 19 patients with 53736 image frames; and the distributions of the mean of error in each segment in 6DoF between the motion tracking method according to an embodiment of the present invention and the ground-truth across 81 liver tumour traces from 19 patients.
- Figures 8 and 9 are scatter plots of the error between motion tracking method according to an embodiment of the present invention in each DoF as a function of the magnitude of deformation, assessed by the variation in the area of the triangle subtended by the markers ( Figure 8) and the absolute value of motion ( Figure 9).
- Figure 10 shows scatter plots showing the relationship between the maximum error of estimates made by the motion tracking method according to an embodiment of the present invention in each degree of freedom and the absolute linear correlation value of the motion in that degree of freedom and the motion in the SI direction for each trace.
- the p value indicates the Pearson correlation coefficient between each value pair.
- Figure 11 shows an example of data from a patient and illustrates an estimation around a movement anomaly (e.g. a cough).
- Figure 12 is a Boxplot showing the distributions of the mean of error in each segment in 6DoF between 6D-IDC estimation with projection of 1 marker versus projections that used 3 markers after a learning arc across 81 liver tumour traces from 19 patients.
- Figure 13 illustrates a 6D-IDC estimation with the projection of one marker versus with projections of three markers.
- FIG. 1 depicts a system for image guided radiation therapy able to implement an embodiment of the inventions described herein.
- the system 10 includes:
- a radiation source 12 for emitting at least one treatment beam of radiation. The radiation source emits the treatment beam 14 along a first beam axis towards the patient being treated.
- the radiation source 12 will comprise a linear accelerator emitting megavolt x-rays.
- An imaging system 16 arranged to generate a succession of images 18 comprising a two dimensional projection of a field of view and in which the location of the target may be identified.
- the imaging system 16 includes a second radiation source 20 that emits at least one imaging beam 22 along a second beam axis.
- the imaging beam 22 will be transmitted in a direction orthogonal to the treatment beam 14.
- the imaging beam is transmitted through the patient (or at least through the region of the patient) to a radiation detector 24 that is configured to detect radiation transmitted through the target.
- the spatial intensity of the received radiation is converted to an x-ray image that is a projection of said at least one imaging beam in a plane normal to the direction of its emission.
- the imaging system will be a kilovolt imaging system built into the linear accelerator.
- a support platform 26 (e.g. a bed) on which the subject of the radiation therapy is supported during treatment.
- the support platform is repositionable relative to the imaging system and radiation source so that the patient can be positioned with the centre of the target (e.g. tumour) located as near as possible to the intersection between the first and second beam axes.
- a control system 30 that controls the parameters of operation of the radiotherapy system.
- the control system 30 is a computer system comprising one or more processors with associated working memory, data storage and other necessary hardware, that operates under control of software instructions to receive input data from one or more of a user and other components of the system (e.g. the imaging system), and to output control signals to control the operation of the radiation therapy system.
- the control system 30 causes the radiation source 12 to direct its at least one treatment beam at the target. To do this the control system receives images from the imaging system and estimates the motion of the target, then issues a control signal to adjust the system 10 to direct the treatment beam 14 at the target.
- the radiation source 12, imaging system 16 and support platform 26 are common to most conventional image guided radiation therapy systems. Accordingly, in the conventional manner the radiation source 12 and imaging system 16 can be rotatably mounted (on a structure commonly called a gantry) with respect to the patient support platform 26 so that they can rotate about the patient in use.
- the rotational axis of the gantry motion is thus orthogonal to the directions of the treatment beam and imaging beam (i.e. the first and second beam axes). It enables sequential treatment and imaging of the patient at different angular positions about the gantry axis.
- the control system 30 processes images received from the imaging system 16 and estimates the motion of the target, then issues a control signal to adjust the system 10 to direct the treatment beam at the target.
- the adjustment will typically comprise at least one of the following: changing a geometrical property of the treatment beam such as its shape or position, e.g. by adapting a multi-leaf collimator of the linac; changing the time of emission of the beam, e.g. by delaying treatment beam activation to a more suitable time; gating the operation of the beam, e.g. turning off the beam if the estimated motion is greater than certain parameters; changing an angle at which the beam is emitted relative to the target about the system rotational axes.
- the system 10 can also be adjusted so as to direct the treatment beam at the target by moving the patient support platform 26. Moving the support platform 26 effectively changes the position of the centroid of the target with respect to the position of the treatment beam 14 (and imaging beam).
- the general method of operation of the system 10 is as follows.
- the radiation source and imaging system rotate around the patient during treatment.
- the imaging system acquires 2D projections of the target.
- the target will be marked by the placement of fiducial markers within or about the target.
- the positioning of the markers may be such that the centroid of the markers lies at the centre of the target, but this is not strictly necessary.
- the control system 30 identifies the position of the markers in each image to estimate the target's three-dimensional position and orientation.
- the control system therefore needs a mechanism for estimating the target's position in 3-dimensions based on its location in a 2D image.
- the method employed in the present invention directly estimates the translational motion in three dimensions and rotational motion of the target about three axes from the 2-dimensional projection (image) of the target.
- the present inventors have determined that the problem of solving for the target's 3D position in this manner is ill-posed, hence, some a priori knowledge or assumption is required.
- the approach used makes use of interdimensional correlation (IDC) of the motion of the target.
- the preferred embodiments are based on the understanding that the thoracic and upper abdominal tumour motions in the Anterior-Posterior (AP) and Left-Right (LR) directions are correlated with the tumour motion in the Superior-Inferior (SI) direction, and so are the rotational tumour motions around these axes.
- because the imaging system rotates around the patient with its axis parallel to the patient's SI axis, the SI position of the tumour is always visible on the kV images.
- the preferred method includes computing the target's motion based on a correlation between its movement in the y axis and its movement along or about the other axes.
- the target M consists of n points.
- X' is merely a mathematical by-product of the rotation equation to accurately relate a 3D object with coordinates (x,y,z) with its referenced coordinates
- the vector Tr on its own does not provide the translational motion information.
- the real translational vector is defined as simply a vector difference between the current centroid of the target and its referenced centroid coordinates. That is:
- Equation (3) relates all the components of equation (1) with the target's y-coordinate.
- other models of correlation can be used, such as a state-augmentation model or a 2nd order correlation model, and equation (3) will change accordingly. As noted above, this is advantageous because the treatment beam and imaging system rotate around the y-axis.
- the parameter F indicates the number of image frames used to calculate the cost function, which is explained in further detail in the next section.
- the vectors A and B can be estimated by minimizing the cost function C in the least squares sense: $(\hat{A}, \hat{B}) = \arg\min_{A,B} C$.
- since equation (4-1) is only an approximation of equation (4), it may be necessary to iteratively refine the solution, as shown in the pseudo-code below.
- This process can be summarised as follows: Identify the target in said projection (e.g. by segmenting the image to identify the markers).
- a refined estimate of the movement of the target in 6DoF can then be generated based on the previously determined y-axis position of the reference point and the previous estimates. Based on this, a refined estimate of the rotational and translational position can be determined, followed by a refined estimate of the y position of the reference point within the target.
- updated coefficient vectors correlating the motion of the target and motion along the y axis can then be computed based on the distance between the refined projected positions and the measured projected positions, e.g. by applying a least squares optimisation. These vectors can form the basis of an iterative recalculation of the movement of the target in 6DoF. The number of iterations used can be selected as appropriate.
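By way of illustration only, the kind of iterative estimation loop summarised above might be sketched as follows in Python. The geometry conventions, the projection sign convention, the helper names and the simple linear correlation model are assumptions made for this example, not the disclosure's actual pseudo-code.

```python
# Illustrative sketch only (not the disclosure's pseudo-code): estimating 6DoF motion from 2D
# marker projections via an interdimensional-correlation model, and refitting the correlation
# vectors A and B by least squares over the frames seen so far. Conventions and names are
# assumptions for the example; SAD and SID take the values used in the described simulation.
import numpy as np
from scipy.optimize import least_squares

SAD, SID = 1000.0, 1800.0  # source-to-axis / source-to-imager distances [mm]

def rotation(alpha, beta, gamma):
    """R = Rx(alpha) @ Ry(beta) @ Rz(gamma), angles in radians."""
    ca, sa, cb, sb, cg, sg = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta), np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rx @ ry @ rz

def project(points, theta):
    """Project 3D points onto a kV imager at gantry angle theta (one assumed sign convention)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = SAD - (x * np.cos(theta) + z * np.sin(theta))
    return np.stack([SID * (x * np.sin(theta) - z * np.cos(theta)) / depth, SID * y / depth], axis=1)

def apply_6dof(ref_markers, dof):
    """Rotate the reference markers about their centroid and translate; dof = (tx, ty, tz, a, b, g)."""
    c = ref_markers.mean(axis=0)
    return (rotation(*dof[3:]) @ (ref_markers - c).T).T + c + dof[:3]

def dof_from_y(y, A, B):
    """Equation (3)-style readout of the remaining 5 DoF from the y position.
    Assumes the reference centroid sits at the isocentre, so coordinates equal translations."""
    x, z, alpha, beta, gamma = A * y + B
    return np.array([x, y, z, alpha, beta, gamma])

def reprojection_residuals(params, ref_markers, measured_2d, thetas):
    """Residuals of predicted vs. measured marker projections over all frames used for the fit."""
    A, B = params[:5], params[5:]
    res = []
    for frame_2d, theta in zip(measured_2d, thetas):
        y = frame_2d[:, 1].mean() * SAD / SID                 # first-pass y estimate (cf. equation (4-1))
        dof = dof_from_y(y, A, B)
        y = frame_2d[:, 1].mean() * (SAD - (dof[0] * np.cos(theta) + dof[2] * np.sin(theta))) / SID
        dof = dof_from_y(y, A, B)                             # refined y, then refined 6DoF
        res.append((project(apply_6dof(ref_markers, dof), theta) - frame_2d).ravel())
    return np.concatenate(res)

def update_correlation(ref_markers, measured_2d, thetas, warm_start=None):
    """Least-squares refit of the 10 correlation coefficients (A, B), warm-started if possible."""
    x0 = np.zeros(10) if warm_start is None else warm_start
    return least_squares(reprojection_residuals, x0, args=(ref_markers, measured_2d, thetas)).x
```

In this sketch the warm start plays the role described later in the document: after the learning arc, each new frame's refit starts from the previous solution, so a single optimisation pass per frame is normally enough.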
- Figure 2 illustrates a method of guided radiation therapy in which the process described above can be used.
- the methods of guided radiation therapy are similar to those followed by Huang et al. 2015 (Huang, C.-Y., Tehrani, J. N., Ng, J. A., Booth, J. T. & Keall, P. J. 2015.
- Keall et al. 2016 Keall, P. J., Ng, J. A., Juneja, P., O'brien, R. T., Huang, C.-Y., Colvill, E., Caillet, V., Simpson, E., Poulsen, P. R., Kneebone, A., Eade, T. & Booth, J. T. 2016.
- the process 200 can be divided into two phases, set up 201 and treatment 202.
- the set up phase 201 uses an imaging procedure 204, e.g. Cone Beam CT, before treatment to initialise 206 the parameters for the movement tracking method described above.
- Target segmentation 208 is used to identify fiducial markers in the target during initialisation.
- the initialised movement tracking method can then be used to track target motion 210. In some cases 212 patient realignment may be necessary.
- the method moves to the treatment phase 202.
- the treatment beam is activated and the target irradiated; the movement tracking system updates the tumour's translational and rotational motion 224 in real time using continuous small-field kV imaging 220.
- the field of view for the kV imaging during treatment can be reduced to encompass only the tumour and the anticipated motion range + 50%, to reduce the imaging dose to the surrounding anatomy.
- Motions output by the movement tracking method can be used for either or both of: (1) controlling adaptation of an automatic Multi-Leaf Collimator (MLC), which will follow the motion of the tumour and adapt the treatment field to hit the tumour at its current position 226; or (2) gating the operation of the treatment beam 228.
- the treatment beam can be deactivated and the robotic couch moved to re-align the target with the treatment field, after which the treatment can continue.
- Gating can be automatic or manually performed by a technician in response to an alert issued by the system controller.
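By way of illustration only (not a clinical implementation), the gating decision described above might be expressed as a simple threshold check; the threshold values and function names below are assumptions for the example.

```python
# Illustrative only: hold the beam when the estimated target offset exceeds assumed action levels.
import numpy as np

GATING_THRESHOLD_MM = 3.0    # assumed translational action level
GATING_THRESHOLD_DEG = 5.0   # assumed rotational action level

def beam_should_hold(dof):
    """dof = (tx, ty, tz, alpha, beta, gamma) offsets from the reference, in mm and degrees."""
    return (np.abs(dof[:3]).max() > GATING_THRESHOLD_MM
            or np.abs(dof[3:]).max() > GATING_THRESHOLD_DEG)

# Example: a 4 mm SI drift with small rotations would trigger a hold and couch re-alignment.
print(beam_should_hold(np.array([0.5, 4.0, 0.2, 1.0, 0.5, 0.3])))  # True
```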
- the ground truth 6 DoF motion data were computed in two steps 302.
- the 6DoF motions of the target were calculated using the ICP algorithm (Tehrani, J. N., O'Brien, R. T., Poulsen, P. R. & Keall, P. 2013. Real-time estimation of prostate tumour rotation and translation with a kV imaging system based on an iterative closest point algorithm).
- the referenced position (Mref using the terminology above) is taken to be the first three-dimensional position and rotational orientation that can be determined from a notional image of the target.
- the ground- truth 3D positions of the markers were projected onto the notional imager using equation (2).
- the SAD and SID values were set at 1000 mm and 1800 mm, respectively.
- the gantry started at 180° and rotated counter-clockwise at 6°/s to simulate a full-rotation VMAT treatment (step 303).
- the movement tracking method was then used in step 304 to estimate 6DoF motion using only information from the projected positions of the markers on each image frame, as described above. Tracking began after 200 imaging frames, equivalent to 110° of gantry rotation. After that, the tracked motion was updated, for each new frame, using all the data from the beginning of the treatment. However, when updating the model, only one iteration of optimisation was used, instead of the multiple iterations noted above. The least squares optimisation was started with the previous solution for the correlation vectors A and B. For the initial estimate, it was found that using 6 iterations allowed the solution to converge for all the test trajectories, with the difference in the sum-of-squares error criterion set at 1e-6 mm.
- the least squares solver used the solution from the last time point, i.e. the previous simulated "image", which effectively gives it a "warm start". Thus, the 6 iterations were not necessary and one iteration was sufficient for the solution to converge.
- the error of the movement tracking method was defined as the difference between 6DoF motions estimated with the movement tracking method and the 6DoF ground-truth motion. Analysis was performed of the following factors affecting the accuracy of movement tracking method:
- deformation, estimated by the change in the area of the triangle formed by the 3 markers in 3D in each frame, compared with the reference area (see the sketch after this list);
- absolute magnitude of motion in each DoF: the absolute value of the 6DoF motion in each frame relative to the planned marker position.
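By way of illustration only, the deformation metric just described could be computed along the following lines (an assumption-level sketch, not the study code).

```python
# Illustrative only: deformation assessed as the change in the area of the triangle formed
# by the three markers, relative to the reference (planned) marker positions.
import numpy as np

def triangle_area(markers):
    """Area of the triangle subtended by three 3D marker positions (rows of a 3x3 array)."""
    return 0.5 * np.linalg.norm(np.cross(markers[1] - markers[0], markers[2] - markers[0]))

def deformation(markers_now, markers_ref):
    """Frame-wise change in marker-triangle area compared with the reference area."""
    return triangle_area(markers_now) - triangle_area(markers_ref)
```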
- translational motion is denoted by its axis of motion, e.g., translation motion in LR is denoted as “LR”.
- rotational motion is denoted by an “r” before its axis of rotation, e.g. rotation motion around the SI axis is denoted as “rSI”. This is simply for clarity in figures.
- Figure 5 shows a comparison of 6DoF motion estimated using an illustrative embodiment and the ground truth motion used in the simulation.
- the six plots of figure 5 each represent either translation or rotational motion along or about the labelled axes.
- the solid line indicates the "ground truth" motion and the dotted line indicates the 6D-IDC prediction.
- the means and standard deviations of the differences are summarised in Table 1.
- the mean of error in the 6DoF are under 0.1 mm and 0.1° across 81 motion traces from 19 patients.
- the standard deviation of error for motion estimated with the illustrative embodiment are less than 1 mm for translational motion and less than 1.5° for rotational motion. This result is a pooled analysis across 53736 imaging frames of the 81 liver motion traces from 19 patients.
- the boxplot of the overall error is shown in Figure 6.
- Figure 7 shows the boxplot of the mean of error of the motion tracking method according to an embodiment of the present invention compared with ground-truth 6DoF motion for each of the 81 tested traces.
- the plots of figure 11 show plots of the 6D-IDC framework (dotted lines) compared to the ground truth data set (solid line), around an anomaly. The position of the anomaly on each of the plots is shown by an arrow. Each plot represents translation or rotational motion about a given axis as labelled.
- Table 2 shows the summary statistics of the error of 6DoF motion estimation of an embodiment of the present invention if only one marker projection is available after a learning arc of 110°. Table 2. Summary of error of 6DoF motion estimated with 6D-IDC using projected position of one marker after 110° learning arc.
- Figure 13 shows a comparison of tracking performed with 3 markers vs. 1 marker.
- Figures 13I-A and 13I-B represent estimated motion of a liver tumour in a patient.
- Figure 13I-A shows the outcome vs. ground truth using 3 markers.
- Figure 13I-B represents the estimated motion using only 1 marker.
- Each plot represents translation in, or rotation about, a given axis as labelled on the figure.
- Figure 13II-A represent estimated motion of a liver tumour in a patient using 3 markers.
- Figure 13II-B represents the estimated motion using only 1 marker.
- Figures 13I-A and 13I-B represent a case in which 6D-IDC is as accurate with one marker (Figure 13I-B) as with three markers (Figure 13I-A).
- Figures 13II-A and 13II-B show a case in which 6D-IDC with one marker is less accurate (Figure 13II-B) than with three markers (Figure 13II-A).
- Figures 8 and 9 are scatter plots of the error between motion tracking method according to an embodiment of the present invention in each DoF as a function of the magnitude of deformation, assessed by the variation in the area of the triangle subtended by the markers ( Figure 8) and the absolute value of motion ( Figure 9).
- the p value indicates the Pearson correlation coefficient between each value pair.
- the magnitude of the deformation seen in the ground-truth dataset had little effect on the accuracy of the motion tracking method according to an embodiment of the present invention ( Figure 8).
- the relationship between the magnitude of error and the change in area in each frame was weak in all 6 DoFs.
- Figure 10 shows scatter plots of the maximum of error and the linear correlation between each DoF motion and the translational SI motion for all tested traces.
- a strong correlation is found in the AP translation motion and the rotation around the LR axis (rLR), with Pearson's correlation p values of -0.6 for AP and -0.5 for rLR.
- a negative Pearson's correlation indicates a negatively correlated relationship.
- for the other degrees of freedom, no correlation or only a very weak correlation can be observed. From Figure 10, it can also be observed that most of the outliers occurred where the correlation with SI was weak (< 0.2), especially for translational motion in AP and the rLR and rAP rotational motions.
- a motion tracking method is thus provided, suitable for example to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager that is orthogonally mounted on the gantry of a standard linac.
- the method utilises the interdimensional correlation of the translation in the SI direction with the other 5 degrees of freedom of motion as a priori knowledge.
- Simulations demonstrate that the simulated embodiment performed with sub-mm and sub-degree accuracy on the tested dataset.
- the accuracy (mean) and precision (standard deviation) of the exemplary method in estimating translation motions of the tested dataset were sub-mm.
- embodiments of the tracking algorithm perform better (with both higher accuracy and precision), when the projections of all three markers are available.
- the algorithm still gives sub-mm and sub-degree mean error in all 6 DoFs. This is particularly advantageous for real-time applications as all three markers may not be visible or reliably detected on a projection at all times.
- the algorithm can also be used with MV tracking provided the initial correlation model is built during initial CBCT imaging. The standard deviation of error for the 3D translations is under 1 mm, while the standard deviation of error for the 3D rotations is under 2°, using only one marker projection.
- the algorithm can be optimised by updating the correlation model on the occasions when all three marker projections are available, to improve its performance.
- the preferred embodiment of the motion tracking method according to the present invention employs solving the correlation matrix in a least squares sense.
- This formalism of solving 6DoF motion from the target's projection on an imager is scalable.
- Other embodiments are capable of solving for the 6DoF motion of a target comprised of a larger number of points, such as situations with four or more markers, or the segmented tumour on a projection image.
- Embodiments of the present invention may have the advantageous property that utilising equation (3), means that the preferred algorithm is able to compute the rotation and translation of the target directly, without the need to solve for the 3D coordinates of each point separately.
- the 6D-IDC algorithm can be used to estimate 6DoF motion when only one marker is available provided the parameters of the correlation matrix are already computed during a learning arc where three or more markers are available.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Pathology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Physiology (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Dentistry (AREA)
- Artificial Intelligence (AREA)
- Geometry (AREA)
- High Energy & Nuclear Physics (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Radiation-Therapy Devices (AREA)
Abstract
The present disclosure relates to systems and methods for use in relation to guided radiation therapy systems. In one form there is disclosed a system and method for use in motion tracking of a target in a guided radiation therapy system. There is described a method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis. The method can comprise: identifying the target in said projection; determining the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on said projections.
Description
Guided Radiation Therapy
Field of the invention
The present disclosure relates to systems and methods for use in relation to guided radiation therapy systems. In one form there is disclosed a system and method for use in motion tracking of a target in a guided radiation therapy system.
Background of the invention
Radiation therapy is a treatment modality used to treat localised tumours. It generally involves delivering high-energy megavoltage (MV), conformal x-ray beams to the target (tumour) using a medical linear accelerator. The radiation interacts with the tissues to create double-strand DNA breaks that kill tumour cells. Radiation therapy requires high precision to deliver the dose to the tumour and spare healthy tissue, particularly that of organs surrounding the tumour. Each treatment is tailored on a patient-by-patient basis.
In current radiation therapy, image guided radiation therapy (IGRT) is routinely applied at the start of treatment to align the target with its planned position. However, tumours in the thorax, abdomen and pelvis are not static during treatment. Patients undergoing radiotherapy treatment are subject to movement both in the setup on the treatment bed and by way of organ and tumour motion during treatment delivery.
Increasing evidence suggests that such intrafractional tumour motion corrections should be applied for both tumour translations and tumour rotations. Retrospective post-treatment calculations of tumour rotations have shown that the rotations can be significant for both prostate and lung tumours. Tumour motion can occur in six degrees of freedom (6DoF); that is, rotational and translational movements can occur about and along three axes. Tumour motion during treatment can cause large radiation doses to be delivered to critical structures and healthy tissue, leading to suboptimal dosimetry (dose coverage outside the tumour). In this regard, dosimetrically, uncorrected prostate rotations of 15° can result in a 12% underdose to the tumour.
Motion management in radiation therapy has thus become vital in delivering accurate dose coverage and limiting toxicities to healthy tissue. With the increasing use of stereotactic body radiation therapy (SBRT), which delivers high doses in a few fractions within a small field size (small x-ray beam size), motion management becomes extremely significant to allow conformal high doses to be delivered to the target site whilst sparing healthy tissue.
A number of different intrafraction real-time guidance methods have been used during prostate cancer treatments. Systems such as CyberKnife (Accuray, Sunnyvale, CA) and the real-time tracking radiotherapy (RTRT) system use real-time kilovoltage (kV) images from two (CyberKnife) or four (RTRT system) orthogonal room-mounted imagers to track the prostate position based on segmented positions of implanted fiducial markers. Calypso (Varian, Palo Alto, CA) and RayPilot (Micropos, Gothenburg, Sweden) utilise implanted electromagnetic transponders, transmitting positional signals to an external receiver. Emerging real-time guidance technologies include ultrasonography and integrated magnetic resonance imaging (MRI)-radiation therapy systems. However, common to all these methods is the drawback that they need additional dedicated, and typically expensive, equipment to perform the real-time guidance.
Ideally, real-time image guidance would be performed using a standard linear accelerator (linac) without relying on additional hardware. Since most modern linear accelerators have a kV x-ray imager system mounted orthogonally to the treatment beam, a number of algorithms have been proposed for the purpose of estimating the target's position in 3D based on its location on a 2D x-ray image acquired using the linear accelerator's gantry-mounted kilovoltage (kV) x-ray imager system. However, the target position on the kV imager only contains 2D information, making accurate position determination complex.
One new approach to monitoring patient motion has been developed, named Kilovoltage Intrafraction Monitoring (KIM). KIM is a real-time image guidance technique that utilises existing radiotherapy technologies found in cancer care centres (i.e. on-board x-ray imaging). KIM exploits fiducial markers implanted inside the tumour (organ) and reconstructs their location by acquiring multiple images of the target with the on-board kilovoltage (kV) beam of a low-energy x-ray imager, determining any motion in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions. KIM Tracking has also been developed, which dynamically modifies the multi-leaf collimator (MLC) position while delivering the treatment dose, based on the tumour position reconstructed by KIM. KIM-gated radiation therapy is currently used to treat prostate cancer patients at multiple cancer centres and could also be expanded to treat lung cancer patients in the near future. In KIM, tumour motion is monitored in real time while the MV beam is delivering the treatment dose and the kV beam is imaging the tumour target. If significant motion away from the treatment beam occurs, the treatment is paused and the patient is repositioned before the treatment is continued.
One key challenge that exists for systems like KIM is that the acquired images only contain 2-dimensional information, yet a three dimensional positional estimation is needed. Poulsen et al. (Poulsen, P. R., Cho, B., Langen, K., Kupelian, P. & Keall, P. J. 2008b. Three-dimensional prostate position estimation with a single x-ray imager utilizing the spatial probability density. Phys Med Biol, 53, 4331-4353) proposed a Maximum Likelihood Estimation (MLE) algorithm to estimate the target's 3D position assuming a Gaussian distribution, which can be built after a learning arc. This solution has been clinically implemented in the KIM system. Recently, a Bayesian method to estimate the proper distribution of the target was also proposed, which does not assume a Gaussian distribution and hence may be more accurate in estimating the target's respiratory motion, as thoracic tumour motions are complicated and can be asymmetrical as well as displaying hysteresis.
For Cone Beam CT (CBCT) trajectories, reconstruction of the positions of the target in 3D can be estimated using phase-binning and linear interpolation, assuming the target's position in 3D does not change much within each respiratory bin. However, this method requires all projections to be collected before 3D estimation can be performed. Hence, such systems are not suitable for real-time target positional estimation during treatment.
Reference to any prior art in the specification is not an acknowledgment or suggestion that this prior art forms part of the common general knowledge in any jurisdiction or that this prior art could reasonably be expected to be understood, regarded as relevant, and/or combined with other pieces of prior art by a skilled person in the art.
Summary of the invention
In the present description the concept of determining or estimating the motion of a target refers to determining an offset in the position and rotation of the target from a reference position and rotation. In the example below the reference position is labelled Mref. The appropriate reference position and rotation can be determined in a number of ways as will be described below.
In a first aspect there is provided a method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis. The method can comprise:
Identifying the target in said projection;
Determining the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on said projections.
The method preferably includes computing a motion based on a correlation between movement in the y axis and movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions.
In a further aspect there is provided a method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis. The method comprises: identifying the target in said projection; determining a projected position of a reference point within the target in said projection; estimating the position along the y axis of the reference point of the target, and initial coefficient vectors correlating the motion of the target and motion along the y axis; generating a refined estimate of the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on the previously determined y-axis position of the reference point and the previous estimates. The method can further include
(a) Generating a refined estimate of the rotational position of the target in 3 dimensions based on a previous estimate;
(b) Generating a refined estimate translational position of the target in 3 dimensions;
(c) Generating a refined estimate of the position along the y axis of the reference point of the target. In some embodiments the method can further include:
(d) Determining refined coefficient vectors correlating the motion of the target and motion along the y-axis based on the distance between the refined projected position.
Steps (a) and (b) can be repeated on the basis of the determined refined coefficient vectors from step (d). In a further aspect there is provided a method of guided radiation therapy in which at least one treatment beam of radiation is directed at a target, said method including: estimating the motion of the target using a method as claimed in any one of the preceding claims, and directing the treatment beam based on the estimated position.
In yet another aspect there is provided a system for guided radiation therapy including: A radiation source for emitting at least one treatment beam of radiation;
An imaging system arranged to generate a succession of images comprising a two dimensional projection of a field of view and in which the location of the target may be identified;
A control system to direct the at least one treatment beam at the target, wherein said beam control system is configured to: receive images from the imaging system and estimate the motion of the target based on said images using a method as claimed in any one of the preceding claims; and adjust the system to direct the at least one beam at the target.
In some embodiments the control system can adjust the system by controlling one or more of:
at least one geometrical property of said at least one emitted beam; a position of the target relative to the beam; a time of emission of the beam; an angle of emission of the beam relative to the target about the system rotational axis.
As used herein, except where the context requires otherwise, the term "comprise" and variations of the term, such as "comprising", "comprises" and "comprised", are not intended to exclude further additives, components, integers or steps.
Further aspects of the present invention and further embodiments of the aspects described in the preceding paragraphs will become apparent from the following description, given by way of example and with reference to the accompanying drawings.
Brief description of the drawings
Figure 1 illustrates a schematic representation of a system configured to implement an embodiment of the present invention.
Figure 2 is a flowchart of a guided radiation therapy process according to an embodiment of the present invention.
Figure 3 sets out a process for testing the tracking method described herein by simulation.
Figure 4A and Figure 4B illustrate histograms of six degrees of freedom motion in the ground-truth data, across 81 traces from 19 patients and 53736 frames. Figure 4A shows translational motion, and Figure 4B shows rotational motion. Figure 5 plots the six degree of freedom target motion successfully estimated with the motion tracking method according to an embodiment of the present invention compared to the ground truth.
Figures 6 and 7 are boxplots showing:
the distributions of error in 6DoF between the motion tracking method according to an embodiment of the present invention and the ground-truth across 81 liver tumour traces from 19 patients with 53736 image frames, and the distributions of the mean of error in each segment in 6DoF between the motion tracking method according to an embodiment of the present invention and the ground-truth across 81 liver tumour traces from 19 patients.
Figures 8 and 9 are scatter plots of the error between motion tracking method according to an embodiment of the present invention in each DoF as a function of the magnitude of deformation, assessed by the variation in the area of the triangle subtended by the markers (Figure 8) and the absolute value of motion (Figure 9).
Figure 10 shows scatter plots showing the relationship between the maximum error of estimates made by the motion tracking method according to an embodiment of the present invention in each degree of freedom and the absolute linear correlation value of the motion in that degree of freedom and the motion in the SI direction for each trace. The p value indicates the Pearson correlation coefficient between each value pair.
Figure 11 shows an example of data from a patient and illustrates an estimation around a movement anomaly (e.g. a cough).
Figure 12 is a Boxplot showing the distributions of the mean of error in each segment in 6DoF between 6D-IDC estimation with projection of 1 marker versus projections that used 3 markers after a learning arc across 81 liver tumour traces from 19 patients.
Figure 13 illustrates a 6D-IDC estimation with the projection of one marker versus with projections of three markers.
Detailed description of the embodiments
Figure 1 depicts a system for image guided radiation therapy able to implement an embodiment of the inventions described herein. The system 10 includes:
A radiation source 12 for emitting at least one treatment beam of radiation. The radiation source emits the treatment beam 14 along a first beam axis towards the
patient being treated. Typically the radiation source 12 will comprise a linear accelerator emitting megavolt x-rays.
An imaging system 16 arranged to generate a succession of images 18 comprising a two dimensional projection of a field of view and in which the location of the target may be identified. The imaging system 16 includes a second radiation source 20 that emits at least one imaging beam 22 along a second beam axis. The imaging beam 22 will be transmitted in a direction orthogonal to the treatment beam 14. The imaging beam is transmitted through the patient (or at least through the region of the patient) to a radiation detector 24 that is configured to detect radiation transmitted through the target. The spatial intensity of the received radiation is converted to an x-ray image that is a projection of said at least one imaging beam in a plane normal to the direction of its emission. Typically the imaging system will be a kilovolt imaging system built into the linear accelerator.
A support platform 26 (e.g. a bed) on which the subject of the radiation therapy is supported during treatment. The support platform is repositionable relative to the imaging system and radiation source so that the patient can be positioned with the centre of the target (e.g. tumour) located as near as possible to the intersection between the first and second beam axes.
A control system 30 that controls the parameters of operation of the radiotherapy system. Generally speaking the control system 30 is a computer system comprising one or more processors with associated working memory, data storage and other necessary hardware, that operates under control of software instructions to receive input data from one or more of a user and other components of the system (e.g. the imaging system), and to output control signals to control the operation of the radiation therapy system. Amongst other things the control system 30 causes the radiation source 12 to direct its at least one treatment beam at the target. To do this the control system receives images from the imaging system and estimates the motion of the target, then issues a control signal to adjust the system 10 to direct the treatment beam 14 at the target.
As will be appreciated by those skilled in the art, the radiation source 12, imaging system 16 and support platform 26 are common to most conventional image guided radiation therapy systems. Accordingly, in the conventional manner the radiation source 12 and imaging system 16 can be rotatably mounted (on a structure commonly called a gantry) with respect to the patient support platform 26 so that they can rotate about the patient in use. The rotational axis of the gantry motion is thus orthogonal to the directions of the treatment beam and imaging beam (i.e. the first and second beam axes). It enables sequential treatment and imaging of the patient at different angular positions about the gantry axis.
As noted above, the control system 30 processes images received from the imaging system 16 and estimates the motion of the target, then issues a control signal to adjust the system 10 to direct the treatment beam at the target. The adjustment will typically comprise at least one of the following: changing a geometrical property of the treatment beam such as its shape or position, e.g. by adapting a multi-leaf collimator of the linac; changing the time of emission of the beam, e.g. by delaying treatment beam activation to a more suitable time; gating the operation of the beam, e.g. turning off the beam if the estimated motion is greater than certain parameters; changing an angle at which the beam is emitted relative to the target about the system rotational axes. The system 10 can also be adjusted so as to direct the treatment beam at the target by moving the patient support platform 26. Moving the support platform 26 effectively changes the position of the centroid of the target with respect to the position of the treatment beam 14 (and imaging beam).
In use, the general method of operation of the system 10 is as follows. The radiation source and imaging system rotate around the patient during treatment. The imaging system acquires 2D projections of the target. Generally the target will be marked by the placement of fiducial markers within or about the target. The positioning of the markers may be such that the centroid of the markers lies at the centre of the target, but this is not strictly necessary. The control system 30 identifies the position of the markers in each image to estimate the target's three dimensional position and orientation. The control system therefore needs a mechanism for estimating the target's position in 3-dimensions based on its location in a 2D image.
The method employed in the present invention directly estimates the translational motion in three dimensions and rotational motion of the target about three axes from the 2-dimensional
projection (image) of the target. The present inventors have determined that the problem of solving for the target's 3D position in this manner is ill-posed, hence, some a priori knowledge or assumption is required.
The approach used makes use of interdimensional correlation (IDC) of the motion of the target. In particular, the preferred embodiments are based on the understanding that the thoracic and upper abdominal tumour motions in the Anterior-Posterior (AP) and Left-Right (LR) directions are correlated with the tumour motion in the Superior-Inferior (SI) direction, and so are the rotational tumour motions around these axes. Also, since the imaging system rotates around the patient with its axis parallel to the patient's SI axis, the SI position of the tumour is always visible on the kV images. Accordingly the preferred method includes computing the target's motion based on a correlation between its movement in the y axis and its movement along or about the other axes.
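By way of illustration only, the interdimensional correlation assumption can be visualised by regressing each degree of freedom of a motion trace against its SI component. The synthetic trace, coefficient values and names below are made up for the example; the disclosure itself fits the correlation coefficients from 2D projections rather than from a known 3D trace.

```python
# Illustrative only: what "interdimensional correlation" means for a motion trace. Each
# remaining degree of freedom is regressed against the superior-inferior (y) position;
# strong linear fits are the a priori knowledge exploited by the method.
import numpy as np

t = np.linspace(0.0, 30.0, 600)                    # 30 s of synthetic breathing sampled at 20 Hz
y = 8.0 * np.sin(2 * np.pi * t / 4.0) ** 2         # assumed SI motion [mm]
trace = {
    "AP": 0.4 * y + 0.3 * np.random.randn(t.size),     # assumed AP translation correlated with SI
    "LR": 0.1 * y + 0.2 * np.random.randn(t.size),     # assumed LR translation
    "rLR": 0.25 * y + 0.1 * np.random.randn(t.size),   # assumed rotation about the LR axis [deg]
}

for name, values in trace.items():
    slope, offset = np.polyfit(y, values, 1)           # linear model: DoF = A*y + B
    r = np.corrcoef(y, values)[0, 1]
    print(f"{name}: A={slope:.2f}, B={offset:.2f}, Pearson r={r:.2f}")
```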
Before turning to the solution proposed, it is worth considering the problem of projecting 6 degree of freedom motion (i.e. translational motion in three dimensions and rotational motion of the target about three axes) into 2 dimensional images. Throughout the discussion that follows the IEC 61217 coordinate system is used to describe the patient's motion relative to the treatment beam.
Finding a unique solution for the 6DoF motion of a target given only its projection in 2D is an inverse problem. To facilitate the description of this inverse problem, it helps to first describe the forward problem, namely: "If we know the target's 6DoF motion with respect to a reference, what is the projection of the object on a rotating kV imager?"
In the Euclidean coordinate system, the rotational and translational position of a target M with respect to its referenced position Mref is defined as:

$$M = \begin{pmatrix} x \\ y \\ z \end{pmatrix} = R\,M_{ref} + T_r = X' + T_r, \qquad T_r = \begin{pmatrix} T_{r_x} \\ T_{r_y} \\ T_{r_z} \end{pmatrix} \qquad (1)$$

where

$$R = R_x R_y R_z = \begin{pmatrix} \cos\beta\cos\gamma & -\cos\beta\sin\gamma & \sin\beta \\ \cos\alpha\sin\gamma + \sin\alpha\sin\beta\cos\gamma & \cos\alpha\cos\gamma - \sin\alpha\sin\beta\sin\gamma & -\sin\alpha\cos\beta \\ \sin\alpha\sin\gamma - \cos\alpha\sin\beta\cos\gamma & \sin\alpha\cos\gamma + \cos\alpha\sin\beta\sin\gamma & \cos\alpha\cos\beta \end{pmatrix}$$

(α, β, γ) are the angles describing the rotation of the object around the axes x, y and z, and $X' = R\,M_{ref}$. X' is merely a mathematical by-product of the rotation equation used to accurately relate a 3D object with coordinates (x, y, z) to its referenced coordinates. The vector Tr on its own does not provide the translational motion information. The real translational vector is defined as simply the vector difference between the current centroid of the target and its referenced centroid coordinates, that is:

$$T = \bar{M} - \bar{M}_{ref}$$

where $\bar{M}$ and $\bar{M}_{ref}$ denote the centroids of the target at its current and referenced positions respectively.
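By way of non-limiting illustration, equation (1) and the rotation matrix R can be evaluated numerically as in the following sketch (Python/NumPy). The function name and the example angles, reference coordinates and Tr values are assumptions introduced purely for illustration and form no part of the described method.

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Build R = Rx(alpha) @ Ry(beta) @ Rz(gamma) for rotations about the
    IEC x (LR), y (SI) and z (AP) axes, matching the expanded matrix above."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

# A point of the target is related to its reference position by equation (1),
# M = R @ M_ref + Tr; e.g. for an illustrative 1 degree rotation about each axis:
R = rotation_matrix(*np.deg2rad([1.0, 1.0, 1.0]))
M_ref = np.array([5.0, -3.0, 10.0])   # mm, illustrative reference coordinates
Tr = np.array([0.4, 1.2, -0.6])       # mm, illustrative Tr vector
M = R @ M_ref + Tr
```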
Given the point M with the 3D coordinates (x, y, z), we can find its projected position (x_p, y_p) on the imaging system when the treatment beam is at a certain angle θ with the following projection equation:

$$x_p = \frac{SID\,\bigl(x\sin\theta - z\cos\theta\bigr)}{SAD + x\cos\theta + z\sin\theta}, \qquad y_p = \frac{SID\,y}{SAD + x\cos\theta + z\sin\theta} \qquad (2)$$

where SID is the source-to-imager distance and SAD is the source-to-axis distance, i.e. the distance between the kV X-ray source and the radiation isocentre.
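A corresponding sketch of the projection of equation (2) is given below. The sign convention follows the reconstruction above and is an assumption (the exact signs depend on the gantry and imager geometry conventions); the default SAD and SID values are those used in the simulations described later.

```python
import numpy as np

def project(point_3d, theta, SAD=1000.0, SID=1800.0):
    """Project a 3D point (x, y, z) onto the rotating 2D imager at treatment
    angle theta (radians), following the form of equation (2) above."""
    x, y, z = point_3d
    denom = SAD + x * np.cos(theta) + z * np.sin(theta)
    xp = SID * (x * np.sin(theta) - z * np.cos(theta)) / denom
    yp = SID * y / denom
    return np.array([xp, yp])
```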
Thus, from equations (1) and (2), we can determine the position of a target projected onto the receiver 24 of the imaging system 16 if the rotational matrix R and the vector Tr are known.
The reference position Mref in the context of external beam radiotherapy can be determined as the tumour position in the planning CT. However any convenient Mref can be selected.
On the other hand, to solve for the matrix R and the vector Tr given only the projected positions (x_p, y_p) of the target is an ill-posed problem, and the one faced in implementing the preferred embodiment. However, given three or more points in the target M, a solution can be found numerically, provided some a priori knowledge is applied. As noted above, the prior knowledge applied in this embodiment is that there is a linear correlation between the translational and rotational components of the object's motion.
In one embodiment the translational and rotational components of the target's motion are linearly correlated with its SI position, and accordingly the following equation can be assumed:

$$\bigl(T_{r_x}(t),\; T_{r_y}(t),\; T_{r_z}(t),\; \alpha(t),\; \beta(t),\; \gamma(t)\bigr)^{T} = A\,y(t) + B \qquad (3)$$

where y(t) is the target's coordinate in the y (superior-inferior) direction and the vectors A and B contain only scalar coefficients. Equation (3) relates all the components of equation (1) to the target's y-coordinate. However, other models of correlation can be used, such as a state-augmentation model or a 2nd order correlation model, and equation (3) will change accordingly.
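The following minimal sketch illustrates the linear interdimensional correlation model of equation (3); the coefficient values shown are arbitrary assumptions used only to demonstrate the mapping from y(t) to the six motion components.

```python
import numpy as np

def motion_from_y(y, A, B):
    """Equation (3): map the SI coordinate y(t) to the 6DoF motion vector
    (Tr_x, Tr_y, Tr_z, alpha, beta, gamma) via the linear IDC model.
    A and B are length-6 coefficient vectors learned from data."""
    return A * y + B

# Illustrative use: with assumed coefficients, an SI position of y = 4 mm maps to
A = np.array([0.3, 1.0, 0.5, 0.002, 0.001, 0.004])  # illustrative scalars
B = np.array([0.1, 0.0, -0.2, 0.0, 0.0, 0.0])
six_dof = motion_from_y(4.0, A, B)  # (Tr_x, Tr_y, Tr_z, alpha, beta, gamma)
```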
As noted above, the choice of the y-coordinate is advantageous because the treatment beam and imaging system rotate around the y-axis. In fact, from equation (2), we have:

$$y(t) = \frac{y_p(t)\,\bigl(SAD + x(t)\cos\theta(t) + z(t)\sin\theta(t)\bigr)}{SID} \qquad (4)$$

For practical purposes it can be assumed that the quantity $x(t)\cos\theta + z(t)\sin\theta$ in equation (4) is much smaller than SAD. SAD ranges between 600 mm and 2000 mm in most cases and is normally about 1000 mm in clinical linac systems, whereas x(t) and z(t) are typically small. This is because, in reality, the radiation isocentre will be either in or very close to the tumour, i.e. the target. In some cases implanted markers are tracked instead of the tumour, and these are usually implanted in the vicinity of the tumour. Then y(t) can be approximated as:

$$y(t) \approx \frac{y_p(t)\,SAD}{SID} \qquad (4\text{-}1)$$
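As a rough numerical check of this approximation, assuming the nominal geometry used later in the simulations (SAD = 1000 mm, SID = 1800 mm) and an illustrative worst-case lateral term of 10 mm:

```python
SAD, SID = 1000.0, 1800.0  # mm, nominal geometry used in the simulations
yp = 9.0                   # mm, measured SI position on the imager (illustrative)
lateral = 10.0             # mm, assumed x*cos(theta) + z*sin(theta)

y_exact = yp * (SAD + lateral) / SID  # equation (4)
y_approx = yp * SAD / SID             # equation (4-1)
print(y_exact - y_approx)             # ~0.05 mm, i.e. ~1% of the 5 mm SI estimate
```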
As equation (4-1) enables a relatively accurate estimation of y(t), the scalar vectors A and B can hence be estimated using the least squares method. Firstly, from equation (3), the translational and rotational components $(T_{r_x}, T_{r_y}, T_{r_z}, \alpha, \beta, \gamma)$ are estimated from y(t). Consequently, we can compute the estimated 3D coordinates of the points of the target using equation (1), which can then be used to compute the estimated projected coordinates $(\hat{x}_p, \hat{y}_p)$ by applying equation (2).

The cost function C is the Euclidean distance between the estimated projected coordinates $(\hat{x}_p, \hat{y}_p)$ and the actual projected coordinates of the target $(x_p, y_p)$:

$$C = \sum_{f=1}^{F}\sum_{i=1}^{n}\Bigl[\bigl(\hat{x}_{p,i}(f) - x_{p,i}(f)\bigr)^{2} + \bigl(\hat{y}_{p,i}(f) - y_{p,i}(f)\bigr)^{2}\Bigr]$$

where n is the number of points describing the target and F is the number of image frames used to calculate the cost function, which is explained in further detail in the next section. In the simulation, the number of points is n = 3.

Finally, the vectors A and B can be estimated by minimising the cost function C, given y_p(t), in the least squares sense:

$$(A, B) = \underset{A, B}{\arg\min}\; C$$
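A minimal sketch of this least squares estimation is given below (Python/SciPy), re-stating compact forms of the helpers for equations (1) and (2). The marker reference coordinates, array shapes and function names are illustrative assumptions, not a definitive implementation of the method.

```python
import numpy as np
from scipy.optimize import least_squares

# Compact helpers matching the sketches of equations (1) and (2) above.
def _rot(al, be, ga):
    ca, sa, cb, sb, cg, sg = np.cos(al), np.sin(al), np.cos(be), np.sin(be), np.cos(ga), np.sin(ga)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def _proj(p, th, SAD=1000.0, SID=1800.0):
    d = SAD + p[0] * np.cos(th) + p[2] * np.sin(th)
    return np.array([SID * (p[0] * np.sin(th) - p[2] * np.cos(th)) / d, SID * p[1] / d])

# Reference 3D marker positions (n = 3 points), illustrative values in mm.
M_REF = np.array([[5.0, 0.0, 3.0], [-4.0, 8.0, 1.0], [0.0, -6.0, -2.0]])

def residuals(params, y_hat, thetas, observed_xy):
    """For each frame f and point i, the difference between the projected
    position predicted via equations (3), (1) and (2) from the current A, B
    and the observed (segmented) projected position."""
    A, B = params[:6], params[6:]
    res = []
    for y, theta, obs in zip(y_hat, thetas, observed_xy):
        trx, try_, trz, al, be, ga = A * y + B                               # equation (3)
        pts = (_rot(al, be, ga) @ M_REF.T).T + np.array([trx, try_, trz])    # equation (1)
        pred = np.array([_proj(p, theta) for p in pts])                      # equation (2)
        res.append((pred - obs).ravel())
    return np.concatenate(res)

def fit_correlation(y_hat, thetas, observed_xy, A0=None, B0=None):
    """Estimate A and B by minimising the cost C in the least squares sense."""
    x0 = np.concatenate([np.ones(6) if A0 is None else A0,
                         np.ones(6) if B0 is None else B0])
    sol = least_squares(residuals, x0, args=(y_hat, thetas, observed_xy))
    return sol.x[:6], sol.x[6:]
```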
Since equation (4-1) is only an approximation of equation (4), it may be necessary to iteratively refine the solution, as shown in the pseudo-code below.

    for i = 1 : n
        if (i == 1) AND no previous estimate of A, B exists
            - Compute y using equation (4-1).
            - Initialise A and B to unity vectors.
        else
            - Compute (Tr_x, Tr_y, Tr_z, α, β, γ) based on the previous values of A, B.
            - Compute the new rotation matrix from α, β, γ.
            - Compute the new estimated centroid location.
            - Compute the new y of the estimated centroid location using equation (4).
        end
        - Optimise the cost function C with the new value of y to solve for A and B.
    end
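The following sketch renders this pseudo-code in Python, reusing the `_rot`, `fit_correlation` and `M_REF` definitions from the preceding sketch. The variable names, array shapes and default iteration count are assumptions; as discussed below, a single iteration suffices once a previous estimate of A and B is available.

```python
import numpy as np

CENTROID_REF = M_REF.mean(axis=0)  # reference centroid, from the previous sketch

def estimate_6dof(yp_frames, thetas, observed_xy, A=None, B=None,
                  n_iter=6, SAD=1000.0, SID=1800.0):
    """Iteratively refine A, B (and hence the 6DoF motion) for the frames seen
    so far.  yp_frames: measured SI coordinate of the reference point on the
    imager per frame; observed_xy: (F, n, 2) segmented marker projections.
    With a previous A, B available (a "warm start"), n_iter=1 suffices."""
    y_hat = np.asarray(yp_frames) * SAD / SID            # equation (4-1)
    for i in range(n_iter):
        if i == 0 and A is None:
            A, B = np.ones(6), np.ones(6)                # unity initialisation
        else:
            y_new = []
            for y, yp, th in zip(y_hat, yp_frames, thetas):
                trx, try_, trz, al, be, ga = A * y + B   # equation (3)
                c = _rot(al, be, ga) @ CENTROID_REF + np.array([trx, try_, trz])
                y_new.append(yp * (SAD + c[0] * np.cos(th) + c[2] * np.sin(th)) / SID)  # (4)
            y_hat = np.array(y_new)
        A, B = fit_correlation(y_hat, thetas, observed_xy, A, B)  # optimise C
    return A, B, (A[:, None] * y_hat + B[:, None])       # per-frame 6DoF, shape (6, F)
```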
This process can be summarised as follows. Identify the target in said projection (e.g. by segmenting the image to identify the markers). Determine the projected position of a reference point (e.g. the centroid of the target) within the target. Estimate the position along the y axis of the reference point of the target, and initial coefficient vectors correlating the motion of the target with motion along the y axis. This initial estimation will typically depend on whether this is the first image in the sequence of images captured in succession or a later image. In the case of a first image the initial estimation will typically use unit vectors for the initial coefficients.
A refined estimate of the movement of the target in 6DoF can then be generated based on the previously determined y-axis position of the reference point and the previous estimates. Based on this, a refined estimate of the rotational and translational position can then be determined, followed by a refined estimate of the y position of the reference point within the target.
Updated coefficient vectors correlating the motion of the target with motion along the y axis can then be computed based on the distance between the refined projected position and the measured projected position, e.g. by applying a least squares optimisation. These vectors can form the basis of an iterative recalculation of the movement of the target in 6DoF. The number of iterations used can be selected as appropriate. The inventors have found that, on test data, 6 iterations are suitable if the image is the first image and no initial estimate is available as a starting point. However, no repeated iterations are needed if a previous motion determination has been made, e.g. if the image is not the first image in the sequence, in which case the coefficient vectors correlating the motion of the target with motion along the y axis for the initial estimate can be derived from the previous image.
Figure 2 illustrates a method of guided radiation therapy in which the process described above can be used. The methods of guided radiation therapy are similar to those followed by Huang et al. 2015 (Huang, C.-Y., Tehrani, J. N., Ng, J. A., Booth, J. T. & Keall, P. J. 2015. Six Degrees-of-Freedom Prostate and Lung Tumour Motion Measurements Using Kilovoltage Intrafraction Monitoring. Int J Radiat Oncol Biol Phys, 91, 368-375) and Keall et al. 2016 (Keall, P. J., Ng, J. A., Juneja, P., O'Brien, R. T., Huang, C.-Y., Colvill, E., Caillet, V., Simpson, E., Poulsen, P. R., Kneebone, A., Eade, T. & Booth, J. T. 2016. Real-Time 3D Image Guidance Using a Standard LINAC: Measured Motion, Accuracy, and Precision of the First Prospective Clinical Trial of Kilovoltage Intrafraction Monitoring Guided Gating for Prostate Cancer Radiation Therapy. Int J Radiat Oncol Biol Phys, 94, 1015-1021), the contents of which are each incorporated by reference for all purposes with the exception of the use of the motion tracking method described herein.
The process 200 can be divided into two phases, set-up 201 and treatment 202. The set-up phase 201 uses an imaging procedure 204, e.g. Cone Beam CT, before treatment to initialise 206 the parameters for the movement tracking method described above. Target segmentation 208 is used to identify fiducial markers in the target during initialisation. The initialised movement tracking method can then be used to track target motion 210. In some cases 212 patient realignment may be necessary. After initialisation, the method moves to the treatment phase 202. During the treatment phase the treatment beam is activated and the target irradiated, and the movement tracking system updates the tumour's translational and rotational motion 224 in real time using continuous small-field kV imaging 220. As explained above, the positions of the fiducial markers are identified using target segmentation 222. The field of view for the kV imaging during treatment can be reduced to encompass only the tumour and the anticipated motion range plus 50%, to reduce the imaging dose to the surrounding anatomy.
Motion output by the movement tracking method can be used for either or both of: (1) controlling adaptation of an automatic Multi-Leaf Collimator (MLC), which will follow the motion of the tumour and adapt the treatment field to hit the tumour at its current position 226; or (2) gating the operation of the treatment beam 228. In the event that the detected motion of the target exceeds a pre-set threshold, the treatment beam can be deactivated and the robotic couch moved to re-align the target with the treatment field, after which the treatment can continue. Gating can be automatic or manually performed by a technician in response to an alert issued by the system controller.
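By way of non-limiting illustration only, the gating and MLC-adaptation decision described above might be sketched as follows; the threshold values and the `beam`, `mlc` and `couch` interfaces are hypothetical placeholders rather than the API of any particular treatment system.

```python
import numpy as np

# Illustrative thresholds (mm / degrees); clinical values are set per protocol.
TRANSLATION_GATE_MM = 3.0
ROTATION_GATE_DEG = 5.0

def gate_decision(six_dof):
    """Return True if the beam should be held (gated off) because the estimated
    6DoF motion exceeds the pre-set thresholds; rotations assumed in radians."""
    translations = np.abs(six_dof[:3])
    rotations = np.abs(np.rad2deg(six_dof[3:]))
    return translations.max() > TRANSLATION_GATE_MM or rotations.max() > ROTATION_GATE_DEG

def control_step(six_dof, beam, mlc, couch):
    """One control-loop step: either adapt the MLC aperture to the estimated
    target position (226) or gate the beam and re-align the couch (228)."""
    if gate_decision(six_dof):
        beam.hold()                       # gate the treatment beam off
        couch.shift(-six_dof[:3])         # re-align target with the treatment field
        beam.resume()
    else:
        mlc.shift_aperture(six_dof[:3])   # MLC tracking of the current target position
```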
The effectiveness of the technique described herein can be seen in simulations the inventors have performed as set out below.
The simulations involved comparing the predicted motion of real tumours with a "ground truth" motion describing their true motion using the overall methodology outlined in figure 3.
A dataset of 29 patients, each with 3 fiducial markers implanted near the tumour (target) in the liver for image guided radiotherapy, was used in the in silico simulation 301. Each patient was treated with Stereotactic Body Radiation Therapy (SBRT), receiving treatment in 3-6 fractions. In each fraction, 1 to 3 cone beam CT (CBCT) scans (11 fps, 125 kV, 80 mA, 13 ms) were acquired for each patient. The fiducials were segmented in each image. Traces were rejected where the data were not continuous for at least 50 seconds, as such traces do not mimic real-time treatment. If there was more than one segment of continuous data within one fraction, each continuous segment was used independently. Overall, the refined dataset contained 81 traces from 19 patients. The range of 6DoF motions in the ground truth dataset is illustrated in Figure 4. Figure 4A shows translational motion in each direction, and Figure 4B shows rotational motion about the listed axes.
In figure 3, the ground truth 6DoF motion data were computed in two steps 302. First, the 2D to 3D estimation of each marker's position in each image frame was computed using the method of Poulsen et al. (2008b), which has been measured to have sub-mm accuracy. From the imaging frames with all 3 markers successfully segmented, the 6DoF motions of the target were calculated using the ICP algorithm (Tehrani, J. N., O'Brien, R. T., Poulsen, P. R. & Keall, P. 2013. Real-time estimation of prostate tumour rotation and translation with a kV imaging system based on an iterative closest point algorithm. Phys Med Biol, 58, 8517-8533), which computes the 6DoF motions from the individual 3D coordinates of the three markers. The accuracy of this method in estimating rotational motion during radiotherapy was evaluated and quantified by Kim et al. (Kim, J.-H., Nguyen, D. T., Huang, C.-Y., O'Brien, R., Caillet, V., Poulsen, P. R., Booth, J. T. & Keall, P. Quantifying the Accuracy and Precision of Six Degree-Of-Freedom Motion Estimation for Use in Real-Time Tumour Motion Monitoring During Radiotherapy. The 58th Annual Meeting of the American Association of Physicists in Medicine, 2016. Medical Physics, 3858-3859) and found to be accurate within 1°. The positions of the markers at the first frame of imaging in each fraction were used as the reference positions for the 6DoF calculation, as it was the intrafraction 6DoF motion that was of interest.
In this simulation the referenced position (Mref using the terminology above) is taken to be the first three dimensional position and rotational orientation that can be determined from a notional image of the target. In order to test the accuracy of the 6DoF motion estimated by a movement tracking method according to an embodiment of the present invention, for each trace in the ground-truth dataset the ground-truth 3D positions of the markers were projected onto the notional imager using equation (2). The SAD and SID values were set at 1000 mm and 1800 mm, respectively. For each simulation, the gantry started at 180° and rotated counter-clockwise at 6°/s to simulate a full-rotation VMAT treatment 303.
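A minimal sketch of this projection step of the simulation is shown below, assuming the `_proj` helper from the earlier sketch; the trace array layout, frame rate handling and rotation-direction sign are illustrative assumptions consistent with the parameters stated above.

```python
import numpy as np

FPS = 11.0                 # imaging frame rate (frames per second)
GANTRY_SPEED = 6.0         # degrees per second, full-rotation VMAT
SAD, SID = 1000.0, 1800.0  # mm

def simulate_projections(marker_traces):
    """marker_traces: (F, n, 3) ground-truth 3D marker positions per frame.
    Returns the gantry angle per frame and the (F, n, 2) projected positions,
    with the gantry starting at 180 deg; the sweep direction is an assumption."""
    F = marker_traces.shape[0]
    t = np.arange(F) / FPS
    thetas = np.deg2rad(180.0 - GANTRY_SPEED * t)
    projections = np.array([[_proj(p, th, SAD, SID) for p in frame]
                            for frame, th in zip(marker_traces, thetas)])
    return thetas, projections
```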
The movement tracking method according to an embodiment of the present disclosure was then used in step 304 to estimate 6DoF motion using only the information from the projected positions of the markers on each image frame, as described above. Tracking began after 200 imaging frames, equivalent to 110° of gantry rotation. After that, the tracked motion was updated, for each new frame, using all the data from the beginning of the treatment. However, when updating the model, only one iteration of optimisation was used, instead of the multiple iterations noted above, and the least squares optimisation was started from the previous solution for the correlation vectors A and B. For the initial estimate, it was found that using 6 iterations allowed the solution to converge for all the test trajectories, with the difference in the sum-of-squares error criterion set at 1e-6 mm; other numbers of iterations may be useful with other implementations. During the update phase, the least squares solver used the solution from the last time point, i.e. the previous simulated "image", which effectively gives it a "warm start". Thus, the 6 iterations were not necessary and one iteration was sufficient for the solution to converge.
Analysis of the simulation results
The error of the movement tracking method was defined as the difference between the 6DoF motions estimated with the movement tracking method and the 6DoF ground-truth motion. Analysis was performed of the following factors affecting the accuracy of the movement tracking method:
1. Deformation: estimated by the change in the area of the triangle formed by the 3 markers in 3D in each frame, compared with the referenced area.
2. Absolute magnitude of motion in each DoF: the absolute value of the 6DoF motion in each frame relative to the planned marker position.
3. Linear correlation between motion in each DoF and motion in the SI direction: defined by the absolute value of the Pearson linear correlation coefficient (ρ) computed between the motion in each DoF and the motion in SI for each tested trace.
The effect of each of the aforementioned factors on the accuracy was quantified by calculating the Pearson correlation coefficient (ρ) between the absolute value of the error in each DoF and the tested parameter, except for the linear correlation factor. For that factor, the correlation between the maximum error in estimating 6DoF motion and the linear correlation value of each DoF was used instead, because the linear correlation value is a trace-specific value.
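For completeness, this analysis step can be sketched as follows using SciPy's Pearson correlation; the array names and shapes are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def accuracy_correlations(abs_error, deformation, abs_motion):
    """Per-DoF Pearson correlation between |error| and each tested factor.
    abs_error, abs_motion: (F, 6) arrays of per-frame values; deformation:
    (F,) array of the fractional change in marker-triangle area per frame."""
    results = {}
    for d in range(6):
        results[d] = {
            "deformation": pearsonr(abs_error[:, d], deformation)[0],
            "motion_magnitude": pearsonr(abs_error[:, d], abs_motion[:, d])[0],
        }
    return results
```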
In the following description translational motion is denoted by its axis of motion, e.g. translational motion in LR is denoted "LR". Rotational motion is denoted by an "r" before its axis of rotation, e.g. rotational motion around the SI axis is denoted "rSI". This is simply for clarity in the figures.
Figure 5 shows a comparison of 6DoF motion estimated using an illustrative embodiment and the ground truth motion used in the simulation. The six plots of Figure 5 each represent either translational or rotational motion along or about the labelled axes. In each plot the solid line indicates the "ground truth" motion and the dotted line indicates the 6D-IDC prediction. The means and standard deviations of the differences are summarised in Table 1.
Table 1. Summary of error of 6DoF motion estimated with 6D-IDC using projected positions of three markers after 110° learning arc.
The mean errors in the 6DoF are under 0.1 mm and 0.1° across the 81 motion traces from 19 patients. The standard deviations of error for motion estimated with the illustrative embodiment are less than 1 mm for translational motion and less than 1.5° for rotational motion. This result is a pooled analysis across 53736 imaging frames of the 81 liver motion traces from 19 patients. The boxplot of the overall error is shown in Figure 6. Figure 7 shows the boxplot of the mean error of the motion tracking method according to an embodiment of the present invention compared with the ground-truth 6DoF motion for each of the 81 tested traces. From Figure 6 it can be observed that, even though there are outliers of error up to 20°, in 95% of cases the mean error of the motion tracking method for each trace is within 1 mm and 1°. A typical example of an outlier is shown in Figure 11. Some such outliers were caused by sudden motion, such as coughing, which alters the linear relationship between the motion in SI and the motions in the other DoFs.
The plots of Figure 11 show the 6D-IDC framework (dotted lines) compared to the ground truth data set (solid lines) around an anomaly. The position of the anomaly on each of the plots is shown by an arrow. Each plot represents translational or rotational motion along or about a given axis, as labelled.
Table 2 shows the summary statistics of the error of 6DoF motion estimation of an embodiment of the present invention if only one marker projection is available after a learning arc of 110°.
Table 2. Summary of error of 6DoF motion estimated with 6D-IDC using projected position of one marker after 110° learning arc.
In all estimations, the mean and standard deviation of the errors are higher when only one marker projection is used for estimation, compared with when all three marker projections are used, as shown in Figure 12. In the boxplot the whiskers contain 99.9% of the data. Extreme outliers are not shown, to highlight the difference between the two distributions. Figure 13 shows a comparison of tracking performed with 3 markers vs. 1 marker. Figures 13I-A and 13I-B represent the estimated motion of a liver tumour in a patient. Figure 13I-A shows the outcome vs. ground truth using 3 markers. Figure 13I-B represents the estimated motion using only 1 marker. Each plot represents translation in, or rotation about, a given axis as labelled on the figure.
Similarly, Figure 13II-A represents the estimated motion of a liver tumour in a patient using 3 markers and Figure 13II-B represents the estimated motion using only 1 marker. As can be seen, Figures 13I-A and 13I-B represent a case in which 6D-IDC is as accurate with one marker (Figure 13I-B) as with three markers (Figure 13I-A), and Figures 13II-A and 13II-B show a case in which 6D-IDC with one marker is less accurate (Figure 13II-B) than with three markers (Figure 13II-A).
Figures 8 and 9 are scatter plots of the error of the motion tracking method according to an embodiment of the present invention in each DoF as a function of the magnitude of deformation, assessed by the variation in the area of the triangle subtended by the markers (Figure 8), and of the absolute value of motion (Figure 9). The ρ value indicates the Pearson correlation coefficient between each value pair. As can be seen in Figures 8 and 9, the magnitude of the deformation seen in the ground-truth dataset had little effect on the accuracy of the motion tracking method (Figure 8): the relationship between the magnitude of error and the change in area in each frame was weak in all 6 DoFs. The computed Pearson correlation coefficients show that the error in the translational SI direction has the highest correlation with the change in area (ρ = 0.41), followed by the error in the translational AP direction (ρ = 0.28) and the error in the rotation around the LR axis (ρ = 0.22).
Figure 10 shows scatter plots of the maximum error versus the linear correlation between each DoF motion and the translational SI motion for all tested traces. A strong correlation is found for the AP translational motion and the rotation around the LR axis (rLR), with Pearson correlation ρ values of -0.6 for AP and -0.5 for rLR. A negative Pearson correlation indicates a negatively correlated relationship. However, in all other DoF motions, no correlation or only a very weak correlation can be observed. From Figure 10 it can also be observed that most of the outliers occurred where the correlation with SI was weak (< 0.2), especially for translational motion in AP and for rLR and rAP rotational motion.
The foregoing has described embodiments of a motion tracking method according to the present invention, e.g. suitable for directly estimating real-time 6DoF target motion from segmented marker positions on a 2D imager that is orthogonally mounted on the gantry of a standard linac. The method utilises the interdimensional correlation of the translation in the SI direction with the other 5 degrees of freedom of motion as a priori knowledge. Simulations demonstrate that the simulated embodiment performed with sub-mm and sub-degree accuracy on the tested dataset. The accuracy (mean) and precision (standard deviation) of the exemplary method in estimating the translational motion of the tested dataset were sub-mm.
As can be seen from the foregoing description, embodiments of the tracking algorithm perform better (with both higher accuracy and precision) when the projections of all three markers are available. However, with only one marker projection available after the learning arc, the algorithm still gives sub-mm and sub-degree mean error in all 6 DoFs. This is particularly advantageous for real-time applications, as all three markers may not be visible or reliably detected on a projection at all times. Furthermore, the algorithm can also be used with MV tracking provided the initial correlation model is built during initial CBCT imaging. The standard deviations of error for the 3D translations are under 1 mm, while the standard deviations of error for the 3D rotations are under 2°, using only one marker projection. This compares with a maximum standard deviation of error of 0.52 mm for translation and 1.3° for rotation when 3 markers are available. This is because the correlation model does not get updated when only one marker projection is available, whereas it is constantly updated when all three markers are available. In cases where all three markers are occasionally available while only one marker projection is available most of the time, the algorithm can be optimised by updating the correlation model on the occasions when all three marker projections are available, to improve its performance.
The preferred embodiment of the motion tracking method according to the present invention employs solving the correlation matrix in a least squares sense. This formalism of solving for 6DoF motion from the target's projection on an imager is scalable. In the simulation, three markers were used. Three is the lowest number of points describing the target that allows the algorithm to uniquely determine the rotation and translation of the object. Other embodiments are capable of solving for the 6DoF motion of a target comprised of a larger number of points, such as situations with four or more markers, or the segmented tumour on a projection image. Embodiments of the present invention may have the advantageous property that, by utilising equation (3), the preferred algorithm is able to compute the rotation and translation of the target directly, without the need to solve for the 3D coordinates of each point separately.
Furthermore, with equation (3), the 6D-IDC algorithm can be used to estimate 6DoF motion when only one marker is available provided the parameters of the correlation matrix are already computed during a learning arc where three or more markers are available.
The simulated results were from a real-time scenario where a motion tracking method according to an embodiment of the present invention estimated the 6DoF motion on each new incoming image, after a learning arc. A learning arc of 110° was used in the exemplary embodiment of our simulation, which is less than the learning arc used in clinical applications.
It will be understood that the invention disclosed and defined in this specification extends to all alternative combinations of two or more of the individual features mentioned or evident from the text or drawings. All of these different combinations constitute various alternative aspects of the invention.
Claims
1. A method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis, the method comprising: Identifying the target in said projection;
Determining the projected position of a reference point within the target in said projection;
Estimating the position along the y axis of the reference point of the target, and initial coefficient vectors correlating the motion of the target and motion along the y axis;
Generating a refined estimate of the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on the previously determined y-axis position of the reference point and the previous estimates.
2. The method of claim 1 wherein the method further includes:
(a) Generating a refined estimate of the rotational position of the target in 3 dimensions based on a previous estimate; (b) Generating a refined estimate of the translational position of the target in 3 dimensions;
(c) Generating a refined estimate of the position along the y axis of the reference point of the target.
3. The method of claim 2 wherein the method further includes:
(d) Determining refined coefficient vectors correlating the motion of the target and motion along the y-axis based on the distance between the refined projected position.
4. The method of claim 3 which further includes repeating at least steps (a) and (b) on the basis of the determined refined coefficient vectors from step (d).
5. The method as claimed in any one of the preceding claims wherein the method includes capturing said 2-dimensional projection with an imager separated from a radiation source by a source to imager distance (SID).
6. A method for estimating the motion of a target from a 2-dimensional projection of the target, said 2-dimensional projection being captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis, the method comprising:
Identifying the target in said projection;
Determining the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions based on said projections.
7. A method as claimed in claim 6 wherein the method includes computing a motion based on a correlation between movement in the y axis and movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions.
8. A method as claimed in claim 7 wherein said method is also performed in accordance with any one of claims 1 to 5.
9. A method as claimed in any one of the preceding claims wherein the method is successively performed on a succession of 2D projections captured in a plane parallel to a first axis (y axis) at a rotational angle θ about the first axis, at a succession of different rotational angles θ.
10. The method claimed in any one of the preceding claims wherein identifying the target in said projection includes:
Identifying one or more markers positioned with respect to the target to facilitate identification of the target.
11. The method of claim 10 wherein at least three markers are identified and the position of the reference point is at the centroid of the at least three markers.
12. The method of any one of the preceding claims wherein the 2-dimensional projection of the target is one of a plurality of projections of said target taken in succession and wherein: the initial coefficient vectors correlating the motion of the target and motion along the y axis are determined based on a refined estimate of the translational movement of the target in 3 dimensions and rotational movement of the target in 3 dimensions, previously determined on the basis of at least one earlier 2-dimensional projection in the succession.
13. A method of guided radiation therapy in which at least one beam of radiation is directed at a target, said method including: estimating the motion of the target using a method as claimed in any one of the preceding claims, and directing the beam based on the estimated position.
14. The method of claim 13 which further includes, tracking the target by successively performing a method of estimating the motion of the target using a method as claimed in any one of the preceding claims, and directing the beam at the target based on said tracking.
15. The method of claim 13 wherein directing the beam based on the estimated position includes adjusting or setting one or more of the following system parameters: at least one geometrical property of said at least one emitted beam; a position of the target relative to the beam; a time of emission of the beam; an angle of emission of the beam relative to the target about the system rotational axis.
16. A system for guided radiation therapy including: A radiation source for emitting at least one treatment beam of radiation;
An imaging system arranged to generate a succession of images comprising a two dimensional projection of a field of view and in which the location of the target may be identified;
A control system to direct the at least one treatment beam at the target, wherein said control system is configured to:
receive images from the imaging system and estimate the motion of the target based on said images using a method as claimed in any one of the preceding claims; and adjust the system to direct the at least one beam at the target.
17. The system as claimed in claim 16 wherein the radiation source is configured to direct a treatment beam along a first beam axis, and the imaging system includes a second radiation source configured to emit at least one imaging beam along a second beam axis that is orthogonal to the first beam axis, and a radiation detector configured to detect radiation transmitted through the target to generate a projection of said at least one imaging beam in a plane normal to the direction of emission of the at least one imaging beam.
18. The system as claimed in claim 17 wherein the radiation source and imaging system are rotatable about a system rotational axis that is orthogonal to the first and second beam axes, to enable sequential treatment and imaging of the patient at different angular positions about the system rotational axis.
19. The system of either of claims 17 or 18 which further includes a support platform on which a subject of radiation therapy is supported during treatment, at a location such that the centroid of the target is substantially aligned with the intersection between the system rotational axis, and the first and second beam axes.
20. The system of any one of claims 16 to 19 wherein the control system controls one or more of: at least one geometrical property of said at least one emitted beam; a position of the target relative to the beam; a time of emission of the beam; an angle of emission of the beam relative to the target about the system rotational axis.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762469389P | 2017-03-09 | 2017-03-09 | |
US62/469,389 | 2017-03-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018161123A1 true WO2018161123A1 (en) | 2018-09-13 |
Family
ID=63447122
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2018/050212 WO2018161123A1 (en) | 2017-03-09 | 2018-03-09 | Guided radiation therapy |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018161123A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060074292A1 (en) * | 2004-09-30 | 2006-04-06 | Accuray, Inc. | Dynamic tracking of moving targets |
US20100172469A1 (en) * | 2008-09-05 | 2010-07-08 | Per Rugaard Poulsen | Method to estimate position, motion and trajectory of a target with a single x-ray imager |
US20110235860A1 (en) * | 2010-02-16 | 2011-09-29 | Keall Paul J | Method to estimate 3D abdominal and thoracic tumor position to submillimeter accuracy using sequential x-ray imaging and respiratory monitoring |
US20140241497A1 (en) * | 2013-02-27 | 2014-08-28 | Paul J. Keall | Method to estimate real-time rotation and translation of a target with a single x-ray imager |
WO2015134521A1 (en) * | 2014-03-03 | 2015-09-11 | Varian Medical Systems, Inc. | Systems and methods for patient position monitoring |
Non-Patent Citations (1)
Title |
---|
NGUYEN, D. T. ET AL.: "An interdimensional correlation framework for real-time estimation of six degree of freedom target motion using a single x-ray imager during radiotherapy", PHYSICS IN MEDICINE & BIOLOGY, vol. 63, no. 1, 2018, pages 1 - 15, XP020322771 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11040221B2 (en) | 2019-08-13 | 2021-06-22 | Elekta Ltd. | Adaptive radiation therapy using composite imaging slices |
US11103729B2 (en) | 2019-08-13 | 2021-08-31 | Elekta ltd | Automatic gating with an MR linac |
US11602646B2 (en) | 2019-08-13 | 2023-03-14 | Elekta, LTD | Automatic gating with an MR linac |
WO2022097013A1 (en) * | 2020-11-05 | 2022-05-12 | Seetreat Pty Ltd | A kalman filter framework to estimate 3d intrafraction motion from 2d projection |
EP4240236A4 (en) * | 2020-11-05 | 2024-08-07 | Seetreat Pty Ltd | KALMAN FILTER FRAMEWORK FOR ESTIMATING INTRAFRACTION MOTION IN 3D FROM 2D PROJECTION |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18763665; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18763665; Country of ref document: EP; Kind code of ref document: A1 |