
CN112233122B - Object extraction and measurement method and device in ultrasonic image - Google Patents

Object extraction and measurement method and device in ultrasonic image

Info

Publication number
CN112233122B
Authority
CN
China
Prior art keywords
image
target object
end point
feature
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910574501.3A
Other languages
Chinese (zh)
Other versions
CN112233122A (en)
Inventor
张凤姝
凌锋
Current Assignee
Edan Instruments Inc
Original Assignee
Edan Instruments Inc
Priority date
Filing date
Publication date
Application filed by Edan Instruments Inc filed Critical Edan Instruments Inc
Priority to CN201910574501.3A
Publication of CN112233122A
Application granted
Publication of CN112233122B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30044Fetus; Embryo

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract


The present invention relates to the field of ultrasonic image processing, and specifically provides an object extraction and measurement method and device for ultrasonic images. The method for extracting an object in an ultrasonic image comprises: obtaining an ultrasonic image containing a target object; performing image segmentation on the ultrasonic image to obtain a segmented image containing multiple feature areas; connecting, according to the direction information of each feature area, those feature areas on the segmented image that meet a preset direction condition, so as to obtain a target image containing the complete target object; and extracting the target object from the target image. This method can automatically extract a complete and relatively clear target object, so that subsequent measurement of the target object is more accurate.

Description

Method and device for extracting and measuring object in ultrasonic image
Technical Field
The invention relates to the field of ultrasonic image processing, in particular to an ultrasonic image object extraction and measurement method and device.
Background
Ultrasound imaging is an important means of medical diagnosis, especially in hospital emergency examinations, and has been widely used in the medical field owing to its advantages of real-time imaging, low cost and non-invasiveness. However, factors such as signal attenuation, uneven gray-scale distribution, artifacts and speckle noise all lower the signal-to-noise ratio of the ultrasound image, so that quantitative analysis of ultrasound images presents many difficulties, for example in the measurement of the length of the fetal humerus or femur.
The fetal bone system is a conventional item of prenatal ultrasonic examination, and the length measurement of the fetal humerus or femur has important clinical value for screening malformations such as congenital dysplasia, limb shortness, disproportionate bone development caused by some chromosome abnormalities, and the like, and is also an indispensable parameter for estimating fetal weight, fetal age, and the like. At present, the fetal humerus or femur measurement in clinical diagnosis is mainly performed manually by an ultrasonic doctor operating a track ball, but the problems of a large amount of speckle noise, artifact interference, bone edge blurring and the like exist on an ultrasonic image, so that random errors generated in the manual measurement, visual errors of a clinician and the like can influence the accuracy of a measurement result, and the repeated operation can additionally increase the time cost.
Therefore, automatic measurement of a target object is important in the analysis of ultrasound images. However, when a traditional image recognition method is used to segment an ultrasound image, the gray-level distribution of the target object is not completely uniform, so the same target object is split into two or more areas after segmentation. This makes subsequent screening and calculation difficult, and the process of extracting the target from the ultrasound image therefore becomes complex and hard to realize.
Disclosure of Invention
The invention provides an object extraction method in an ultrasonic image for accurately extracting a target object on the ultrasonic image, which aims to solve the technical problems that the existing image identification method is complex and difficult to realize in the process of extracting the target on the ultrasonic image.
Meanwhile, in order to solve the technical problem that the accuracy of manually measuring the target object on the ultrasonic image is low in the prior art, the invention provides an object measuring method in the ultrasonic image, which is capable of automatically measuring and has more accurate measuring results.
In a first aspect, the present invention provides a method for extracting an object in an ultrasound image, including:
Acquiring an ultrasonic image containing a target object;
Image segmentation is carried out on the ultrasonic image to obtain a segmented image containing a plurality of characteristic areas, and at least one characteristic area in the characteristic areas corresponds to the target object;
connecting the characteristic areas meeting preset direction conditions on the segmented image according to the direction information of the characteristic areas to obtain a target image containing the complete target object;
and extracting the target object from the target image.
In some embodiments, the image segmentation of the ultrasound image to obtain a segmented image comprising a plurality of feature regions comprises:
and carrying out binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas.
In some embodiments, the connecting the feature region on the segmented image according to the direction information of the feature region, where the feature region meets a preset direction condition, includes:
Acquiring an endpoint of the characteristic region;
Determining whether other endpoints are included in a preset area pointed by one endpoint;
When the preset area pointed by the end point comprises other end points, extracting another end point which is not in the same characteristic area as the end point from the other end points, wherein the difference value between the direction value of the other end point in the ultrasonic image direction field and the direction value of the end point in the ultrasonic image direction field is in a preset range;
connecting the one end point and the other end point.
In some embodiments, the connecting the one endpoint and the other endpoint comprises:
and filling pixels between the one end point and the other end point to obtain a connection region.
In some embodiments, the obtaining the end point of the feature region includes:
refining a plurality of characteristic areas on the binary image to obtain end points of the refined characteristic areas;
after said connecting said one end point and said another end point, further comprising:
and performing expansion processing on the connection region, and overlapping the expanded connection region on the binary image.
In some embodiments, the image segmentation of the ultrasound image comprises:
And filtering the ultrasonic image, and performing image segmentation on the filtered ultrasonic image.
In some embodiments, between the image segmentation of the ultrasound image and the connecting of the feature regions on the segmented image according to the direction information of the feature regions further comprises:
and screening the segmented image based on the characteristics of the target object.
In some embodiments, the target object comprises a fetal humerus and/or femur.
In a second aspect, the present invention provides a method for measuring an object in an ultrasound image, including:
Acquiring a target object on an ultrasonic image, wherein the target object is obtained by adopting the method for extracting the object in the ultrasonic image;
and measuring parameters of the target object.
In some embodiments, the measuring the parameter of the target object includes:
performing straight line fitting on the target object;
the length of a line segment between two intersection points of the straight line and the target object is calculated.
In a third aspect, the present invention provides an object extraction apparatus in an ultrasound image, comprising:
An image acquisition module for acquiring an ultrasound image including a target object;
the image segmentation module is used for carrying out image segmentation on the ultrasonic image to obtain a segmented image containing a plurality of characteristic areas, and at least one characteristic area in the plurality of characteristic areas corresponds to the target object;
a connection module for connecting the characteristic regions on the segmented image according to the direction information of the characteristic regions to obtain a target image containing the complete target object, and
And the extraction module is used for extracting the target object from the target image.
In a fourth aspect, the present invention provides an object measurement apparatus in an ultrasound image, comprising:
an acquisition module for acquiring a target object on an ultrasonic image, the target object being obtained by the ultrasonic image object extraction method, and
And the measuring module is used for measuring the parameters of the target object.
In a fifth aspect, the present invention provides a medical device comprising:
Processor, and
And the memory is in communication connection with the processor and stores computer readable instructions capable of being executed by the processor, and when the computer readable instructions are executed, the processor executes the method for extracting the object in the ultrasonic image.
The method for extracting the object in the ultrasonic image comprises the steps of obtaining an ultrasonic image containing the target object and carrying out image segmentation on the ultrasonic image to obtain a segmented image containing a plurality of characteristic areas, wherein the characteristic areas comprise fracture areas corresponding to the target object as well as other interference areas, and the fracture areas cause difficulty in subsequent screening and calculation. The method then connects, according to the direction information of each characteristic area, the characteristic areas satisfying the preset direction condition, so that the fractured areas belonging to the same target object are rejoined into a complete target object, which is extracted from the resulting target image. The extracted target object is complete and relatively clear, which makes subsequent measurement more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an ultrasound image in accordance with some embodiments of the present invention;
FIG. 2 is a schematic diagram of an ultrasound image object extraction method in accordance with one embodiment of the invention;
FIG. 3 is a schematic view of an ultrasound fetal humerus/femur image binarization process;
FIG. 4 is a schematic diagram of connecting a number of feature regions in accordance with some embodiments of the invention;
FIG. 5 is a schematic diagram of an ultrasound image object measurement method according to some embodiments of the invention;
FIG. 6 is a schematic diagram of measuring a target object according to one embodiment of the invention;
FIG. 7 is a schematic diagram of an ultrasound image object extraction apparatus according to some embodiments of the invention;
FIG. 8 is a schematic diagram of an ultrasound image object measurement apparatus according to some embodiments of the present invention;
FIG. 9 is a schematic diagram of a computer system suitable for use in implementing the method or processor of embodiments of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made more apparent and fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the invention are shown. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention. In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The method for extracting the object in the ultrasonic image can be used for extracting the target object on ultrasonic imaging in medical diagnosis. It should be noted that, there are a lot of problems such as uneven gray scale distribution, artifacts, and speckle noise on the ultrasonic image, so the signal-to-noise ratio of the ultrasonic image is low, and thus the operator has difficulty in manually measuring the target object of the ultrasonic image, resulting in inaccurate measurement. Meanwhile, due to the problems of the ultrasonic image, a large number of non-target areas are generated when the ultrasonic image is segmented by adopting the existing image recognition method, and the difficulty is brought to target extraction.
More importantly, segmenting the ultrasound image by conventional image recognition produces segmentation breaks of the target object on the segmented image. The reason for the breaks is that the gray-level distribution of the target object in the ultrasound image is not completely uniform; the gray level may be low at certain positions of the same target object, and image recognition then easily classifies such positions as belonging to the background area, so that the same target object is divided into two or even more sub-areas after segmentation. In subsequent calculation, these fracture sub-areas in principle belong to the target object, so they increase the complexity and reduce the accuracy of the calculation; moreover, because their integrity is broken, the target object cannot be correctly selected in subsequent screening. For these reasons, target extraction from ultrasound images by existing image recognition methods is difficult to realize. The method for extracting the object in the ultrasonic image of the present invention connects the fracture sub-areas generated after segmentation, so that target extraction is simpler and more accurate, and subsequent measurement and calculation are facilitated.
An object extraction method in an ultrasound image in some embodiments of the invention is shown in fig. 1. As shown in fig. 1, in some embodiments, the method for extracting an object in an ultrasound image includes:
s1, acquiring an ultrasonic image containing a target object.
S2, image segmentation is carried out on the ultrasonic image, and a segmented image containing a plurality of characteristic areas is obtained.
And S3, connecting a plurality of characteristic areas meeting the preset direction conditions on the segmented image according to the direction information of each characteristic area to obtain a target image containing the complete target object.
S4, extracting a target object from the target image.
Specifically, in step S1, an ultrasound image containing a complete target object is acquired. The target object may be any object having a bright characteristic, such as a bone region; the invention is not limited in this respect. The image should contain the complete target object so that the object can ultimately be extracted in its entirety.
In step S2, image segmentation is performed on the ultrasound image to obtain a segmented image containing a plurality of feature regions. A feature region may correspond to the target object, may correspond to a non-target object, or may be one of several sub-regions formed when such a region breaks during segmentation. For example, in an ultrasound bone image, the same bone region may break into two or more feature regions after segmentation because of gray-scale non-uniformity, while a bright non-bone region on the ultrasound image may likewise break into multiple feature regions.
In step S3, based on the segmented image, the feature regions are connected according to the direction information of each feature region in the direction field of the image. The direction information of a feature region refers to its directionality in the direction field of the ultrasound image; for example, the main direction of each feature region may be extracted, or the direction at a point (for example, an endpoint) of each feature region may be used. Feature regions satisfying a preset direction condition are connected; the preset direction condition may be, for example, that the direction difference between two feature regions lies within a threshold range. The result is a set of connected feature regions, which may comprise one, two or more of the original regions, as long as their direction information satisfies the preset direction condition.
For example, taking the above ultrasonic bone image, during connection it is determined from the direction information whether the main directions of the feature regions fall within the threshold range. Feature regions within the threshold range are considered fragments of the same bone that was split into several regions because of locally lower brightness inside the bone; these fractured feature regions are therefore connected, yielding a target image containing the complete bone feature region.
In step S4, a target object is extracted from the connected target image, and, taking the ultrasound bone region as an example, a complete bone feature region is extracted from the target image, thereby obtaining the target object. When the method is used for extracting the ultrasonic image target, the characteristic areas of the target objects broken on the split image are connected, and the complete target objects can be extracted, so that the target objects can be identified in the ultrasonic image more completely and clearly, and various parameters can be measured more accurately.
In fig. 2, a method for extracting an object from an ultrasound image according to some embodiments of the present invention is shown, and in an exemplary embodiment, an ultrasound image is taken as an example of an ultrasound fetal humerus/femur image, and it should be noted that the method provided by the present invention is not limited to extracting a fetal humerus/femur, and any other suitable object may use the present invention, such as an ultrasound image of a bone at another location, and so on.
As shown in fig. 2, in some embodiments, the inventive method comprises:
s10, acquiring an ultrasonic image containing the fetal humerus/femur.
In the medical diagnosis of the fetal humerus/femur, the length parameter of the humerus/femur is generally required; therefore, when acquiring the ultrasound image, a relatively complete bone view is used so that the image contains a complete humerus/femur region.
S11, filtering the ultrasonic image.
Because ultrasound images are noisy, in some embodiments a filtering process is performed before segmentation. In one exemplary embodiment, anisotropic filtering is employed. Its main idea overcomes a drawback of the Gaussian blur filter: the relative gray-level contrast of the bone edge region is not destroyed, so the image edges are preserved while the image is smoothed. This makes the method well suited to filtering ultrasound images.
Anisotropic diffusion treats the image as a thermal field and each pixel as a heat flow, and determines the extent of diffusion to the surroundings from the relationship between each neighbour pixel and the current pixel. When a neighbour differs strongly from the current pixel, the current pixel probably lies on a boundary, and diffusion in that direction is stopped so that the boundary is preserved; if the difference is small, diffusion continues.
The main iteration equation of the anisotropic diffusion is:

    I_{t+1}(x, y) = I_t(x, y) + λ · [cN·∇N(I) + cS·∇S(I) + cE·∇E(I) + cW·∇W(I)]

wherein x and y respectively represent the abscissa and the ordinate of the image, I represents the ultrasound image, t is the current iteration number, and λ takes a value in [0, 1/4], where 1/4 corresponds to averaging the diffusion degrees of the four directions. The four divergence terms are the finite differences of the current pixel toward its four neighbours:

    ∇N(I) = I(x, y−1) − I(x, y)    ∇S(I) = I(x, y+1) − I(x, y)
    ∇E(I) = I(x+1, y) − I(x, y)    ∇W(I) = I(x−1, y) − I(x, y)
cN, cS, cE and cW represent the diffusion coefficients in the four directions; at boundaries the diffusion coefficients are relatively small. The diffusion coefficient c may take the exponential form:

    c(‖∇I‖) = exp(−(‖∇I‖ / K)²)

where K controls the sensitivity to edges. Through the anisotropic filtering, the ultrasound image becomes smoother and the noise is removed to some extent. The exponential diffusion coefficient used in this embodiment only serves to explain this step and is not intended to limit the invention; other anisotropic treatments may be adopted by those skilled in the art according to circumstances.
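The filtering step above can be sketched as follows. This is a minimal Python/NumPy illustration of Perona-Malik-style anisotropic diffusion with the exponential coefficient; the function name, the parameter values k and lam, and the iteration count are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, k=30.0, lam=0.25):
    """Perona-Malik anisotropic diffusion with exponential conductance.
    k (edge sensitivity), lam and n_iter are illustrative values."""
    I = img.astype(float)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (N, S, E, W).
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Exponential diffusion coefficient: near 0 across strong edges,
        # near 1 in flat regions, so edges are kept while noise is smoothed.
        cN = np.exp(-(dN / k) ** 2)
        cS = np.exp(-(dS / k) ** 2)
        cE = np.exp(-(dE / k) ** 2)
        cW = np.exp(-(dW / k) ** 2)
        I = I + lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I

# Noisy step edge: the flat sides get smoother, the edge contrast survives.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 100.0
noisy = img + rng.normal(0, 5, img.shape)
out = anisotropic_diffusion(noisy)
```

Because the coefficient collapses across the 0-to-100 step, the edge stays sharp while the standard deviation of the flat regions drops with each iteration.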
S20, performing binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas.
Image segmentation is performed on the ultrasound image after the filtering process of step S11, and in some embodiments, binarization is used for the image segmentation. The binarization process can convert the original detection of the target object into simple shape detection, feature detection, etc., thereby simplifying the calculation.
In one exemplary embodiment, the image is segmented using the mean-shift method, which computes a class label for each pixel. The class label depends on the cluster to which the pixel belongs: each cluster has a class center, around which its points gather to form a class. During the iteration, mean-shift moves toward the extreme points of the density function, called mode points; each mode point converges toward the true center of the class to which it belongs.
The kernel density estimate is:

    f(x) = (1 / (n·h^d)) · Σ_{i=1..n} K((x − x_i) / h)

where n is the number of sample points, h is the bandwidth and d is the dimension. K(x) may take a variety of functional forms, such as the exponential (Gaussian-type) kernel:

    K(x) = exp(−‖x‖² / 2)
It should be noted that the mean-shift method in this embodiment is only a preferred example. Because of differences in the physical characteristics and parameter settings of machines and probes from different manufacturers, the segmentation method may be selected according to the specific situation, and the way the binary image is obtained may be chosen flexibly according to the characteristics of the target image; for example, the image may be segmented using OTSU, the maximum-entropy threshold segmentation method, cluster segmentation, level sets, and so on. Through this step, a binary image containing the fetal humerus/femur is obtained.
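Of the alternative segmentation methods listed, OTSU is the simplest to show self-contained. The sketch below implements standard Otsu thresholding (maximising between-class variance of the gray-level histogram); the synthetic bimodal test image and its value ranges are assumptions for demonstration, not patent data.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximises the
    between-class variance of the histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # 0/0 at empty tails -> 0
    return int(np.argmax(sigma_b))

# Bimodal image: dark background around 40, a bright 'bone' band around 200.
rng = np.random.default_rng(1)
img = rng.normal(40, 10, (64, 64))
img[20:40, 10:54] = rng.normal(200, 10, (20, 44))
t = otsu_threshold(np.clip(img, 0, 255))
binary = img > t   # binary image containing the bright feature regions
```

The threshold lands between the two modes, so the bright band becomes the foreground feature region of the binary image.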
S21, screening the segmented images based on the target object characteristics to obtain screened segmented images.
In the above embodiment, the binary image obtained after segmentation often contains many interference regions because of speckle noise on the ultrasound image, and a feature region of the binary image is often broken into two or more feature regions because of the uneven gray-scale distribution. Therefore, before connecting the feature regions, the binary image is screened to remove part of the interference regions and simplify subsequent calculation.
In an exemplary embodiment, the screening is performed using the roundness characteristic parameter:

    A = 4π·S / L²

where L represents the perimeter of the connected region and S represents the area of the region to be screened; A = 1 corresponds to the case of a circle. Since the bone region of the target object generally exhibits an elongated characteristic, regions with the parameter greater than a threshold T_A are removed in this embodiment, which screens out part of the interference regions; the threshold is preferably T_A = 0.75.
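The roundness screening can be illustrated directly. The 4πS/L² form used below is the standard roundness definition and matches the description (perimeter L, area S, value 1 for a circle); the helper name is an assumption.

```python
import numpy as np

def roundness(perimeter, area):
    """Roundness A = 4*pi*S / L**2: exactly 1 for a circle, small for
    elongated regions such as bone. Regions with A above the threshold
    (T_A = 0.75 in the text) are discarded as non-bone interference."""
    return 4.0 * np.pi * area / perimeter ** 2

# A circle of radius r: A = 4*pi*(pi r^2) / (2 pi r)^2 = 1.
r = 10.0
circle_A = roundness(2 * np.pi * r, np.pi * r * r)
# A thin 1 x 50 rectangle falls far below the T_A = 0.75 threshold,
# so an elongated bone-like region survives the screening.
bar_A = roundness(2 * (1 + 50), 1 * 50)
```

With T_A = 0.75, compact blob-like interference (A near 1) is removed while slender regions (A near 0) are kept for the connection step.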
And S30, connecting the characteristic areas on the segmented image according to the direction information of the characteristic areas to obtain a target image containing the complete target object.
For convenience of explanation, consider the example shown in fig. 3. Fig. 3(a) is an ultrasound image containing a fetal humerus; the bright area in the middle of the image is the fetal humerus. As can be seen, there is a lot of noise on the ultrasound image and the gray-scale distribution in the middle of the humerus area is uneven, so on the binary image obtained after binarization the fetal humerus area is broken into two or more feature regions; an upper non-humerus feature area likewise appears as several broken feature regions, and many interference regions remain on the binary image. Fig. 3(b) is the binary image after the feature screening of step S21: a large number of interference regions with clearly non-humeral features have been screened out, but non-humeral feature regions whose appearance is close to the humeral features remain, and both the non-humeral and the humeral feature regions appear broken into several pieces. In this step, therefore, the broken feature regions are connected, forming complete and well-defined candidate target regions.
S40, extracting the characteristic areas corresponding to the target object from the connected characteristic areas.
Screening is performed on the connected image to extract the feature region of the fetal humerus/femur. In an exemplary implementation, the screening constructs a cost function to score each feature region; the features used in this embodiment are the long axis of each feature region, the mean brightness of the region, the roundness, and the like. The cost function of the i-th feature region is:
F(i)=c1f1(i)+c2f2(i)+…+cdfd(i) (9)
Wherein d is the total number of features and c_1, c_2, …, c_d are the weights of the feature parameters, satisfying:

    c_1 + c_2 + … + c_d = 1

In the present embodiment the weights are chosen accordingly; the feature area with the largest cost function is then taken as the target object.
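The cost-function screening of formula (9) can be sketched as follows. The feature values, the equal weights and the function name are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def select_target(features, weights):
    """Weighted cost F(i) = sum_k c_k * f_k(i) per formula (9);
    the region with the largest cost is taken as the target."""
    F = features @ weights
    return int(np.argmax(F)), F

# Three candidate regions described by (normalised) long-axis length,
# mean brightness and an elongation score; equal weights summing to 1.
features = np.array([
    [0.9, 0.8, 0.9],   # long, bright, elongated -> likely the femur
    [0.3, 0.9, 0.2],   # short and round         -> interference
    [0.5, 0.4, 0.6],
])
weights = np.full(3, 1 / 3)
best, F = select_target(features, weights)
```

Here `best` indexes the first region, whose combined score dominates the other candidates.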
In some embodiments shown in fig. 2 and 3, the ultrasound fetal humerus/femur image is taken as an example to illustrate the ultrasound image object extraction method of the present invention, in which the complete fetal humerus/femur target is obtained by connecting the fractured fetal humerus/femur regions on the segmented image. A schematic diagram of a method of connecting feature regions in some embodiments of the invention is shown in fig. 4.
As shown in fig. 4, connecting each feature region on the binary image includes:
s301, refining the feature areas on the binary image. Binary image refinement is also called skeletonization, which means that each characteristic region on a binary image is reduced to a unit pixel width so as to find out a region endpoint conveniently.
S302, acquiring endpoints of the refinement structure. Based on the above, the binary image includes a characteristic region of the fetal humerus/femur and a characteristic region of the non-fetal humerus/femur, so that a plurality of curve segments are formed after the binary image is thinned, each curve segment has two end points, and the end points of the line segments are obtained.
S303, judging whether other endpoints are contained in the preset area pointed by any endpoint.
The purpose of this step is to confirm whether the endpoints to be connected are adjacent, and when there is no other endpoint in the preset area pointed by the endpoint, the endpoint can be considered as the endpoint of a certain characteristic area, and connection is not needed. When the predetermined area pointed to by the endpoint contains other endpoints, the process proceeds to step S304. In an exemplary implementation, the predetermined area pointed by an endpoint may be an area around the endpoint with r as a radius, where r may be 3 to 8 pixels. Those skilled in the art will appreciate that the preset area pointed by the endpoint may also be other shapes and threshold ranges, and will not be described herein.
S304, extracting, from the other end points, another end point that is not in the same feature region as the one end point, where the difference between the direction value of the other end point in the direction field of the ultrasound image and the direction value of the one end point in that direction field is within a preset range. When other end points are detected in the preset area of one end point, the method further selects the other end point whose direction is similar to that of the one end point, so that a connection between the two can be confirmed. This judgment can be made based on the direction field of the image; since the fetal humerus/femur feature is elongated and straight, the allowed difference between the direction values of the two end points can be set within a small range, for example 0 to 30 degrees, or an even smaller range, to improve judgment precision.
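The combined neighbourhood and direction checks of S303-S304 can be sketched as a single predicate. The radius and angle threshold below are the example values from the text (r of a few pixels, direction difference up to 30 degrees); treating directions as modulo 180 degrees is an assumption common to orientation fields:

```python
import math

def connectable(p, q, theta_p, theta_q, r=5, max_diff_deg=30):
    """Decide whether end point q lies within the radius-r neighbourhood
    of end point p (S303) and has a similar direction-field value, in
    degrees, to p (S304)."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    # orientations are modulo 180 deg, so take the smaller of the two gaps
    diff = abs(theta_p - theta_q) % 180
    diff = min(diff, 180 - diff)
    return dist <= r and diff <= max_diff_deg

print(connectable((10, 10), (13, 11), 12.0, 20.0))  # True
print(connectable((10, 10), (13, 11), 12.0, 80.0))  # False: directions differ
print(connectable((10, 10), (30, 30), 12.0, 14.0))  # False: too far apart
```

Only end-point pairs passing both tests proceed to the pixel-filling step.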
S305, filling pixels between the one end point and the other end point to obtain a connection region. Based on the judgments in the above steps, the two end points to be connected are obtained, and pixel filling between them yields a connecting line segment, i.e., the connection region.
S306, performing expansion processing on the connection region and superimposing the expanded connection region on the binary image. The connecting line segment obtained in step S305 is expanded (dilated) to obtain an expanded region, and this region is superimposed on the binary image, for example that of fig. 3(b), thereby completing the connection of the fractured feature regions.
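Steps S305-S306 can be sketched as follows. This is a simplified sketch under stated assumptions: the segment is rasterised by linear interpolation (the patent does not specify the rasterisation), and the expansion uses a naive square structuring element implemented as an OR of shifted copies:

```python
import numpy as np

def connect_endpoints(binary, p, q, dilate_radius=1):
    """S305: fill pixels between end points p and q; S306: expand the
    connection region and superimpose it on the binary image."""
    line = np.zeros_like(binary)
    (r0, c0), (r1, c1) = p, q
    n = max(abs(r1 - r0), abs(c1 - c0)) + 1
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    line[rows, cols] = 1              # the connecting line segment
    d = dilate_radius
    h, w = line.shape
    padded = np.pad(line, d)
    dilated = np.zeros_like(line)
    for dr in range(-d, d + 1):       # naive dilation: OR shifted copies
        for dc in range(-d, d + 1):
            dilated |= padded[d + dr:d + dr + h, d + dc:d + dc + w]
    return binary | dilated           # superimpose onto the binary image

img = np.zeros((7, 12), dtype=np.uint8)
img[3, 1:4] = 1   # left fragment of a fractured feature region
img[3, 8:11] = 1  # right fragment
out = connect_endpoints(img, (3, 3), (3, 8))
print(out[3, 3:9].tolist())  # gap row is now filled: [1, 1, 1, 1, 1, 1]
```

A production implementation would more likely use a library dilation routine (e.g. a morphological dilation with a disk-shaped structuring element), but the effect on the fractured regions is the same.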
Through the above embodiments, a binary image is obtained by binarizing the ultrasound image, the fractured feature regions on the binary image are then connected to obtain the complete fetal humerus/femur target, and a target object with a complete and clear region is thereby extracted from the ultrasound image, facilitating subsequent measurement.
In a second aspect, the present invention also provides a method for measuring an object in an ultrasound image, as shown in fig. 5, in some embodiments, the method includes:
s50, acquiring a target object on the ultrasonic image, wherein the target object is obtained by adopting the method for extracting the object in the ultrasonic image in any embodiment.
S60, measuring parameters of the target object.
With this method, the target object extracted in the above embodiments can be measured automatically, avoiding the random errors and repetitive labor of manual measurement.
In one exemplary implementation, the fetal humerus/femur length is measured, taking the ultrasound fetal humerus/femur image described above as an example. As shown in fig. 6, the measurement method includes:
S500, obtaining the fetal humerus/femur region by the method for extracting an object from an ultrasound image of the above embodiments.
S601, performing straight-line fitting on the fetal humerus/femur region. The fitting method may be the least squares method, Hough line fitting, or the like; the present invention is not limited thereto.
In one exemplary implementation, the fitting method is the least squares method, which determines the fitted line or curve by minimizing its residuals. Given n pairs of points (x_i, y_i) to be fitted, the fitted line equation is set as:
y = b0 + b1·x (11)
The residual of the i-th point for the straight-line fit is:
v_i = y_i − (b0 + b1·x_i) (12)
Thus, the sum of squares of the residuals is:
Q = Σ v_i² = Σ [y_i − (b0 + b1·x_i)]², summed over i = 1, …, n (13)
The straight-line equation can be determined by minimizing Q: taking the extremum is equivalent to setting the partial derivatives of Q with respect to the parameters b0 and b1 to zero and substituting the resulting b0, b1 into the line equation (11). If curve fitting is needed, the calculation follows the same idea; the difference is that the curve may have more parameters. By writing out the curve's expression, constructing Q in the manner of formula (13), and solving for the parameters that minimize Q, the equation of the straight line on which the segment lies can be determined; the two intersection points of this line with the fetal humerus/femur region are then the two end points of the segment.
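The least-squares solution described above can be sketched directly: setting the partial derivatives of Q with respect to b0 and b1 to zero yields the classic closed-form normal-equation solution:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = b0 + b1*x (equations (11)-(13)):
    b1 = cov(x, y) / var(x), b0 = mean(y) - b1 * mean(x), which is
    exactly the solution of dQ/db0 = dQ/db1 = 0."""
    x = np.array([p[0] for p in points], dtype=float)
    y = np.array([p[1] for p in points], dtype=float)
    b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# points that lie exactly on y = 2 + 0.5x
pts = [(0, 2.0), (2, 3.0), (4, 4.0), (6, 5.0)]
b0, b1 = fit_line(pts)
print(round(b0, 6), round(b1, 6))  # 2.0 0.5
```

In practice the points would be the pixel coordinates of the connected fetal humerus/femur region, and the fitted line would then be intersected with that region to obtain the two segment end points.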
S602, calculating the pixel length of the line segment, i.e., the distance in pixels between the two end points of the segment.
S603, converting the pixel length into a physical length, thereby obtaining the length parameter of the fetal humerus/femur.
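Steps S602-S603 reduce to a Euclidean distance and a scale conversion. In the sketch below the mm-per-pixel factor is a hypothetical value; on a real system this calibration comes from the ultrasound scanner's imaging geometry:

```python
import math

def segment_length_mm(p, q, mm_per_pixel):
    """S602: Euclidean length in pixels of the segment between the two
    fitted end points p and q; S603: scale by the calibration factor
    (mm per pixel, assumed known from the ultrasound system)."""
    length_px = math.hypot(q[0] - p[0], q[1] - p[1])
    return length_px * mm_per_pixel

# hypothetical end points 300 pixels apart at 0.1 mm/pixel
print(round(segment_length_mm((100, 40), (100, 340), 0.1), 3))  # 30.0
```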
In a third aspect, the present invention provides an object extraction apparatus in an ultrasound image, as shown in fig. 7, the apparatus may include:
an image acquisition module 10 for acquiring an ultrasound image including a target object;
The image segmentation module 20 is configured to perform image segmentation on the ultrasound image to obtain a segmented image that includes a plurality of feature regions, where at least one feature region of the plurality of feature regions corresponds to the target object;
a connection module 30 for connecting the feature regions on the segmented image according to the direction information of the feature regions to obtain a target image containing the complete target object, and
An extraction module 40 is configured to extract the target object from the target image.
In a fourth aspect, the present invention provides an apparatus for measuring an object in an ultrasound image, as shown in fig. 8, the apparatus may include:
An acquisition module 50 for acquiring a target object on an ultrasound image, the target object being obtained by the ultrasound image object extraction method of the above embodiment, and
A measurement module 60 for measuring a parameter of the target object.
In a fifth aspect, the present invention provides a medical device, which may be an ultrasound device, such as an ultrasound diagnostic apparatus, comprising:
Processor, and
And a memory communicatively coupled to the processor and storing computer readable instructions executable by the processor, wherein the processor performs the method of object extraction and/or measurement in ultrasound images of the above embodiments when the computer readable instructions are executed.
In a sixth aspect, the present invention provides a storage medium storing computer instructions for causing a computer to perform the above method of object extraction and/or measurement in an ultrasound image.
In particular, fig. 9 shows a schematic diagram of a computer system 600 suitable for implementing the methods of the embodiments of the invention; the corresponding functions of the medical device and the storage medium described above are implemented by the system shown in fig. 9.
As shown in fig. 9, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such an embodiment, the computer program can be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be apparent that the above embodiments are merely examples given for clarity of illustration and do not limit the implementations. Other variations or modifications in different forms can be made by those of ordinary skill in the art on the basis of the above description; it is neither necessary nor possible to exhaust all implementations here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (11)

1. An object extraction method in an ultrasound image, comprising:
Acquiring an ultrasonic image containing a target object;
image segmentation is carried out on the ultrasonic image to obtain a segmented image containing a plurality of characteristic areas;
According to the direction information of each characteristic region, connecting a plurality of characteristic regions meeting preset direction conditions on the segmented image to obtain a target image containing the complete target object, wherein the preset direction conditions are that when the direction difference of the two characteristic regions is within a threshold range, the two characteristic regions are connected;
Extracting the target object from the target image;
wherein the image segmentation of the ultrasound image to obtain a segmented image comprising a plurality of feature regions comprises:
performing binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas;
The step of connecting a plurality of feature areas satisfying a preset direction condition on the segmented image according to the direction information of each feature area includes:
Acquiring an endpoint of the characteristic region;
Determining whether other endpoints are included in a preset area pointed by one endpoint,
When the preset area pointed by the end point comprises other end points, extracting another end point which is not in the same characteristic area as the end point from the other end points, wherein the difference value between the direction value of the other end point in the ultrasonic image direction field and the direction value of the end point in the ultrasonic image direction field is in a preset range;
connecting the one end point and the other end point.
2. The method of claim 1, wherein said connecting said one endpoint and said another endpoint comprises:
and filling pixels between the one end point and the other end point to obtain a connection region.
3. The method of claim 2, wherein the acquiring the end points of the feature region comprises:
refining the characteristic areas on the binary image to obtain end points of the refined characteristic areas;
after said connecting said one end point and said another end point, further comprising:
and performing expansion processing on the connection region, and overlapping the expanded connection region on the binary image.
4. The method of claim 1, wherein the image segmentation of the ultrasound image comprises:
And filtering the ultrasonic image, and performing image segmentation on the filtered ultrasonic image.
5. The method according to claim 1, characterized in that between the image segmentation of the ultrasound image and the connection of the feature areas on the segmented image according to the direction information of the feature areas, further comprising:
and screening the segmented image based on the characteristics of the target object.
6. The method for extracting an object from an ultrasound image according to claim 1, wherein,
The target object comprises a fetal humerus and/or femur.
7. A method for measuring an object in an ultrasound image, comprising:
acquiring a target object on an ultrasonic image, wherein the target object is obtained by adopting the method for extracting the object from the ultrasonic image according to any one of claims 1 to 6;
and measuring parameters of the target object.
8. The method of claim 7, wherein measuring parameters of the target object comprises:
performing straight line fitting on the target object;
And calculating the length of a line segment between the straight line and two intersection points of the target object.
9. An object extraction device in an ultrasound image, comprising:
An image acquisition module for acquiring an ultrasound image including a target object;
the image segmentation module is used for carrying out image segmentation on the ultrasonic image to obtain a segmented image containing a plurality of characteristic areas;
a connection module for connecting a plurality of feature areas satisfying a preset direction condition on the segmented image according to the direction information of each feature area to obtain a target image containing the complete target object, wherein the preset direction condition is that when the direction difference of the two feature areas is within a threshold range, the two feature areas are connected, and
The extraction module is used for extracting the target object from the target image;
wherein the image segmentation of the ultrasound image to obtain a segmented image comprising a plurality of feature regions comprises:
performing binarization processing on the ultrasonic image to obtain a binary image containing a plurality of characteristic areas;
The step of connecting a plurality of feature areas satisfying a preset direction condition on the segmented image according to the direction information of each feature area includes:
Acquiring an endpoint of the characteristic region;
Determining whether other endpoints are included in a preset area pointed by one endpoint,
When the preset area pointed by the end point comprises other end points, extracting another end point which is not in the same characteristic area as the end point from the other end points, wherein the difference value between the direction value of the other end point in the ultrasonic image direction field and the direction value of the end point in the ultrasonic image direction field is in a preset range;
connecting the one end point and the other end point.
10. An object measurement device in an ultrasound image, comprising:
An acquisition module for acquiring a target object on an ultrasound image, the target object being obtained by the method for object extraction in an ultrasound image according to any one of claims 1 to 6, and
And the measuring module is used for measuring the parameters of the target object.
11. A medical device, comprising:
Processor, and
A memory communicatively coupled to the processor, storing computer readable instructions executable by the processor, which when executed, performs the method of object extraction in an ultrasound image according to any one of claims 1 to 6.
CN201910574501.3A 2019-06-28 2019-06-28 Object extraction and measurement method and device in ultrasonic image Active CN112233122B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910574501.3A CN112233122B (en) 2019-06-28 2019-06-28 Object extraction and measurement method and device in ultrasonic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910574501.3A CN112233122B (en) 2019-06-28 2019-06-28 Object extraction and measurement method and device in ultrasonic image

Publications (2)

Publication Number Publication Date
CN112233122A CN112233122A (en) 2021-01-15
CN112233122B true CN112233122B (en) 2025-07-15

Family

ID=74111369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910574501.3A Active CN112233122B (en) 2019-06-28 2019-06-28 Object extraction and measurement method and device in ultrasonic image

Country Status (1)

Country Link
CN (1) CN112233122B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663728A (en) * 2012-03-11 2012-09-12 西安电子科技大学 Dictionary learning-based medical image interactive joint segmentation
CN107169975A (en) * 2017-03-27 2017-09-15 中国科学院深圳先进技术研究院 Ultrasonic image analysis method and device
CN109886938A (en) * 2019-01-29 2019-06-14 深圳市科曼医疗设备有限公司 A kind of ultrasound image blood vessel diameter method for automatic measurement

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107437068B (en) * 2017-07-13 2020-11-20 江苏大学 Pig individual identification method based on Gabor direction histogram and pig body hair pattern
CN109846513B (en) * 2018-12-18 2022-11-25 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method, ultrasonic imaging system, image measuring method, image processing system, and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663728A (en) * 2012-03-11 2012-09-12 西安电子科技大学 Dictionary learning-based medical image interactive joint segmentation
CN107169975A (en) * 2017-03-27 2017-09-15 中国科学院深圳先进技术研究院 Ultrasonic image analysis method and device
CN109886938A (en) * 2019-01-29 2019-06-14 深圳市科曼医疗设备有限公司 A kind of ultrasound image blood vessel diameter method for automatic measurement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vein image segmentation method based on direction field distribution rate; Kang Wenxiong et al.; Acta Automatica Sinica; Vol. 35, No. 12, pp. 1496-1502 *

Also Published As

Publication number Publication date
CN112233122A (en) 2021-01-15

Similar Documents

Publication Publication Date Title
Loizou et al. Snakes based segmentation of the common carotid artery intima media
Loizou A review of ultrasound common carotid artery image and video segmentation techniques
JP6265588B2 (en) Image processing apparatus, operation method of image processing apparatus, and image processing program
US8849000B2 (en) Method and device for detecting bright brain regions from computed tomography images
JP6422198B2 (en) Image processing apparatus, image processing method, and image processing program
US10628941B2 (en) Image processing apparatus, image processing method, and image processing program
US20100322495A1 (en) Medical imaging system
CN102792336B (en) Device for quantifying the development of diseases involving changes in body volume, especially tumors
US9607392B2 (en) System and method of automatically detecting tissue abnormalities
KR101926015B1 (en) Apparatus and method processing image
TWI624807B (en) Iterative analysis of medical images
CN107169975B (en) The analysis method and device of ultrasound image
CN105389810A (en) Identification system and method of intravascular plaque
US20250160784A1 (en) Systems and methods for detecting angles of hip joints
US20230115927A1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
JP2011509141A (en) Discrimination between infarctions and artifacts in MRI scan data
CN108171696B (en) Placenta detection method and device
US20230103920A1 (en) Device and method of angiography for cerebrovascular obliteration
CN101241597B (en) Visual enhancement of interval changes using temporal subtraction techniques
CN112233122B (en) Object extraction and measurement method and device in ultrasonic image
WO2023133935A1 (en) Method for automatic detection and display of ultrasound craniocerebral abnormal region
Almi'ani et al. A modified region growing based algorithm to vessel segmentation in magnetic resonance angiography
CN114155205A (en) Mammary nodule boundary definition judging device
Reeja et al. A study on detection and segmentation of ischemic stroke in MRI and CT images
Chakkarwar et al. Automated analysis of gestational sac in medical image processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant