Disclosure of Invention
To address the technical problem that existing image-recognition methods are complex and difficult to implement when extracting a target from an ultrasound image, the invention provides a method for extracting an object in an ultrasound image that accurately extracts the target object from the image.
To address the further technical problem that manual measurement of a target object on an ultrasound image is inaccurate in the prior art, the invention also provides a method for measuring an object in an ultrasound image that measures automatically and yields more accurate results.
In a first aspect, the present invention provides a method for extracting an object in an ultrasound image, including:
acquiring an ultrasound image containing a target object;
performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions, at least one of which corresponds to the target object;
connecting, according to direction information of the feature regions, those feature regions on the segmented image that meet a preset direction condition, to obtain a target image containing the complete target object; and
extracting the target object from the target image.
In some embodiments, performing image segmentation on the ultrasound image to obtain a segmented image comprising a plurality of feature regions comprises:
performing binarization processing on the ultrasound image to obtain a binary image containing a plurality of feature regions.
In some embodiments, connecting, according to the direction information of the feature regions, the feature regions on the segmented image that meet the preset direction condition comprises:
acquiring endpoints of the feature regions;
determining whether a preset area pointed to by one endpoint contains other endpoints;
when the preset area pointed to by the one endpoint contains other endpoints, extracting from them another endpoint that does not belong to the same feature region as the one endpoint, wherein the difference between the direction value of the other endpoint and the direction value of the one endpoint in the direction field of the ultrasound image is within a preset range; and
connecting the one endpoint and the other endpoint.
In some embodiments, connecting the one endpoint and the other endpoint comprises:
filling pixels between the one endpoint and the other endpoint to obtain a connection region.
In some embodiments, acquiring the endpoints of the feature regions comprises:
thinning the feature regions on the binary image to obtain the endpoints of the thinned feature regions;
and after connecting the one endpoint and the other endpoint, the method further comprises:
performing dilation processing on the connection region and superimposing the dilated connection region on the binary image.
In some embodiments, performing image segmentation on the ultrasound image comprises:
filtering the ultrasound image and performing image segmentation on the filtered ultrasound image.
In some embodiments, between performing image segmentation on the ultrasound image and connecting the feature regions on the segmented image according to their direction information, the method further comprises:
screening the segmented image based on features of the target object.
In some embodiments, the target object comprises a fetal humerus and/or femur.
In a second aspect, the present invention provides a method for measuring an object in an ultrasound image, including:
acquiring a target object on an ultrasound image, the target object being obtained with the above method for extracting an object in an ultrasound image; and
measuring a parameter of the target object.
In some embodiments, measuring the parameter of the target object comprises:
performing straight-line fitting on the target object; and
calculating the length of the line segment between the two intersection points of the fitted line and the target object.
In a third aspect, the present invention provides an object extraction apparatus in an ultrasound image, comprising:
an image acquisition module for acquiring an ultrasound image containing a target object;
an image segmentation module for performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions, at least one of which corresponds to the target object;
a connection module for connecting the feature regions on the segmented image according to their direction information to obtain a target image containing the complete target object; and
an extraction module for extracting the target object from the target image.
In a fourth aspect, the present invention provides an object measurement apparatus in an ultrasound image, comprising:
an acquisition module for acquiring a target object on an ultrasound image, the target object being obtained with the above object extraction method; and
a measurement module for measuring a parameter of the target object.
In a fifth aspect, the present invention provides a medical device comprising:
a processor; and
a memory communicatively connected to the processor and storing computer-readable instructions executable by the processor, wherein the processor performs the above method for extracting an object in an ultrasound image when the computer-readable instructions are executed.
In the method for extracting an object in an ultrasound image according to the invention, an ultrasound image containing the target object is acquired and segmented into a plurality of feature regions. Because the feature regions may include fractured sub-regions of the target object as well as interference regions, which complicate subsequent screening and calculation, the fractured regions that meet the preset direction condition are connected according to their direction information, so that the complete target object can be extracted simply and accurately.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention. In addition, the technical features of the different embodiments described below may be combined with each other as long as they do not conflict.
The method for extracting an object in an ultrasound image according to the invention can be used to extract a target object from ultrasound imaging in medical diagnosis. It should be noted that ultrasound images suffer from uneven gray-scale distribution, artifacts, speckle noise, and similar problems, so their signal-to-noise ratio is low; an operator therefore has difficulty measuring a target object manually on an ultrasound image, which leads to inaccurate measurement. For the same reasons, segmenting an ultrasound image with existing image-recognition methods produces a large number of non-target regions, which makes target extraction difficult.
More importantly, segmenting an ultrasound image with conventional image recognition causes the target object to appear fractured on the segmented image. The fracture occurs because the gray-scale distribution of the target object is not completely uniform: different positions of the same target object may have low gray levels, which image recognition readily classifies as background, so that after segmentation the same target object is divided into two or more sub-regions. These fractured sub-regions in principle all belong to the target object, so they increase the complexity and reduce the accuracy of subsequent calculation; and because the integrity of the object is lost, the target object may not be correctly selected during subsequent screening. Target extraction from ultrasound images with existing image-recognition methods is therefore difficult. The method of the invention connects the fractured sub-regions produced by segmentation, making target extraction simpler and more accurate and facilitating subsequent measurement and calculation.
Fig. 1 shows an object extraction method in an ultrasound image according to some embodiments of the invention. As shown in fig. 1, the method includes:
S1: acquiring an ultrasound image containing a target object.
S2: performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions.
S3: connecting, according to the direction information of each feature region, the feature regions on the segmented image that meet a preset direction condition, to obtain a target image containing the complete target object.
S4: extracting the target object from the target image.
Specifically, in step S1, an ultrasound image containing a complete target object is acquired so that the object can ultimately be extracted in its entirety. The target object may be any object that appears bright on the image, such as a bone region; the invention is not limited in this respect.
In step S2, image segmentation is performed on the ultrasound image to obtain a segmented image containing a plurality of feature regions. A feature region may correspond to the target object, may correspond to a non-target object, or may be one of several sub-regions formed when such a region fractures during segmentation. For example, in an ultrasound bone image, the same bone region may break into two or more feature regions after segmentation because of gray-scale non-uniformity, and a bright non-bone region may likewise break into multiple feature regions.
In step S3, the feature regions on the segmented image are connected according to their direction information in the direction field of the image. The direction information of a feature region refers to its directionality in the direction field of the ultrasound image; for example, the main direction of each feature region, or the direction of a point in it (such as an endpoint), may be extracted. Feature regions satisfying a preset direction condition are connected; the condition may be, for example, that the direction difference between two feature regions lies within a threshold range. The result is one or more groups of feature regions connected together, as long as their direction information satisfies the preset direction condition.
For example, taking the ultrasound bone image above, during connection it is determined from the direction information whether the main directions of the feature regions lie within the threshold range. Feature regions within the threshold range are considered to be fragments of the same bone that was divided because of locally low brightness inside the bone, so the fractured feature regions satisfying the threshold are connected, yielding a target image containing the complete bone feature region.
In step S4, the target object is extracted from the connected target image; for the ultrasound bone example, the complete bone feature region is extracted from the target image to obtain the target object. When this method is used for ultrasound target extraction, the fractured feature regions of the target object on the segmented image are connected so that the complete target object can be extracted; the target object is therefore identified in the ultrasound image more completely and clearly, and its parameters can be measured more accurately.
Fig. 2 shows a method for extracting an object from an ultrasound image according to some embodiments of the invention, in which the ultrasound image is, by way of example, an ultrasound image of a fetal humerus/femur. It should be noted that the method provided by the invention is not limited to extracting a fetal humerus/femur; any other suitable object may be extracted with the invention, such as an ultrasound image of a bone at another location.
As shown in fig. 2, in some embodiments, the inventive method comprises:
S10: acquiring an ultrasound image containing the fetal humerus/femur.
Medical diagnosis of the fetal humerus/femur generally requires the length of the bone, so the acquired ultrasound image should contain a relatively complete bone image, i.e., a complete humerus/femur region.
S11: filtering the ultrasound image.
Because ultrasound images are noisy, in some embodiments a filtering process is performed before segmentation. In one exemplary embodiment, anisotropic filtering is used. It overcomes a drawback of Gaussian blur filtering: the relative gray-scale contrast of bone edge regions is not destroyed, so image edges are preserved while the image is smoothed. This makes the method well suited to filtering ultrasound images.
Anisotropic diffusion treats the image as a heat field and each pixel as a heat flow, and determines the degree of diffusion to the surroundings from the relationship between each neighboring pixel and the current pixel. When a neighboring pixel differs strongly from the current pixel, the current pixel probably lies on a boundary, so diffusion in that direction is stopped and the boundary is preserved; when the difference is small, diffusion continues.
The main iteration equation of anisotropic diffusion is:
I_{t+1}(x, y) = I_t(x, y) + λ·[cN·∇N(I) + cS·∇S(I) + cE·∇E(I) + cW·∇W(I)]
where x and y are the abscissa and ordinate of the image, and λ takes a value in [0, 1/4], 1/4 corresponding to averaging the diffusion over the four directions. I is the ultrasound image; since the formula is iterative, t is the current iteration number. The four gradient terms are the finite differences of the current pixel in the four directions:
∇N(I) = I(x, y−1) − I(x, y)
∇S(I) = I(x, y+1) − I(x, y)
∇E(I) = I(x+1, y) − I(x, y)
∇W(I) = I(x−1, y) − I(x, y)
cN, cS, cE and cW are the diffusion coefficients in the four directions; at boundaries the diffusion coefficient is relatively small. The diffusion coefficient c is given by:
c(∇I) = exp(−(‖∇I‖/K)²)
where K controls the sensitivity to edges.
After anisotropic filtering, the ultrasound image is smoother and its noise is removed to some extent. The diffusion-coefficient method used in this embodiment only illustrates this step and is not intended to limit the invention; those skilled in the art may use other anisotropic treatments as appropriate.
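By way of illustration only (not the claimed implementation), the anisotropic diffusion step can be sketched in a few lines of Python; the function name, the pure-list image representation, and the default values for the edge-sensitivity constant K (here `kappa`) and λ (here `lam`) are illustrative assumptions:

```python
import math

def anisotropic_diffusion(img, iterations=10, kappa=30.0, lam=0.25):
    """Perona-Malik-style diffusion sketch. img is a list of rows of floats;
    kappa controls edge sensitivity, lam in [0, 1/4] scales each update."""
    h, w = len(img), len(img[0])
    cur = [row[:] for row in img]
    for _ in range(iterations):
        nxt = [row[:] for row in cur]
        for y in range(h):
            for x in range(w):
                center = cur[y][x]
                # finite differences toward the four neighbours (border replicated)
                dN = cur[max(y - 1, 0)][x] - center
                dS = cur[min(y + 1, h - 1)][x] - center
                dE = cur[y][min(x + 1, w - 1)] - center
                dW = cur[y][max(x - 1, 0)] - center
                # exponential diffusion coefficient: small across strong edges
                c = lambda d: math.exp(-((d / kappa) ** 2))
                nxt[y][x] = center + lam * (
                    c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
        cur = nxt
    return cur
```

A uniform region is left unchanged, while an isolated noisy pixel is pulled toward its neighbours without blurring strong edges.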
S20: performing binarization processing on the ultrasound image to obtain a binary image containing a plurality of feature regions.
Image segmentation is performed on the ultrasound image filtered in step S11; in some embodiments, binarization is used for the segmentation. Binarization converts the original detection of the target object into simple shape detection, feature detection, and the like, thereby simplifying the calculation.
In one exemplary embodiment, the image is segmented with the mean-shift method, which computes a class label for each pixel. The class label depends on the cluster the pixel belongs to: each cluster has a class center around which points gather, and points sharing a class center form one class. During iteration, mean-shift uses the extreme points of the density function as center points, called mode points; each mode point converges toward the true center of the class it belongs to.
The kernel density function has the form:
f(x) = 1/(n·h^d) · Σ_{i=1}^{n} K((x − x_i)/h)
where n is the number of samples and h is the bandwidth. K(x) may take a variety of functional forms, such as the exponential (Gaussian) kernel:
K(x) = exp(−‖x‖²/2)
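To make the mode-seeking idea concrete, a toy one-dimensional mean-shift iteration with a Gaussian kernel can be sketched as follows; the function name, the fixed iteration count, and the bandwidth value are illustrative assumptions, and a real implementation would operate on pixel feature vectors rather than scalars:

```python
import math

def mean_shift_1d(points, x0, bandwidth=1.0, iters=50):
    """Iterate x toward the nearest density mode of a 1-D sample set,
    weighting each sample with a Gaussian kernel of the given bandwidth."""
    x = x0
    for _ in range(iters):
        w = [math.exp(-((x - p) / bandwidth) ** 2 / 2) for p in points]
        # kernel-weighted mean of the samples: the mean-shift update
        x = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return x
```

Starting near either of two sample clusters, the iterate converges to that cluster's mode, which is how pixels sharing a mode point end up in the same class.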
It should be noted that the mean-shift method of this embodiment is only a preferred example. Because machines and probes from different manufacturers differ in physical characteristics and parameter settings, the segmentation method may be chosen for the specific situation, and the method for obtaining the binary image may be selected flexibly according to the characteristics of the target image; for example, the image may be segmented with OTSU, maximum-entropy threshold segmentation, cluster segmentation, level sets, and the like. This step yields a binary image containing the fetal humerus/femur.
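As one of the alternative thresholding methods mentioned above, Otsu's method admits a compact sketch; this pure-Python version over a flat list of 8-bit gray levels is illustrative, not the embodiment's implementation:

```python
def otsu_threshold(pixels):
    """Return (t, binary): the Otsu threshold maximizing between-class
    variance over a flat list of gray levels in 0..255, and the binary map."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]              # background pixel count at threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        between = w_bg * w_fg * (m_bg - m_fg) ** 2   # between-class variance
        if between > best_var:
            best_var, best_t = between, t
    binary = [1 if p > best_t else 0 for p in pixels]
    return best_t, binary
```

On a bimodal histogram, such as dark background plus bright bone-like pixels, the threshold lands between the two modes.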
S21: screening the segmented image based on features of the target object to obtain a screened segmented image.
The binary image obtained after segmentation often contains many interference regions caused by speckle noise on the ultrasound image, and its feature regions often break into two or more pieces because of the uneven gray-scale distribution. Therefore, before the feature regions are connected, the binary image is screened to remove part of the interference regions and simplify subsequent calculation.
In an exemplary embodiment, screening is performed with the roundness feature parameter:
A = 4π·S / L²
where L is the perimeter of the connected region and S is its area; A = 1 corresponds to a circle. Since the bone region of the target object is generally elongated, regions with A greater than a threshold T_A are removed in this embodiment, screening out part of the interference regions; the threshold is preferably T_A = 0.75.
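The roundness screen is a one-line computation; the sketch below, assuming the standard roundness measure A = 4πS/L² and a hypothetical region record with `perimeter` and `area` fields, illustrates how elongated bone-like regions survive while near-circular interference is removed:

```python
import math

def roundness(perimeter, area):
    """A = 4*pi*S / L**2: equals 1 for a circle, small for elongated regions."""
    return 4 * math.pi * area / (perimeter ** 2)

def screen_regions(regions, t_a=0.75):
    """Keep regions whose roundness does not exceed t_a (elongated, bone-like)."""
    return [r for r in regions if roundness(r["perimeter"], r["area"]) <= t_a]
```

A circle of radius 10 (L ≈ 62.8, S ≈ 314) scores A = 1 and is removed; a 100x5 strip (L = 210, S = 500) scores A ≈ 0.14 and is kept.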
S30: connecting the feature regions on the segmented image according to their direction information to obtain a target image containing the complete target object.
For ease of explanation, fig. 3 (a) shows an ultrasound image containing a fetal humerus; the bright area in the middle of the image is the humerus. As can be seen, the image contains substantial noise and the gray-scale distribution in the middle of the humerus region is uneven, so on the binary image obtained after binarization the humerus region breaks into two or more feature regions; an upper non-humerus region likewise appears as several fractured feature regions, and many interference regions remain. Fig. 3 (b) shows the binary image after the feature screening of step S21: a large number of interference regions with non-humerus features have been removed, but some non-humerus regions whose features resemble the humerus remain, and both they and the humerus region still appear fractured into multiple feature regions. In this step, the fractured feature regions are therefore connected to form complete, well-defined candidate target regions.
S40: extracting the feature region corresponding to the target object from the connected feature regions.
Screening is performed on the connected image to extract the feature region of the fetal humerus/femur. In an exemplary implementation, each feature region is scored with a cost function constructed from features such as the major axis of the region, the mean brightness of the region, and the roundness. The cost function of the i-th feature region is:
F(i) = c1·f1(i) + c2·f2(i) + … + cd·fd(i)  (9)
where d is the total number of features and c1, c2, …, cd are the weights of the feature parameters, which satisfy:
c1 + c2 + … + cd = 1  (10)
In the present embodiment the weights are set empirically, and the feature region with the largest cost-function value is taken as the target object.
In the embodiments of figs. 2 and 3, the ultrasound fetal humerus/femur image serves as an example of the ultrasound image object extraction method of the invention, in which the complete fetal humerus/femur target is obtained by connecting the fractured humerus/femur regions on the segmented image. Fig. 4 shows a schematic diagram of a method of connecting feature regions in some embodiments of the invention.
As shown in fig. 4, connecting each feature region on the binary image includes:
S301: thinning the feature regions on the binary image. Binary-image thinning, also called skeletonization, reduces each feature region on the binary image to unit pixel width so that region endpoints can be found easily.
S302: acquiring the endpoints of the thinned structure. As described above, the binary image contains feature regions of both the fetal humerus/femur and non-humerus/femur structures, so thinning produces a number of curve segments, each with two endpoints; these endpoints are acquired.
S303: determining whether the preset area pointed to by an endpoint contains other endpoints.
The purpose of this step is to check whether endpoints to be connected are adjacent. When the preset area pointed to by an endpoint contains no other endpoint, that endpoint can be regarded as the true end of a feature region and needs no connection. When the preset area does contain other endpoints, the method proceeds to step S304. In an exemplary implementation, the preset area pointed to by an endpoint is a circular area of radius r around it, where r may be 3 to 8 pixels; those skilled in the art will appreciate that other shapes and threshold ranges may also be used.
S304: extracting, from the other endpoints, another endpoint that does not belong to the same feature region as the one endpoint, the difference between the direction values of the two endpoints in the direction field of the ultrasound image being within a preset range. When other endpoints are detected in the preset area of an endpoint, the endpoint whose direction is similar to that of the one endpoint is further identified so that the connection between the two can be confirmed. The judgment can be based on the direction field of the image; since the fetal humerus/femur is elongated and straight, the allowed difference between the direction values of the two endpoints can be set to a small range, for example 0 to 30 degrees, or smaller still to improve the judgment precision.
S305: filling pixels between the one endpoint and the other endpoint to obtain a connection region. The judgments above yield the two endpoints to be connected; filling the pixels between them produces a connecting line segment, i.e., a connection region.
S306: performing dilation processing on the connection region and superimposing the dilated connection region on the binary image. The connecting line segment obtained in step S305 is dilated, and the dilated region is superimposed on the binary image, e.g., that of fig. 3 (b), completing the connection of the fractured feature regions.
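Steps S303 to S305 can be sketched as two small helpers; the function names, the default radius and angle tolerance, and the treatment of directions as angles modulo 180 degrees are simplifying assumptions, not the embodiment's exact criteria:

```python
import math

def should_connect(p, q, dir_p, dir_q, radius=5, angle_tol=30.0):
    """Steps S303-S304 sketch: p and q are (x, y) endpoints of different
    regions, dir_p/dir_q their direction-field angles in degrees. Connect
    when q lies within radius of p and the orientations are close."""
    if math.dist(p, q) > radius:
        return False
    diff = abs(dir_p - dir_q) % 180          # orientations wrap at 180 degrees
    return min(diff, 180 - diff) <= angle_tol

def fill_line(p, q):
    """Step S305 sketch: Bresenham-style pixel fill between two endpoints."""
    (x0, y0), (x1, y1) = p, q
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, pts = dx + dy, []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pts
```

The filled pixel list is the connection region, which step S306 would then dilate and superimpose on the binary image.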
Through this implementation, the ultrasound image is binarized into a binary image, the fractured feature regions on the binary image are connected to obtain the complete fetal humerus/femur target, and a complete, well-defined target object is extracted from the ultrasound image, facilitating subsequent measurement.
In a second aspect, the present invention also provides a method for measuring an object in an ultrasound image, as shown in fig. 5, in some embodiments, the method includes:
S50: acquiring a target object on an ultrasound image, the target object being obtained with the object extraction method of any of the above embodiments.
S60: measuring a parameter of the target object.
The method automatically measures the extracted target object, avoiding the random errors and repeated labor of manual measurement.
In one exemplary implementation, the fetal humerus/femur length is measured using the ultrasound fetal humerus/femur image described above. As shown in fig. 6, the measurement method includes:
S500: obtaining the fetal humerus/femur region with the object extraction method of the above embodiments.
S601: performing straight-line fitting on the fetal humerus/femur region. The fitting may use the least-squares method, Hough line fitting, and the like; the invention is not limited thereto.
In one exemplary implementation, the least-squares method is used; it fits the line by minimizing the residuals. Given n pairs of points to be fitted, the fitted line equation is set as:
y = b0 + b1·x  (11)
The residual of the i-th point for the straight-line fit is:
e_i = y_i − (b0 + b1·x_i)  (12)
so the residual sum of squares is:
Q = Σ_{i=1}^{n} [y_i − (b0 + b1·x_i)]²  (13)
The line is determined by minimizing Q: taking the extremum is equivalent to setting the partial derivatives of Q with respect to b0 and b1 to zero and substituting the resulting b0, b1 into equation (11). A curve-fitting method follows the same idea, except that the curve may have more parameters: the curve equation is written out, Q is constructed as in (13), and the parameters minimizing Q are solved for. The straight line on which the line segment lies is thus determined, and its two intersection points with the fetal humerus/femur region are the two endpoints of the line segment.
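Minimizing the residual sum of squares has the familiar closed-form solution for b0 and b1; a minimal sketch (the function name is illustrative):

```python
def fit_line(points):
    """Least-squares fit of y = b0 + b1*x over (x, y) pairs, minimizing the
    residual sum of squares via the closed-form normal equations."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b0 = (sy - b1 * sx) / n                         # intercept
    return b0, b1
```

For points lying exactly on y = 2 + 3x the fit recovers b0 = 2 and b1 = 3.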
S602: calculating the length of the line segment in pixel units, i.e., the pixel distance between its two endpoints.
S603: converting the pixel length into a physical length to obtain the length parameter of the fetal humerus/femur.
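Steps S602 and S603 reduce to a Euclidean distance and a calibration scale; in the sketch below, `spacing_mm_per_px` is an assumed per-image calibration value supplied by the ultrasound device:

```python
import math

def physical_length(p, q, spacing_mm_per_px):
    """Pixel distance between endpoints p and q, converted to millimetres
    with the device's pixel-spacing calibration."""
    return math.dist(p, q) * spacing_mm_per_px
```

For example, endpoints (0, 0) and (30, 40) are 50 pixels apart; at 0.1 mm per pixel this yields a 5 mm length.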
In a third aspect, the present invention provides an object extraction apparatus in an ultrasound image, as shown in fig. 7, the apparatus may include:
an image acquisition module 10 for acquiring an ultrasound image containing a target object;
an image segmentation module 20 for performing image segmentation on the ultrasound image to obtain a segmented image containing a plurality of feature regions, at least one of which corresponds to the target object;
a connection module 30 for connecting the feature regions on the segmented image according to their direction information to obtain a target image containing the complete target object; and
an extraction module 40 for extracting the target object from the target image.
In a fourth aspect, the present invention provides an apparatus for measuring an object in an ultrasound image, as shown in fig. 8, the apparatus may include:
an acquisition module 50 for acquiring a target object on an ultrasound image, the target object being obtained with the object extraction method of the above embodiments; and
A measurement module 60 for measuring a parameter of the target object.
In a fifth aspect, the present invention provides a medical device, which may be an ultrasound device, such as an ultrasound diagnostic apparatus, comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer-readable instructions executable by the processor, wherein the processor performs the method of object extraction and/or measurement in an ultrasound image of the above embodiments when the computer-readable instructions are executed.
In a sixth aspect, the present invention provides a storage medium storing computer instructions for causing a computer to perform the above method of object extraction and/or measurement in an ultrasound image.
In particular, fig. 9 shows a schematic diagram of a computer system 600 suitable for implementing the methods of embodiments of the invention; the corresponding functions of the medical device and the storage medium above are implemented by the system shown in fig. 9.
As shown in fig. 9, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Connected to the I/O interface 605 are: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode-ray tube (CRT) or liquid-crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such an embodiment, the computer program can be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be apparent that the above embodiments are merely examples given for clarity of illustration and do not limit the embodiments of the invention. Other variations or modifications will be apparent to those of ordinary skill in the art from the above description; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived therefrom remain within the protection scope of the invention.