CN114693640A - A method, system, electronic device and storage medium for classifying and processing ultrasound image lesion attributes based on video sequence
A method, system, electronic device and storage medium for classifying and processing ultrasound image lesion attributes based on video sequence
- Publication number
- CN114693640A CN202210329032.0A
- Authority
- CN
- China
- Prior art keywords
- target
- focus
- attribute
- lesion
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Description
Technical Field
The present invention belongs to the technical field of intelligent medical diagnosis and relates in particular to a method, system, electronic device and storage medium for classifying ultrasound image lesion attributes based on video sequences.
Background Art
Ultrasound image lesion attribute classification refers to further judging the attributes of a lesion that has already been localized at a specific site. For example, the orientation attribute of a thyroid nodule may be vertical or horizontal, and the posterior-echo attribute may be echo enhancement, uniform echo, and so on. Judging the attributes of a lesion is highly important for the doctor's subsequent specific diagnosis of that lesion.
At present, most automatic ultrasound image diagnosis techniques perform recognition on a single frame. This way of working requires the doctor to find the best viewing angle, interact with the ultrasound device to freeze the image, and then start the automatic diagnosis software to identify the lesion. Obviously, this approach depends strongly on the doctor's scanning technique: the better the scanning angle, the higher the diagnostic accuracy. In addition, when diagnosing a lesion, a doctor usually needs to combine information observed from different angles to give a diagnosis and improve its accuracy, whereas single-frame recognition cannot combine scanning information from different angles, so its judgment is one-sided to some extent. Some techniques have been partially improved, for example by increasing model speed so that the algorithm can diagnose in real time on the scanned images. Although this gives diagnostic results in real time, the limited amount of training data and the model's own fitting error give it a certain error rate, so the predicted attribute classes can be unstable, which interferes with the doctor's diagnosis.
Summary of the Invention
To address the problem that the attribute classes output by current diagnosis software contain errors that affect the diagnostic result, the present invention avoids judging the attribute class directly with a target detector. To adapt to attribute classification scenarios under different conditions, it first determines the lesion and its location, obtains tracking sequences of the lesion target across different frames, and then classifies the lesion attribute according to the lesion-target attribute class and the overall confidence corresponding to that class. This improves the diagnostic accuracy on difficult attributes, imitates the doctor's diagnostic process, and synthesizes multi-frame result information to give the final diagnosis, improving the accuracy of the lesion attribute diagnosis.
To achieve the above objects, the present invention adopts the following technical solutions.
A first aspect of the present invention provides a method for classifying ultrasound image lesion attributes based on a video sequence, the method comprising:
S102: obtaining an ultrasound image to be tested, and extracting the lesion image at one or more locations in the ultrasound image to be tested as a lesion target;
S104: tracking the lesion target in the ultrasound video to be tested, and obtaining multiple tracking sequences corresponding to the lesion target;
S106: acquiring attribute information of the lesion target in the tracking sequence, and adding the attribute information of the lesion target to the tracking sequence;
S108: acquiring the attribute information of the current lesion target in the tracking sequence, calculating the overall confidence corresponding to the lesion target in the tracking sequence, determining the attribute class of the lesion target according to the overall confidence, and outputting it.
In a preferred solution, step S102 includes:
extracting the lesion image at one or more locations in the ultrasound image to be tested as a lesion target by using a target detection model, and obtaining the lesion position and lesion type of the lesion target, where the lesion detection model is trained with samples annotated with bounding boxes.
In a preferred solution, step S106 includes:
identifying the attribute class of the lesion target in the tracking sequence by using a classifier model, and calculating the target attribute-class confidence corresponding to the attribute class of the lesion target;
adding the attribute confidence of the lesion target in the ultrasound video, the bounding box of the lesion target, and the attribute class of the lesion target to the tracking sequence.
In a preferred solution, step S108 further includes:
constructing an overall-confidence mathematical model based on the high attribute-classification confidence of the lesion target, whether the attribute classes of two consecutive frames in the ultrasound video differ, and the overlap of the bounding boxes of two consecutive frames in the ultrasound video;
calculating the lesion-target attribute class and the overall confidence corresponding to the lesion-target attribute class.
In a preferred solution, the overall confidence is compared with a first threshold and a second threshold, where the first threshold is greater than the second threshold;
if the overall confidence is greater than the first threshold, the lesion-target attribute class corresponding to the overall confidence is placed in the output queue;
if the overall confidence is less than the first threshold but greater than the second threshold, the lesion-target attribute class corresponding to the overall confidence is retained in the tracking sequence;
if the overall confidence is less than the second threshold, the lesion-target attribute class corresponding to the overall confidence is deleted from the tracking sequence.
In a preferred solution, the overall-confidence mathematical model is constructed as follows:
where wj denotes the overall confidence corresponding to the lesion target in the tracking sequence; the other terms denote, respectively, the assignment indicating whether the lesion target appears consecutively in the tracking sequence, the lesion-target attribute-confidence weight, and the lesion-target overlap weight; and γ denotes the weight bias.
A second aspect of the present invention provides a system for classifying ultrasound image lesion attributes based on a video sequence, comprising:
a lesion identification module, configured to obtain an ultrasound image to be tested and extract the lesion image at one or more locations in the ultrasound image to be tested as a lesion target;
a lesion tracking module, configured to track the lesion target in the ultrasound video to be tested and obtain multiple tracking sequences corresponding to the lesion target;
an attribute identification module, configured to acquire attribute information of the lesion target in the tracking sequence and add the attribute information of the lesion target to the tracking sequence;
a judgment module, configured to acquire the attribute information of the current lesion target in the tracking sequence, calculate the overall confidence corresponding to the lesion target in the tracking sequence, determine the attribute class of the lesion target according to the overall confidence, and output it.
In a preferred solution, the judgment module includes a first judgment unit, a second judgment unit and a third judgment unit;
the first judgment unit is configured to compare the overall confidence with the first threshold, where the first threshold is greater than the second threshold, and, if the overall confidence is greater than the first threshold, place the lesion-target attribute class corresponding to the overall confidence in the output queue;
the second judgment unit is configured to compare the overall confidence with the first threshold and the second threshold and, if the overall confidence is less than the first threshold but greater than the second threshold, retain the lesion-target attribute class corresponding to the overall confidence in the tracking sequence;
the third judgment unit is configured to compare the overall confidence with the second threshold and, if the overall confidence is less than the second threshold, delete the lesion-target attribute class corresponding to the overall confidence from the tracking sequence.
A third aspect of the present invention provides an electronic device, including a processor, an input device, an output device and a memory, where the processor, the input device, the output device and the memory are connected in sequence, the memory is used to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method described above.
A fourth aspect of the present invention provides a readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method described above.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The embodiments of the present invention avoid judging the attribute class directly with a target detector. To adapt to attribute classification scenarios under different conditions, they first determine the lesion and its location, obtain tracking sequences of the lesion target across different frames, and then classify the lesion attribute according to the lesion-target attribute class and the confidence corresponding to that class. This improves the diagnostic accuracy on difficult attributes, imitates the doctor's diagnostic process, and synthesizes multi-frame result information to give the final diagnosis, improving the accuracy of the lesion attribute diagnosis. In addition, in constructing the confidence mathematical model, first, the classifier's confidence is taken into account, so that targets with high attribute-classification confidence contribute more; second, the bounding-box modeling assumes that the target's shape should not change much between two consecutive frames, and if the change is too large, the probability that the two belong to the same target is lower, so the contribution to the overall confidence is lower; third, if the targets in two consecutive frames do not belong to the same attribute class, the probability that this attribute classification is correct is lower, so its contribution to the overall confidence should be reduced. These three points ensure the stability and continuity of the confidence of the final tracking sequence and can greatly improve the accuracy of the classification results.
Brief Description of the Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following more detailed description of the embodiments of the present application in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present application, constitute a part of the specification, serve together with the embodiments to explain the present application, and do not limit the present application. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 is a flowchart of a method for classifying ultrasound image lesion attributes based on a video sequence provided by an embodiment of the present invention;
Fig. 2 is an ultrasound image of a thyroid nodule provided by an embodiment of the present invention;
Fig. 3 is a block diagram of a system for classifying ultrasound image lesion attributes based on a video sequence provided by an embodiment of the present invention;
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a block diagram of the improved classifier structure provided by an embodiment of the present invention.
Detailed Description of the Embodiments
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application, and it should be understood that the present application is not limited by the exemplary embodiments described herein.
Exemplary Method
As shown in Fig. 1, this example discloses a method for classifying ultrasound image lesion attributes based on a video sequence, the method comprising the following steps:
S102: obtaining an ultrasound image to be tested, and extracting the lesion image at one or more locations in the ultrasound image to be tested as a lesion target.
Specifically, the ultrasound image to be tested in this example is obtained by scanning the human body with an ultrasound beam and receiving and processing the reflected signals to obtain an image of an internal organ or a specific region. The ultrasound image to be tested may be an ultrasound image of the thyroid, breast or another site. It may be acquired with ultrasound-based medical imaging equipment, for example by reading the video stream of the ultrasound device's display interface and decoding the stream frame by frame into a series of consecutive thyroid ultrasound images. It may also be acquired offline, for example by receiving thyroid ultrasound images pre-stored on a server or other device, or thyroid ultrasound images transmitted from another device. Fig. 2 shows a thyroid nodule whose attribute is hypoechoic.
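A minimal sketch of how consecutive frames might be pulled from an ultrasound video stream for the downstream steps; the file path and the generator-style interface are illustrative assumptions, not part of the patent.

```python
import cv2  # OpenCV is assumed to be available for video decoding


def read_ultrasound_frames(video_path):
    """Yield consecutive frames decoded from an ultrasound video file or stream.

    `video_path` is a hypothetical path (or capture-device index) used only for
    illustration; a real deployment might instead read the display interface of
    the ultrasound device.
    """
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:      # end of stream or read failure
                break
            yield frame     # BGR image as a NumPy array
    finally:
        cap.release()


# Example usage (the file name is hypothetical):
# for frame in read_ultrasound_frames("thyroid_scan.mp4"):
#     process(frame)
```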
Extracting the lesion image from the ultrasound image to be tested uses a target detection model based on deep learning, which first identifies the lesion class in the ultrasound image and then precisely locates the lesion position. The target detection model here is, for example, a detection network such as YOLOv5 or Faster R-CNN. It should be noted that this example needs the bounding boxes for the subsequent overall-confidence calculation, so the model must be trained with ultrasound image samples annotated with bounding boxes. For example, a thyroid nodule detector needs to be trained on a large number of samples containing thyroid-nodule bounding boxes before it can detect thyroid nodules.
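The patent does not prescribe a specific detector interface, so the sketch below simply assumes the detector returns per-frame boxes, lesion-type labels and scores (as a YOLO- or Faster R-CNN-style model would) and filters them into lesion targets; the 0.5 threshold and the dictionary field names are illustrative assumptions.

```python
def extract_lesion_targets(boxes, labels, scores, score_thresh=0.5):
    """Filter raw detector outputs into lesion targets.

    boxes  : list of (x1, y1, x2, y2) bounding boxes in pixel coordinates
    labels : list of lesion-type labels predicted by the detector
    scores : list of detection confidences in [0, 1]

    Returns a list of dicts, one per retained lesion target.
    """
    targets = []
    for box, label, score in zip(boxes, labels, scores):
        if score >= score_thresh:
            targets.append({
                "box": tuple(box),        # lesion position
                "type": label,            # lesion type, e.g. thyroid nodule
                "det_score": float(score),
            })
    return targets
```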
S104: tracking the lesion target in the ultrasound video to be tested, and obtaining multiple tracking sequences corresponding to the lesion target.
Specifically, a target tracker is used here to track the lesion targets. The target tracker tracks the multiple lesion targets obtained above; each lesion target becomes a tracking target of the target tracker, and the trajectory of the tracking target during the time interval t1 to t2 and the tracking results are obtained to form a tracking queue. It should be understood that the target detector is used to check whether the predicted position in the next frame is the target, and the new detection result is then used to update the training set and thus the target detector. When training the target detector, the target region is generally taken as a positive sample and the region surrounding the target as negative samples; of course, the closer a region is to the target, the more likely it is to be a positive sample.
Here, a target tracker such as IOU or KCF is used to track the detected lesion targets. When the IOU single-target tracker is used, it mainly judges whether two targets belong to the same target according to the overlap of their bounding boxes in consecutive frames. When the KCF single-target tracker is used, it mainly judges whether two targets belong to the same target according to the similarity of the target features inside the bounding boxes in consecutive frames. Since multiple different lesions may be detected in a single ultrasound image at the same moment, the single-target tracker can determine, during continuous ultrasound scanning, which lesion targets detected in successive frames are the same lesion target.
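As a rough illustration of the IOU-based association described above (a KCF tracker would instead compare appearance features inside the boxes), the sketch below greedily matches current-frame detections to existing tracks by bounding-box overlap; the 0.3 matching threshold and the track/dict layout are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter = max(0, min(ax2, bx2) - max(ax1, bx1)) * max(0, min(ay2, by2) - max(ay1, by1))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def associate(tracks, detections, iou_thresh=0.3):
    """Greedily assign each detection to the track whose last box overlaps it
    most; detections that match no track start new tracks."""
    for det in detections:
        best_track, best_iou = None, iou_thresh
        for track in tracks:
            overlap = iou(track[-1]["box"], det["box"])
            if overlap > best_iou:
                best_track, best_iou = track, overlap
        if best_track is not None:
            best_track.append(det)   # treated as the same lesion as the track
        else:
            tracks.append([det])     # a newly observed lesion
    return tracks
```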
S106: acquiring attribute information of the lesion target in the tracking sequence, and adding the attribute information of the lesion target to the tracking sequence.
Specifically, a classifier model is used here to identify the attribute class of the latest lesion target in the tracking sequence and to calculate the target attribute-class confidence corresponding to that attribute class. The classifier may be a mainstream target classification network such as ResNet, VGG or Inception. The attribute confidence of the lesion target in the ultrasound video, the bounding box of the lesion target and the attribute class of the lesion target are added to the tracking sequence.
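A hedged sketch of how an off-the-shelf backbone (here torchvision's ResNet-18, with a hypothetical fine-tuned checkpoint) could be applied to the cropped lesion region and its result appended to the tracking sequence; the crop preprocessing, the number of attribute classes and the field names are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms

NUM_ATTR_CLASSES = 3  # e.g. hyperechoic / hypoechoic / anechoic (assumed)

classifier = models.resnet18(num_classes=NUM_ATTR_CLASSES)
# classifier.load_state_dict(torch.load("attr_classifier.pt"))  # hypothetical checkpoint
classifier.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def classify_lesion(frame, target):
    """Classify the attribute of one tracked lesion and add the result to it."""
    x1, y1, x2, y2 = map(int, target["box"])
    crop = frame[y1:y2, x1:x2]                        # lesion region of the frame
    with torch.no_grad():
        logits = classifier(preprocess(crop).unsqueeze(0))
        probs = F.softmax(logits, dim=1)[0]
    conf, cls = torch.max(probs, dim=0)
    target["attr_class"] = int(cls)                   # Cj in the patent's notation
    target["attr_conf"] = float(conf)                 # Pj
    return target
```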
S108: acquiring the attribute information of the current lesion target in the tracking sequence, calculating the overall confidence corresponding to the lesion target in the tracking sequence, determining the attribute class of the lesion target according to the overall confidence, and outputting it.
Specifically, an overall-confidence mathematical model is constructed in advance, the overall confidence corresponding to the lesion target in the tracking sequence is then calculated, and the attribute class of the lesion target is judged according to the magnitude of the overall confidence, so that the attribute class of the current lesion target is output. It should be understood that judging according to the magnitude of the overall confidence can be implemented with a single threshold or with multiple thresholds, which improves the accuracy of the output attribute class of the current lesion target.
This example can adapt to attribute classification scenarios under different conditions. It first determines the lesion and its location, obtains tracking sequences of the lesion target across different frames, and then classifies the lesion attribute according to the lesion-target attribute class and the overall confidence corresponding to that class. This improves the diagnostic accuracy on difficult attributes, imitates the doctor's diagnostic process, and synthesizes multi-frame result information to give the final diagnosis, improving the stability of the lesion attribute diagnosis. A difficult attribute here means an attribute whose type is hard to judge from a single ultrasound image and must be judged by combining information from multiple frames, that is, by combining the tracking sequence of the lesion image with the corresponding attribute features. For example, for the internal calcification attribute, calcification points cannot be observed from some angles and must be assessed from different angles.
As a preferred embodiment, this example controls the accuracy of the attribute class of the lesion target by setting multiple thresholds.
First, the influence of the attribute information of the lesion target on the overall confidence is considered. An overall-confidence mathematical model is constructed based on the high attribute-classification confidence of the lesion target, whether the attribute classes of two consecutive frames in the ultrasound video differ, and the overlap of the bounding boxes of two consecutive frames in the ultrasound video; the lesion-target attribute class and the overall confidence corresponding to that class are then calculated.
Specifically, assume that in step S106 the target tracker has output tracking sequences corresponding to N lesion targets, T = {t1, t2, ..., tN}, where T denotes the set of target tracking sequences and the number of targets in each tracking sequence is D = {d1, d2, ..., dN}. According to the results of the target detector and the target classifier, each target in a tracking sequence carries three pieces of information: the lesion-target attribute class Cj, the lesion-target bounding box bj, and the lesion-target attribute-class confidence Pj, where j denotes the j-th target in a given tracking sequence.
The construction of the confidence mathematical model then considers the following:
(1) The higher the target attribute-class confidence, the higher the accuracy of the attribute-class judgment, so the confidence is transformed with a logarithm; since the resulting term must not be negative, a max operation is applied to the result.
Here, the resulting term denotes the lesion-target attribute-confidence weight, α is a fixed weight value, and Pj denotes the lesion-target attribute-class confidence.
(2) The rate of change of the lesion target's bounding box over several consecutive video frames should not be large. Here IOU is the Intersection over Union of two bounding boxes, i.e., the ratio of the intersection to the union of the two boxes, and is mainly used to quantify the overlap of the target bounding boxes in consecutive frames; β is a fixed weight value, and the sum of α and β is 1.
bj-1 denotes the coordinate information (x1, y1, x2, y2) of the (j-1)-th target bounding box in the lesion-target tracking sequence, where x1, y1 are the horizontal and vertical coordinates of the upper-left corner of the bounding box and x2, y2 are the horizontal and vertical coordinates of the lower-right corner; bj denotes the coordinate information of the j-th target bounding box in the lesion-target sequence; the resulting term denotes the lesion-target overlap weight.
(3) If the attribute classes of the lesion target in consecutive video frames are the same, that class is more likely to be the desired output attribute class, whereas a target attribute that does not appear continuously is less likely to be the desired output attribute. The contributions of the two are therefore distinguished as follows.
cj-1 denotes the attribute class of the (j-1)-th target in the target sequence, and the resulting term denotes the assignment indicating whether the lesion target appears consecutively in the tracking sequence.
If an attribute class does not reappear in the next frame, the overall confidence is reduced by a fixed value γ.
Combining the above three points: the first confidence term takes the classifier's confidence into account, so that targets with high attribute-classification confidence contribute more; the second, bounding-box term assumes that the target's shape should not change much between two consecutive frames, and if the change is too large, the probability that the two belong to the same target is lower, so the contribution to the overall confidence is lower; the third term reduces the contribution when the targets in two consecutive frames do not belong to the same attribute class, since the probability that this attribute classification is correct is then lower. These three points ensure the stability and continuity of the confidence of the final tracking sequence and can greatly improve the accuracy of the classification results.
The overall confidence of the target sequence is then modeled as follows:
where wj denotes the overall confidence corresponding to the lesion target in the tracking sequence; the other terms denote, respectively, the assignment indicating whether the lesion target appears consecutively in the tracking sequence, the lesion-target attribute-confidence weight, and the lesion-target overlap weight; and γ denotes the weight bias.
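The patent's exact expressions appear in equations that are not reproduced in this text, so the sketch below only illustrates one plausible form consistent with the description: a log-transformed, non-negative attribute-confidence term weighted by α, an IoU term weighted by β (with α + β = 1), and a fixed penalty γ whenever the attribute class is not repeated in consecutive frames, accumulated over a tracking sequence. Every functional detail beyond what the description states is an assumption.

```python
import math

ALPHA = 0.6   # attribute-confidence weight (assumed; the text only says alpha + beta = 1)
BETA = 0.4    # overlap weight (assumed)
GAMMA = 1.0   # penalty when the attribute class is not repeated (assumed value)


def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter = max(0, min(ax2, bx2) - max(ax1, bx1)) * max(0, min(ay2, by2) - max(ay1, by1))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def sequence_confidence(track, attr_class):
    """Accumulate an overall confidence w for one attribute class over a track.

    Each element of `track` is assumed to carry `attr_class` (Cj), `box` (bj)
    and `attr_conf` (Pj), as produced by the earlier steps.
    """
    w = 0.0
    for j, target in enumerate(track):
        if target["attr_class"] != attr_class:
            w -= GAMMA                      # class did not repeat: apply the penalty
            continue
        # attribute-confidence term, log-transformed and clipped at zero so that
        # only confident predictions contribute (assumed functional form)
        conf_term = ALPHA * max(0.0, 1.0 + math.log(max(target["attr_conf"], 1e-6)))
        # overlap term: boxes of the same lesion should not change abruptly
        overlap_term = BETA * iou(track[j - 1]["box"], target["box"]) if j > 0 else 0.0
        w += conf_term + overlap_term
    return w
```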
It should be noted that after the confidences of the different attributes are calculated, the attribute class with the largest overall confidence is selected for output. After the overall confidence is obtained, thresholds need to be set for comparison. In this example, the first threshold thr can be set according to the attribute class of the lesion target, for example 5, 6, 7 or 8; the second threshold is 0.
The overall confidence is compared with the first threshold and the second threshold, where the first threshold is greater than the second threshold.
If the overall confidence is greater than the first threshold, the lesion-target attribute class corresponding to the overall confidence is placed in the output queue and the result is output.
If the overall confidence is less than the first threshold but greater than the second threshold, the lesion-target attribute class corresponding to the overall confidence is retained in the tracking sequence to facilitate later overall-confidence calculation.
If the overall confidence is less than the second threshold, the lesion-target attribute class corresponding to the overall confidence is deleted from the tracking sequence.
Specifically, suppose that after the above overall-confidence calculation, the attribute target of class 0 has not appeared for several consecutive frames and its attribute confidence is less than 0; the attribute target of class 1 has appeared in three consecutive frames and its attribute confidence is 5.1; the attribute target of class 2 has not appeared for three consecutive frames and its attribute confidence is 1.2. After the overall judgment, class 1 has the highest confidence, which is greater than the threshold 3.0, so it is output and displayed. The confidence of class 0 is already below 0, so all targets of this attribute are deleted from the queue. The confidence of class 2 is less than the threshold 3.0 but greater than 0, so it is still kept in the queue. After this processing, the first target sequence queue is updated to {1, 1, 2, 1, 1, 1}.
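A brief sketch of the two-threshold rule illustrated by the example above, using the example values thr1 = 3.0 and thr2 = 0; the per-class confidence dictionary and the track layout are the same assumed structures as in the earlier sketches.

```python
def update_track(track, class_confidences, thr1=3.0, thr2=0.0):
    """Apply the two-threshold rule to one tracking sequence.

    `class_confidences` maps each attribute class observed in the track to its
    overall confidence w. Returns (output_class, updated_track); output_class
    is None when no class exceeds the first threshold.
    """
    if not class_confidences:
        return None, track
    best_class = max(class_confidences, key=class_confidences.get)
    output_class = best_class if class_confidences[best_class] > thr1 else None
    # targets whose class confidence fell below thr2 are dropped; anything
    # between thr2 and thr1 is kept for later confidence accumulation
    kept = [t for t in track
            if class_confidences.get(t["attr_class"], 0.0) >= thr2]
    return output_class, kept
```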
As yet another preferred embodiment, the classifier model in this example differs from a traditional classifier. A traditional classifier can only classify a single attribute, whereas here a multi-task classifier improved on the basis of a traditional classifier is used, which can classify different attributes at the same time, for example the two attributes of nodule echo and nodule margin, where nodule echo is subdivided into hyperechoic, hypoechoic and anechoic, and nodule margin is divided into clear and unclear.
As shown in Fig. 5, the classifier structure is improved as follows, where CNN is the feature extraction network of the recognition network, the same as the feature extraction network of a traditional deep classifier, and FC is the fully connected layer, the same as the fully connected layer of a traditional deep classifier, with the class output at the end. To recognize two attributes, such as nodule margin and nodule echo, the classifier before the improvement needs two separate CNN networks trained separately with the data of the respective attributes. The improved classifier needs only one CNN network and can be trained with both kinds of attribute data at the same time. In this way the CNN network of the improved classifier can learn the information of both kinds of data, and experiments also show that, with the same amount of data, the improved classifier is 6% more accurate than the classifier before the improvement on the respective attributes.
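Fig. 5 is not reproduced here, but the described structure, one shared CNN feature extractor with a separate fully connected head per attribute task, can be sketched as follows; the backbone choice, feature dimension and class counts are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models


class MultiAttributeClassifier(nn.Module):
    """Shared CNN backbone with one FC head per lesion-attribute task."""

    def __init__(self, n_echo_classes=3, n_margin_classes=2):
        super().__init__()
        backbone = models.resnet18()           # assumed backbone; any CNN works
        feat_dim = backbone.fc.in_features     # 512 for resnet18
        backbone.fc = nn.Identity()            # keep only the feature extractor
        self.backbone = backbone
        self.echo_head = nn.Linear(feat_dim, n_echo_classes)      # hyper/hypo/anechoic
        self.margin_head = nn.Linear(feat_dim, n_margin_classes)  # clear / unclear

    def forward(self, x):
        feats = self.backbone(x)
        return self.echo_head(feats), self.margin_head(feats)


# Both heads would be trained jointly, e.g. with a summed cross-entropy loss:
# loss = ce(echo_logits, echo_labels) + ce(margin_logits, margin_labels)
```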
In this example, the network structure of the traditional classifier model is modified so that it can recognize multiple tasks simultaneously. The advantage of this is that the different tasks can promote each other, enhancing the ability of the backbone network in the classifier to extract image features and yielding mutually reinforced recognition ability.
Exemplary System
As shown in Fig. 3, a system for classifying ultrasound image lesion attributes based on a video sequence comprises:
a lesion identification module 20, configured to obtain an ultrasound image to be tested and extract the lesion image at one or more locations in the ultrasound image to be tested as a lesion target;
a lesion tracking module 30, configured to track the lesion target in the ultrasound video to be tested and obtain multiple tracking sequences corresponding to the lesion target;
an attribute identification module 40, configured to acquire attribute information of the lesion target in the tracking sequence and add the attribute information of the lesion target to the tracking sequence;
a judgment module 50, configured to acquire the attribute information of the current lesion target in the tracking sequence, calculate the overall confidence corresponding to the lesion target in the tracking sequence, determine the attribute class of the lesion target according to the overall confidence, and output it.
The judgment module 50 includes a first judgment unit, a second judgment unit and a third judgment unit;
the first judgment unit is configured to compare the overall confidence with the first threshold, where the first threshold is greater than the second threshold, and, if the overall confidence is greater than the first threshold, place the lesion-target attribute class corresponding to the overall confidence in the output queue;
the second judgment unit is configured to compare the overall confidence with the first threshold and the second threshold and, if the overall confidence is less than the first threshold but greater than the second threshold, retain the lesion-target attribute class corresponding to the overall confidence in the tracking sequence;
the third judgment unit is configured to compare the overall confidence with the second threshold and, if the overall confidence is less than the second threshold, delete the lesion-target attribute class corresponding to the overall confidence from the tracking sequence.
In order to adapt to attribute classification scenarios under different conditions, this example first determines the lesion and its location, obtains tracking sequences of the lesion target across different frames, and then classifies the lesion attribute according to the lesion-target attribute class and the confidence corresponding to that class, thereby improving the diagnostic accuracy on difficult attributes, imitating the doctor's diagnostic process, synthesizing multi-frame result information to give the final diagnosis, and improving the stability of the lesion attribute diagnosis.
Exemplary Electronic Device
Hereinafter, an electronic device according to an embodiment of the present application is described with reference to Fig. 4. The electronic device may be the movable device itself or a stand-alone device independent of it; the stand-alone device can communicate with movable devices to receive the collected input signals from them and send the selected target decision behavior to them.
Fig. 4 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in Fig. 4, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may run the program instructions to implement the decision-behavior decision methods of the embodiments of the present application described above and/or other desired functions.
In one example, the electronic device 10 may further include an input device 13 and an output device 14, and these components are interconnected by a bus system and/or another form of connection mechanism (not shown). For example, the input device 13 may include various devices such as cameras, CT, MRI and ultrasound imaging equipment, and may also include, for example, a keyboard, a mouse, and the like. The output device 14 may include, for example, a display, a speaker, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in Fig. 4, and components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other appropriate components depending on the specific application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, an embodiment of the present application may also be a computer program product comprising computer program instructions that, when run by a processor, cause the processor to perform the steps of the decision-behavior decision methods according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer program product may write the program code for performing the operations of the embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
In addition, an embodiment of the present application may also be a computer-readable storage medium on which computer program instructions are stored, the computer program instructions, when run by a processor, causing the processor to perform the steps of the decision-behavior decision methods according to the various embodiments of the present application described in the "Exemplary Method" section of this specification.
The computer-readable storage medium may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example but not limited to, electrical, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatuses or devices, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The basic principles of the present application have been described above in conjunction with specific embodiments. However, it should be pointed out that the merits, advantages, effects and the like mentioned in the present application are merely examples and not limitations, and these merits, advantages and effects should not be considered necessary for each embodiment of the present application. In addition, the specific details disclosed above are only for the purposes of illustration and ease of understanding, rather than limitation, and the above details do not require the present application to be implemented with these specific details.
The block diagrams of the devices, apparatuses, equipment and systems involved in the present application are only illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including", "comprising" and "having" are open-ended terms meaning "including but not limited to" and may be used interchangeably with it. The words "or" and "and" as used herein refer to the word "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used herein refers to the phrase "such as but not limited to" and may be used interchangeably with it.
It should also be pointed out that, in the apparatuses, devices and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present application.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present application. Therefore, the present application is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present application to the forms disclosed herein. Although a number of exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210329032.0A CN114693640B (en) | 2022-03-31 | 2022-03-31 | Ultrasonic image focus attribute classification processing method, system, electronic equipment and storage medium based on video sequence |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114693640A (en) | 2022-07-01 |
| CN114693640B (en) | 2025-05-23 |
Family
ID=82141314
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210329032.0A Active CN114693640B (en) | 2022-03-31 | 2022-03-31 | Ultrasonic image focus attribute classification processing method, system, electronic equipment and storage medium based on video sequence |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114693640B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117495757A (en) * | 2022-07-22 | 2024-02-02 | 数坤(深圳)智能网络科技有限公司 | Ultrasound image regularization method, device, equipment and computer-readable storage medium |
| WO2024093099A1 (en) * | 2022-11-01 | 2024-05-10 | 上海杏脉信息科技有限公司 | Thyroid ultrasound image processing method and apparatus, medium and electronic device |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112102372A (en) * | 2020-09-16 | 2020-12-18 | 上海麦图信息科技有限公司 | Cross-camera track tracking system for airport ground object |
| CN113344854A (en) * | 2021-05-10 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Breast ultrasound video-based focus detection method, device, equipment and medium |
| CN113657219A (en) * | 2021-08-02 | 2021-11-16 | 上海影谱科技有限公司 | A video object detection and tracking method, device and computing device |
| CN113808105A (en) * | 2021-09-17 | 2021-12-17 | 合肥合滨智能机器人有限公司 | Focus detection method based on ultrasonic scanning |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114693640B (en) | 2025-05-23 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||