Disclosure of Invention
The application provides an oral health detection system, which aims to solve the problem in the prior art that users cannot perform self-service observation of tooth health based on natural images of the oral cavity of a patient. The application further provides an oral health detection method and device, an oral image registration model construction method and device, and an electronic device.
The present application provides an oral health detection system comprising:
a client, configured to collect oral image data of a user as oral natural image data, send an oral image fusion display request for the oral natural image data to a server, and display fused image data, returned by the server, in which teeth in the user's oral medical image data correspond to teeth in the natural image data;
a server, configured to receive the request, generate the fused image data from the user's oral medical image data and the natural image data through an oral image registration model, and return the fused image data to the client.
The application also provides an oral health detection method, which comprises the following steps:
collecting oral image data of a user as oral natural image data;
sending an oral image fusion display request for the oral natural image data to a server;
and displaying fused image data, returned by the server, in which teeth in the user's oral medical image data correspond to teeth in the natural image data, so that the user can observe oral health information from the fused image data.
Optionally, the natural image data includes local tooth image data;
the medical image data includes global tooth image data;
and the fused image data uses the medical image data as an oral background model, with the local tooth image data displayed at the corresponding position in the medical image data.
Optionally, the fused image data includes tooth change information of different periods.
The application also provides an oral health detection method, which comprises the following steps:
receiving an oral image data fusion display request for oral natural image data of a target user;
generating, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data;
and returning the fused image data to the requesting party.
Optionally, first tooth feature data in the oral natural image data and second tooth feature data in the oral medical image data of the user are extracted through a tooth feature extraction network included in the oral image registration model;
fused tooth feature data is determined from the first tooth feature data and the second tooth feature data through a tooth feature fusion network included in the oral image registration model;
oral image registration transformation parameters are determined from the fused tooth feature data through a registration transformation parameter determination network included in the oral image registration model;
image transformation processing is performed on the natural image data according to the registration transformation parameters through an oral image transformation network included in the oral image registration model;
and the fused image data is generated from the transformed natural image data and the medical image data through an oral image fusion network included in the oral image registration model.
Optionally, the first tooth feature data is extracted from the natural image data through a first tooth feature extraction sub-network included in the tooth feature extraction network;
and the second tooth feature data is extracted from the medical image data through a second tooth feature extraction sub-network included in the tooth feature extraction network.
Optionally, the registration transformation parameters include rotation parameters, translation parameters, and scaling parameters.
Optionally, the natural image data comprises local dental image data and the medical image data comprises global dental image data.
Optionally, the method further comprises:
querying the medical image data of the target user from an oral image database.
Optionally, the method further comprises:
receiving an image data submission request for oral medical image data of a target user;
and storing the correspondence record between the target user and the medical image data in an oral image database.
The application also provides an oral health detection device, comprising:
a natural image acquisition unit, configured to collect oral image data of a user as oral natural image data;
a request sending unit, configured to send an oral image fusion display request for the oral natural image data to a server;
and a fused image display unit, configured to display fused image data, returned by the server, in which teeth in the user's oral medical image data correspond to teeth in the natural image data, so that the user can observe oral health information from the fused image data.
The present application also provides an electronic device including:
a processor; and
a memory, configured to store a program implementing the oral health detection method; after the device is powered on and runs the program of the method through the processor, the following steps are performed: collecting oral image data of the user as oral natural image data; sending an oral image fusion display request for the oral natural image data to a server; and displaying fused image data, returned by the server, in which teeth in the user's oral medical image data correspond to teeth in the natural image data, so that the user can observe oral health information from the fused image data.
The application also provides an oral health detection device, comprising:
a request receiving unit, configured to receive an oral image data fusion display request for oral natural image data of a target user;
an image fusion unit, configured to generate, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data;
and a fused image returning unit, configured to return the fused image data to the requesting party.
The present application also provides an electronic device including:
a processor; and
a memory, configured to store a program implementing the oral health detection method; after the device is powered on and runs the program of the method through the processor, the following steps are performed: receiving an oral image data fusion display request for oral natural image data of a target user; generating, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data; and returning the fused image data to the requesting party.
Optionally, the device includes a smart phone, a smart speaker, a tablet computer, or a personal computer.
The application also provides a method for constructing an oral image registration model, comprising the following steps:
determining a training data set, wherein the training data includes oral medical image data, oral natural image data, and annotation data of the correspondence between teeth in the natural images and teeth in the medical images;
constructing a network structure of the oral image registration model;
and training the network parameters of the model on the training data set to obtain the model.
Optionally, the oral image registration model includes a tooth feature extraction network for extracting first tooth feature data in the oral natural image data and second tooth feature data in the oral medical image data of the user;
the oral image registration model includes a tooth feature fusion network for determining fused tooth feature data from the first tooth feature data and the second tooth feature data;
the oral image registration model includes a registration transformation parameter determination network for determining oral image registration transformation parameters from the fused tooth feature data;
the oral image registration model includes an oral image transformation network for performing image transformation processing on the natural image data according to the registration transformation parameters;
and the oral image registration model includes an oral image fusion network for generating the fused image data from the transformed natural image data and the medical image data.
Optionally, the tooth feature extraction network includes a first tooth feature extraction sub-network for extracting the first tooth feature data from the natural image data;
and the tooth feature extraction network includes a second tooth feature extraction sub-network for extracting the second tooth feature data from the medical image data.
The application also provides an oral image registration model construction device, which comprises:
a training data determining unit, configured to determine a training data set, wherein the training data includes oral medical image data, oral natural image data, and annotation data of the correspondence between teeth in the natural images and teeth in the medical images;
a network construction unit, configured to construct a network structure of the oral image registration model;
and a network training unit, configured to train the network parameters of the model on the training data set to obtain the model.
The present application also provides an electronic device including:
a processor; and
a memory, configured to store a program implementing the oral image registration model construction method; after the device is powered on and runs the program of the method through the processor, the following steps are performed: determining a training data set, wherein the training data includes oral medical image data, oral natural image data, and annotation data of the correspondence between teeth in the natural images and teeth in the medical images; constructing a network structure of the oral image registration model; and training the network parameters of the model on the training data set to obtain the model.
The present application also provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the various methods described above.
The application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the various methods described above.
Compared with the prior art, the application has the following advantages:
In the oral health detection system provided by the embodiments of the application, the client collects oral natural image data of a user and sends an oral image data fusion display request for the oral natural image data to the server; the server generates, through an oral image registration model, image data in which the user's oral medical image data and the natural image data are fused with each other; the client then displays the fused image data returned by the server so that the user can observe oral health information from it. This processing mode allows the user to display, through a client (such as a smart phone or smart speaker), casually taken oral natural images fused with the user's professional medical images, and to intuitively understand tooth health changes through the fused image. The timeliness of oral health detection can thus be effectively improved, improving the user experience.
The oral image registration model construction method provided by the application includes: determining a training data set, wherein the training data includes oral medical image data, oral natural image data, and annotation data of the correspondence between teeth in the natural images and teeth in the medical images; constructing a network structure of the oral image registration model; and training the network parameters of the model on the training data set to obtain the model. In this processing mode, the oral image registration model is learned from the training data; the model can generate fused images in which teeth in the user's casually taken oral natural images correspond to teeth in the professional medical images, and the user can intuitively understand tooth health changes through the fused images.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The application provides an oral health detection system, an oral health detection method and device, an oral image registration model construction method and device, and an electronic device. The schemes are described in detail one by one in the embodiments below.
First embodiment
Referring to fig. 1, a block diagram of an embodiment of an oral health detection system according to the present application is shown. The system comprises a server 1 and a client 2.
The server 1 may be a server deployed on a cloud server, or may be a server dedicated to implementing oral health management, and may be deployed in a data center. The server may be a cluster server or a single server.
The client 2 includes, but is not limited to, mobile communication devices such as mobile phones or smart phones, and further includes clients such as smart toothbrushes, smart speakers, personal computers, and tablet computers (e.g., iPad).
Please refer to fig. 2, which is a schematic diagram of an application scenario of the oral health detection system of the present application. The server 1 and the client 2 can be connected through a network; for example, the client 2 can be networked through WiFi or the like. A user, such as an oral disease patient, can aim the image acquisition device of a client such as a smart speaker or smart phone at his or her oral cavity to collect oral natural image data. The client sends the collected image data to the server; the server generates, through the oral image registration model, fused image data in which teeth in the oral natural image correspond to teeth in the oral medical image data, and returns the fused image data to the client for display, so that the user can view it.
Please refer to fig. 3, which is a schematic diagram of the interaction between devices in an embodiment of the oral health detection system of the present application. In this embodiment, the client collects oral natural image data of a user, sends an oral image fusion display request for the oral natural image data to the server, and displays fused image data, returned by the server, in which teeth in the user's oral medical image data correspond to teeth in the natural image data; the server receives the request, generates the fused image data from the user's oral medical image data and the natural image data through an oral image registration model, and returns the fused image data to the client.
The oral image fusion display request includes at least the oral natural image data, and may also include information such as a user identifier. The oral natural image data can be multiple frames of image data from an oral video: by shooting the oral cavity from multiple angles, the collected multi-frame natural image data can cover comprehensive oral information, such as images of all teeth. The oral natural image data may also be image data obtained by a single shot of a certain part of the oral cavity.
In one example, the client is a terminal device such as a smart phone or smart speaker loaded with an oral health management application (APP). After a user opens the APP, the user can start the collection of oral natural image data through an operation option provided by the APP (such as an "oral modeling detection" option); for example, the CMOS/CCD camera of the smart phone or smart speaker can shoot N consecutive frames of the oral cavity for oral modeling detection.
In another example, the client is a terminal device such as a smart phone or smart speaker connected to a smart toothbrush (wirelessly or by wire). The user can start the image acquisition function of the smart toothbrush when brushing; if tooth bleeding is found, the user can photograph that part. The oral natural image data can be acquired through a camera arranged on the surface of the smart toothbrush and sent to the client, which forwards the data to the server.
In implementations, the user can register in advance, through the client, for the oral health management service provided by the server and become a registered user of the service. The oral image fusion display request sent by the client to the server may include the oral natural image data and a user identifier.
As shown in fig. 2, since the client's image capture device is typically an ordinary camera, any single frame of oral natural image data it captures is usually local image data of the oral cavity, and the user typically photographs the teeth considered problematic.
After the server receives the request, the fused image data can be generated according to the oral medical image data and the natural image data of the user through an oral image registration model, and the fused image data is returned to the client.
The oral medical image data can be oral image data captured by professional oral medical detection equipment, such as an oral X-ray film. The server can query the medical image data of the user from an oral image database according to the user identifier carried in the request.
In implementations, the server can also receive an image data submission request for the oral medical image data of the user, and store the correspondence record between the user and the medical image data in the oral image database for later use in the oral modeling stage.
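The store-and-query flow above can be sketched with a minimal in-memory database. The class and method names here are illustrative assumptions for this sketch, not part of the described system, and a real deployment would use a persistent database.

```python
# Minimal in-memory sketch of the oral image database described above.
# Class and method names are illustrative assumptions.
class OralImageDatabase:
    def __init__(self):
        self._records = {}  # user identifier -> list of medical image records

    def store(self, user_id, medical_image):
        # Store a correspondence record between a user and medical image data.
        self._records.setdefault(user_id, []).append(medical_image)

    def query(self, user_id):
        # Query the medical image data of a target user; empty list if none.
        return self._records.get(user_id, [])

db = OralImageDatabase()
db.store("user42", {"modality": "X-ray", "path": "xray_001.png"})
print(len(db.query("user42")))  # 1
```

On a query miss the sketch returns an empty list, which the server could map to an error response telling the user to submit medical image data first.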
The oral image registration model may be a simple model that performs image registration based on tooth shape. It may also be an artificial intelligence model that performs oral image registration in a manner similar to human intelligence. The network structure of the oral image registration model can include common neural network structures such as DNN, CNN, and RNN. The model may include a feature data extraction network, an image registration network, and so on.
In one example, the oral image registration model performs fusion registration on the two input modalities in the following manner. First, an optimization algorithm (such as downhill simplex or SGD) optimizes the image registration against a manually defined similarity measure to obtain a rigid-body transformation matrix and generate a transformed image; then, the transformed image and the X-ray image are fused to obtain the final fused image. This approach does not use artificial intelligence algorithms, cannot form an end-to-end solution, and is limited in registration efficiency and accuracy.
Please refer to fig. 4, which is a schematic diagram of an oral image registration model of an embodiment of the oral health detection system of the present application. In the embodiment, the oral image registration model comprises a tooth feature extraction network, a tooth feature fusion network, a registration transformation parameter determination network, an oral image transformation network and an oral image fusion network. The model can be used for extracting the characteristics of two images independently, carrying out characteristic fusion in a deep level, and then obtaining registration transformation parameters by using regression.
As can be seen from fig. 4, due to the limitation of the image capture device, the casually taken image (i.e., the oral natural image) is generated under a macro lens and mainly captures tooth details, so it usually contains only a few teeth or even a single tooth. This places high demands on image registration: the corresponding tooth position in the global image (i.e., the oral medical image, such as an X-ray film) must be found from the features of the teeth in the casually taken image, such as shape and lesions.
The server can extract the first tooth feature data in the oral natural image data and the second tooth feature data in the oral medical image data of the user through the tooth feature extraction network. The first tooth feature data includes tooth feature data extracted from the oral natural image data; the second tooth feature data includes tooth feature data extracted from the oral medical image data.
In one example, the first tooth feature data may be extracted from the natural image data through a first tooth feature extraction sub-network included in the tooth feature extraction network, and the second tooth feature data may be extracted from the medical image data through a second tooth feature extraction sub-network included in the tooth feature extraction network. The two sub-networks extract tooth feature data from oral images of two different modalities.
In this embodiment, the first and second tooth feature extraction sub-networks adopt a Residual Network (ResNet) structure, performing CNN convolution on the input image data with a residual mechanism to avoid vanishing gradients in deep layers. For example, the two input images enter their respective feature extraction networks, which can be common feature extraction CNN models such as ResNet50, ResNet101, or DenseNet, to extract deep features.
In specific implementations, the first and second tooth feature extraction sub-networks may also adopt other neural network structures, such as a multi-layer CNN convolutional network or a neural network based on an attention mechanism. Alternatively, the input oral medical image and the natural image can be concatenated together, and the tooth feature data extracted through a single tooth feature extraction network.
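The residual mechanism mentioned above (adding the input back to a block's output so gradients can bypass deep layers) can be illustrated with a toy element-wise block. The affine `toy_layer` is a purely illustrative stand-in for real convolution layers; the weights and input values are assumptions for this sketch.

```python
def relu(v):
    # Element-wise ReLU activation.
    return [max(0.0, x) for x in v]

def toy_layer(v, w=0.5, b=0.1):
    # Stand-in for a convolution layer: element-wise affine transform.
    return [w * x + b for x in v]

def residual_block(v):
    # Core ResNet idea: output = activation(F(v) + v).
    # The identity shortcut lets gradients flow even when F is near zero.
    transformed = toy_layer(toy_layer(v))
    return relu([t + x for t, x in zip(transformed, v)])

features = residual_block([1.0, -2.0, 3.0])
print([round(f, 2) for f in features])  # [1.4, 0.0, 3.9]
```

The shortcut means the block only has to learn the residual correction on top of the identity, which is what makes very deep feature extractors such as ResNet101 trainable.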
After the server extracts the first and second tooth feature data, the fused tooth feature data can be determined from them through the tooth feature fusion network included in the oral image registration model.
The tooth feature fusion network can concatenate (concat) the feature maps output by the CNN convolution layers of the first and second tooth feature extraction sub-networks to obtain the fused tooth feature data. In implementations, a deeper fusion mode may also be adopted to fuse the first and second tooth feature data. The fused tooth feature data may include feature data representing the tooth correspondence between the oral medical image data and the natural image data.
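The concatenation-style fusion described above can be sketched in a few lines. The short feature vectors below are toy stand-ins for the real CNN feature maps, which would be multi-channel tensors.

```python
def fuse_features(first_tooth_features, second_tooth_features):
    # Concatenate ("concat") the two modality-specific feature vectors
    # along the channel dimension, as the fusion network does with the
    # outputs of the two feature-extraction sub-networks.
    return first_tooth_features + second_tooth_features

natural_feat = [0.2, 0.7, 0.1]  # from the natural-image sub-network (toy values)
medical_feat = [0.9, 0.3]       # from the medical-image sub-network (toy values)
fused = fuse_features(natural_feat, medical_feat)
print(fused)  # [0.2, 0.7, 0.1, 0.9, 0.3]
```

The downstream regression network then sees both modalities at once, which is what lets it learn the correspondence between teeth in the two images.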
The server can determine the oral image registration transformation parameters from the fused tooth feature data through the registration transformation parameter determination network included in the oral image registration model. The registration transformation parameters include, but are not limited to, rotation parameters, translation parameters, and scaling parameters.
In a specific implementation, the registration transformation parameter determination network can process the fused tooth feature data through the convolution and pooling layers of a 3-layer CNN, followed by a final fully connected layer that regresses 7 parameters: the rotation angles in three-dimensional space (rotation x, y, z), the scaling in two-dimensional space (scale x, y), and the translation in two-dimensional space (shift x, y). These construct the image transformation relation of an affine matrix, which is then input to the image transformation network.
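The seven regressed parameters can be assembled into the affine transformation as follows. The composition order (R·S for the matrix, plus a separate translation) is an assumption of this sketch, since the application does not spell it out; the z scale is fixed at 1 and the z translation at 0 as described for the 2D projective case.

```python
import math

def affine_from_params(rx, ry, rz, sx, sy, tx, ty):
    # Compose rotation (3D angles) and scale into a 3x3 matrix M = R * S,
    # with a 3x1 translation T, matching R*S*[image] + T = [image'].
    cx, sx_ = math.cos(rx), math.sin(rx)
    cy, sy_ = math.cos(ry), math.sin(ry)
    cz, sz_ = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx_], [0, sx_, cx]]
    Ry = [[cy, 0, sy_], [0, 1, 0], [-sy_, 0, cy]]
    Rz = [[cz, -sz_, 0], [sz_, cz, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    R = matmul(matmul(Rz, Ry), Rx)       # assumed Z-Y-X composition order
    S = [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]  # z scale fixed at 1
    T = [tx, ty, 0]                          # z translation fixed at 0
    return matmul(R, S), T

M, T = affine_from_params(0.0, 0.0, 0.0, 1.0, 1.0, 5.0, -3.0)
# With zero rotation and unit scale, M is the 3x3 identity and T = [5.0, -3.0, 0].
```

In the real network these seven values come out of the fully connected layer; here they are passed in directly to show how the affine matrix is built from them.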
It should be noted that the oral image registration model may be an unsupervised network model, for which the NGF (normalized gradient fields) loss function may be selected to better handle multimodal data.
The server can perform image transformation processing on the natural image data according to the registration transformation parameters through the oral image transformation network included in the oral image registration model: a spatial transform is applied with the registration parameters to obtain the transformed position relationship of the captured image. Specifically, the oral image transformation network can use the constructed affine matrix to transform the casually taken image, generating the new transformed image with bilinear interpolation, and then input the transformed image to the image fusion network.
The image transformation is the process of interpolating an image according to the affine matrix, and may be represented by the formula R·S·[image] + T = [image'], where R is a 3×3 rotation matrix; S is a 3×3 diagonal scaling matrix whose z component is always 1 (since the transformation is a 2D projective transformation); T is a 3×1 translation matrix whose z component is always 0; image denotes the pre-registration image coordinates; and image' denotes the transformed image coordinates. The positional transformation relation is obtained through the formula, and the color or gray value at each position of the new image is then obtained through image interpolation.
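The coordinate relation R·S·[image] + T = [image'] can be applied to a single homogeneous coordinate as follows; the bilinear interpolation of pixel values is deliberately omitted, and the sample matrix and point are assumptions for this sketch.

```python
def transform_point(M, T, point):
    # Apply [image'] = M * [image] + T, where M is the combined 3x3
    # R*S matrix and T the 3x1 translation. `point` is a homogeneous
    # coordinate (x, y, 1); interpolation of pixel values is omitted.
    x, y, z = point
    return [sum(M[i][k] * p for k, p in enumerate((x, y, z))) + T[i]
            for i in range(3)]

# Pure translation: identity M, shift by (5, -3); z stays untouched.
M = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [5, -3, 0]
print(transform_point(M, T, (10, 20, 1)))  # [15, 17, 1]
```

A full implementation would map every pixel of the output grid back through the inverse of this relation and bilinearly interpolate the source image at the resulting fractional coordinates.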
The server can generate the fused image data from the transformed natural image data and the medical image data through the oral image fusion network included in the oral image registration model. The image fusion pastes the transformed natural image onto the standard medical image. Specifically, the image fusion network can obtain the position information from the affine matrix and align the transformed image to the corresponding position on the medical image. In addition, the edges of the transformed image may be blurred as it is attached to the medical image so that the result appears smooth, producing the final result shown in fig. 5.
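The paste step can be sketched as copying the transformed local image into the medical image at the aligned offset. The edge blurring described above is reduced here to a single blend weight, and the toy gray-value grids are assumptions for the sketch.

```python
def paste(background, patch, top, left, alpha=1.0):
    # Paste `patch` (2D list of gray values) onto `background` at
    # (top, left), blending with weight `alpha` (1.0 = opaque paste).
    out = [row[:] for row in background]
    for i, row in enumerate(patch):
        for j, v in enumerate(row):
            y, x = top + i, left + j
            out[y][x] = alpha * v + (1 - alpha) * out[y][x]
    return out

medical = [[0] * 4 for _ in range(4)]  # toy 4x4 medical image (X-ray)
natural = [[9, 9], [9, 9]]             # toy 2x2 transformed natural image
fused = paste(medical, natural, top=1, left=1)
print(fused[1])  # [0, 9.0, 9.0, 0]
```

For the smoothing effect described in the text, a real implementation would lower `alpha` toward the patch borders (a feathered mask) instead of using one constant weight.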
In particular embodiments, the fused image data may include tooth change information for different periods. Tooth changes may be changes in tooth growth or damage, which the user perceives by observing images of the same tooth taken at different times. It should be noted that the system uses the medical image as the template for modeling, and its main purpose is to let the user perceive tooth changes through casually taken images.
By viewing the fused image data through the client, the user can perceive tooth problems more intuitively. Meanwhile, a standard model can be established from the oral medical image data, and all subsequent casually taken oral natural images can be matched to the established model to help the user observe changes. In addition, at a later stage, the fused image data can serve as historical data for processes such as oral health follow-up and prediction.
In this embodiment, the server can fuse the medical image data and the oral natural image data through the oral image registration model, marking the local teeth of the natural image in the X-ray film. The user can thus compare the local teeth in the natural image against the corresponding teeth in the X-ray film and directly perceive how the current tooth condition has changed, prompting the user to go to a hospital in time or to further check tooth health through the system provided by the embodiments of the application, effectively improving the user experience.
In the oral health detection system provided by the embodiments of the application, the client collects oral natural image data of the user and sends an oral image data fusion display request for the oral natural image data to the server; the server generates, through an oral image registration model, image data in which the user's oral medical image data and the natural image data are fused with each other; and the client displays the fused image data returned by the server so that the user can observe oral health information from it. This processing mode allows the user to display, through a client (such as a smart phone or smart speaker), casually taken oral natural images fused with the user's professional medical images, and to intuitively understand tooth health changes through the fused image, effectively improving the timeliness of oral health detection and the user experience.
Second embodiment
Corresponding to the oral health detection system described above, the application also provides an oral health detection method. The execution subject of the method includes, but is not limited to, terminal devices such as smart phones, smart toothbrushes, smart speakers, tablet computers, desktop computers, and wearable devices; the method can be realized by installing an application (APP) on a terminal device or software on a desktop computer. Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment.
The oral health detection method provided by the application comprises the following steps:
Step 1, collecting oral image data of a user as oral natural image data;
Step 2, sending an oral image fusion display request for the oral natural image data to a server;
Step 3, displaying fused image data, returned by the server, in which teeth in the oral medical image data of the user correspond to teeth in the natural image data, so that the user can observe oral health information according to the fused image data.
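The three client-side steps above can be sketched as a small, testable flow; the capture device, network transport, and display surface are injected as callables (all names here are illustrative assumptions, not part of the application):

```python
def client_flow(capture, send_request, display) -> None:
    """Client-side steps 1-3: capture a natural oral image, send a fusion
    display request to the server, and display the returned fused image."""
    natural_image = capture()                                   # step 1
    fused = send_request({"natural_image": natural_image})      # step 2
    display(fused)                                              # step 3

# Stubbed camera, transport, and screen for demonstration
shown = []
client_flow(
    capture=lambda: "PHOTO",
    send_request=lambda req: "FUSED(" + req["natural_image"] + ")",
    display=shown.append,
)
```

Injecting the three collaborators keeps the flow independent of any particular camera API or network stack.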
The natural image data comprises local tooth image data, and the medical image data comprises global tooth image data. The fused image data takes the medical image data as an oral cavity background model, with the local tooth image data displayed at the corresponding position in the medical image data.
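As a minimal sketch of this fusion layout, assuming the local tooth patch has already been registered and its target position in the medical image is known, the patch can be blended onto the X-ray background at that position (the function name and blending weight are illustrative assumptions):

```python
import numpy as np

def fuse_images(medical: np.ndarray, local_patch: np.ndarray,
                top_left: tuple, alpha: float = 0.5) -> np.ndarray:
    """Overlay a registered local tooth patch onto the medical (X-ray)
    background at its corresponding position, blending with weight alpha."""
    fused = medical.astype(np.float32).copy()
    r, c = top_left
    h, w = local_patch.shape[:2]
    region = fused[r:r + h, c:c + w]
    fused[r:r + h, c:c + w] = alpha * local_patch + (1 - alpha) * region
    return fused.astype(medical.dtype)

# Example: a 100x100 X-ray background with a 20x20 tooth patch at (40, 30)
background = np.zeros((100, 100), dtype=np.uint8)
patch = np.full((20, 20), 200, dtype=np.uint8)
result = fuse_images(background, patch, top_left=(40, 30))
```

The blend keeps the global medical image visible as background while the local natural-image teeth remain distinguishable.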
In one example, the fused image data includes tooth change information for different periods of time.
According to the oral health detection method, oral natural image data of the user is collected through the client, and an oral image data fusion display request for that data is sent to the server; the server generates, through an oral image registration model, image data in which the oral medical image data of the user and the natural image data are fused with each other; and the client displays the fused image data returned by the server, so that the user can observe oral health information from it. With this processing, the user can fuse and display oral natural images taken through the client (such as a smart phone or smart speaker) with the user's professional medical images, and can intuitively learn tooth health change information from the fused image. This effectively improves the timeliness of the user's oral health detection and improves the user experience.
Third embodiment
In the above embodiment, an oral health detection method is provided, and correspondingly, the application also provides an oral health detection device. The device corresponds to the embodiment of the method described above.
Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment. The oral health detection device provided by the application comprises:
a natural image acquisition unit, used for collecting oral image data of a user as oral natural image data;
a request sending unit, used for sending an oral image fusion display request for the oral natural image data to the server;
a fused image display unit, used for displaying fused image data, returned by the server, in which teeth in the oral medical image data of the user correspond to teeth in the natural image data, so that the user can observe oral health information according to the fused image data.
Fourth embodiment
The application further provides electronic equipment. Since the apparatus embodiments are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
The electronic equipment comprises a processor and a memory, wherein the memory stores a program implementing the oral health detection method. After the equipment is powered on and the processor runs the program of the method, the equipment performs the following steps: collecting oral image data of a user as oral natural image data; sending an oral image fusion display request for the oral natural image data to the server; and displaying fused image data, returned by the server, in which teeth in the oral medical image data of the user correspond to teeth in the natural image data, so that the user can observe oral health information according to the fused image data.
The device includes, but is not limited to, a smart phone, a tablet computer, and a personal computer.
Fifth embodiment
Corresponding to the oral health detection system above, the application also provides an oral health detection method. The execution subject of this method includes, but is not limited to, a server, or any device capable of implementing the method. Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment.
In this embodiment, the method includes the steps of:
Step 1, receiving an oral image data fusion display request for oral natural image data of a target user;
Step 2, generating, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data;
Step 3, returning the fused image data to the requesting party.
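A minimal server-side sketch of steps 1 to 3, assuming the medical images are held in a simple keyed store and the registration model is any callable taking a (medical, natural) pair (the function and field names are illustrative assumptions):

```python
def handle_fusion_request(request: dict, image_db: dict, model) -> dict:
    """Server-side handling of an oral image fusion display request:
    look up the target user's medical image, run the registration model
    on the (medical, natural) pair, and return the fused image."""
    user_id = request["user_id"]
    natural = request["natural_image"]
    medical = image_db.get(user_id)       # query the oral image database
    if medical is None:
        return {"status": "error", "reason": "no medical image on record"}
    fused = model(medical, natural)       # step 2: oral image registration model
    return {"status": "ok", "fused_image": fused}  # step 3: return to requester

# Stubbed database and model for demonstration
db = {"u1": "XRAY"}
identity_model = lambda med, nat: (med, nat)
resp = handle_fusion_request(
    {"user_id": "u1", "natural_image": "PHOTO"}, db, identity_model)
```

In practice the store would be the oral image database of the later examples and the model the trained oral image registration model.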
In one example, first tooth feature data in the oral natural image data and second tooth feature data in the oral medical image data of the user are extracted through a tooth feature extraction network included in the oral image registration model. Fused tooth feature data is determined from the first tooth feature data and the second tooth feature data through a tooth feature fusion network included in the model. Registration transformation parameters are determined from the fused tooth feature data through a registration transformation parameter determination network included in the model. Image transformation processing is performed on the natural image data according to the registration transformation parameters through an oral image transformation network included in the model. Finally, the fused image data is generated from the transformed natural image data and the medical image data through an oral image fusion network included in the model.
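The five-stage data flow just described can be sketched as a skeleton class; each method is a placeholder for the corresponding trained sub-network, so only the wiring between stages, not the network internals, is meaningful here:

```python
import numpy as np

class OralImageRegistrationModel:
    """Skeleton of the five-stage registration pipeline. Every stage body
    is a trivial placeholder standing in for a trained sub-network."""

    def extract_features(self, natural, medical):
        # Tooth feature extraction network (first and second feature data)
        return natural.mean(axis=-1), medical.mean(axis=-1)

    def fuse_features(self, f1, f2):
        # Tooth feature fusion network
        return np.concatenate([f1.ravel(), f2.ravel()])

    def regress_parameters(self, fused_features):
        # Registration transformation parameter determination network
        return {"rotation": 0.0, "translation": (0, 0), "scale": 1.0}

    def transform(self, natural, params):
        # Oral image transformation network: warp the natural image
        return natural  # identity placeholder

    def fuse_images(self, warped, medical):
        # Oral image fusion network
        return 0.5 * warped + 0.5 * medical

    def __call__(self, natural, medical):
        f1, f2 = self.extract_features(natural, medical)
        fused_features = self.fuse_features(f1, f2)
        params = self.regress_parameters(fused_features)
        warped = self.transform(natural, params)
        return self.fuse_images(warped, medical)

# Wiring check on tiny dummy images
model = OralImageRegistrationModel()
out = model(np.ones((4, 4, 3)), np.zeros((4, 4, 3)))
```

A real implementation would replace each placeholder with learned layers while keeping this stage order.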
In one example, the first tooth feature data is extracted from the natural image data via a first tooth feature extraction sub-network included in the tooth feature extraction network, and the second tooth feature data is extracted from the medical image data via a second tooth feature extraction sub-network included in the tooth feature extraction network.
The registration transformation parameters include, but are not limited to, rotation parameters, translation parameters, and scaling parameters.
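For illustration, rotation, translation, and scaling parameters of the kind the registration transformation parameter determination network outputs can be composed into a single 2x3 affine matrix and applied to image coordinates (a standard construction, not the application's specific parameterization):

```python
import numpy as np

def affine_matrix(rotation_deg: float, translation: tuple,
                  scale: float) -> np.ndarray:
    """Compose rotation, translation, and scaling into a 2x3 affine
    matrix, acting on a point p as scale * R @ p + t."""
    theta = np.deg2rad(rotation_deg)
    c, s = np.cos(theta), np.sin(theta)
    tx, ty = translation
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty]])

def apply_affine(points: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return homogeneous @ m.T

# Identity parameters leave points unchanged
pts = np.array([[1.0, 2.0], [3.0, 4.0]])
same = apply_affine(pts, affine_matrix(0.0, (0.0, 0.0), 1.0))
moved = apply_affine(pts, affine_matrix(0.0, (1.0, 1.0), 2.0))
```

Warping the natural image with such a matrix is what brings its local teeth into alignment with the medical image before fusion.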
In one example, the natural image data includes local dental image data and the medical image data includes global dental image data.
In one example, the method may further include querying the medical image data of the target user from an oral image database.
In one example, the method may further include the following steps: receiving an image data submission request for oral medical image data of a target user, and storing a record of the correspondence between the target user and the medical image data to an oral image database.
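A minimal sketch of this record-keeping step, using an in-memory SQLite table as a stand-in for the oral image database (table and column names are illustrative assumptions):

```python
import sqlite3

def store_medical_image(conn: sqlite3.Connection, user_id: str,
                        image_blob: bytes) -> None:
    """Record the correspondence between a target user and their oral
    medical image in the oral image database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS oral_images"
        " (user_id TEXT PRIMARY KEY, image BLOB)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO oral_images (user_id, image) VALUES (?, ?)",
        (user_id, image_blob),
    )
    conn.commit()

def query_medical_image(conn: sqlite3.Connection, user_id: str):
    """Query the stored medical image for a target user, or None."""
    row = conn.execute(
        "SELECT image FROM oral_images WHERE user_id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
store_medical_image(conn, "user-001", b"\x89XRAY")
found = query_medical_image(conn, "user-001")
```

The query function mirrors the earlier example of looking up the target user's medical image when a fusion request arrives.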
According to the oral health detection method, oral natural image data of the user is collected through the client, and an oral image data fusion display request for that data is sent to the server; the server generates, through an oral image registration model, image data in which the oral medical image data of the user and the natural image data are fused with each other; and the client displays the fused image data returned by the server, so that the user can observe oral health information from it. With this processing, the user can fuse and display oral natural images taken through the client (such as a smart phone or smart speaker) with the user's professional medical images, and can intuitively learn tooth health change information from the fused image. This effectively improves the timeliness of the user's oral health detection and improves the user experience.
Sixth embodiment
In the above embodiment, an oral health detection method is provided, and correspondingly, the application also provides an oral health detection device. The device corresponds to the embodiment of the method described above.
Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment. The oral health detection device provided by the application comprises:
a request receiving unit, used for receiving an oral image data fusion display request for oral natural image data of a target user;
an image fusion unit, used for generating, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data;
a fused image returning unit, used for returning the fused image data to the requesting party.
Seventh embodiment
The application further provides an embodiment of the electronic equipment. Since the apparatus embodiments are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
The electronic equipment comprises a processor and a memory, wherein the memory stores a program implementing the oral health detection method. After the equipment is powered on and the processor runs the program of the method, the equipment performs the following steps: receiving an oral image data fusion display request for oral natural image data of a target user; generating, through an oral image registration model, fused image data in which teeth in the oral medical image data of the target user correspond to teeth in the natural image data; and returning the fused image data to the requesting party.
Eighth embodiment
Corresponding to the oral health detection system above, the application also provides an oral image registration model construction method. The execution subject of this method includes, but is not limited to, a server, or any device capable of implementing the method. Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment.
In this embodiment, the method for constructing the oral image registration model includes the following steps:
Step 1, determining a training data set, wherein the training data comprises oral medical image data, oral natural image data and labeling data of the corresponding relation between natural image teeth and medical image teeth;
step 2, constructing a network structure of an oral image registration model;
Step 3, training the network parameters of the model according to the training data set to obtain the model.
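As a toy illustration of step 3, assuming the labeled correspondences are point pairs and the trainable parameter is reduced to a single 2-D translation, the parameter can be fitted by gradient descent on the mean squared correspondence error (a real registration model would of course train full network weights):

```python
import numpy as np

def train_translation(natural_pts: np.ndarray, medical_pts: np.ndarray,
                      epochs: int = 200, lr: float = 0.1) -> np.ndarray:
    """Fit a 2-D translation so labeled natural-image tooth points map
    onto their annotated medical-image counterparts (MSE, gradient descent)."""
    t = np.zeros(2)  # the "network parameter" being trained
    for _ in range(epochs):
        residual = (natural_pts + t) - medical_pts   # prediction error
        grad = 2 * residual.mean(axis=0)             # dMSE/dt
        t -= lr * grad
    return t

# Annotated correspondences: medical points are natural points shifted by (5, -3)
nat = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 4.0]])
med = nat + np.array([5.0, -3.0])
t_hat = train_translation(nat, med)
```

The same loop structure (forward pass, loss against the labeled correspondences, parameter update) carries over when the parameter is a full set of network weights.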
In one example, the oral image registration model comprises: a tooth feature extraction network for extracting first tooth feature data from the oral natural image data and second tooth feature data from the oral medical image data of the user; a tooth feature fusion network for determining fused tooth feature data from the first tooth feature data and the second tooth feature data; a registration transformation parameter determination network for determining oral image registration transformation parameters from the fused tooth feature data; an oral image transformation network for performing image transformation processing on the natural image data according to the registration transformation parameters; and an oral image fusion network for generating the fused image data from the transformed natural image data and the medical image data.
In one example, the tooth feature extraction network includes a first tooth feature extraction sub-network for extracting the first tooth feature data from the natural image data, and a second tooth feature extraction sub-network for extracting the second tooth feature data from the medical image data.
According to the method for constructing the oral image registration model, a training data set is determined, where the training data comprise oral medical image data, oral natural image data, and labeling data of the correspondence between natural image teeth and medical image teeth; a network structure of the oral image registration model is constructed; and the network parameters of the model are trained according to the training data set to obtain the model. Because the oral image registration model is learned from the training data, the model can generate fused images in which teeth in the oral natural images taken by a user correspond to teeth in professional medical images, and the user can intuitively learn tooth health change information from the fused images. This effectively improves the accuracy and efficiency of oral image registration and improves the user experience.
Ninth embodiment
In the above embodiment, an oral image registration model construction method is provided, and correspondingly, the application also provides an oral image registration model construction device. The device corresponds to the embodiment of the method described above.
Parts of this embodiment that are the same as the first embodiment are not described again; please refer to the corresponding parts of the first embodiment. The oral image registration model construction device provided by the application comprises:
a training data determining unit, used for determining a training data set, wherein the training data comprises oral medical image data, oral natural image data, and labeling data of the correspondence between natural image teeth and medical image teeth;
a network construction unit, used for constructing a network structure of the oral image registration model;
a network training unit, used for training the network parameters of the model according to the training data set to obtain the model.
Tenth embodiment
The application further provides an embodiment of the electronic equipment. Since the apparatus embodiments are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
The electronic equipment comprises a processor and a memory, wherein the memory stores a program implementing the oral image registration model construction method. After the equipment is powered on and the processor runs the program of the method, the equipment performs the following steps: determining a training data set, wherein the training data comprises oral medical image data, oral natural image data, and labeling data of the correspondence between natural image teeth and medical image teeth; constructing a network structure of the oral image registration model; and training the network parameters of the model according to the training data set to obtain the model.
While the application has been described in terms of preferred embodiments, it is not limited thereto; various changes and modifications will be apparent to those skilled in the art without departing from the spirit and scope of the application as defined by the appended claims.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.