Disclosure of Invention
In order to solve the technical problems in the prior art, embodiments of the invention provide a tumor CT image segmentation processing method and system. The technical scheme is as follows:
In one aspect, a tumor CT image segmentation processing method is provided, the method comprising:
S1, based on a patient tumor CT image, adjusting image processing parameters to perform enhancement processing on the image, extracting various image characteristic information of the medical image, and generating an image enhancement processing result;
S2, based on the image enhancement processing result, performing edge detection on the image by analyzing pixel intensities and texture features of a plurality of areas, extracting edge information of the image, and generating an edge information extraction record;
S3, using the edge information extraction record, segmenting a tumor area in the image according to the texture characteristics and color information of the image, adjusting segmentation parameters by analyzing the accuracy of segmentation, extracting the geometric characteristics of the tumor, and generating a tumor segmentation image;
S4, according to the tumor segmentation image, performing three-dimensional reconstruction on the tumor according to the shape information and shooting angles of the tumor in a plurality of medical images of the target patient, calculating the volume of the tumor, and generating tumor volume estimation data;
S5, classifying the images according to the tumor volume estimation data and according to the patient information, tumor type, position, volume and shooting time of the target images, matching the search labels to optimize the search efficiency, and generating image classification and storage records.
As a further scheme of the invention, the image enhancement processing result is specifically an image enhancement parameter, a color feature extraction record and a texture feature extraction result, the edge information extraction record comprises a pixel intensity analysis result, texture feature analysis information and image edge information, the tumor segmentation image is specifically a segmentation accuracy analysis result, a segmentation parameter adjustment record and tumor geometric feature information, the tumor volume estimation data is specifically a tumor three-dimensional model, tumor volume calculation data and image shooting angle information, and the image classification and storage record comprises a patient information extraction record, a tumor information data set and a search label matching record.
As a further scheme of the invention, based on the CT image of the tumor of the patient, the image processing parameters are adjusted to carry out enhancement processing on the image, and various image characteristic information of the medical image is extracted, and the step of generating the image enhancement processing result specifically comprises the following steps:
S101, based on a CT image of a tumor of a patient, performing enhancement processing on the image by adjusting contrast and brightness parameters of the image, and generating an image preprocessing record;
S102, extracting color features of an image based on the image preprocessing record to generate color feature data;
S103, analyzing and extracting texture characteristic information of the target medical image based on the color characteristic data, and generating an image enhancement processing result.
As a further aspect of the present invention, based on the image enhancement processing result, edge detection is performed on an image by analyzing pixel intensities and texture features of a plurality of regions, edge information of the image is extracted, and the step of generating an edge information extraction record specifically includes:
S201, extracting pixel intensity information of a plurality of areas in an image based on the image enhancement processing result, to generate a local intensity analysis result;
S202, carrying out edge detection on the image based on the local intensity analysis result, identifying edge information in the image, and generating an edge feature analysis result;
And S203, based on the edge characteristic analysis result, adjusting edge extraction parameters by analyzing the continuity and definition of the edge, recording the extracted edge information, and generating an edge information extraction record.
As a further aspect of the present invention, the specific formula for identifying the edge information in the image is:
H_i = w_i · (∂f/∂x_i)²
wherein H_i represents the calculation result of the gradient direction histogram corresponding to the i-th pixel, i is the index variable, w_i is the weight of the i-th pixel, ∂f/∂x_i is the partial derivative of the function f at the point x_i, ∂ is the partial derivative sign, f represents the luminance function of the image, and x_i represents the position of the i-th considered pixel.
As a further scheme of the present invention, the step of using the edge information extraction record, segmenting the tumor area in the image according to the texture features and color information of the image, adjusting the segmentation parameters by analyzing the segmentation accuracy, extracting the geometric features of the tumor, and generating the tumor segmentation image specifically comprises:
S301, using the edge information extraction record, segmenting the image according to texture and color information in the image, and identifying tumor and non-tumor areas to generate a preliminary tumor segmentation image;
S302, according to the preliminary tumor segmentation image, adjusting the segmentation parameters by analyzing the accuracy of image segmentation, and generating a segmentation parameter optimization record;
S303, based on the segmentation parameter optimization record, segmenting the image and extracting geometric feature information of the tumor, including the shape and the boundary, and generating a tumor segmentation image.
As a further aspect of the present invention, the specific formula for adjusting the segmentation parameter to optimize the segmentation accuracy is:
P_new = P_old + α · (P_best - P_old) · (D / D_max)
where P_new represents the new segmentation parameter value, P_old represents the original segmentation parameter, α represents the learning rate, P_best represents the currently known optimal segmentation parameter, D represents the degree of difference between the current segmentation effect and the target effect, and D_max represents the maximum acceptable degree of difference.
As a further scheme of the present invention, the step of performing, according to the tumor segmentation image, three-dimensional reconstruction of the tumor according to the shape information and shooting angles of the tumor in a plurality of medical images of the target patient, calculating the volume of the tumor, and generating tumor volume estimation data specifically comprises:
S401, analyzing a plurality of medical images of a target patient based on the tumor segmentation image, extracting shooting angles and geometric shape information of tumors, and generating shape and angle analysis data;
S402, constructing a three-dimensional model of the tumor of the patient based on the shape and angle analysis data, and mapping color features and texture features to the model to generate a three-dimensional tumor model;
and S403, calculating the volume of the tumor according to the geometric shape information based on the three-dimensional tumor model, and generating tumor volume estimation data.
As a further scheme of the present invention, according to the tumor volume estimation data, classifying images according to patient information, tumor type, position, volume and shooting time of target images, matching search labels to optimize search efficiency, and generating image classification and storage records specifically comprises:
S501, extracting tumor type, position, volume and shooting time information of a target medical image based on the tumor volume estimation data, and generating a classification information extraction record by combining personal information of a patient;
S502, extracting records based on the classification information, classifying images according to the characteristic information of tumors and the patient information, and generating an image classification result;
and S503, matching the medical image with a search label by using the image classification result, optimizing the search efficiency, and generating an image classification and storage record.
In another aspect, a tumor CT image segmentation processing system is provided, the system being configured to perform the above tumor CT image segmentation processing method, the system comprising:
the image preprocessing module is used for performing enhancement processing on the image by adjusting the contrast and brightness of the image based on the CT image of the tumor of the patient, and recording the color and texture characteristics of the image in combination with feature extraction, to generate an image enhancement processing result;
The edge detection module analyzes the pixel intensities and texture characteristics of a plurality of areas in the image by utilizing the image enhancement processing result, and identifies and extracts the edge information of the image through edge detection to generate an edge information extraction record;
The image segmentation module uses the edge information to extract and record, segments a tumor area in the image according to the texture characteristics and color information of the image, optimizes segmentation processing parameters in combination with analysis of segmentation accuracy, and extracts geometric characteristics of tumors to generate a tumor segmentation image;
The three-dimensional modeling module analyzes a plurality of medical images of a patient based on the tumor segmentation image, performs three-dimensional reconstruction on the images and calculates tumor volume according to shooting angles of the images and shape information of tumors, and generates tumor volume estimation data;
and the data classification module extracts patient information, tumor types, positions, volumes and shooting time according to the tumor volume estimation data, classifies the image data, and matches with the retrieval tag to generate image classification and storage records.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
The image is enhanced by adjusting its contrast and brightness, which improves the visual quality of the image and provides a clearer basis for subsequent edge detection, thereby improving the accuracy of edge detection. The tumor area is segmented by combining texture and color information, enabling the extraction of the geometric characteristics of the tumor and enhancing the understanding of tumor morphology. The construction of a three-dimensional model and the calculation of tumor volume support disease staging and the formulation of treatment plans. By integrating the tumor volume estimation data and key patient information, the classification and storage of images are optimized and the efficiency of data retrieval is improved.
Detailed Description
The technical scheme of the invention is described below with reference to the accompanying drawings.
In embodiments of the invention, words such as "exemplary" and "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these terms is intended to present concepts in a concrete fashion. Furthermore, in embodiments of the present invention, "and/or" may mean both of the associated items, or either one of them.
In the embodiments of the present invention, "image" and "picture" are sometimes used interchangeably; it should be noted that their meanings are consistent when the distinction is not emphasized. Similarly, "corresponding" and "relevant" are sometimes used interchangeably, and their meanings are consistent when the distinction is not emphasized.
In embodiments of the present invention, a subscripted form such as W 1 may sometimes be written in a non-subscripted form such as W1; the meanings are consistent when the distinction is not emphasized.
In order to make the technical problems, technical solutions and advantages to be solved more apparent, the following detailed description will be given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a tumor CT image segmentation processing method, as shown in the flowchart of fig. 1; the processing flow of the method can comprise the following steps:
S1, based on a patient tumor CT image, adjusting image processing parameters to perform enhancement processing on the image, extracting various image characteristic information of the medical image, and generating an image enhancement processing result;
S2, based on an image enhancement processing result, performing edge detection on the image by analyzing pixel intensities and texture features of a plurality of areas, extracting edge information of the image, and generating an edge information extraction record;
S3, using the edge information extraction record, segmenting a tumor area in the image according to the texture features and color information of the image, adjusting segmentation parameters by analyzing the accuracy of segmentation, extracting geometric features of the tumor, and generating a tumor segmentation image;
S4, according to the tumor segmentation image, carrying out three-dimensional reconstruction on the tumor according to the shape information and shooting angles of the tumor in a plurality of medical images of the target patient, calculating the volume of the tumor, and generating tumor volume estimation data;
S5, classifying the images according to the tumor volume estimation data and according to the patient information, tumor type, position, volume and shooting time of the target images, matching the retrieval labels to optimize the retrieval efficiency, and generating image classification and storage records.
The image enhancement processing results comprise image enhancement parameters, color feature extraction records and texture feature extraction results, the edge information extraction records comprise pixel intensity analysis results, texture feature analysis information and image edge information, the tumor segmentation images comprise segmentation accuracy analysis results, segmentation parameter adjustment records and tumor geometric feature information, the tumor volume estimation data comprise tumor three-dimensional models, tumor volume calculation data and image shooting angle information, and the image classification and storage records comprise patient information extraction records, tumor information data sets and search label matching records.
Referring to fig. 2, based on a patient tumor CT image, image processing parameters are adjusted to enhance the image, and various image feature information of the medical image is extracted to generate an image enhancement processing result, which specifically includes the steps of:
S101, based on a CT image of a tumor of a patient, performing enhancement processing on the image by adjusting contrast and brightness parameters of the image, and generating an image preprocessing record;
The image is subjected to preliminary processing: contrast and brightness parameters of the image are modified to improve image quality, so that tumor details in the image are more clearly visible. Image processing software such as Adobe Photoshop is used to properly display the dark and bright parts of the image, and a high-pass filter is used to further enhance the edge features in the image and strengthen the contrast between the tumor and surrounding tissue. The image parameters recorded in this process include the contrast and brightness values before and after adjustment and the filter setting parameters; these data are stored in the image preprocessing record for reference and reproduction in subsequent steps.
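As an illustrative sketch only (Python with NumPy/SciPy is substituted here for the Adobe Photoshop workflow named above, and the gain, offset and sigma values are assumed, not prescribed by the invention), the contrast/brightness adjustment and high-pass sharpening may be expressed as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ct_slice(img, alpha=1.3, beta=10.0, sigma=3.0):
    """Linear contrast/brightness adjustment followed by high-pass sharpening."""
    img = img.astype(np.float64)
    adjusted = alpha * img + beta              # contrast gain and brightness offset
    high_pass = adjusted - gaussian_filter(adjusted, sigma=sigma)
    enhanced = adjusted + high_pass            # emphasize tumor/tissue boundaries
    return np.clip(enhanced, 0, 255).astype(np.uint8)

# Parameters are stored so later steps can reference and reproduce this step.
preprocessing_record = {"alpha": 1.3, "beta": 10.0, "sigma": 3.0}
```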
S102, extracting color features of an image based on an image preprocessing record to generate color feature data;
Using a color analysis tool, such as a color space conversion function in Matlab, the image is converted from the RGB color space to the HSV color space, allowing the color distribution and saturation in the image to be analyzed intuitively and tumor areas to be distinguished from normal tissue. Statistical analysis is performed on hue, saturation and brightness in the HSV space, and color histograms of each area are calculated to generate color feature data. The data records include the distribution and statistical characteristics of each main color component, providing basic data for the texture feature extraction of the next step.
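A minimal sketch of this color feature extraction, assuming a pseudo-colored RGB rendering of the CT slice as a NumPy array (the embodiment names Matlab; scikit-image is substituted here for illustration, and the bin count is an assumed value):

```python
import numpy as np
from skimage.color import rgb2hsv

def extract_color_features(rgb_img, bins=32):
    """Convert an RGB array to HSV and record normalized per-channel histograms."""
    hsv = rgb2hsv(rgb_img)                     # all channels scaled to [0, 1]
    features = {}
    for idx, name in enumerate(("hue", "saturation", "value")):
        hist, _ = np.histogram(hsv[..., idx], bins=bins, range=(0.0, 1.0))
        features[name] = hist / hist.sum()     # distribution of the color component
    return features
```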
S103, analyzing and extracting texture feature information of the target medical image based on the color feature data to generate an image enhancement processing result;
Image processing tools and techniques are used to further extract texture information from the image. A gray-level co-occurrence matrix (GLCM) method is used to analyze texture patterns in the image and capture fine texture changes in local areas, such as the texture differences between tumor tissue and surrounding normal tissue. By setting different distance and angle parameters, the GLCM is calculated, yielding statistical indexes such as texture contrast, uniformity, entropy and correlation. These texture indexes provide quantified texture characteristics for the image enhancement processing result, making tumor identification and analysis more accurate and further supporting accurate diagnosis of tumor properties and boundaries.
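A sketch of the GLCM texture analysis, assuming a uint8 grayscale region and scikit-image's graycomatrix/graycoprops (the distance and angle parameters are illustrative; entropy is computed manually, since graycoprops does not provide it):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(region,
                          distances=(1, 2),
                          angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """GLCM statistics for a uint8 grayscale region at several distances/angles."""
    glcm = graycomatrix(region, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ("contrast", "homogeneity", "correlation")}
    p = glcm[glcm > 0]                          # entropy from the normalized GLCM
    feats["entropy"] = float(-np.sum(p * np.log2(p)))
    return feats
```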
Referring to fig. 3, based on the image enhancement processing result, edge detection is performed on an image by analyzing pixel intensities and texture features of a plurality of regions, edge information of the image is extracted, and the step of generating an edge information extraction record specifically includes:
S201, extracting pixel intensity information of a plurality of areas in an image based on an image enhancement processing result to generate a local intensity analysis result;
The enhanced image is divided into a plurality of small areas using image analysis software such as ImageJ, each area typically comprising hundreds to thousands of pixels. For each area, the average pixel intensity value is calculated by summing the brightness values of all pixels in the area and dividing by the total number of pixels, and the standard deviation of the pixel intensities is computed to evaluate how much the intensities vary within the area, which facilitates discrimination of edge strength in subsequent steps. The average intensity and standard deviation of each area are recorded and constitute the local intensity analysis result; these data are critical for understanding the illumination and detail variation of the image areas and provide accurate input data for edge detection.
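A possible implementation of this per-region mean/standard-deviation analysis (the embodiment names ImageJ; plain NumPy is used here for illustration, with an assumed 32-pixel tile size):

```python
import numpy as np

def local_intensity_analysis(img, block=32):
    """Tile the image and record mean and standard deviation per tile."""
    h, w = img.shape
    results = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block].astype(np.float64)
            results.append({"y": y, "x": x,
                            "mean": float(tile.mean()),   # average pixel intensity
                            "std": float(tile.std())})    # intensity variation
    return results
```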
S202, carrying out edge detection on an image based on a local intensity analysis result, identifying edge information in the image, and generating an edge feature analysis result;
the specific formula for identifying the edge information in the image is as follows:
H_i = w_i · (∂f/∂x_i)²
wherein H_i represents the calculation result of the gradient direction histogram corresponding to the i-th pixel, i is the index variable, w_i is the weight of the i-th pixel, ∂f/∂x_i is the partial derivative of the function f at the point x_i, ∂ is the partial derivative sign, f represents the luminance function of the image, and x_i represents the position of the i-th considered pixel.
Formula details and calculation derivation:
the formula calculates a gradient direction histogram value for each pixel point in the image, and the result is used to detect edges in the image and determine the boundaries between tumor and non-tumor tissue;
parameter meanings and assumed values:
w_i is a weight coefficient representing the influence of the i-th pixel point; assume w_i = 0.5;
x_i is the pixel position in the image;
f(x_i) is the luminance value of the pixel point x_i; assume f(x_i) = 120;
∂f/∂x_i represents the luminance gradient at pixel x_i, estimated by a central difference; assuming the luminance values at the neighboring pixels x_{i+1} and x_{i-1} are 125 and 115 respectively, ∂f/∂x_i ≈ (125 - 115)/2 = 5;
substituting the parameters into the formula:
H_i = 0.5 · 5² = 0.5 · 25 = 12.5;
the result H_i = 12.5 shows that the edge feature at pixel x_i is relatively significant; the value reflects a high rate of brightness change at this point, and in edge detection a high value indicates a possible boundary area, making the segmentation of tumor and non-tumor tissue more accurate.
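The worked example can be checked numerically; the following sketch assumes the central-difference gradient estimate used above:

```python
def edge_response(f, i, w=0.5):
    """H_i = w_i * (df/dx_i)^2 with a central-difference gradient estimate."""
    grad = (f[i + 1] - f[i - 1]) / 2.0
    return w * grad ** 2

# Neighbors with luminance 125 and 115 give a gradient of 5, so H_i = 12.5.
print(edge_response([115.0, 120.0, 125.0], 1))  # 12.5
```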
S203, based on the edge feature analysis result, adjusting edge extraction parameters by analyzing the continuity and definition of the edge, recording the extracted edge information, and generating an edge information extraction record;
An edge continuity algorithm is adopted to determine the continuity of the edges. If broken or blurred edges are detected, the algorithm automatically adjusts the edge extraction parameters, such as the low and high thresholds in the Canny algorithm, to improve the accuracy of edge detection; the adjusted parameters are recorded and the edge detection process is repeated until the preset edge continuity and clarity standard is reached. The edge information extraction record documents in detail all adjusted parameters and the improved edge information, providing an important basis for ensuring a clear separation of tumor and non-tumor areas in subsequent image processing steps.
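A hedged sketch of such a threshold-adjustment loop, using scikit-image's Canny detector; the edge-fraction stopping criterion stands in for the continuity/clarity standard, whose exact definition the embodiment leaves open, and all thresholds are assumed values:

```python
import numpy as np
from skimage.feature import canny

def detect_edges_adaptively(img, low=0.10, high=0.30,
                            min_edge_fraction=0.02, max_iter=10):
    """Run Canny, relaxing both thresholds while too few edge pixels are found."""
    img = img.astype(np.float64) / max(img.max(), 1)   # canny expects float input
    record = []
    edges = None
    for _ in range(max_iter):
        edges = canny(img, sigma=2.0, low_threshold=low, high_threshold=high)
        fraction = float(edges.mean())                 # share of edge pixels
        record.append({"low": low, "high": high, "edge_fraction": fraction})
        if fraction >= min_edge_fraction:              # stand-in continuity test
            break
        low, high = low * 0.8, high * 0.8              # relax for broken edges
    return edges, record
```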
Referring to fig. 4, the edge information extraction record is used to segment the tumor area in the image according to the texture features and color information of the image, segmentation parameters are adjusted by analyzing the segmentation accuracy, and geometric characteristics of the tumor are extracted; the steps of generating the tumor segmentation image are specifically as follows:
S301, using the edge information extraction record, segmenting the image according to texture and color information in the image, and identifying tumor and non-tumor areas to generate a preliminary tumor segmentation image;
A region growing algorithm is used. The algorithm starts from a seed point and adds pixels adjacent to the seed region into it if the target pixels have texture or color features similar to the seed point, iterating until all pixels meeting the condition are included. In implementation, the position of the seed point is determined according to the edge information, usually selecting a pixel located in a clearly tumorous area as the seed point, and region growing proceeds based on the gray values, colors and texture information of the pixels until the preset growing boundary, namely the boundary provided by edge detection, is reached. The preliminary tumor segmentation image generated by this process clearly shows the boundary between tumor and non-tumor areas and provides an accurate basis for subsequent analysis.
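A minimal region-growing sketch over a grayscale image; the 4-neighborhood, the intensity tolerance and the seed choice are assumptions made for illustration (the embodiment also allows color and texture similarity):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow from a seed pixel, adding 4-neighbors within tol of the seed value."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True          # pixel joins the tumor region
                queue.append((ny, nx))
    return mask                              # preliminary tumor mask
```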
S302, according to the preliminary tumor segmentation image, adjusting the segmentation parameters by analyzing the accuracy of image segmentation, and generating a segmentation parameter optimization record;
The specific formula for adjusting the segmentation parameters to optimize the segmentation accuracy is as follows:
P_new = P_old + α · (P_best - P_old) · (D / D_max)
where P_new represents the new segmentation parameter value, P_old represents the original segmentation parameter, α represents the learning rate, P_best represents the currently known optimal segmentation parameter, D represents the degree of difference between the current segmentation effect and the target effect, and D_max represents the maximum acceptable degree of difference.
Formula details and calculation derivation:
the formula is used to optimize parameters in the image segmentation algorithm, adjusting the segmentation parameters to improve segmentation accuracy;
parameter meanings and assumed values:
P_old is the original segmentation parameter, assumed to be 0.2;
P_best is the current best segmentation parameter, assumed to be 0.5;
α is the learning rate, assumed to be 0.05;
D is the difference between the current segmentation effect and the target effect, assumed to be 15;
D_max is the maximum acceptable degree of difference, assumed to be 100;
substituting the parameters into the formula:
P_new = 0.2 + 0.05 · (0.5 - 0.2) · (15 / 100);
P_new = 0.2 + 0.05 · 0.3 · 0.15 = 0.2 + 0.00225;
P_new = 0.20225;
the result 0.20225 is the new parameter value after adjustment; this calculation refines the treatment of image edge and texture differences and provides accurate tumor recognition and boundary definition for subsequent analysis.
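The parameter update rule translates directly into code; the following sketch reproduces the worked example:

```python
def update_segmentation_parameter(p_old, p_best, d, d_max, alpha=0.05):
    """P_new = P_old + alpha * (P_best - P_old) * (D / D_max)."""
    return p_old + alpha * (p_best - p_old) * (d / d_max)

print(update_segmentation_parameter(0.2, 0.5, 15, 100))  # ≈ 0.20225
```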
S303, based on the segmentation parameter optimization record, segmenting the image and extracting geometric feature information of the tumor, including the shape and the boundary, and generating a tumor segmentation image;
Computational geometry techniques are used to define and quantify the shape parameters of the tumor, including calculating the area, perimeter and shape complexity of the tumor with geometric moment and contour analysis methods. These geometric calculations accurately describe the physical form of the tumor and provide the clinic with important information about its growth pattern and spread path. Boundary extraction adopts a gradient-based method, determining the tumor edge by identifying the areas where color and texture information change most markedly, ensuring that the tumor area can be accurately segmented from the CT image. Each geometric feature of the tumor is recorded in detail, providing a reliable data basis for subsequent treatment planning and monitoring.
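A sketch of the geometric feature extraction using scikit-image region properties; the complexity index P²/(4πA), which equals 1 for a circle, is one common choice assumed here, since the embodiment does not fix a specific formula:

```python
import numpy as np
from skimage.measure import label, regionprops

def tumor_geometry(mask):
    """Area, perimeter and a shape-complexity index of the largest region."""
    regions = regionprops(label(mask.astype(int)))     # assumes a non-empty mask
    region = max(regions, key=lambda r: r.area)
    complexity = region.perimeter ** 2 / (4.0 * np.pi * region.area)  # circle = 1
    return {"area": float(region.area),
            "perimeter": float(region.perimeter),
            "complexity": float(complexity)}
```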
Referring to fig. 5, according to the tumor segmentation image, according to the shape information and the shooting angle of the tumor in the multiple medical images of the target patient, the tumor is reconstructed in three dimensions, and the volume of the tumor is calculated, and the step of generating tumor volume estimation data specifically includes:
S401, analyzing a plurality of medical images of the target patient based on the tumor segmentation image, extracting the shooting angles and geometric shape information of the tumor, and generating shape and angle analysis data;
Key shape and angle information is extracted from the plurality of medical images. An image registration technique is used to align CT images taken from different viewpoints, ensuring that the tumor shape information extracted from each image is comparable; accurate alignment between the images is achieved by means of positioning markers in the images or distinct features of the tumor. The aligned image data is processed by a feature extraction algorithm, such as gradient-based shape descriptors, which quantify the geometric characteristics of the tumor in the images, such as boundary curvature, area and volume. Each extracted feature is recorded and analyzed to evaluate changes and development in the tumor shape, and the shape and angle information of all images is integrated to generate the shape and angle analysis data, which reflects the tumor's shape from different viewpoints and provides the necessary input for subsequent three-dimensional modeling.
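As an illustration of the registration step, a translation-only alignment via phase correlation is sketched below; the embodiment's landmark-based alignment would require a fuller transform model, so this is a simplification under stated assumptions:

```python
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_slices(reference, moving):
    """Estimate the translation between two slices and align the moving one."""
    offset, error, _ = phase_cross_correlation(reference, moving)
    aligned = nd_shift(moving, shift=offset)   # resample onto the reference grid
    return aligned, offset, error
```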
S402, constructing a three-dimensional model of a tumor of a patient based on shape and angle analysis data, and mapping color features and texture features onto the model to generate a three-dimensional tumor model;
A voxel method is applied to convert the two-dimensional image data into a point cloud in three-dimensional space, where each point represents a voxel of the tumor at a specific position. The attributes of each point include color and texture information inherited from the original CT images. The point cloud is converted into a continuous three-dimensional surface model by a three-dimensional rendering technique such as volume rendering or surface rendering; the model displays the appearance of the tumor with its texture features accurately mapped, allowing a doctor to observe the tumor structure from multiple angles. The generated three-dimensional tumor model is a comprehensive and dynamic expression of the tumor's form, providing a powerful visual aid for the clinic.
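A sketch of surface extraction from a stacked binary tumor volume using the marching cubes algorithm (scikit-image; the voxel spacing is an assumed acquisition parameter):

```python
import numpy as np
from skimage.measure import marching_cubes

def build_surface_model(binary_volume, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle-mesh surface from a stacked binary tumor volume."""
    verts, faces, normals, _ = marching_cubes(
        binary_volume.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals               # renderable 3D tumor surface
```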
S403, calculating the volume of the tumor based on the three-dimensional tumor model according to the geometric shape information to generate tumor volume estimation data;
A geometric volume calculation method is adopted to estimate the volume of the tumor, using the geometric data of the three-dimensional model, such as side lengths, areas and voxel density. The calculation may involve three-dimensional integration over complex shapes or an accumulation over voxels: each voxel represents a small cube whose volume is the cube of its side length, and the total volume of the tumor is obtained by accumulating the volumes of all voxels.
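The voxel-accumulation formula is straightforward to sketch; the spacing and the toy volume below are assumed values:

```python
import numpy as np

def estimate_volume(binary_volume, spacing=(1.0, 1.0, 1.0)):
    """Accumulate voxel volumes: each voxel contributes dx * dy * dz."""
    voxel_volume = float(np.prod(spacing))      # e.g. mm^3 per voxel
    return int(binary_volume.sum()) * voxel_volume

vol = np.zeros((20, 20, 20), dtype=bool)
vol[5:15, 5:15, 8:12] = True                    # 400 tumor voxels
print(estimate_volume(vol, spacing=(0.5, 0.5, 0.5)))  # 400 * 0.125 = 50.0 mm^3
```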
Referring to fig. 6, according to tumor volume estimation data, classifying images according to patient information, tumor type, position, volume and shooting time of a target image, matching search tags to optimize search efficiency, and generating image classification and storage records specifically includes:
S501, extracting tumor type, position, volume and shooting time information of a target medical image based on tumor volume estimation data, and generating a classification information extraction record by combining personal information of a patient;
A data mining technique, such as a decision tree algorithm, is used to analyze and classify the image data. The decision tree branches by evaluating the information gain of each index, selecting the most effective attribute and gradually refining the classification criteria. Relevant data, such as age, gender and medical history, are retrieved from the patient's personal information database and correlated with the image data to improve classification accuracy, and each item of data is verified and cleaned to ensure the quality of the input data. This step generates a complete classification information extraction record, which includes the detailed classification data of each image together with the corresponding personal information of the patient, providing basic data for subsequent image processing and analysis.
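A toy decision-tree sketch with scikit-learn; the feature encoding (volume, location code, age) and the training rows are invented for illustration and do not come from the invention:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented feature rows: [tumor volume (mm^3), location code, patient age].
X = np.array([[300, 0, 55], [1200, 1, 63], [150, 0, 41], [900, 1, 70]])
y = np.array([0, 1, 0, 1])                      # invented class labels

clf = DecisionTreeClassifier(criterion="entropy", max_depth=3)  # information gain
clf.fit(X, y)
print(clf.predict([[800, 1, 60]]))              # class of a new image record
```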
S502, extracting records based on classification information, classifying images according to characteristic information of tumors and patient information, and generating an image classification result;
A support vector machine (SVM) from machine learning is used to further classify the tumor images. The SVM performs class separation in a high-dimensional space by constructing one or more hyperplanes, and comprises a training stage and a classification stage: in the training stage the model is trained on a labeled image set, and in the classification stage the model classifies new images. The SVM model evaluates the feature vectors of each image, such as texture, shape and boundary information, identifies the tumor type according to the set classification criteria, and marks each image with its specific tumor type, generating a detailed image classification result. The result not only includes the tumor class but is also accurate down to specific characteristics of the tumor, such as grading and prognostic indicators.
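A minimal SVM sketch with scikit-learn; the feature vectors and labels below are invented placeholders, and feature scaling is added as standard practice rather than as a step named by the embodiment:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Invented feature rows: [GLCM contrast, shape complexity, boundary sharpness].
X_train = np.array([[0.8, 1.1, 0.2], [2.5, 3.4, 0.9],
                    [0.6, 1.0, 0.1], [2.9, 2.8, 0.8]])
y_train = np.array(["benign", "malignant", "benign", "malignant"])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(model.predict([[2.2, 3.0, 0.7]]))         # expected: ['malignant']
```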
S503, matching a search label for the medical image by using the image classification result, optimizing the search efficiency, and generating an image classification and storage record;
A tag matching technique is adopted to optimize the retrieval and storage of the medical images. Based on the image classification results and the medical records, the tag system creates a group of descriptive tags for each image, covering tumor type, position, volume and specific medical indexes. During tag generation, natural language processing techniques are used to convert medical terms and classification information into keywords that are easy to retrieve, and the image storage system dynamically adjusts the storage locations and backup strategies of the images according to tag relevance and search frequency.
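A simple inverted-index sketch of the tag matching; the tag vocabulary and image IDs are hypothetical:

```python
from collections import defaultdict

def build_tag_index(records):
    """Map each descriptive tag to the set of image IDs carrying it."""
    index = defaultdict(set)
    for rec in records:
        for tag in rec["tags"]:
            index[tag].add(rec["image_id"])
    return index

records = [
    {"image_id": "CT-0001", "tags": ["liver", "volume>100mm3", "2023-05"]},
    {"image_id": "CT-0002", "tags": ["lung", "volume<50mm3", "2023-06"]},
]
index = build_tag_index(records)
print(index["liver"] & index["volume>100mm3"])  # intersection = fast retrieval
```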
Referring to fig. 7, a tumor CT image segmentation processing system, which is used for executing the above tumor CT image segmentation processing method, includes:
the image preprocessing module is used for performing enhancement processing on the image by adjusting the contrast and brightness of the image based on the CT image of the tumor of the patient, and recording the color and texture characteristics of the image in combination with feature extraction, to generate an image enhancement processing result;
The edge detection module analyzes the pixel intensity and texture characteristics of a plurality of areas in the image by utilizing the image enhancement processing result, and identifies and extracts the edge information of the image by edge detection to generate an edge information extraction record;
The image segmentation module uses edge information to extract and record, segments a tumor area in the image according to texture features and color information of the image, optimizes segmentation processing parameters in combination with analysis of segmentation accuracy, and extracts geometric features of tumors to generate a tumor segmentation image;
the three-dimensional modeling module analyzes a plurality of medical images of a patient based on the tumor segmentation image, performs three-dimensional reconstruction on the image and calculates tumor volume according to shooting angles of the image and shape information of tumors, and generates tumor volume estimation data;
The data classification module extracts patient information, tumor types, positions, volumes and shooting time according to the tumor volume estimation data, classifies the image data, matches the search label, and generates image classification and storage records.
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions in accordance with embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless (e.g., infrared, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. In addition, the character "/" herein generally indicates that the associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, as may be understood from the context.
In the present invention, "at least one" means one or more, and "a plurality" means two or more. "At least one of" or similar expressions refers to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or plural.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.