CN111401363A - Frame number image generation method and device, computer equipment and storage medium
- Publication number: CN111401363A
- Application number: CN202010169557.3A
- Authority
- CN
- China
- Prior art keywords
- frame number
- frame
- perspective transformation
- image
- identifier
- Prior art date
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
- G06V30/1475—Inclination or skew detection or correction of characters or of image to be recognised
- G06V30/1478—Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Abstract
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a frame number image, a computer device, and a storage medium. The method comprises the following steps: acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers; identifying the reference marks and the frame number areas of the original frame number image to respectively obtain mark positioning frames of the reference marks in each reference mark group and frame number positioning frames of the frame number areas; acquiring the actual position relation of each reference identifier, acquiring reference coordinates corresponding to the actual position relation, and acquiring a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and each identifier positioning frame; and carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image. By adopting the method, the accuracy of frame number image generation can be improved.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a frame number image, a computer device, and a storage medium.
Background
The frame number is a unique identifier of a vehicle. During annual vehicle inspection, the frame number is compared with its rubbing film to judge whether the frame number has been tampered with, so an accurate comparison is very important.
In the traditional approach, the frame number dimensions are collected manually, and the collected data are then compared with the rubbing film to determine whether the frame number has been altered.
However, because measurement and acquisition are performed manually, the process is not intelligent, and the errors introduced by manual measurement make the accuracy of the frame number judgment result low.
Disclosure of Invention
In view of the above, it is desirable to provide a frame number image generation method, device, computer device, and storage medium capable of improving accuracy of a frame number detection determination result.
A method of generating a frame number image, the method comprising:
acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers;
identifying the reference identifier and the frame number area of the original frame number image to respectively obtain an identifier positioning frame of each reference identifier in each reference identifier group and a frame number positioning frame of the frame number area;
acquiring the actual position relation of each reference identifier, acquiring reference coordinates corresponding to the actual position relation, and acquiring a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of the reference identifiers;
and carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame in the frame number area so as to obtain a frame number image.
In one embodiment, obtaining a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frame of each reference identifier includes:
obtaining a plurality of initial perspective transformation matrixes based on the reference coordinates and the position coordinates of the identifier positioning frames of the reference identifiers, wherein the initial perspective transformation matrixes are in one-to-one correspondence with the reference identifier groups;
and calculating the average value of the plurality of initial perspective transformation matrixes to obtain a target perspective transformation matrix.
In one embodiment, obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frames includes:
obtaining the coordinates of the central point of each identification positioning frame according to the position coordinates of each identification positioning frame;
determining a positioning frame group corresponding to each reference identifier group according to the coordinates of each central point, wherein the number of the reference identifier frames in the positioning frame group is equal to that of the reference identifiers in the reference identifier group;
and obtaining a plurality of initial perspective transformation matrixes corresponding to the plurality of reference identification groups according to the center point coordinates and the reference coordinates of the positioning frame groups.
In one embodiment, determining the positioning frame group corresponding to each reference identifier group according to the coordinates of each central point includes:
and respectively determining a plurality of reference identification frames adjacent to the coordinates of the central points into a group to obtain a plurality of positioning frame groups corresponding to the plurality of reference identification groups.
In one embodiment, performing perspective transformation processing on the frame number region through the target perspective transformation matrix according to the frame number positioning frame of the frame number region to obtain a frame number image includes:
according to the coordinate position of the frame number positioning frame in the frame number area, carrying out perspective transformation processing on the frame number area through a target perspective transformation matrix to obtain target pixel points of each original pixel point in the frame number area after perspective transformation;
acquiring pixel values of all original pixel points in the original frame number image;
and filling each target pixel point based on the pixel value of each original pixel point to obtain a frame number image.
In one embodiment, before the identification of the reference mark and the identification of the frame number region, the method further includes:
recognizing a frame number text object of the frame number original image, and determining a text inclination angle of the frame number text object in the frame number original image;
and rotating the original frame number image according to the text inclination angle to obtain the rotated original frame number image with the inclination angle of zero.
In one embodiment, the identification of the reference identifier and the identification of the frame number region for the original frame number image are performed by a pre-trained neural network model, and the training mode of the neural network model comprises the following steps:
acquiring a training set image;
marking the reference mark and the frame number area in the training set image through a marking frame respectively to obtain the position information and the category information of the reference mark and the frame number area in the training set image respectively;
normalizing the marked training set images to obtain training set images with the same size as a preset size;
inputting the training set image into the constructed initial neural network model, and performing feature extraction on the training set image to obtain feature images of multiple scales;
carrying out feature fusion on the feature images of all scales to obtain a prediction frame corresponding to the feature images of all scales;
determining loss values of the prediction frames corresponding to all scales based on the labeling frames, and updating model parameters through the loss values;
and carrying out iterative processing on the initial neural network model to obtain the trained neural network model.
A frame number image generation apparatus, the apparatus comprising:
the frame number original image acquisition module is used for acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers;
the identification module is used for identifying the reference identifier and the frame number area of the original frame number image to respectively obtain the identifier positioning frame of each reference identifier in each reference identifier group and the frame number positioning frame of the frame number area;
the target perspective transformation matrix generation module is used for acquiring the actual position relation of each reference identifier, acquiring the reference coordinates corresponding to the actual position relation, and acquiring a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of the reference identifiers;
and the perspective transformation processing module is used for carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area so as to obtain a frame number image.
A computer device comprising a memory storing a computer program and a processor implementing the steps of any of the methods described above when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the above.
According to the frame number image generation method, the frame number image generation device, the computer equipment and the storage medium, identification positioning frames of the reference identifications in the reference identification groups and frame number positioning frames of the frame number areas in the frame number original image are respectively obtained through identification of the reference identifications and the frame number areas in the acquired frame number original image, then actual position relations of the reference identifications are obtained, reference coordinates corresponding to the actual position relations are obtained, then target perspective transformation matrixes corresponding to the reference identification groups are solved and obtained based on the reference coordinates and the position coordinates of the identification positioning frames, and then the frame number areas are subjected to perspective transformation processing through the target perspective transformation matrixes to obtain the frame number image. Therefore, the frame number image can be generated intelligently based on the target perspective transformation matrix determined by the plurality of reference identifier groups, and the frame number image obtained by the target perspective transformation matrix is judged and analyzed to obtain a judgment result.
Drawings
FIG. 1 is a diagram of an application scenario of a frame number image generation method in an embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a method for generating a frame number image according to an embodiment;
FIG. 3 is a schematic illustration of an original frame number image in one embodiment;
FIG. 4 is a schematic diagram of an original frame number image in another embodiment;
FIG. 5 is a block diagram showing the structure of a frame number image generating device according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The frame number image generation method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. Specifically, the terminal 102 collects an original image of the vehicle frame number and sends the collected original image of the vehicle frame number to the server 104. After the server 104 obtains the original frame number image, the reference identifier and the frame number region can be identified for the original frame number image, so as to obtain the identifier positioning frame of each reference identifier in each reference identifier group and the frame number positioning frame of the frame number region respectively. Then the server 104 obtains the actual position relationship of each reference identifier, obtains the reference coordinates corresponding to the actual position relationship, obtains the target perspective transformation matrix corresponding to the plurality of reference identifier groups based on the reference coordinates and the identifier positioning frame of each reference identifier, and then the server 104 performs perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image. The terminal 102 may be, but not limited to, various video cameras, video recorders, personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices with image capturing functions, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, a frame number image generation method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
step S202, acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers.
The frame number original image is an image of the actual vehicle acquired and uploaded through a terminal, and the frame number original image may include a plurality of reference identification groups, each of which includes a plurality of reference identifications.
In particular, referring to fig. 3, in a practical scenario, a plurality of groups of reference markers, for example 3 groups, may be placed on the plane of the vehicle frame number, each group comprising 4 reference markers, for example 4 small squares. The centers of the 4 small squares in the same group form one large square. Optionally, the distance between different groups of reference markers is greater than the side length of the large square, that is, the inter-group distance is greater than the distance between reference markers within a group. For example, the center-to-center distance between reference markers within a group is 10 cm, forming a 10 cm × 10 cm square, while the distance between groups is 30 cm, so that reference markers from different groups are not confused with one another.
Step S204, identifying the reference mark and the frame number area of the original frame number image to respectively obtain the mark positioning frame of each reference mark in each reference mark group and the frame number positioning frame of the frame number area.
The frame number area refers to an area where the frame number is located in the frame number original image.
Specifically, the server may perform reference identifier and frame number region identification on the original frame number image through a plurality of different identification algorithms, for example, algorithms such as deep learning, image identification, and text identification, to obtain an identifier location frame corresponding to each reference identifier and a frame number location frame corresponding to the frame number region.
Step S206, obtaining the actual position relation of each reference mark, obtaining the reference coordinate corresponding to the actual position relation, and obtaining the target perspective transformation matrix corresponding to a plurality of reference mark groups based on the reference coordinate and the mark positioning frame of each reference mark.
The actual position relationship of the reference identifiers refers to the position relationship between the reference identifiers in the actual scene. For example, referring to fig. 3, it may specify that the horizontal and vertical distances between adjacent reference identifiers within a reference identifier group are both 10 cm.
In this embodiment, the actual position relationship of the reference identifier may be input by the user through the terminal and transmitted to the server by the terminal through the network.
The reference coordinates refer to the coordinate positions of the reference markers in each reference marker group after perspective transformation. Specifically, referring to fig. 3, the server may determine the corresponding reference coordinates according to the position relationship between the reference markers in a reference marker group. For example, if on the frame number plane of an actual vehicle the horizontal and vertical distances between adjacent reference markers are both 10 cm, the server may preset the post-transformation coordinates of one reference marker in the group as (x1, y1) and then determine the coordinate positions of the other reference markers from the actual distances; that is, the reference coordinates corresponding to the actual position relationship may be Q1(x1, y1), Q2(x1+10, y1), Q3(x1+10, y1+10), and Q4(x1, y1+10).
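As an illustration (not part of the patent's text), the reference coordinates can be generated directly from the marker layout. The following minimal Python sketch assumes an origin (x1, y1) and a pixels-per-centimetre scale chosen by the caller; both are illustrative parameters:

```python
# A minimal sketch of deriving the post-transformation reference
# coordinates Q1..Q4 from the physical marker layout. The origin and the
# pixels-per-centimetre scale are illustrative assumptions.
import numpy as np

def reference_coordinates(x1, y1, spacing_cm=10.0, px_per_cm=1.0):
    """Return Q1..Q4 for a square marker group with the given spacing."""
    s = spacing_cm * px_per_cm
    return np.array([
        [x1,     y1],      # Q1
        [x1 + s, y1],      # Q2
        [x1 + s, y1 + s],  # Q3
        [x1,     y1 + s],  # Q4
    ], dtype=np.float32)
```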
Further, the server carries out solving processing on the perspective transformation matrix based on the obtained reference coordinates and the position coordinates of each identification positioning frame to obtain a target perspective transformation matrix.
And S208, performing perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image.
Specifically, the server may perform perspective transformation on the frame number region determined by the frame number positioning frame in the frame number original image based on the established target perspective transformation matrix, and output the frame number image.
In the frame number image generation method, identification positioning frames of all reference identifications in all reference identification groups and frame number positioning frames of frame number areas in an acquired frame number original image are respectively obtained through identification of the reference identifications and the frame number areas in the acquired frame number original image, then actual position relations of a plurality of reference identifications are obtained, reference coordinates corresponding to the actual position relations are obtained, then a target perspective transformation matrix corresponding to a plurality of reference identification groups is solved and obtained based on the reference coordinates and the position coordinates of all identification positioning frames, and then the frame number areas are subjected to perspective transformation processing through the target perspective transformation matrix to obtain the frame number image. Therefore, the frame number image can be generated intelligently based on the target perspective transformation matrix determined by the plurality of reference identifier groups, and the frame number image obtained by the target perspective transformation matrix is judged and analyzed to obtain a judgment result.
In one embodiment, obtaining a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frame of each reference identifier may include: obtaining a plurality of initial perspective transformation matrixes based on the reference coordinates and the position coordinates of the identifier positioning frame of each reference identifier, wherein the initial perspective transformation matrixes are in one-to-one correspondence with the reference identifier groups respectively; and calculating the average value of the plurality of initial perspective transformation matrixes to obtain a target perspective transformation matrix.
As described above, the original frame number image may include 3 reference mark groups, and each reference mark group may include 4 reference marks.
In this embodiment, the actual distances between the reference marks may be equal in all 3 reference mark groups, all being 10 cm; that is, the reference marks in each of the 3 groups form a square with a side length of 10 cm, so the reference coordinates Q1(x1, y1), Q2(x1+10, y1), Q3(x1+10, y1+10), and Q4(x1, y1+10) described above correspond to all 3 reference mark groups.
Further, the server may solve the established perspective transformation matrix according to the determined reference coordinates and the position coordinates of each identifier positioning frame obtained through identification, so as to obtain a plurality of initial perspective transformation matrices corresponding to the plurality of reference identifier groups.
For example, continuing to use the previous example, the original frame number image includes 3 reference identifier groups, and after the reference identifier groups are identified, the coordinates of the identifier positioning frame corresponding to each reference identifier can be obtained as follows:
P11(x11, y11), P12(x12, y12), P13(x13, y13), P14(x14, y14),
P21(x21, y21), P22(x22, y22), P23(x23, y23), P24(x24, y24),
P31(x31, y31), P32(x32, y32), P33(x33, y33), P34(x34, y34)
wherein, P11-P14, P21-P24 and P31-P34 are the coordinates of the corresponding 4 marked positioning frames in the three reference mark groups respectively.
Further, the server may calculate a preset perspective transformation matrix A from the coordinate vector B of each identification positioning frame before perspective transformation and the reference coordinate vector C after perspective transformation, where A, B and C are

$$A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix},\qquad B=\begin{bmatrix}x\\y\\1\end{bmatrix},\qquad C=\begin{bmatrix}x'\\y'\\1\end{bmatrix}.$$

The elements of the perspective transformation matrix A, the coordinate vector B of each identification positioning frame before perspective transformation and the reference coordinate vector C after perspective transformation then satisfy the following relational expression (1):

$$x'=\frac{a_{11}x+a_{12}y+a_{13}}{a_{31}x+a_{32}y+a_{33}},\qquad y'=\frac{a_{21}x+a_{22}y+a_{23}}{a_{31}x+a_{32}y+a_{33}}.\tag{1}$$

In this embodiment, since the perspective transformation is performed on a plane, let $a_{33}=1$.

Further, based on the above perspective transformation matrix A, the coordinate vector B of each mark positioning frame before perspective transformation, and each element in the reference coordinate vector C after perspective transformation, the initial perspective transformation matrix is solved; with $a_{33}=1$, each point pair contributes two linear equations, giving the solving formula (2):

$$\begin{aligned}x'_i&=a_{11}x_i+a_{12}y_i+a_{13}-a_{31}x_ix'_i-a_{32}y_ix'_i,\\ y'_i&=a_{21}x_i+a_{22}y_i+a_{23}-a_{31}x_iy'_i-a_{32}y_iy'_i,\end{aligned}\qquad i=1,\dots,4.\tag{2}$$
further, the server may find the plurality of initial perspective transformation matrices a1, a2, and A3 by substituting the coordinates of P11-P14, P21-P24, and P31-P34 as elements of the coordinate vector B, and the coordinates of Q1-Q4 as elements of the reference coordinate vector C into the above equation (2).
Further, the server performs averaging processing on the initial perspective transformation matrices a1, a2, and A3 to obtain a target perspective transformation matrix.
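For illustration, the per-group solution of formula (2) and the averaging step can be sketched with OpenCV, whose getPerspectiveTransform solves the same four-point system; this is a hedged sketch under that assumption, not the patented implementation itself:

```python
# Sketch: one initial matrix per reference identifier group, then the
# element-wise average as the target perspective transformation matrix.
import cv2
import numpy as np

def target_matrix(groups, ref_coords):
    """groups: list of (4, 2) arrays of marker-box centre points, one per
    reference identifier group, ordered consistently with ref_coords.
    ref_coords: (4, 2) array of the reference coordinates Q1..Q4."""
    mats = [cv2.getPerspectiveTransform(np.float32(g), np.float32(ref_coords))
            for g in groups]        # initial matrices A1, A2, A3, ...
    return np.mean(mats, axis=0)    # average -> target matrix
```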
In the above embodiment, the target perspective transformation matrix is obtained by generating the plurality of initial perspective transformation matrices corresponding to the plurality of reference identifier groups and calculating the average value of the plurality of initial perspective transformation matrices, so that the target perspective transformation matrix is generated based on the plurality of initial perspective transformation matrices, the perspective transformation processing process can be more stable, and the accuracy of the obtained frame number image can be improved.
In one embodiment, obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the identifier positioning frame of each reference identifier may include: obtaining the coordinates of the central point of each identification positioning frame according to the position coordinates of each identification positioning frame; determining a positioning frame group corresponding to each reference identifier group according to the coordinates of each central point, wherein the number of the reference identifier frames in the positioning frame group is equal to that of the reference identifiers in the reference identifier group; and obtaining a plurality of initial perspective transformation matrixes corresponding to the plurality of reference identification groups according to the center point coordinates and the reference coordinates of the positioning frame groups.
Specifically, the server may determine the coordinates of the center point of each identification positioning frame according to the position coordinates of each identification positioning frame obtained after the identification, that is, the coordinate positions of the four vertexes of the identification positioning frame.
Further, the server groups the identification positioning frames obtained by identification according to the coordinates of the central point of each identification positioning frame to obtain a plurality of positioning frame groups corresponding to the plurality of reference identification groups. And the number of the reference mark frames in each positioning frame group is equal to that of the reference marks in the reference mark group.
And then, the server calculates the initial perspective transformation matrix according to the center point coordinates of the reference identification frame in each positioning frame group and the reference coordinates to obtain each initial perspective transformation matrix corresponding to each reference identification group.
In the above embodiment, the positioning frame group corresponding to each reference identifier group is determined according to the center point coordinate of each identifier positioning frame, and then the corresponding initial perspective transformation matrix is obtained according to the center point coordinate and the reference coordinate of the reference identifier frame in each positioning frame group, so that the generated initial perspective transformation matrix corresponds to the reference identifier group one by one, the accuracy of the generated initial perspective transformation matrix can be improved, and the accuracy of data processing in the subsequent processing process can be improved.
In one embodiment, determining the positioning frame group corresponding to each reference identifier group according to the coordinates of each central point may include: and respectively determining a plurality of reference identification frames adjacent to the coordinates of the central points into a group to obtain a plurality of positioning frame groups corresponding to the plurality of reference identification groups.
Specifically, the server may determine the center point coordinate of each identifier locator box according to the coordinate positions of the 4 vertices of each identifier locator box, and then determine the position distance from other center point coordinates according to the center point coordinate.
Further, the server determines a plurality of reference identifier frames whose center point coordinates are adjacent as a group according to the position distances between the center point coordinates; for example, the reference identifier frames corresponding to the 4 mutually closest center point coordinates form one group.
Further, the server groups all the identification positioning frames of the reference identification obtained by identification based on the original frame number image in such a way, and a plurality of positioning frame groups corresponding to the plurality of reference identification groups are obtained.
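A hedged sketch of this grouping step follows; greedy nearest-neighbour grouping is one possible realisation, valid under the fig. 3 assumption that the inter-group spacing exceeds the intra-group spacing:

```python
import numpy as np

def group_by_proximity(boxes, group_size=4):
    """boxes: list of (4, 2) vertex arrays of identifier locating frames.
    Returns lists of box indices, one list per reference identifier group."""
    # Centre point of each locating frame = mean of its four vertices.
    centres = np.array([np.mean(b, axis=0) for b in np.asarray(boxes, float)])
    remaining = list(range(len(boxes)))
    groups = []
    while remaining:
        seed = remaining[0]
        d = np.linalg.norm(centres[remaining] - centres[seed], axis=1)
        nearest = [remaining[i] for i in np.argsort(d)[:group_size]]
        groups.append(nearest)
        remaining = [i for i in remaining if i not in nearest]
    return groups
```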
In the above embodiment, based on the center point coordinate of the identification positioning frame, a plurality of reference identification frames adjacent to the center point coordinate are determined as a group, so that the distribution of the positioning frame group is more accurate, and the accuracy of the subsequent generation of the perspective transformation matrix can be improved.
In one embodiment, the performing perspective transformation processing on the frame number region through the target perspective transformation matrix according to the frame number positioning frame of the frame number region to obtain the frame number image may include: according to the coordinate position of the frame number positioning frame in the frame number area, carrying out perspective transformation processing on the frame number area through a target perspective transformation matrix to obtain target pixel points of each original pixel point in the frame number area after perspective transformation; acquiring pixel values of all original pixel points in the original frame number image; and filling each target pixel point based on the pixel value of each original pixel point to obtain a frame number image.
Specifically, the server may traverse the frame number region determined by the coordinate position of the frame number positioning frame according to the obtained perspective transformation matrix, and obtain the target pixel point after perspective transformation of each pixel point in the frame number region. And then the server acquires pixel values of all pixel points in the frame number area, namely values of three channels of RGB (red, green and blue), from the original frame number image, and fills the pixel values into target pixel points after perspective transformation to obtain a frame number image.
In this embodiment, the server may obtain a pixel value of each pixel point based on a corresponding relationship between a pixel point before perspective transformation and a target pixel point after perspective transformation, and fill each target pixel point.
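In an OpenCV-based realisation (an assumed library choice; the patent does not name one), warpPerspective performs exactly this per-pixel mapping and pixel-value filling:

```python
import cv2
import numpy as np

def warp_frame_number(original, frame_box, A, out_size):
    """original: the frame number original image (RGB pixel values).
    frame_box: (4, 2) corner coordinates of the frame number locating frame.
    A: the 3x3 target perspective transformation matrix.
    out_size: (width, height) of the warped canvas -- an assumed parameter."""
    # Warp the image: each target pixel is traced back to an original pixel
    # and filled with its interpolated RGB value.
    warped = cv2.warpPerspective(original, A, out_size, flags=cv2.INTER_LINEAR)
    # Map the locating-frame corners through A to find where the frame
    # number region lands, then crop it as the frame number image.
    corners = cv2.perspectiveTransform(
        np.float32(frame_box).reshape(-1, 1, 2), A)
    x, y, w, h = cv2.boundingRect(np.int32(corners))
    return warped[y:y + h, x:x + w]
```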
In the above embodiment, the pixel values of the original pixel points in the original frame number image are obtained, and the target pixel points are filled based on the pixel values of the original pixel points, so that the generated frame number image contains the pixel values in the original frame number image, the information of the original frame number image is retained, and the accuracy of image generation is improved.
In one embodiment, before the identification of the reference identifier and the identification of the frame number region on the original frame number image, the method may further include: recognizing a frame number text object of the frame number original image, and determining a text inclination angle of the frame number text object in the frame number original image; and rotating the original frame number image according to the text inclination angle to obtain the rotated original frame number image with the inclination angle of zero.
The identification of the frame number text object on the frame number original image can be realized by a common character identification mode or a neural network model.
Specifically, the frame number original image obtained by the server may be an image with an inclination angle, as shown in fig. 4 (a). The server may input the frame number original image into a trained text object recognition network, for example PSENet, and recognize the text content of the frame number original image to obtain a recognition frame of the frame number area, where the recognition frame carries the text inclination angle.
Further, the server may perform rotation processing on the original frame number image according to the text inclination angle, so that the text inclination angle becomes zero, and further obtain the corrected original frame number image, as shown in fig. 4 (b).
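A minimal deskewing sketch (the rotation centre, border fill and OpenCV's angle sign convention are illustrative choices):

```python
import cv2

def deskew(image, angle_degrees):
    """Rotate the frame number original image so that the detected text
    inclination angle becomes zero."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_degrees, 1.0)
    return cv2.warpAffine(image, M, (w, h), borderValue=(255, 255, 255))
```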
In the above embodiment, before the identification of the reference identifier and the identification of the frame number region are performed on the frame number original image, the frame number text object is identified on the frame number original image, so as to obtain the text inclination angle, and the frame number original image is subjected to rotation processing based on the text inclination angle, so as to obtain the frame number original image with the zero inclination angle after rotation, so that the post-processing process is performed based on the corrected image, the accuracy of the post-processing can be improved, and the accuracy of the generated frame number image can be improved.
In one embodiment, the identification of the reference identifier and the identification of the frame number region for the original frame number image are performed by a pre-trained neural network model, and the training mode of the neural network model may include: acquiring a training set image; marking the reference mark and the frame number area in the training set image through a marking frame respectively to obtain the position information and the category information of the reference mark and the frame number area in the training set image respectively; normalizing the marked training set images to obtain training set images with the same size as a preset size; inputting the training set image into the constructed initial neural network model, and performing feature extraction on the training set image to obtain feature images of multiple scales; carrying out feature fusion on the feature images of all scales to obtain a prediction frame corresponding to the feature images of all scales; determining loss values of the prediction frames corresponding to all scales based on the labeling frames, and updating model parameters through the loss values; and carrying out iterative processing on the initial neural network model to obtain the trained neural network model.
Specifically, the server may obtain a plurality of images similar to the frame number original images collected by the terminal as training set images, and then label each object in the training set images with a labeling tool; for example, the reference marks and the frame number regions may be labeled with LabelImg to obtain the position information and category information of each object in the training set images.
Further, the server may perform image filling and image scaling processing on the training set image according to the requirement of the input size of the neural network model, to obtain a training set image that is consistent with the input size required by the neural network model.
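This normalization step might look like the following letterbox sketch; the 416 × 416 input size and grey padding are assumptions, not values given in the patent:

```python
import cv2
import numpy as np

def letterbox(image, target=416):
    """Scale an image to the model input size while preserving its aspect
    ratio, padding the rest of the canvas. Returns the scale so that the
    label boxes can be rescaled consistently."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image, (int(w * scale), int(h * scale)))
    canvas = np.full((target, target, 3), 128, dtype=np.uint8)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    return canvas, scale
```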
Further, the server may extract features of multiple scales for each training set image to obtain feature images of multiple sizes, and further perform feature fusion for the feature images of each scale to obtain a prediction frame corresponding to the feature images of each scale.
Further, the server may use a loss function to calculate loss values for the category score, confidence score, center coordinates, width and height of each prediction frame against the category, center coordinates, width and height of the annotation frame, back-propagate the calculated loss to obtain the model gradients, and update the weight parameters of the model to obtain a neural network model with updated weight parameters.
Then, the server can perform iterative processing on the model according to a preset learning rate to obtain a trained neural network model.
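Schematically, this training procedure could be realised as below in PyTorch (an assumed framework); `model` and `detection_loss` are placeholders for the multi-scale network and the box/confidence/class loss described above, not APIs defined by the patent:

```python
import torch

def train(model, loader, detection_loss, epochs=100, lr=1e-3):
    """One possible realisation of the described training loop."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):                      # iterative processing
        for images, annotation_boxes in loader:
            predictions = model(images)          # multi-scale prediction frames
            loss = detection_loss(predictions, annotation_boxes)
            optimiser.zero_grad()
            loss.backward()                      # back-propagation
            optimiser.step()                     # update weight parameters
    return model
```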
In the above embodiment, the trained neural network model is used for identifying the reference identifier and the frame number region, so that the accuracy of identifying the reference identifier can be improved, the accuracy of the target perspective transformation matrix generated by the identifier positioning frame of the reference identifier can be improved, and the accuracy of the generated image can be improved.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a frame number image generating device including: a frame number original image acquisition module 100, a first recognition module 200, a target perspective transformation matrix generation module 300 and a perspective transformation processing module 400, wherein:
the frame number original image obtaining module 100 is configured to obtain a frame number original image, where the frame number original image includes a plurality of reference identifier groups, and each reference identifier group includes a plurality of reference identifiers.
The first recognition module 200 is configured to perform identification of the reference identifier and identification of the frame number region on the original frame number image, and obtain an identifier positioning frame of each reference identifier in each reference identifier group and a frame number positioning frame of the frame number region respectively.
And a target perspective transformation matrix generation module 300, configured to obtain an actual position relationship of each reference identifier, obtain a reference coordinate corresponding to the actual position relationship, and obtain a target perspective transformation matrix corresponding to the multiple reference identifier groups based on the reference coordinate and the identifier positioning frame of each reference identifier.
And the perspective transformation processing module 400 is used for performing perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image.
In one embodiment, the target perspective transformation matrix generation module 300 may include:
and the initial perspective transformation matrix generation submodule is used for obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identification positioning frame of each reference identification, and the initial perspective transformation matrices are respectively in one-to-one correspondence with the reference identification groups.
And the target perspective transformation matrix generation submodule is used for calculating the average value of the plurality of initial perspective transformation matrices to obtain a target perspective transformation matrix.
In one embodiment, the initial perspective transformation matrix generation sub-module may include:
and the central point coordinate generating unit is used for obtaining the central point coordinate of each identification positioning frame according to the position coordinate of each identification positioning frame.
And the positioning frame group generating unit is used for determining the positioning frame group corresponding to each reference identifier group according to the coordinates of each central point, and the number of the reference identifier frames in the positioning frame group is equal to that of the reference identifiers in the reference identifier group.
And the initial perspective transformation matrix generating unit is used for obtaining a plurality of initial perspective transformation matrixes corresponding to the plurality of reference identifier groups according to the center point coordinates and the reference coordinates of each positioning frame group.
In one embodiment, the positioning frame group generating unit is configured to determine a plurality of reference identifier frames adjacent in center point coordinates as a group, obtaining a plurality of positioning frame groups corresponding to the plurality of reference identifier groups.
In one embodiment, the perspective transformation processing module 400 may include:
and the perspective transformation processing submodule is used for carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the coordinate position of the frame number positioning frame of the frame number area to obtain target pixel points after each original pixel point of the frame number area is subjected to perspective transformation.
And the pixel value acquisition submodule is used for acquiring the pixel value of each original pixel point in the frame number original image.
And the filling submodule is used for filling each target pixel point based on the pixel value of each original pixel point to obtain a frame number image.
In one embodiment, the apparatus may further include:
and the second recognition module is used for recognizing the frame number text object on the frame number original image before the first recognition module 200 recognizes the reference identifier and the frame number region on the frame number original image, and determining the text inclination angle of the frame number text object in the frame number original image.
And the rotation processing module is used for performing rotation processing on the original frame number image according to the text inclination angle to obtain the original frame number image with the rotated text inclination angle being zero.
In one embodiment, the first recognition module 200 performs the recognition of the reference identifier and the recognition of the frame number region on the original frame number image through a neural network model trained in advance by a training module, and the training module may include:
and the training set image acquisition submodule is used for acquiring training set images.
And the marking submodule is used for marking the reference mark and the frame number area in the training set image through the marking frame respectively to obtain the position information and the category information of the reference mark and the frame number area in the training set image respectively.
And the normalization processing submodule is used for performing normalization processing on the labeled training set images to obtain training set images of the preset size.
And the feature extraction submodule is used for inputting the images of the training set into the constructed initial neural network model, and performing feature extraction on the images of the training set to obtain feature images of multiple scales.
And the characteristic fusion submodule is used for carrying out characteristic fusion on the characteristic images of all scales to obtain a prediction frame corresponding to the characteristic images of all scales.
And the loss processing submodule is used for determining the loss value of the prediction frame corresponding to each scale based on the labeling frame and updating the model parameters through the loss value.
And the iterative processing submodule is used for carrying out iterative processing on the initial neural network model to obtain the trained neural network model.
For specific limitations of the frame number image generation device, reference may be made to the above limitations of the frame number image generation method, which are not described herein again. Each module in the above-described frame number image generation apparatus may be entirely or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer equipment is used for storing data such as the original vehicle frame number image, the vehicle frame number image obtained after perspective transformation and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a frame number image generation method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program: acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers; identifying the reference identifier and the frame number area of the original frame number image to respectively obtain an identifier positioning frame of each reference identifier in each reference identifier group and a frame number positioning frame of the frame number area; acquiring the actual position relation of each reference identifier, acquiring reference coordinates corresponding to the actual position relation, and acquiring a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of the reference identifiers; and carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image.
In one embodiment, when the processor executes the computer program, obtaining a target perspective transformation matrix corresponding to the plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of the reference identifiers may include: obtaining a plurality of initial perspective transformation matrixes based on the reference coordinates and the position coordinates of the identifier positioning frames, wherein the initial perspective transformation matrixes are in one-to-one correspondence with the reference identifier groups; and calculating the average value of the plurality of initial perspective transformation matrixes to obtain a target perspective transformation matrix.
In one embodiment, when the processor executes the computer program, obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frames may include: obtaining the coordinates of the central point of each identification positioning frame according to the position coordinates of each identification positioning frame; determining a positioning frame group corresponding to each reference identifier group according to the coordinates of each central point, wherein the number of the reference identifier frames in the positioning frame group is equal to that of the reference identifiers in the reference identifier group; and obtaining a plurality of initial perspective transformation matrixes corresponding to the plurality of reference identification groups according to the center point coordinates and the reference coordinates of the positioning frame groups.
In one embodiment, the determining, by the processor, the set of positioning frames corresponding to the set of reference identifiers according to the coordinates of the central points when the processor executes the computer program may include: and respectively determining a plurality of reference identification frames adjacent to the coordinates of the central points into a group to obtain a plurality of positioning frame groups corresponding to the plurality of reference identification groups.
In one embodiment, the performing, by the processor, a frame number positioning frame according to a frame number region by the processor executing the computer program to perform perspective transformation processing on the frame number region through the target perspective transformation matrix to obtain a frame number image may include: according to the coordinate position of the frame number positioning frame in the frame number area, carrying out perspective transformation processing on the frame number area through a target perspective transformation matrix to obtain target pixel points of each original pixel point in the frame number area after perspective transformation; acquiring pixel values of all original pixel points in the original frame number image; and filling each target pixel point based on the pixel value of each original pixel point to obtain a frame number image.
In one embodiment, before the processor executes the computer program to recognize the reference identifier and the frame number region of the original frame number image, the processor may further perform: recognizing a frame number text object of the frame number original image, and determining a text inclination angle of the frame number text object in the frame number original image; and rotating the original frame number image according to the text inclination angle to obtain the rotated original frame number image with the inclination angle of zero.
In one embodiment, when the processor executes the computer program, the identification of the reference identifier of the original frame number image and the identification of the frame number region are performed through a pre-trained neural network model, and the training mode of the neural network model may include: acquiring a training set image; marking the reference mark and the frame number area in the training set image through a marking frame respectively to obtain the position information and the category information of the reference mark and the frame number area in the training set image respectively; normalizing the marked training set images to obtain training set images with the same size as a preset size; inputting the training set image into the constructed initial neural network model, and performing feature extraction on the training set image to obtain feature images of multiple scales; carrying out feature fusion on the feature images of all scales to obtain a prediction frame corresponding to the feature images of all scales; determining loss values of the prediction frames corresponding to all scales based on the labeling frames, and updating model parameters through the loss values; and carrying out iterative processing on the initial neural network model to obtain the trained neural network model.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program performs the steps of: acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups and each reference identifier group comprises a plurality of reference identifiers; identifying the reference identifiers and the frame number region of the frame number original image to respectively obtain the identifier positioning frame of each reference identifier in each reference identifier group and the frame number positioning frame of the frame number region; acquiring the actual positional relationship of the reference identifiers, acquiring reference coordinates corresponding to the actual positional relationship, and obtaining a target perspective transformation matrix corresponding to the plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames; and performing perspective transformation processing on the frame number region through the target perspective transformation matrix according to the frame number positioning frame of the frame number region to obtain a frame number image.
In one embodiment, when executed by the processor, the computer program implements obtaining the target perspective transformation matrix corresponding to the plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames, which may include: obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frames, wherein the initial perspective transformation matrices correspond one-to-one to the reference identifier groups; and calculating the average of the plurality of initial perspective transformation matrices to obtain the target perspective transformation matrix.
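A minimal sketch of the averaging step follows. Because a homography is only defined up to scale, each initial matrix is first normalised so its bottom-right entry equals 1 before the element-wise mean is taken; this normalisation is our assumption, as the patent only states that the average value of the initial matrices is computed.

```python
import numpy as np

def average_matrix(initial_matrices):
    """Element-wise mean of the per-group initial perspective matrices."""
    normalised = [m / m[2, 2] for m in initial_matrices]  # fix the scale ambiguity
    return np.mean(normalised, axis=0)
```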
In one embodiment, when executed by the processor, the computer program implements obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frames, which may include: obtaining the center point coordinates of each identifier positioning frame according to the position coordinates of each identifier positioning frame; determining the positioning frame group corresponding to each reference identifier group according to the center point coordinates, wherein the number of identifier positioning frames in a positioning frame group equals the number of reference identifiers in the corresponding reference identifier group; and obtaining a plurality of initial perspective transformation matrices corresponding to the plurality of reference identifier groups according to the center point coordinates of each positioning frame group and the reference coordinates.
In one embodiment, when executed by the processor, the computer program implements determining the positioning frame group corresponding to each reference identifier group according to the center point coordinates, which may include: grouping the identifier positioning frames whose center point coordinates are adjacent to one another, so as to obtain a plurality of positioning frame groups corresponding to the plurality of reference identifier groups.
In one embodiment, when executed by the processor, the computer program implements performing perspective transformation processing on the frame number region through the target perspective transformation matrix according to the frame number positioning frame of the frame number region to obtain the frame number image, which may include: performing perspective transformation processing on the frame number region through the target perspective transformation matrix according to the coordinate position of the frame number positioning frame, to obtain the target pixel point of each original pixel point in the frame number region after perspective transformation; acquiring the pixel value of each original pixel point in the frame number original image; and filling each target pixel point based on the pixel value of its corresponding original pixel point to obtain the frame number image.
In one embodiment, before the computer program is executed by the processor to perform the identification of the reference identifiers and the frame number region on the frame number original image, the computer program may further perform: recognizing a frame number text object in the frame number original image and determining the text inclination angle of the frame number text object; and rotating the frame number original image according to the text inclination angle to obtain a rotated frame number original image whose text inclination angle is zero.
In one embodiment, when executed by the processor, the computer program performs the identification of the reference identifiers and the frame number region in the frame number original image through a pre-trained neural network model, and the training of the neural network model may include: acquiring training set images; labeling the reference identifiers and the frame number regions in the training set images with labeling boxes, respectively, to obtain the position information and category information of the reference identifiers and the frame number regions; normalizing the labeled training set images to obtain training set images of a preset size; inputting the training set images into a constructed initial neural network model and performing feature extraction to obtain feature images of multiple scales; performing feature fusion on the feature images of the scales to obtain prediction boxes corresponding to the feature images of each scale; determining loss values of the prediction boxes of each scale based on the labeling boxes, and updating the model parameters through the loss values; and iterating this processing on the initial neural network model to obtain the trained neural network model.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods in the embodiments described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.
Claims (10)
1. A method of generating a frame number image, the method comprising:
acquiring a frame number original image, wherein the frame number original image comprises a plurality of reference identifier groups, and each reference identifier group comprises a plurality of reference identifiers;
identifying the reference identifiers and the frame number area of the frame number original image to respectively obtain an identifier positioning frame of each reference identifier in each reference identifier group and a frame number positioning frame of the frame number area;
acquiring the actual position relation of each reference identifier, acquiring reference coordinates corresponding to the actual position relation, and acquiring a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of each reference identifier;
and carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area so as to obtain a frame number image.
2. The method of claim 1, wherein obtaining a target perspective transformation matrix corresponding to a plurality of reference identifier groups based on the reference coordinates and the identifier positioning frames of each of the reference identifiers comprises:
obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frame of each reference identifier, wherein the initial perspective transformation matrices correspond one-to-one to the reference identifier groups;
and calculating the average value of the plurality of initial perspective transformation matrices to obtain a target perspective transformation matrix.
3. The method of claim 2, wherein obtaining a plurality of initial perspective transformation matrices based on the reference coordinates and the position coordinates of the identifier positioning frames of each of the reference identifiers comprises:
obtaining the center point coordinates of each identifier positioning frame according to the position coordinates of each identifier positioning frame;
determining a positioning frame group corresponding to each reference identifier group according to each of the center point coordinates, wherein the number of identifier positioning frames in the positioning frame group is equal to the number of reference identifiers in the reference identifier group;
and obtaining a plurality of initial perspective transformation matrices corresponding to the plurality of reference identifier groups according to the center point coordinates of each positioning frame group and the reference coordinates.
4. The method of claim 3, wherein determining a positioning frame group corresponding to each reference identifier group according to each of the center point coordinates comprises:
and grouping a plurality of identifier positioning frames whose center point coordinates are adjacent to one another, so as to obtain a plurality of positioning frame groups corresponding to the plurality of reference identifier groups.
5. The method according to claim 1, wherein the subjecting the frame number area to perspective transformation processing by the target perspective transformation matrix according to the frame number positioning frame of the frame number area to obtain a frame number image comprises:
according to the coordinate position of the frame number positioning frame of the frame number area, carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix to obtain target pixel points after perspective transformation of all original pixel points of the frame number area;
acquiring a pixel value of each original pixel point in the original frame number image;
and filling each target pixel point based on the pixel value of each original pixel point to obtain a frame number image.
6. The method of claim 1, wherein before the identification of the reference identifier and the identification of the frame number area of the frame number original image, the method further comprises:
recognizing a frame number text object of the frame number original image, and determining a text inclination angle of the frame number text object in the frame number original image;
and rotating the frame number original image according to the text inclination angle to obtain a rotated frame number original image whose inclination angle is zero.
7. The method according to claim 1, wherein the identification of the reference identifier and the identification of the frame number area of the frame number original image are performed by a pre-trained neural network model, and the training of the neural network model comprises:
acquiring a training set image;
labeling the reference identifiers and the frame number areas in the training set images with labeling boxes, respectively, to obtain the position information and category information of the reference identifiers and the frame number areas in the training set images;
normalizing the labeled training set images to obtain training set images of a preset size;
inputting the training set images into the constructed initial neural network model, and performing feature extraction on the training set images to obtain feature images of multiple scales;
performing feature fusion on the feature images of the scales to obtain prediction boxes corresponding to the feature images of each scale;
determining loss values of the prediction boxes of each scale based on the labeling boxes, and updating the model parameters through the loss values;
and carrying out iterative processing on the initial neural network model to obtain a trained neural network model.
8. A frame number image generating apparatus, characterized in that the apparatus comprises:
the system comprises a frame number original image acquisition module, a frame number original image acquisition module and a frame number image acquisition module, wherein the frame number original image acquisition module is used for acquiring a frame number original image, the frame number original image comprises a plurality of reference identification groups, and each reference identification group comprises a plurality of reference identifications;
the identification module is used for identifying the reference identifier and the frame number area of the original frame number image to respectively obtain the identifier positioning frame of each reference identifier in each reference identifier group and the frame number positioning frame of the frame number area;
a target perspective transformation matrix generation module, configured to obtain an actual position relationship of each reference identifier, obtain a reference coordinate corresponding to the actual position relationship, and obtain a target perspective transformation matrix corresponding to the plurality of reference identifier groups based on the reference coordinate and an identifier positioning frame of each reference identifier;
and the perspective transformation processing module is used for carrying out perspective transformation processing on the frame number area through the target perspective transformation matrix according to the frame number positioning frame of the frame number area so as to obtain a frame number image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010169557.3A CN111401363A (en) | 2020-03-12 | 2020-03-12 | Frame number image generation method and device, computer equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111401363A true CN111401363A (en) | 2020-07-10 |
Family
ID=71430752
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010169557.3A Pending CN111401363A (en) | 2020-03-12 | 2020-03-12 | Frame number image generation method and device, computer equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111401363A (en) |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019232853A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Chinese model training method, chinese image recognition method, device, apparatus and medium |
| CN110414309A (en) * | 2019-05-27 | 2019-11-05 | 上海眼控科技股份有限公司 | A method for automatic identification of vehicle nameplates |
| CN110796709A (en) * | 2019-10-29 | 2020-02-14 | 上海眼控科技股份有限公司 | Method and device for acquiring size of frame number, computer equipment and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| Zhang Zhuangfei, "Research on the application of binarized image algorithms in vehicle frame number recognition systems" (车架号识别系统二值化图像算法应用研究) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111832497A (en) * | 2020-07-17 | 2020-10-27 | 西南大学 | A Post-processing Method for Text Detection Based on Geometric Features |
| CN111832497B (en) * | 2020-07-17 | 2022-06-28 | 西南大学 | A Post-processing Method for Text Detection Based on Geometric Features |
| CN111738223A (en) * | 2020-07-28 | 2020-10-02 | 上海眼控科技股份有限公司 | Vehicle frame number image generation method, device, computer equipment and storage medium |
| CN112580501A (en) * | 2020-12-17 | 2021-03-30 | 上海眼控科技股份有限公司 | Frame number image generation method and device, computer equipment and storage medium |
| CN113516131A (en) * | 2020-12-24 | 2021-10-19 | 阿里巴巴集团控股有限公司 | Image processing method, apparatus, device and storage medium |
| CN113673416A (en) * | 2021-08-18 | 2021-11-19 | 浙江大华技术股份有限公司 | Frame number identification method and device, storage medium and electronic device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111401363A (en) | Frame number image generation method and device, computer equipment and storage medium | |
| US9165365B2 (en) | Method and system for estimating attitude of camera | |
| CN110580723B (en) | Method for carrying out accurate positioning by utilizing deep learning and computer vision | |
| CN106203242B (en) | Similar image identification method and equipment | |
| CN109671119A (en) | A kind of indoor orientation method and device based on SLAM | |
| WO2021017882A1 (en) | Image coordinate system conversion method and apparatus, device and storage medium | |
| CN112132907B (en) | Camera calibration method and device, electronic equipment and storage medium | |
| CN109960962B (en) | Image recognition method and device, electronic equipment and readable storage medium | |
| CN114862973B (en) | Space positioning method, device and equipment based on fixed point location and storage medium | |
| CN109740487B (en) | Point cloud labeling method and device, computer equipment and storage medium | |
| CN111459269B (en) | Augmented reality display method, system and computer readable storage medium | |
| CN112200056B (en) | Face living body detection method and device, electronic equipment and storage medium | |
| CN111666922A (en) | Video matching method and device, computer equipment and storage medium | |
| CN109740659B (en) | Image matching method and device, electronic equipment and storage medium | |
| CN114005169B (en) | Face key point detection method and device, electronic equipment and storage medium | |
| EP3825804A1 (en) | Map construction method, apparatus, storage medium and electronic device | |
| CN112651315B (en) | Information extraction method, device, computer equipment and storage medium for line graph | |
| CN113592839A (en) | Distribution network line typical defect diagnosis method and system based on improved fast RCNN | |
| CN110796709A (en) | Method and device for acquiring size of frame number, computer equipment and storage medium | |
| CN108647264B (en) | Automatic image annotation method and device based on support vector machine | |
| CN117896626B (en) | Method, device, equipment and storage medium for detecting motion trajectory with multiple cameras | |
| CN114882115B (en) | Vehicle pose prediction method and device, electronic equipment and storage medium | |
| CN110766077A (en) | Method, device and equipment for screening sketch in evidence chain image | |
| CN112950528A (en) | Certificate posture determining method, model training method, device, server and medium | |
| CN113793392A (en) | Camera parameter calibration method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | AD01 | Patent right deemed abandoned | Effective date of abandoning: 2024-09-27 |