CN113808209B - Positioning identification method, positioning identification device, computer equipment and readable storage medium - Google Patents
- Publication number
- CN113808209B (application CN202111117093.2A)
- Authority
- CN
- China
- Prior art keywords
- target object
- identity
- coordinates
- dimensional
- preset area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1452—Methods for optical code recognition including a method step for retrieval of the optical code detecting bar code edges
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Electromagnetism (AREA)
- General Health & Medical Sciences (AREA)
- Toxicology (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the application provides a positioning identification method, a positioning identification device, computer equipment and a readable storage medium. The positioning identification device is in communication connection with an infrared camera module arranged above a preset area, the camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object. The target object is positioned quickly and accurately from the stage target's own features or identity mark, improving the recognition stability and positioning accuracy for target objects such as robots. The method positions multiple targets simultaneously with high precision and good stability, and reduces the influence of stage lighting and walking personnel on positioning accuracy.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to a positioning identification method, a positioning identification device, a computer device, and a readable storage medium.
Background
At present, robots are increasingly applied in stage performance: multiple robots are placed on a stage at the same time, and orderly formation changes and actions are an important way to showcase technological development. A major difficulty in stage performance is how to accurately position the poses of multiple robots so as to achieve the stage effect of an orderly formation. Stage lighting changes drastically, ranging from strong light to weak light and sometimes no light at all, so robots on the stage must be positioned in this complex light-and-shadow environment.
At present, various stage positioning schemes exist. The first is a wireless positioning scheme such as WiFi, Bluetooth or UWB; this technology is not interfered with by stage lighting, but its accuracy is low and it is easily affected by people walking around.
It can be seen that existing stage positioning schemes suffer from the technical problem of low positioning accuracy.
Disclosure of Invention
In order to solve the technical problems, the embodiment of the invention provides a positioning identification method, a positioning identification device, computer equipment and a readable storage medium.
In a first aspect, an embodiment of the present invention provides a positioning identification method applied to a positioning identification device, where the positioning identification device is in communication connection with an infrared camera module arranged above a preset area, a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object; the method comprises the following steps:
acquiring an initial image of a preset area acquired by the infrared camera module, and determining a characteristic frame of each target object in the initial image;
determining positioning data of each target object according to pixel point coordinates corresponding to the characteristic frames of each target object and a reference coordinate system corresponding to the preset area;
determining the identity information of each target object according to a reference identity stored in an identity information base in advance and an actual identity corresponding to each pixel point attribute value in a characteristic frame of each target object;
and outputting the identity information and the positioning data of each target object.
According to one embodiment of the present disclosure, the step of determining a feature frame of each target object in the initial image includes:
processing the initial image into a binarized image;
searching edge pixel points from the binarized image, and fitting into at least one candidate frame according to all the edge pixel points;
screening out candidate frames meeting preset requirements to serve as characteristic frames of the target object.
According to a specific embodiment of the disclosure, the step of searching for edge pixel points from the binarized image and fitting at least one candidate frame according to all edge pixel points includes:
acquiring pixel types of adjacent pixel points of each pixel point;
finding out pixel points with different pixel types from adjacent pixel points to serve as edge pixel points;
sorting all edge pixel points by their angle of inclination about the centre of gravity of the point set, sequentially selecting points that lie within a preset distance range of a preset centre point, performing straight-line fitting, and calculating the error sum of each straight line;
and selecting the corner indexes corresponding to the four straight lines with the minimum error sum to form a candidate frame, as illustrated in the sketch below.
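By way of an illustrative aside (not part of the original disclosure), the following Python sketch mirrors this candidate-frame fitting step under simplifying assumptions: corner indexes are searched on a coarse grid rather than exhaustively, and each side is fitted with a total-least-squares line. Function names such as fit_quad are hypothetical.

```python
import numpy as np
from itertools import combinations

def line_fit_error(pts):
    """Sum of squared perpendicular distances to the best total-least-squares line."""
    centered = pts - pts.mean(axis=0)
    return float(np.linalg.svd(centered, compute_uv=False)[-1] ** 2)

def quad_error(pts, corners):
    """Error sum of the four side lines obtained by splitting the angularly
    sorted points at the four candidate corner indexes (with wrap-around)."""
    n, err = len(pts), 0.0
    ext = list(corners) + [corners[0] + n]
    for a, b in zip(ext, ext[1:]):
        side = pts[np.arange(a, b + 1) % n]
        if len(side) >= 2:
            err += line_fit_error(side)
    return err

def fit_quad(points, n_candidates=12):
    """Sort edge points by inclination angle about their centre of gravity,
    then pick the four corner indexes whose side fits have minimum error sum."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    pts = pts[np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))]
    candidates = np.unique(np.linspace(0, len(pts) - 1, n_candidates, dtype=int))
    best, best_err = None, np.inf
    for quad in combinations(candidates, 4):  # coarse corner search
        e = quad_error(pts, quad)
        if e < best_err:
            best, best_err = quad, e
    return pts[list(best)], best_err

# Demo on a synthetic square outline (40 points per side).
t = np.linspace(-1, 1, 40)
one = np.ones_like(t)
square = np.concatenate([np.c_[t, one], np.c_[one, -t], np.c_[-t, -one], np.c_[-one, t]])
corners, err = fit_quad(square)
```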
According to one embodiment of the disclosure, the step of determining the positioning data of each target object according to the coordinates of the pixel points corresponding to the feature frame of each target object and the reference coordinate system corresponding to the preset area includes:
acquiring two-dimensional coordinates of four vertexes in the characteristic frame;
converting the two-dimensional coordinates of each vertex into three-dimensional coordinates according to the shooting parameters of the infrared shooting module;
and calculating the positioning data of the target object corresponding to the characteristic frame according to the three-dimensional coordinates of the four vertexes in the characteristic frame.
According to one embodiment of the disclosure, the step of converting the two-dimensional coordinates of each vertex into three-dimensional coordinates according to the shooting parameters of the infrared camera module includes:
converting the two-dimensional abscissa and ordinate of each vertex into three-dimensional x-axis and y-axis pixel coordinates according to the focal length and offset of the infrared camera module, and taking a fixed value for the three-dimensional z-axis pixel coordinate of the vertex;
multiplying the three-dimensional x-axis, y-axis and z-axis pixel coordinates of the vertex by the actual object distance to obtain the actual three-dimensional x-axis, y-axis and z-axis coordinates of the vertex;
and/or,
the step of calculating the positioning data of the target object corresponding to the feature frame according to the three-dimensional coordinates of the four vertices in the feature frame includes:
calculating the transverse-axis pose data, the longitudinal-axis pose data and the deflection angle from the actual three-dimensional x-axis and y-axis coordinates of the vertices, and taking the transverse-axis pose data, the longitudinal-axis pose data and the deflection angle as the positioning data of the target object.
According to one specific embodiment of the disclosure, the step of determining the identity information of each target object according to the reference identity stored in the identity information library in advance and the actual identity corresponding to the attribute value of each pixel point in the feature frame of each target object includes:
obtaining corresponding actual identity marks according to the attribute values of each pixel point in the characteristic frame of each target object;
transforming the actual identity mark of each target object to obtain a group of associated identity marks;
performing an exclusive-or between the group of associated identity marks of each target object and all the reference identity marks, and taking the identity information corresponding to the best-matching reference identity mark as the identity information of the target object.
According to one specific embodiment of the disclosure, the identity arranged on the top surface of the target object is a two-dimensional code;
the characteristic frame is a quadrilateral frame corresponding to the size of the two-dimensional code.
In a second aspect, an embodiment of the present invention provides a positioning and identifying device, where the positioning and identifying device is in communication connection with an infrared camera module arranged above a preset area, a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object; the positioning and identifying device comprises:
the acquisition module is used for acquiring an initial image of a preset area acquired by the infrared camera module and determining a characteristic frame of each target object in the initial image;
the first determining module is used for determining positioning data of each target object according to pixel point coordinates corresponding to the characteristic frames of each target object and a reference coordinate system corresponding to the preset area;
the second determining module is used for determining the identity information of each target object according to the reference identity mark pre-stored in the identity information base and the actual identity mark corresponding to the attribute value of each pixel point in the characteristic frame of each target object;
and the output module is used for outputting the identity information and the positioning data of each target object.
In a third aspect, an embodiment of the present invention provides a computer device, comprising a memory and a processor, the memory being configured to store a computer program, which when executed by the processor performs the positioning identification method according to any one of the first aspects.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium storing a computer program which, when run on a processor, performs the location identification method of any one of the first aspects.
In the positioning identification method, positioning identification device, computer equipment and computer-readable storage medium provided by the application, the positioning identification device is in communication connection with an infrared camera module arranged above a preset area, a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object. The method comprises the following steps: acquiring an initial image of the preset area captured by the infrared camera module, and determining a feature frame of each target object in the initial image; determining positioning data of each target object according to the pixel coordinates corresponding to the feature frame of each target object and a reference coordinate system corresponding to the preset area; determining the identity information of each target object according to reference identity marks pre-stored in an identity information base and the actual identity mark corresponding to the pixel attribute values in the feature frame of each target object; and outputting the identity information and the positioning data of each target object. The target object is positioned quickly and accurately from the stage target's own features or identity mark, improving the recognition stability and positioning accuracy for target objects such as robots. The method positions multiple targets simultaneously with high precision and good stability, and reduces the influence of stage lighting and walking personnel on positioning accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are required for the embodiments will be briefly described, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope of the present invention. Like elements are numbered alike in the various figures.
Fig. 1 shows a flow chart of a positioning identification method provided in an embodiment of the present application;
fig. 2 shows a schematic diagram of an interaction scenario to which the positioning identification method provided in the embodiment of the present application is applied;
fig. 3 shows a schematic diagram of an interaction scenario to which the positioning identification method provided in the embodiment of the present application is applied;
FIG. 4 is a block diagram of a positioning and identifying device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The terms "comprises," "comprising," "including," or any other variation thereof, are intended to cover a specific feature, number, step, operation, element, component, or combination of the foregoing, which may be used in various embodiments of the present invention, and are not intended to first exclude the presence of or increase the likelihood of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the invention belong. Terms such as those defined in commonly used dictionaries will be interpreted as having meanings consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the invention.
Example 1
Referring to fig. 1, a flow chart of a positioning identification method according to an embodiment of the present invention is shown. The provided positioning recognition method is applied to a positioning recognition device, as shown in fig. 2, the positioning recognition device 210 is in communication connection with an infrared camera module 220 arranged above a preset area (Z), a camera of the infrared camera module 220 faces the preset area, at least one target object 230 exists in the preset area, and as shown in fig. 3, an identity mark 240 is arranged on the top surface of each target object 230.
As shown in fig. 2 and 3, the positioning recognition device is the main recognition and control device and may be a central control centre arranged on the ground. The infrared camera module is an infrared device comprising one or more infrared cameras, and may also be a device, such as an infrared laser radar, that can directly position a target object by infrared light. The preset area can be an open or closed stage or performance area, located at a certain distance below the infrared camera module at the top. The target object of positioning identification in this embodiment may be a robot or another mobile device. The top surface of each target object can be provided with an identity mark, such as a two-dimensional code or another specially shaped mark that is easy to identify.
As shown in fig. 1, the method includes:
s101, acquiring an initial image of a preset area acquired by the infrared camera module, and determining a characteristic frame of each target object in the initial image;
When positioning identification is performed, the infrared camera module at the top of the preset area captures an infrared image of the preset area looking downwards; this image is defined as the initial image. The initial image contains the characteristic pixels of each target object to be identified, and during positioning identification the feature frame of each target object in the initial image is determined.
According to one specific embodiment of the disclosure, the identity mark set on the top surface of the target object is a two-dimensional code, and the characteristic frame is a quadrilateral frame corresponding to the size of the two-dimensional code. Of course, the identification mark of the two-dimensional code can be replaced by an obvious, easily-distinguished and easily-identified mark attached to the target object, such as a special pattern or a high-reflectivity reflector.
According to one embodiment of the present disclosure, the step of determining a feature frame of each target object in the initial image includes:
processing the initial image into a binarized image;
searching edge pixel points from the binarized image, and fitting into at least one candidate frame according to all the edge pixel points;
screening out candidate frames meeting preset requirements to serve as characteristic frames of the target object.
The initial image is processed into a binarized image, reducing the subsequent amount of computation as much as possible without affecting positioning and recognition accuracy. The binarization threshold for any pixel (x, y) of the image is the Gaussian-weighted average T(x, y) of its neighborhood, and the binarization formula is:

B(x, y) = 255 if I(x, y) > T(x, y), otherwise B(x, y) = 0,

where I(x, y) is the gray value of pixel (x, y) in the initial image and B(x, y) is its binarized value.
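As a minimal illustration only, this kind of neighborhood-Gaussian binarization corresponds to OpenCV's adaptive Gaussian threshold; the window size and offset constant below are assumed values, not parameters taken from the patent.

```python
import cv2
import numpy as np

# Synthetic stand-in for an infrared frame: dark stage, one bright marker.
img = np.zeros((480, 640), dtype=np.uint8)
img[200:280, 300:380] = 200

# Each pixel is compared against the Gaussian-weighted neighborhood mean
# T(x, y); blockSize=31 and C=5 are illustrative choices.
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, blockSize=31, C=5)
```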
further, the step of searching for edge pixel points from the binarized image and fitting at least one candidate frame according to all the edge pixel points includes:
acquiring pixel types of adjacent pixel points of each pixel point;
finding out pixel points with different pixel types from adjacent pixel points to serve as edge pixel points;
sorting all edge pixel points by their angle of inclination about the centre of gravity of the point set, sequentially selecting points that lie within a preset distance range of a preset centre point, performing straight-line fitting, and calculating the error sum of each straight line;
and selecting the corner indexes corresponding to the four straight lines with the minimum error sum to form a candidate frame.
In a specific implementation, in the binarized image obtained by the above steps, pixels whose adjacent pixels have the opposite value are regarded as edge pixels, and continuous edge pixels form an edge point set. After the binarized image is obtained, the edge point sets in the image are found with a union-find algorithm, and each edge point set is assigned an id. The two-dimensional code is square, i.e. it forms a quadrilateral in the image; a quadrilateral, denoted q_i, is fitted to each of the edge pixel point sets obtained in the previous step, and the pixel coordinates of the four vertices of the quadrilateral are calculated at the same time. The two-dimensional code recognition program outputs the pixel coordinates (u, v) of the four vertices {p1, p2, p3, p4} of each two-dimensional code in the image.
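A simplified sketch of the edge-pixel test and union-find grouping described above, assuming 4-connectivity only; a production detector would also handle diagonal neighbours. All function names here are hypothetical.

```python
import numpy as np

def edge_pixels(binary):
    """A pixel is an edge pixel if its right or bottom neighbour
    has a different binary value (the 'different pixel type' test)."""
    edges = np.zeros(binary.shape, dtype=bool)
    edges[:, :-1] |= binary[:, :-1] != binary[:, 1:]
    edges[:-1, :] |= binary[:-1, :] != binary[1:, :]
    return edges

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def label_edge_sets(binary):
    """Assign an id to each connected set of edge pixels via union-find."""
    edges = edge_pixels(binary)
    h, w = edges.shape
    uf = UnionFind(h * w)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        if x + 1 < w and edges[y, x + 1]:
            uf.union(y * w + x, y * w + x + 1)
        if y + 1 < h and edges[y + 1, x]:
            uf.union(y * w + x, (y + 1) * w + x)
    return {(y, x): uf.find(y * w + x) for y, x in zip(ys, xs)}

demo = np.zeros((8, 8), dtype=np.uint8)
demo[2:6, 2:6] = 1                 # one square blob
ids = label_edge_sets(demo)        # its outline pixels all share one id
```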
Of course, in other embodiments, multi-target feature frames may also be detected with techniques based on deep learning. For the various regular or irregular appearances of the target objects to be detected, a deep-learning multi-target detection method, such as the YOLOv4, Faster R-CNN or Mask R-CNN framework, can quickly and accurately locate the bounding box of each target object in the infrared image. Each bounding box is a rectangle with four vertices {p1, p2, p3, p4}. If multi-target detection finds N objects in the image, the bounding boxes BBox_1, BBox_2, …, BBox_N of the objects are obtained.
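Purely as an illustration of this alternative, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision (assumed version 0.13 or later) on a placeholder frame; the patent does not specify a framework, weights or threshold, so all of those are assumptions, and a real system would fine-tune the detector on infrared images of the actual targets.

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)      # placeholder for a normalised infrared frame
with torch.no_grad():
    out = model([frame])[0]

keep = out["scores"] > 0.5           # keep confident detections only
for box in out["boxes"][keep]:       # each box: (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box.tolist()
    print(f"bounding box: ({x1:.0f}, {y1:.0f}) -> ({x2:.0f}, {y2:.0f})")
```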
S102, determining positioning data of each target object according to pixel point coordinates corresponding to the characteristic frames of each target object and a reference coordinate system corresponding to the preset area;
after the characteristic frames corresponding to the target objects are determined according to the steps, the pixel point coordinates of the vertexes of the characteristic frames can be obtained, and then the positioning data of the target objects are obtained through calculation.
According to one embodiment of the disclosure, the step of determining the positioning data of each target object according to the coordinates of the pixel points corresponding to the feature frame of each target object and the reference coordinate system corresponding to the preset area includes:
acquiring two-dimensional coordinates of four vertexes in the characteristic frame;
converting the two-dimensional coordinates of each vertex into three-dimensional coordinates according to the shooting parameters of the infrared shooting module;
and calculating the positioning data of the target object corresponding to the characteristic frame according to the three-dimensional coordinates of the four vertexes in the characteristic frame.
The two-dimensional coordinate data acquired by the infrared camera module are converted into three-dimensional coordinate data, and then the three-dimensional coordinate data are processed into corresponding positioning data. According to one embodiment of the disclosure, the step of converting the two-dimensional coordinates of each vertex into three-dimensional coordinates according to the shooting parameters of the infrared camera module includes:
converting the two-dimensional abscissa and ordinate of each vertex into three-dimensional x-axis and y-axis pixel coordinates according to the focal length and offset of the infrared camera module, and taking a fixed value for the three-dimensional z-axis pixel coordinate of the vertex;
and multiplying the three-dimensional x-axis, y-axis and z-axis pixel coordinates of the vertex by the actual object distance to obtain the actual three-dimensional x-axis, y-axis and z-axis coordinates of the vertex.
The two-dimensional pixel coordinates (u_ij, v_ij) of each vertex j of the feature frame of target object i are back-projected onto the normalization plane according to formula (1) to obtain the coordinates (x_ij, y_ij, z_ij):

x_ij = (u_ij − c_x) / f_x, y_ij = (v_ij − c_y) / f_y, z_ij = 1 (1)

where f_x and f_y are the focal lengths of the infrared camera module in the x and y directions, and c_x and c_y are the offsets of the infrared camera module in the x and y directions, typically half the size of the module's image frame.
The infrared camera module is mounted at the top of the stage perpendicular to the preset area, and its height above the preset area is fixed and known as z_0. According to formula (2), the actual three-dimensional coordinates of the four vertices of target object i are obtained as:

X_ij = x_ij · z_0, Y_ij = y_ij · z_0, Z_ij = z_0 (2)
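A small sketch combining formulas (1) and (2); the intrinsics and camera height below are assumed example values, not calibration data from the patent.

```python
import numpy as np

def backproject(u, v, fx, fy, cx, cy, z0):
    """Back-project pixel (u, v) to 3D for a camera looking straight down
    from a fixed, known height z0 (formulas (1) and (2))."""
    x = (u - cx) / fx              # normalised image coordinates, z = 1
    y = (v - cy) / fy
    return np.array([x * z0, y * z0, z0])   # scale by the object distance

fx = fy = 800.0                    # assumed focal lengths (pixels)
cx, cy = 320.0, 240.0              # assumed principal point: half the frame
z0 = 6.0                           # assumed camera height above the stage (m)
vertices_2d = [(300, 200), (340, 200), (340, 240), (300, 240)]
vertices_3d = [backproject(u, v, fx, fy, cx, cy, z0) for u, v in vertices_2d]
```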
in addition, according to three-dimensional coordinates of four vertexes in the characteristic frame, positioning data of a target object corresponding to the characteristic frame are calculated;
and calculating the pose data of the transverse axis, the pose data of the longitudinal axis and the deflection angle of the vertex according to the three-dimensional transverse and longitudinal actual coordinates of the vertex, and taking the pose data of the transverse axis, the pose data of the longitudinal axis and the deflection angle of the vertex as the positioning data of the vertex.
After the three-dimensional coordinates of the vertices are obtained, the transverse-axis pose data x_i, the longitudinal-axis pose data y_i and the deflection angle θ_i are calculated according to formula (3), for example as the centroid of the four vertices and the orientation of one frame edge:

x_i = (1/4) Σ_j X_ij, y_i = (1/4) Σ_j Y_ij, θ_i = atan2(Y_i2 − Y_i1, X_i2 − X_i1) (3)

The corresponding positioning data are thereby obtained as pose data distinguishing each target object.
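Continuing the sketch above, a hedged reading of formula (3): position from the centroid of the four vertices, deflection angle from the direction of one frame edge. Which edge the patent actually uses is not recoverable from the text, so the p1 -> p2 choice here is an assumption.

```python
import numpy as np

def pose_from_vertices(v3d):
    """Pose (x_i, y_i, theta_i) from the four 3D vertices: centroid for
    position, yaw from the p1 -> p2 edge (an assumed convention)."""
    pts = np.asarray(v3d)
    x_i, y_i = pts[:, 0].mean(), pts[:, 1].mean()
    theta_i = np.arctan2(pts[1, 1] - pts[0, 1], pts[1, 0] - pts[0, 0])
    return x_i, y_i, theta_i

demo = [np.array([-0.1, 0.1, 6.0]), np.array([0.1, 0.1, 6.0]),
        np.array([0.1, -0.1, 6.0]), np.array([-0.1, -0.1, 6.0])]
print(pose_from_vertices(demo))    # -> (0.0, 0.0, 0.0)
```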
S103, determining the identity information of each target object according to a reference identity stored in the identity information library in advance and an actual identity corresponding to each pixel point attribute value in the characteristic frame of each target object;
and finally, after identifying the positioning data corresponding to the characteristic frames of each target object, comparing and identifying according to the pre-stored reference identity.
According to one specific embodiment of the disclosure, the step of determining the identity information of each target object according to the reference identity stored in the identity information library in advance and the actual identity corresponding to the attribute value of each pixel point in the feature frame of each target object includes:
obtaining corresponding actual identity marks according to the attribute values of each pixel point in the characteristic frame of each target object;
transforming the actual identity mark of each target object to obtain a group of associated identity marks;
performing an exclusive-or between the group of associated identity marks of each target object and all the reference identity marks, and taking the identity information corresponding to the best-matching reference identity mark as the identity information of the target object.
Transforming the actual identity mark into a group of associated identity marks and comparing this group with the reference identity marks covers, as far as possible, the cases where movement and rotation of the target object during the performance deflect the two-dimensional code or other identity mark and would otherwise affect recognition, further improving positioning and recognition accuracy.
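An illustrative sketch of the exclusive-or matching, assuming the identity marks decode to square 0/1 bit matrices and that the "associated identity marks" are the four 90° rotations; the patent does not spell out the transformation, so the rotation choice is an assumption.

```python
import numpy as np

def rotations(code):
    """The associated group: the bit matrix read at 0/90/180/270 degrees,
    covering rotation of the target object on stage."""
    return [np.rot90(code, k) for k in range(4)]

def match_identity(actual, references):
    """XOR every rotation against every reference code; the reference with
    the minimum Hamming distance is the best match."""
    best_id, best_dist = None, np.inf
    for ref_id, ref in references.items():
        for rot in rotations(actual):
            dist = int(np.count_nonzero(rot ^ ref))
            if dist < best_dist:
                best_id, best_dist = ref_id, dist
    return best_id, best_dist

rng = np.random.default_rng(0)
refs = {7: rng.integers(0, 2, (6, 6)), 9: rng.integers(0, 2, (6, 6))}
observed = np.rot90(refs[9], 3)        # tag 9 seen rotated by 270 degrees
print(match_identity(observed, refs))  # -> (9, 0)
```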
S104, outputting the identity information and the positioning data of each target object.
After the identity information and positioning of each target object are finally obtained through the above steps, the identity information and positioning data of each target object can be output. The output mode is not limited; for example, the data can be transmitted to a corresponding memory for storage, or transmitted to a corresponding display terminal for real-time dynamic display.
Of course, for the positioning and recognition process described above, if the vertex spacing of the two-dimensional code, reflector or similar marker is known in advance, the recognition process can be further simplified. Assuming the distance between adjacent vertices is known to be d, a marker coordinate system can be constructed with the centre of the marker in three-dimensional space as the origin. The 3D coordinates of the four vertices, clockwise from the upper-left corner, are (-d/2, d/2, 0), (d/2, d/2, 0), (d/2, -d/2, 0) and (-d/2, -d/2, 0), and the two-dimensional code recognition program supplies the image coordinates of the corresponding four vertices as (u1, v1), (u2, v2), (u3, v3), (u4, v4), so the association between the 3D and 2D coordinates of the vertices can be established. Once the 3D-2D association is known, the actual three-dimensional pose can be calculated with a PnP algorithm: coarse positioning is performed with the EPnP algorithm, and fine positioning is then performed with nonlinear optimization that minimizes the reprojection errors of the four vertices of the two-dimensional code.
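A sketch of this simplified PnP path using OpenCV, with coarse EPnP followed by Levenberg-Marquardt refinement of the reprojection error; the marker size, intrinsics and pixel coordinates are all assumed example values.

```python
import cv2
import numpy as np

d = 0.20   # assumed marker side length (metres)
# Marker-frame 3D corners, clockwise from the upper left, centre as origin.
obj_pts = np.array([[-d/2,  d/2, 0], [ d/2,  d/2, 0],
                    [ d/2, -d/2, 0], [-d/2, -d/2, 0]], dtype=np.float64)
# Corresponding image corners from the code-recognition step (example values).
img_pts = np.array([[300, 200], [340, 200], [340, 240], [300, 240]],
                   dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)

# Coarse pose with EPnP, then nonlinear (LM) refinement that minimises the
# reprojection error of the four corners.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_EPNP)
rvec, tvec = cv2.solvePnPRefineLM(obj_pts, img_pts, K, dist, rvec, tvec)
```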
In summary, the positioning and identifying method provided by the embodiment of the application positions the target object quickly and accurately from the stage target's own features or identity mark, improving the recognition stability and positioning accuracy for target objects such as robots. The method positions multiple targets simultaneously with high precision and good stability, and reduces the influence of stage lighting and walking personnel on positioning accuracy.
Example 2
Referring to fig. 4, a block diagram of a positioning and identifying device according to an embodiment of the present invention is provided. As shown in fig. 2, the positioning recognition device is in communication connection with an infrared camera module arranged above a preset area, a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object. As shown in fig. 4, the positioning recognition device 400 includes:
the acquiring module 401 is configured to acquire an initial image of a preset area acquired by the infrared camera module, and determine a feature frame of each target object in the initial image;
the first determining module 402 is configured to determine positioning data of each target object according to coordinates of pixel points corresponding to a feature frame of each target object and a reference coordinate system corresponding to the preset area;
a second determining module 403, configured to determine identity information of each target object according to a reference identity stored in advance in the identity information base and an actual identity corresponding to each pixel attribute value in the feature frame of each target object;
and the output module 404 is used for outputting the identity information and the positioning data of each target object.
The positioning and identifying device is in communication connection with the infrared camera module arranged above the preset area, the camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object. The device acquires an initial image of the preset area captured by the infrared camera module and determines the feature frame of each target object in the initial image; determines the positioning data of each target object according to the pixel coordinates corresponding to each feature frame and the reference coordinate system corresponding to the preset area; determines the identity information of each target object according to the reference identity marks pre-stored in the identity information base and the actual identity mark corresponding to the pixel attribute values in each feature frame; and outputs the identity information and positioning data of each target object. The target object is positioned quickly and accurately from the stage target's own features or identity mark, improving the recognition stability and positioning accuracy for target objects such as robots. The method positions multiple targets simultaneously with high precision and good stability, and reduces the influence of stage lighting and walking personnel on positioning accuracy.
Furthermore, the disclosed embodiments provide a computer device comprising a memory and a processor, the memory storing a computer program which, when run on the processor, performs the positioning identification method provided by the above-described method embodiments.
In particular, as shown in FIG. 5, the computer device 500 for implementing various embodiments of the present invention includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power source 511. Those skilled in the art will appreciate that the structure shown in fig. 5 does not limit the computer device; a computer device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiment of the present invention, the computer device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 510; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 may also communicate with networks and other devices through a wireless communication system.
The computer device provides wireless broadband internet access to the user through the network module 502, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output (e.g., call signal reception sound, message reception sound, etc.) related to a specific function performed by the computer apparatus 500. The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used for receiving audio or video signals. The input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 for output.
The computer device 500 further comprises at least one sensor 505, at least comprising a barometer as mentioned in the above embodiments. In addition, the sensor 505 may be other sensors such as a light sensor, a motion sensor, and others. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or backlight when the computer device 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the computer equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 506 is used for displaying information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 507 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computer device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations on or near it by a user (e.g., operations by the user on or near the touch panel 5071 using a finger, stylus or any other suitable object or accessory). The touch panel 5071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. The touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 510 to determine a type of touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components for implementing the input and output functions of the computer device, in some embodiments, the touch panel 5071 may be integrated with the display panel 5061 to implement the input and output functions of the computer device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the computer device 500. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the computer device 500, or may be used to transmit data between the computer device 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the computer device and connects the various parts of the entire computer device using various interfaces and lines to perform various functions of the computer device and process data by running or executing software programs and/or modules stored in the memory 509 and invoking data stored in the memory 509, thereby performing overall monitoring of the computer device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The computer device 500 may also include a power supply 511 (e.g., a battery) for powering the various components, and preferably the power supply 511 may be logically connected to the processor 510 via a power management system that performs functions such as managing charge, discharge, and power consumption.
In addition, the computer device 500 includes some functional modules, which are not shown, and are not described herein.
The memory is used for storing a computer program which, when run by the processor, performs the positioning identification method described above.
In addition, an embodiment of the present invention provides a computer readable storage medium storing a computer program that runs the positioning identification method described above on a processor.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems which perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the invention may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.
Claims (8)
1. The positioning identification method is characterized by being applied to a positioning identification device, wherein the positioning identification device is in communication connection with an infrared camera module arranged above a preset area, a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object; the identity mark arranged on the top surface of each target object is a two-dimensional code, and the method comprises the following steps:
acquiring an initial image of a preset area acquired by the infrared camera module, and determining a characteristic frame of each target object in the initial image, wherein the characteristic frame is a quadrilateral frame corresponding to the size of the two-dimensional code;
determining positioning data of each target object according to pixel point coordinates corresponding to the characteristic frames of each target object and a reference coordinate system corresponding to the preset area;
obtaining corresponding actual identity marks according to the attribute values of each pixel point in the characteristic frame of each target object;
transforming the actual identity mark of each target object to obtain a group of associated identity marks;
performing an exclusive-or between the group of associated identity marks of each target object and all the reference identity marks, and taking the identity information corresponding to the best-matching reference identity mark as the identity information of the target object;
and outputting the identity information and the positioning data of each target object.
2. The method of claim 1, wherein the step of determining a feature frame for each target object in the initial image comprises:
processing the initial image into a binarized image;
searching edge pixel points from the binarized image, and fitting into at least one candidate frame according to all the edge pixel points;
screening out candidate frames meeting preset requirements to serve as characteristic frames of the target object.
3. The method of claim 2, wherein the step of finding edge pixels from the binarized image and fitting to at least one candidate frame based on all edge pixels comprises:
acquiring pixel types of adjacent pixel points of each pixel point;
finding out pixel points with different pixel types from adjacent pixel points to serve as edge pixel points;
sorting all edge pixel points by their angle of inclination about the centre of gravity of the point set, sequentially selecting points that lie within a preset distance range of a preset centre point, performing straight-line fitting, and calculating the error sum of each straight line;
and selecting the corner indexes corresponding to the four straight lines with the minimum error sum to form a candidate frame.
4. The method according to claim 3, wherein the step of determining the positioning data of each target object according to the coordinates of the pixel points corresponding to the feature frames of each target object and the reference coordinate system corresponding to the preset area includes:
acquiring two-dimensional coordinates of four vertexes in the characteristic frame;
converting the two-dimensional coordinates of each vertex into three-dimensional coordinates according to the shooting parameters of the infrared shooting module;
and calculating the positioning data of the target object corresponding to the characteristic frame according to the three-dimensional coordinates of the four vertexes in the characteristic frame.
5. The method of claim 4, wherein the step of converting the two-dimensional coordinates of each vertex to three-dimensional coordinates according to the photographing parameters of the infrared photographing module comprises:
converting the two-dimensional abscissa and ordinate of each vertex into three-dimensional x-axis and y-axis pixel coordinates according to the focal length and offset of the infrared camera module, and taking a fixed value for the three-dimensional z-axis pixel coordinate of the vertex;
multiplying the three-dimensional x-axis, y-axis and z-axis pixel coordinates of the vertex by the actual object distance to obtain the actual three-dimensional x-axis, y-axis and z-axis coordinates of the vertex;
and/or,
the step of calculating the positioning data of the target object corresponding to the feature frame according to the three-dimensional coordinates of the four vertices in the feature frame includes:
calculating the transverse-axis pose data, the longitudinal-axis pose data and the deflection angle from the actual three-dimensional x-axis and y-axis coordinates of the vertices, and taking the transverse-axis pose data, the longitudinal-axis pose data and the deflection angle as the positioning data of the target object.
6. A positioning identification device, characterized in that the positioning identification device is in communication connection with an infrared camera module arranged above a preset area, wherein a camera of the infrared camera module faces the preset area, at least one target object exists in the preset area, and an identity mark is arranged on the top surface of each target object; the identity mark arranged on the top surface of each target object is a two-dimensional code, and the positioning identification device comprises:
the acquisition module is used for acquiring an initial image of a preset area acquired by the infrared camera module, and determining a characteristic frame of each target object in the initial image, wherein the characteristic frame is a quadrilateral frame corresponding to the size of the two-dimensional code;
the first determining module is used for determining positioning data of each target object according to pixel point coordinates corresponding to the characteristic frames of each target object and a reference coordinate system corresponding to the preset area;
the second determining module is used for obtaining corresponding actual identity marks according to the attribute values of the pixel points in the characteristic frame of each target object;
transforming the actual identity mark of each target object to obtain a group of associated identity marks;
performing an exclusive-or between the group of associated identity marks of each target object and all the reference identity marks, and taking the identity information corresponding to the best-matching reference identity mark as the identity information of the target object;
and the output module is used for outputting the identity information and the positioning data of each target object.
7. A computer device comprising a memory and a processor, the memory for storing a computer program which, when run by the processor, performs the location identification method of any of claims 1 to 5.
8. A computer-readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the location identification method of any of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111117093.2A CN113808209B (en) | 2021-09-23 | 2021-09-23 | Positioning identification method, positioning identification device, computer equipment and readable storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111117093.2A CN113808209B (en) | 2021-09-23 | 2021-09-23 | Positioning identification method, positioning identification device, computer equipment and readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113808209A CN113808209A (en) | 2021-12-17 |
| CN113808209B true CN113808209B (en) | 2024-01-19 |
Family ID=78896363
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111117093.2A Active CN113808209B (en) | 2021-09-23 | 2021-09-23 | Positioning identification method, positioning identification device, computer equipment and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113808209B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114677443B (en) * | 2022-05-27 | 2022-08-19 | Shenzhen Zhihua Technology Development Co., Ltd. | Optical positioning method, device, equipment and storage medium |
| CN115996342A (en) * | 2022-12-02 | 2023-04-21 | Guangzhou Gaodashang Electronic Technology Co., Ltd. | Audio control method, electronic device, and computer-readable storage medium |
| CN116506731A (en) * | 2023-01-10 | 2023-07-28 | Tencent Technology (Shenzhen) Co., Ltd. | Focus shooting method, device, storage medium and electronic equipment |
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108875451A (en) * | 2017-05-10 | 2018-11-23 | Tencent Technology (Shenzhen) Co., Ltd. | Method, apparatus, storage medium and program product for positioning an image |
| CN109063620A (en) * | 2018-07-25 | 2018-12-21 | Vivo Mobile Communication Co., Ltd. | Identification method and terminal device |
| CN109961040A (en) * | 2019-03-20 | 2019-07-02 | Shenzhen Huafu Information Technology Co., Ltd. | Identity card area positioning method, device, computer equipment and storage medium |
| CN110991297A (en) * | 2019-11-26 | 2020-04-10 | Academy of Opto-Electronics, Chinese Academy of Sciences | Target positioning method and system based on scene monitoring |
| CN111400426A (en) * | 2020-03-20 | 2020-07-10 | Suzhou Bozhon Robot Co., Ltd. | Robot position deployment method, device, equipment and medium |
| CN111580659A (en) * | 2020-05-09 | 2020-08-25 | Vivo Mobile Communication Co., Ltd. | File processing method and device and electronic equipment |
| CN111860152A (en) * | 2020-06-12 | 2020-10-30 | Zhejiang Dahua Technology Co., Ltd. | Personnel state detection method, system and device and computer device |
| CN112763975A (en) * | 2020-12-30 | 2021-05-07 | China Railway Design Corporation | Inter-block splicing method for a railway frame reference network considering the strip-like characteristics of railways |
| CN112785625A (en) * | 2021-01-20 | 2021-05-11 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Target tracking method and device, electronic equipment and storage medium |
| CN113160075A (en) * | 2021-03-30 | 2021-07-23 | Wuhan Digital Design and Manufacturing Innovation Center Co., Ltd. | Processing method and system for AprilTag visual positioning, wall-climbing robot and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113808209A (en) | 2021-12-17 |
Similar Documents
| Publication | Title |
|---|---|
| US12106491B2 (en) | Target tracking method and apparatus, medium, and device |
| CN113808209B (en) | Positioning identification method, positioning identification device, computer equipment and readable storage medium |
| CN111476780B (en) | Image detection method and device, electronic equipment and storage medium |
| CN108648235B (en) | Relocation method, device and storage medium for camera attitude tracking process |
| CN109947886B (en) | Image processing method, image processing device, electronic equipment and storage medium |
| CN113280752B (en) | Groove depth measurement method, device and system and laser measurement equipment |
| CN107818282B (en) | Two-dimensional code identification method, terminal and computer-readable storage medium |
| US20160112701A1 (en) | Video processing method, device and system |
| CN107846583B (en) | Image shadow compensation method and mobile terminal |
| CN112150560B (en) | Method, device and computer storage medium for determining vanishing point |
| CN110717964B (en) | Scene modeling method, terminal and readable storage medium |
| CN108564613A (en) | Depth data acquisition method and mobile terminal |
| CN108347558A (en) | Image optimization method, apparatus and mobile terminal |
| CN117654030A (en) | Virtual object rendering method and device, electronic equipment and storage medium |
| CN115081643B (en) | Adversarial sample generation method, related device and storage medium |
| CN108322639A (en) | Image processing method, apparatus and mobile terminal |
| CN107516099A (en) | Method, apparatus and computer-readable storage medium for marked-picture detection |
| CN112200130B (en) | Three-dimensional target detection method and device and terminal equipment |
| CN115713616B (en) | House source space model generation method and device, terminal equipment and storage medium |
| CN113780291B (en) | Image processing method and device, electronic equipment and storage medium |
| CN115797657A (en) | Moving object fusion processing method and device and computer-readable storage medium |
| CN115311359B (en) | Camera pose correction method and device, electronic equipment and storage medium |
| CN115329420B (en) | Marking generation method and device, terminal equipment and storage medium |
| US20250321091A1 (en) | Groove depth measurement method, apparatus and system, and laser measurement device |
| CN111107271A (en) | Shooting method and electronic equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |