
CN112710308B - Positioning method, device and system of robot - Google Patents

Positioning method, device and system of robot

Info

Publication number
CN112710308B
CN112710308B
Authority
CN
China
Prior art keywords
robot
steering
positioning
parameter
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911025754.1A
Other languages
Chinese (zh)
Other versions
CN112710308A (en)
Inventor
张明明
左星星
陈一鸣
李名杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201911025754.1A
Publication of CN112710308A
Application granted
Publication of CN112710308B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04 - Interpretation of pictures
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a positioning method, device and system for a robot. The method comprises the following steps: obtaining a constraint function corresponding to the robot, where the constraint function includes at least a visual re-projection error, the visual re-projection error being determined according to image information acquired by the robot; obtaining the steering parameters of the robot through the constraint function while the robot is steering; and determining positioning parameters based on the steering parameters and a pre-acquired steering model, and obtaining positioning information of the robot according to the positioning parameters, where the steering model is used to represent the relationship between the positioning parameters and the steering parameters. The application solves the technical problem of inaccurate positioning during robot steering in the prior art.

Description

Positioning method, device and system of robot
Technical Field
The application relates to the field of robot control, in particular to a positioning method, device and system of a robot.
Background
A skid-steer robot is a robot that steers by changing the speeds of its left and right wheels or caterpillar tracks. Because it has no dedicated steering mechanism, it is simple in structure and flexible in movement, and it is widely applied to outdoor work and scientific exploration.
In the prior art, a skid-steer robot is generally positioned using data acquired by a GPS (Global Positioning System). However, in areas without GPS data (for example, indoors) or areas with weak GPS signals (for example, outdoor areas blocked by trees or buildings), the skid-steer robot cannot be positioned accurately because the GPS data cannot be acquired, or can only be partially acquired.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a positioning method, a device and a system for a robot, which at least solve the technical problem of inaccurate positioning when the robot turns in the prior art.
According to an aspect of an embodiment of the present application, there is provided a positioning method of a robot, including: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
According to another aspect of the embodiment of the present application, there is also provided a positioning method of a robot, including: collecting image information in the steering process of the robot, and determining a visual re-projection error according to the image information; determining a minimum objective function corresponding to the robot according to the vision re-projection error; estimating the steering parameters by solving a minimum objective function to obtain the steering parameters of the robot; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and positioning the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
According to another aspect of the embodiment of the present application, there is also provided a positioning device for a robot, including: the acquisition module is used for acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; the processing module is used for obtaining the steering parameters of the robot through the constraint function in the steering process of the robot; the determining module is used for determining positioning parameters based on the steering parameters and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameters, wherein the steering model is used for representing the relation between the positioning parameters and the steering parameters.
According to another aspect of the embodiment of the present application, there is also provided a storage medium, where the storage medium includes a stored program, and when the program runs, the device where the storage medium is controlled to execute the above-mentioned positioning method of the robot.
According to another aspect of the embodiment of the present application, there is also provided a processor for running a program, where the program runs to execute the above-mentioned positioning method of the robot.
According to another aspect of the embodiment of the present application, there is also provided a positioning system of a robot, including: a processor; and a memory, coupled to the processor, for providing instructions to the processor for processing the steps of: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
In the embodiment of the application, a visual positioning mode is adopted, firstly, a visual re-projection error is determined according to image information acquired by a robot, a constraint function corresponding to the robot is determined according to the visual re-projection error, then, in the steering process of the robot, the steering parameter of the robot is obtained through the constraint function, finally, a positioning parameter is determined based on the steering parameter and a pre-acquired steering model, and positioning information of the robot is acquired according to the positioning parameter.
From the above, the application obtains the positioning information of the robot by processing the image information collected by the robot, and the process combines the robot with the vision positioning, thereby accurately determining the positioning information of the robot. In addition, because the visual positioning is not influenced by the strength of the GPS signal, the scheme provided by the application can realize the accurate positioning of the robot in the scene of weak GPS signal or no GPS signal, and the application range of the robot is enlarged.
Therefore, the scheme provided by the application achieves the purpose of positioning the robot, thereby realizing the technical effect of improving the positioning precision of the robot and further solving the technical problem of inaccurate positioning when the robot turns in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of an alternative computing device according to an embodiment of the application;
FIG. 2 is a flow chart of a method of positioning a robot according to an embodiment of the application;
FIG. 3 is a schematic view of an alternative robotic chassis according to an embodiment of the present application;
FIG. 4 is a schematic illustration of an alternative determination of visual re-projection errors in accordance with an embodiment of the present application;
FIG. 5 is a schematic illustration of an alternative determination of visual re-projection errors in accordance with an embodiment of the present application;
FIG. 6 is a flow chart of a method of positioning a robot according to an embodiment of the application;
FIG. 7 is a schematic view of a positioning device of a robot according to an embodiment of the present application;
FIG. 8 is a block diagram of an alternative computing device in accordance with an embodiment of the present application;
FIG. 9 is a flow diagram of an alternative robot-based positioning method according to an embodiment of the application;
FIG. 10 is a flow diagram of an alternative robot-based positioning method according to an embodiment of the application; and
Fig. 11 is a schematic view of a scenario of an alternative robot-based positioning method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without making any inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
There is also provided, in accordance with an embodiment of the present application, an embodiment of a method of positioning a robot. It should be noted that the steps shown in the flowchart of the figures may be performed in a computer system (for example, as a set of computer-executable instructions), and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one shown or described herein.
The method embodiment provided by the first embodiment of the application can be executed in a mobile terminal, a computing device or similar computing equipment. Fig. 1 shows a block diagram of a hardware architecture of a computing device (or mobile device) for implementing a positioning method of a robot. As shown in fig. 1, the computing device 10 (or mobile device 10) may include one or more processors 102 (shown in the figure as 102a, 102b, ..., 102n; the processor 102 may include, but is not limited to, a processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission means 106 for communication functions. In addition, the computing device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, computing device 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". The data processing circuit may be embodied, in whole or in part, in software, hardware, firmware, or any other combination. Further, the data processing circuitry may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in computing device 10 (or mobile device). As referred to in embodiments of the application, the data processing circuit acts as a kind of processor control (e.g., selection of the path of the variable resistor termination connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the positioning method of the robot in the embodiment of the present application, and the processor 102 executes the software programs and modules stored in the memory 104, thereby executing various functional applications and data processing, that is, implementing the positioning method of the robot. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 104 may further include memory located remotely from processor 102, which may be connected to computing device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the networks described above may include wireless networks provided by communication providers of computing device 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computing device 10 (or mobile device).
It should be noted here that, in some alternative embodiments, the computer device (or mobile device) shown in fig. 1 described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 1 is only one example of a specific example, and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
In the above-described operating environment, the present application provides a method for positioning a robot as shown in fig. 2. Fig. 2 is a flowchart of a positioning method of a robot according to a first embodiment of the present application, as shown in fig. 2, the method includes:
Step S202, a constraint function corresponding to the robot is obtained, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot.
In step S202, the robot may be, but is not limited to, a skid-steer robot, where the chassis of the skid-steer robot may be a crawler chassis or a wheel chassis; optionally, in the present application, the skid-steer robot has a four-wheel chassis.
The constraint function is an objective function related to a decision variable; in the above embodiment of the present application, the decision variable is the steering parameter of the robot, so the constraint function is an objective function related to the steering parameter. The steering parameters of the robot are obtained by solving the constraint function. In an alternative embodiment, the constraint function is a minimum objective function, i.e. the steering parameter obtained is the optimal solution when the constraint function takes its minimum value.
In an alternative embodiment, the robot has an image acquisition device, which may be, but is not limited to, a camera, and a processor that may acquire image information acquired by the image acquisition device and process the image information to obtain a visual re-projection error, and then determine a constraint function corresponding to the robot based on the visual re-projection error.
Optionally, fig. 11 shows a schematic view of a scenario of a positioning method based on a robot, in fig. 11, the robot is only used for collecting image information, after the robot collects the image information, the robot may also send the image information to a computing device, and the computing device processes the image information, so that a visual re-projection error can be obtained. In addition, the computing device may also obtain a detection value detected by the inertial measurement unit, and determine an inertial measurement constraint according to the detection value; the computing equipment can obtain the odometer data by reading the detection value of the odometer; the computing device obtains historical image information from the database and obtains prior information from the historical image information. Finally, the computing device can obtain the constraint function through the visual re-projection error, the inertial measurement constraint, the odometer data and the priori information.
As can be seen from fig. 11, the constraint functions include visual re-projection errors, inertial measurement constraints, odometry data, and a priori information. In the application, the four parameters are mainly optimized to generate an estimation result of the robot chassis model, and then the positioning information of the robot is accurately determined according to the estimation result.
In computer vision, for example, when calculating a planar homography matrix and a projection matrix, a cost function is constructed by using a visual reprojection error, and the cost function is subjected to a minimization process to optimize the homography matrix or the projection matrix. In the process, in the construction of the cost function by using the vision re-projection error, not only the calculation error of the homography matrix but also the measurement error of the image point are considered, so that the measurement accuracy can be improved by using the vision re-projection error in computer vision.
Step S204, obtaining the steering parameters of the robot through the constraint function in the steering process of the robot.
Alternatively, as shown in fig. 11, during the steering operation of the robot, the computing device may obtain the steering parameters by solving the constraint function.
In step S204, the robot comprises left tires and right tires; as shown in the schematic view of the robot chassis in fig. 3, the left side of the robot comprises two tires and the right side also comprises two tires. The steering parameters of the robot comprise: a first coordinate parameter of the instantaneous center of rotation of the left tire at the time of steering and a second coordinate parameter of the instantaneous center of rotation of the right tire at the time of steering; and a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used to represent the coefficient superimposed on the left tire and the second scale parameter is used to represent the coefficient superimposed on the right tire.
It should be noted that the left tire and the right tire have different instantaneous rotation centers: as shown in fig. 3, the instantaneous rotation center of the left tire at the time of steering is ICR_l, and the instantaneous rotation center of the right tire at the time of steering is ICR_r. In addition, ICR_v is the instantaneous center of rotation corresponding to the entire robot when the robot turns; in fig. 3, ICR_v is the instantaneous center of rotation of the entire robot when the robot turns left at angular velocity ω.
Alternatively, the first coordinate parameter is the coordinate value of the instantaneous rotation center of the left tire when steering, that is, ICR_l may be represented as (X_l, Y_l), where X_l represents the abscissa and Y_l represents the ordinate. The second coordinate parameter is the coordinate value of the instantaneous rotation center of the right tire when steering, that is, ICR_r may be represented as (X_r, Y_r), where X_r represents the abscissa and Y_r represents the ordinate.
Since the X-axis coordinates of the instantaneous rotation centers of the left tire and the right tire at the time of steering are the same, in the present application the X-axis coordinate of both instantaneous rotation centers is represented by X_v; since the Y-axis coordinates of the instantaneous centers of rotation of the left and right tires at the time of steering are different, Y_l and Y_r are used to represent the Y-axis coordinates of the instantaneous centers of rotation of the left and right tires, respectively. Thus, the steering parameter can be expressed as the vector
ζ = (X_v, Y_l, Y_r, α_l, α_r)
In the above expression, ζ is the steering parameter, and α_l and α_r represent the first scale parameter of the left tire and the second scale parameter of the right tire, respectively. α_l and α_r are unitless coefficients whose values are related to the materials and inflation states of the left and right wheels, respectively; their specific values are obtained by solving the constraint function. In addition, the first coordinate parameter of the instantaneous center of rotation of the left tire at the time of steering and the second coordinate parameter of the instantaneous center of rotation of the right tire at the time of steering are related to the type of ground contacted by the tires of the robot (including the left and right tires) and to the size of the robot; different ground types and robot sizes correspond to different coordinate parameters of the instantaneous centers of rotation.
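As an illustrative sketch (Python, with assumed field names rather than the patent's symbols), the steering parameter ζ groups exactly these five quantities:

```python
from dataclasses import dataclass

@dataclass
class SteeringParams:
    """Steering parameter vector (zeta) estimated by solving the constraint function."""
    x_v: float      # common X coordinate of the left/right instantaneous centers of rotation
    y_l: float      # Y coordinate of the left tire's instantaneous center of rotation (ICR_l)
    y_r: float      # Y coordinate of the right tire's instantaneous center of rotation (ICR_r)
    alpha_l: float  # unitless scale coefficient superimposed on the left tire
    alpha_r: float  # unitless scale coefficient superimposed on the right tire
```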
Step S206, determining positioning parameters based on the steering parameters and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameters, wherein the steering model is used for representing the relation between the positioning parameters and the steering parameters.
Optionally, as shown in fig. 11, after obtaining the steering parameter, the computing device determines the positioning parameter through a steering model that characterizes a relationship between the positioning parameter and the steering parameter, and then outputs positioning information of the robot to the display device according to the positioning parameter, where the display device may be a device in a management platform used by a robot manager, the robot manager may determine whether the robot reaches the job site through the positioning information displayed by the display device, or the robot manager may send a control instruction to the robot according to the positioning information displayed by the display device. In addition, the management platform can also send a control instruction to the robot according to the positioning information so as to realize the purpose of remotely controlling the robot.
In step S206, the positioning parameters include the angular velocity of the robot during steering and the linear velocity of the robot during steering. In fig. 3, ω is the angular velocity of the robot as a whole and v_o is the linear velocity of the robot as a whole during steering; for ease of calculation, v_o can be decomposed into its components along the X-axis and Y-axis directions, i.e., v_ox and v_oy. Alternatively, the positioning parameters may be represented together in matrix (vector) form, for example as [v_ox, v_oy, ω].
In fig. 3, o_l and o_r denote the linear velocities of the left and right tires, respectively, and the length of the line segment corresponding to each arrow indicates the magnitude of that tire's linear velocity; in fig. 3 the linear velocity of the left tire is smaller than that of the right tire.
Alternatively, the relationship between the positioning parameters and the steering parameters can be written as a formula in which the positioning parameters are given by a function g(·) of the steering parameters, where the function g(·) represents the steering model.
In addition, in step S206, the positioning information of the robot includes the position and the posture of the robot. The processor of the robot determines the position and the posture of the robot at the current moment according to the angular velocity of the robot when steering and the linear velocity of the robot when steering.
Optionally, the processor may obtain the position and the posture of the robot at the current moment by applying the transfer function G to the positioning parameters obtained through the steering model. Specifically, the processor of the robot estimates the position and the posture at the current moment from the position and the posture of the robot at the previous moment: the previous state is propagated, according to the positioning parameters, over the time difference Δt between the previous moment and the current moment, and for discrete estimation this propagation is written as a set of discrete update equations with a corresponding transfer matrix. The covariance matrix of the estimate is propagated in the same way through its own transfer function, where Q_d ∈ R^(9×9) is the corresponding process-noise information and G is the transfer function.
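As an illustrative sketch of this kind of discrete propagation (the state layout and function name are assumptions, not the patent's notation), a planar dead-reckoning update of the pose from v_ox, v_oy and ω over a time step Δt can be written as:

```python
import numpy as np

def propagate_pose(x, y, theta, v_ox, v_oy, omega, dt):
    """Propagate the planar position (x, y) and heading theta over one time step dt.

    The body-frame linear velocity (v_ox, v_oy) is rotated into the world frame
    and integrated; the heading is integrated from the angular velocity omega.
    """
    c, s = np.cos(theta), np.sin(theta)
    x_new = x + (c * v_ox - s * v_oy) * dt
    y_new = y + (s * v_ox + c * v_oy) * dt
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new
```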
Based on the above-mentioned schemes defined in step S202 to step S206, it can be known that, in the embodiment of the present application, a visual positioning manner is adopted, firstly, a visual re-projection error is determined according to image information acquired by a robot, and a constraint function corresponding to the robot is determined according to the visual re-projection error, then, in the steering process of the robot, a steering parameter of the robot is obtained through the constraint function, finally, a positioning parameter is determined based on the steering parameter and a pre-acquired steering model, and positioning information of the robot is acquired according to the positioning parameter.
It is easy to notice that the application obtains the positioning information of the robot by processing the image information collected by the robot, and the process combines the robot with vision positioning, thereby accurately determining the positioning information of the robot. In addition, because the visual positioning is not influenced by the strength of the GPS signal, the scheme provided by the application can realize the accurate positioning of the robot in the scene of weak GPS signal or no GPS signal, and the application range of the robot is enlarged.
Therefore, the scheme provided by the application achieves the purpose of positioning the robot, thereby realizing the technical effect of improving the positioning precision of the robot and further solving the technical problem of inaccurate positioning when the robot turns in the prior art.
In an alternative embodiment, the visual re-projection error is expressed in terms of the steering parameter. After the constraint function corresponding to the robot is obtained, the processor of the robot, during the steering of the robot, treats the constraint function as a minimum objective function and solves this minimum objective function to obtain the steering parameter; that is, the steering parameter is the value of the parameter at which the constraint function attains its minimum.
Wherein the constraint function may be represented by the following formula:
C = C_proj + C_IMU + C_odom + C_prior
In the above formula, C is the constraint function, C_proj is the visual re-projection error, C_IMU is the inertial measurement constraint, C_odom is the odometer data, and C_prior is the priori information. From the above equation, the constraint function includes the inertial measurement constraint, the odometer data and the priori information in addition to the visual re-projection error.
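As a schematic sketch of how the four terms combine (the function and the single summation are illustrative assumptions, not the patent's implementation), the constraint function can be assembled as a sum of per-term costs:

```python
def constraint_cost(state, terms):
    """Total cost C = C_proj + C_IMU + C_odom + C_prior.

    `terms` is an iterable of callables, one per residual term (visual
    re-projection, inertial measurement, odometry, prior); each maps the
    current state estimate to a scalar cost, and the constraint function
    is their sum.
    """
    return sum(term(state) for term in terms)


# Illustrative usage with toy quadratic terms standing in for the real residuals.
toy_terms = [lambda s: s ** 2, lambda s: (s - 1.0) ** 2]
print(constraint_cost(0.5, toy_terms))  # prints 0.5
```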
Optionally, for the visual re-projection error, the robot first acquires image information of consecutive multiple frames acquired by the robot, and then determines the visual re-projection error of the robot based on the image information of consecutive multiple frames.
Specifically, after the image information of the continuous multiframe is obtained, the robot extracts the characteristic points in the image information, tracks the characteristic points in the image information of the continuous multiframe to obtain track information corresponding to the characteristic points, and then determines the position information of the characteristic points in the three-dimensional space according to the track information. When the current collected image information of the robot comprises the feature points, the feature points are projected onto a two-dimensional plane corresponding to the current collected image information of the robot according to the position information of the feature points in the three-dimensional space, projection points are obtained, and finally, a visual re-projection error is determined according to the positions of the feature points and the positions of the projection points in the current collected image information of the robot.
In the above process, the feature points in the image information may be corner points in the image, where the corner points may be isolated points with maximum or minimum intensity on some properties, and end points of line segments, for example, the corner points may be connection points of object contour lines in the image (for example, corner angles of houses). In addition, the feature points in the image information may also be points in the image where the color is prominent.
Optionally, the robot may track the feature points in each frame of image by using a KLT (Kanade-Lucas-Tomasi) tracking algorithm, so as to obtain the track information of the feature points, and then calculate the position information of the feature points in three-dimensional space by using a triangulation algorithm. The triangulation algorithm is a positioning algorithm; in the application, the position of a feature point is determined by applying the triangle geometry principle to the track information of the feature point.
Further, when the image acquisition device of the robot acquires an image again and the acquired image includes the feature points, the robot projects the three-dimensional positions of the feature points (namely, the position information of the feature points in three-dimensional space) onto the two-dimensional plane to obtain projection points, and finally the visual re-projection error is calculated according to the positions of the projection points and the positions of the feature points. For example, in fig. 4, the observations P1 and P2 are projections of the same spatial point P; there is a certain distance e between the projection P2' of P and the observation P2, and this distance e is the visual re-projection error.
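As an illustrative sketch of the quantity e in fig. 4 (a pinhole camera model and the variable names are assumptions), the re-projection error of one feature is the distance between its observed pixel location and the projection of its triangulated 3D point into the current frame:

```python
import numpy as np

def reprojection_error(point_3d, observation_2d, R, t, K):
    """Distance between an observed feature and the projection of its 3D point.

    point_3d:       (3,) world coordinates of the triangulated feature point
    observation_2d: (2,) observed pixel coordinates in the current image
    R, t:           rotation (3x3) and translation (3,) from world to camera frame
    K:              3x3 camera intrinsic matrix
    """
    p_cam = R @ np.asarray(point_3d) + np.asarray(t)   # point in the camera frame
    p_img = K @ p_cam                                   # pinhole projection
    projected = p_img[:2] / p_img[2]                    # normalize by depth
    return float(np.linalg.norm(np.asarray(observation_2d) - projected))
```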
As shown in fig. 5, x and x' are projection points corresponding to feature points in the image, x̂ is the estimated value of x, and x̂' is the estimated value of x'. The visual re-projection error corresponding to x and x' satisfies the following equation:
ε = d(x, x̂) + d(x', x̂')
In the above formula, ε is the visual re-projection error, d(·,·) denotes the distance between two image points, and x and x' satisfy the following formula:
x' = Hx
The estimated values x̂ and x̂' satisfy the following formula:
x̂' = Ĥx̂
where Ĥ is an estimated value of H, and H is a homography matrix.
As can be seen from the above formulas of the visual re-projection error, there is an error between a feature point and its estimated value in the image, so the coordinates of the estimated values need to be re-estimated, and the new estimated values must satisfy the homography relationship. For example, the sum of d and d' in fig. 5 is the visual re-projection error, where d is the distance between x and x̂ and d' is the distance between x' and x̂'.
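Re-estimating x̂ and x̂' so that they exactly satisfy the homography relationship is itself a small optimization; as a simplified, hedged stand-in, the sketch below evaluates the symmetric transfer error of an estimated homography, which penalizes the same disagreement between x, x' and the homography relationship (it is not the exact quantity d + d' of fig. 5):

```python
import numpy as np

def symmetric_transfer_error(H, x, x_prime):
    """Simplified stand-in for the re-projection error of a homography estimate H.

    x and x_prime are 2D pixel coordinates of a correspondence related by
    x_prime ~ H x in homogeneous coordinates.
    """
    def transfer(M, p):
        q = M @ np.array([p[0], p[1], 1.0])
        return q[:2] / q[2]

    d_forward = np.linalg.norm(np.asarray(x_prime) - transfer(H, x))                   # x mapped into the second image
    d_backward = np.linalg.norm(np.asarray(x) - transfer(np.linalg.inv(H), x_prime))   # x' mapped back into the first image
    return float(d_forward + d_backward)
```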
Optionally, the constraint function further includes an inertial measurement constraint, which is uncorrelated with the steering parameter and is obtained from an inertial measurement instrument. Specifically, the robot first acquires the detection values of an inertial measurement unit provided in the robot, and then determines the inertial measurement constraint based on these detection values, where the detection values include the acceleration and the angular velocity of the robot.
The inertial measurement unit may be an IMU (Inertial Measurement Unit), which contains a gyroscope and an accelerometer. The gyroscope is a three-axis gyroscope used to detect the angular velocity of the robot; the accelerometer measures acceleration along the three spatial directions (namely the x, y and z directions), so as to detect the acceleration of the robot in each of the three directions.
In addition, after obtaining the detection values of the inertial measurement unit, the processor of the robot may perform an integral calculation on the detection values, thereby obtaining the inertial measurement constraint.
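As a hedged sketch of the integral calculation mentioned above (a planar, bias-free simplification with assumed variable names, not the patent's exact formulation), gyroscope and accelerometer samples taken between two image frames can be integrated into a relative heading change and velocity change:

```python
import numpy as np

def integrate_imu(gyro_z, accel_xy, dt):
    """Integrate planar IMU samples taken between two camera frames.

    gyro_z:   sequence of z-axis angular-velocity samples (rad/s)
    accel_xy: sequence of (ax, ay) body-frame acceleration samples (m/s^2)
    dt:       sampling interval (s)
    Returns the accumulated heading change and velocity change over the interval.
    """
    dtheta = 0.0
    dv = np.zeros(2)
    for w, a in zip(gyro_z, accel_xy):
        c, s = np.cos(dtheta), np.sin(dtheta)
        rot = np.array([[c, -s], [s, c]])
        dv += rot @ np.asarray(a, dtype=float) * dt  # rotate acceleration into the start frame, then integrate
        dtheta += w * dt                             # integrate angular velocity
    return dtheta, dv
```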
In an alternative embodiment, the constraint function further comprises odometer data, which is obtained by reading the detection values of an odometer. Odometry is a method of estimating the change in the position of an object over time using data obtained from motion sensors.
From the above, it is clear that the inertial measurement constraint and the odometer data observe the position and angle of the robot based on different detection devices.
In an alternative embodiment, the constraint function further comprises priori information, where the priori information includes edge information from the historical image information. Optionally, the processor may acquire the acquisition time of each image acquired by the image acquisition device, sort the images acquired by the image acquisition device according to acquisition time, and delete the history images whose time since acquisition exceeds a preset duration to obtain the target history images; finally, the edge distribution of the target history images is calculated to obtain the priori information.
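As a small sketch of the bookkeeping described above (timestamp handling and names are assumptions), history images older than the preset duration are dropped, and the remaining target history images are the ones the edge-distribution computation would operate on:

```python
def select_target_history(images, now, max_age_s):
    """Keep only history images whose acquisition age is within max_age_s seconds.

    `images` is an iterable of (timestamp, image) pairs; the result is sorted
    by acquisition time, oldest first.
    """
    kept = [(ts, img) for ts, img in images if now - ts <= max_age_s]
    return sorted(kept, key=lambda pair: pair[0])
```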
Further, after determining the visual re-projection error, inertial measurement constraints, odometer data, and a priori information, a constraint function may be obtained. And then the processor obtains the steering parameters of the robot through the constraint function in the steering process of the robot, further determines positioning parameters based on the steering parameters and a steering model obtained in advance, and obtains the positioning information of the robot according to the positioning parameters. Wherein the processor of the robot needs to create a steering model before determining the positioning parameters.
Specifically, the processor may first acquire an intermediate variable, wherein the intermediate variable is determined by a difference between the rotation center of the left tire and the rotation center of the right tire in the vertical direction. Then acquiring an intermediate matrix, wherein the intermediate matrix comprises: the first matrix is composed of the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, the second matrix is composed of the first scale parameter and the second scale parameter, and the third matrix is composed of the linear velocity of the left tire and the linear velocity of the right tire. And finally, determining the corresponding relation between the target matrix formed by the positioning parameters and the intermediate variable and the intermediate matrix as a steering model.
Alternatively, the steering model g(·) can be expressed as a relation between the target matrix formed by the positioning parameters on one side and the intermediate variable Δy together with an intermediate matrix W on the other, where W is obtained from the first matrix (built from X_v, Y_l and Y_r), the second matrix (built from the scale parameters α_l and α_r) and the third matrix (built from the tire linear velocities o_l and o_r).
In this relation, since the abscissa parameters of the first coordinate parameter and the second coordinate parameter are the same, X_v represents the common abscissa of the two instantaneous centers of rotation; Y_l represents the ordinate of the first coordinate parameter and Y_r the ordinate of the second coordinate parameter. Further, α_l is the first scale parameter, α_r is the second scale parameter, o_l is the linear velocity of the left tire, and o_r is the linear velocity of the right tire.
The intermediate variable Δy satisfies the following formula: Δy = Y_l - Y_r.
After the steering model is obtained, the processor can determine the positioning parameters through the steering model, and then obtain the positioning information of the robot according to the positioning parameters.
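The patent expresses g(·) through the intermediate matrix W and the intermediate variable Δy; as an assumed illustration only, the widely used ICR-based skid-steer kinematic model with the same parameters (X_v, Y_l, Y_r, α_l, α_r) maps the scaled tire speeds to the positioning parameters as follows (a common formulation from the literature, not necessarily the patent's exact factorization):

```python
import numpy as np

def steering_model(x_v, y_l, y_r, alpha_l, alpha_r, o_l, o_r):
    """Assumed ICR-based skid-steer kinematics: scaled tire speeds -> [v_ox, v_oy, omega]."""
    v_l = alpha_l * o_l   # effective left-tread speed after the scale coefficient
    v_r = alpha_r * o_r   # effective right-tread speed after the scale coefficient
    dy = y_l - y_r        # intermediate variable (Delta y), assumed nonzero
    omega = (v_r - v_l) / dy
    v_ox = (v_r * y_l - v_l * y_r) / dy
    v_oy = (v_l - v_r) * x_v / dy  # equivalently -omega * x_v
    return np.array([v_ox, v_oy, omega])
```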
In addition, it should be noted that the positioning method of the robot provided by the application can also be combined with GPS data of the robot acquired by a GPS receiver to determine the positioning information of the robot, so as to further improve the positioning accuracy of the robot.
In an alternative embodiment, fig. 9 shows a flow diagram of a robot-based positioning method, and as can be seen from fig. 9, after the image information is acquired by the image acquisition device in the robot, the image information is sent to the computing device, which may be a processor in the robot. The computing device processes the image information of the continuous multiframes to obtain a constraint function, wherein the constraint function at least comprises a visual re-projection error, an inertial measurement constraint, mileage calculation data and priori information. In the process of steering operation of the robot, the computing equipment can obtain steering parameters by solving the constraint function. And then, the computing equipment determines the positioning parameters through a steering model representing the relation between the positioning parameters and the steering parameters, and finally, the positioning information of the robot is output according to the positioning parameters.
In another alternative embodiment, fig. 10 shows a flow chart of a positioning method based on a robot, and in fig. 10, the robot is applied in a scene of working in an outdoor environment, where a processor in the robot can process image information collected by the robot, so as to obtain positioning information. As can be seen from fig. 10, when the robot performs outdoor operation, the image acquisition device of the robot may acquire image information of a continuous multi-frame operation environment, and send the acquired image information to the processor of the robot, and the processor processes the image information of the continuous multi-frame, so as to obtain a constraint function, where the constraint function at least includes a visual re-projection error, an inertial measurement constraint, mileage calculation data and prior information. In the process of steering operation of the robot, the processor can obtain steering parameters by solving the constraint function, then determine the positioning parameters through a steering model representing the relation between the positioning parameters and the steering parameters, and finally output the positioning information of the robot according to the positioning parameters. After obtaining the positioning information, the robot sends the positioning information to a computing device, wherein the computing device may be a robot management platform. The computing device may analyze the positioning information of the robot, determine whether the robot has arrived at the job site, whether the robot is beginning to work, and the like. For example, if the computing device determines that the robot does not reach the operation site according to the positioning information sent by the robot, the computing device determines a moving direction of the robot according to the positioning information of the robot and the position information of the operation site, and then generates a control instruction according to the moving direction and sends the control instruction to the robot so as to enable the robot to move according to the moving direction. For another example, after determining that the robot has reached the job site, the computing device controls the robot to perform the job, for example, in a scene where a soil specimen is collected, the computing device transmits a control instruction to start the job to the robot, and after receiving the control instruction, controls a component (e.g., a manipulator) of the robot that collects the soil specimen to start performing the job.
From the above, the scheme provided by the application fuses the chassis model of the robot with visual positioning and IMU observation, so that the chassis model of the robot can be estimated online in real time to adapt to different terrain conditions. In addition, the scheme provided by the application fuses visual information instead of relying on GPS, which avoids the problem that GPS data cannot be acquired when the GPS signal is weak or absent, enlarges the application range of the robot, and improves the positioning accuracy of the robot.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments, and that the acts and modules referred to are not necessarily required for the present application.
From the above description of the embodiments, it will be clear to a person skilled in the art that the positioning method of the robot according to the above embodiments may be implemented by means of software plus a necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
Example 2
According to an embodiment of the present application, there is also provided a positioning method of a robot, as shown in fig. 6, the method including the steps of:
Step S602, image information is collected in the steering process of the robot, and a visual re-projection error is determined according to the image information.
In step S602, the robot may be, but is not limited to, a skid-steer robot, where the chassis of the skid-steer robot may be a crawler chassis or a wheel chassis; optionally, in the present application, the skid-steer robot has a four-wheel chassis.
In an alternative embodiment, the robot has an image acquisition device, which may be, but is not limited to, a camera, and a processor that can acquire image information acquired by the image acquisition device and process the image information to obtain the visual re-projection error.
Specifically, the robot firstly acquires image information of continuous multiframes acquired by the robot, then extracts characteristic points in the image information, tracks the characteristic points in the image information of the continuous multiframes to obtain track information corresponding to the characteristic points, and determines position information of the characteristic points in a three-dimensional space according to the track information. When the current collected image information of the robot comprises the feature points, the feature points are projected onto a two-dimensional plane corresponding to the current collected image information of the robot according to the position information of the feature points in the three-dimensional space, projection points are obtained, and finally, visual re-projection errors are determined according to the positions of the feature points and the positions of the projection points in the current collected image information of the robot.
In computer vision, for example, when calculating a planar homography matrix and a projection matrix, a cost function is constructed by using a visual reprojection error, and the cost function is subjected to a minimization process to optimize the homography matrix or the projection matrix. In the process, in the construction of the cost function by using the vision re-projection error, not only the calculation error of the homography matrix but also the measurement error of the image point are considered, so that the measurement accuracy can be improved by using the vision re-projection error in computer vision.
Step S604, determining a minimum objective function corresponding to the robot according to the vision re-projection error.
In step S604, the minimum objective function is an objective function related to a decision variable, and in the above embodiment of the present application, the decision variable is a steering parameter of the robot, and the minimum objective function is an objective function related to the steering parameter. And solving the minimum objective function to obtain the steering parameters of the robot. In an alternative embodiment, the steering parameter is determined to be the optimal solution when the minimum objective function takes its minimum value.
In an alternative embodiment, the minimum objective function at least includes a visual re-projection error, so that after the visual re-projection error is obtained, the corresponding minimum objective function of the robot is obtained.
In another alternative embodiment, the minimum objective function includes at least: visual re-projection errors, inertial measurement constraints, odometer data, and a priori information. Wherein the robot first acquires a detection value of an inertial measurement unit provided in the robot, and then determines an inertial measurement constraint based on the detection value of the inertial measurement unit. Wherein the detection value includes: acceleration and angular velocity of the robot. And the robot obtains the odometer data by reading the detection value of the odometer. The processor of the robot can acquire the acquisition time length of the images acquired by the image acquisition equipment, sort the images acquired by the acquisition equipment according to the acquisition time length, delete the historical images with the acquisition time length longer than the preset time length to obtain a target historical image, and finally calculate the edge distribution of the target historical image to obtain the prior information.
After the vision re-projection error, the inertia measurement constraint, the odometer data and the priori information are obtained, the vision re-projection error, the inertia measurement constraint, the odometer data and the priori information are summed to obtain the minimum objective function.
Step S606, estimating the steering parameters by solving the minimum objective function to obtain the steering parameters of the robot.
In step S606, the robot comprises left tires and right tires; as shown in the schematic view of the robot chassis in fig. 3, the left side of the robot comprises two tires and the right side also comprises two tires. The steering parameters of the robot comprise: a first coordinate parameter of the instantaneous center of rotation of the left tire at the time of steering and a second coordinate parameter of the instantaneous center of rotation of the right tire at the time of steering; and a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used to represent the coefficient superimposed on the left tire and the second scale parameter is used to represent the coefficient superimposed on the right tire.
Optionally, the processor of the robot may determine the preset function as a minimum objective function, and obtain the steering parameter by solving the minimum objective function.
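As a hedged sketch of "solving the minimum objective function" (the patent does not name a solver; scipy's Nelder-Mead and the toy objective below are purely illustrative), a generic numerical optimizer can be applied to the scalar objective over the steering parameter vector:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_steering_params(objective, zeta0):
    """Estimate zeta = [x_v, y_l, y_r, alpha_l, alpha_r] by minimizing `objective`.

    `objective` is a callable mapping a candidate steering-parameter vector to
    a scalar cost (the minimum objective function); zeta0 is the initial guess.
    """
    result = minimize(objective, np.asarray(zeta0, dtype=float), method="Nelder-Mead")
    return result.x

# Illustrative usage with a toy quadratic objective pulling zeta toward a nominal value.
nominal = np.array([0.0, 0.3, -0.3, 1.0, 1.0])
zeta_hat = estimate_steering_params(lambda z: float(np.sum((z - nominal) ** 2)),
                                    zeta0=[0.0, 0.2, -0.2, 1.0, 1.0])
```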
Step S608, determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and positioning the robot according to the positioning parameter, where the steering model is used to represent a relationship between the positioning parameter and the steering parameter.
In step S608, the positioning parameters include the angular velocity of the robot during steering and the linear velocity of the robot during steering. In fig. 3, ω is the angular velocity of the robot as a whole and v_o is the linear velocity of the robot as a whole during steering; for ease of calculation, v_o can be decomposed into its components along the X-axis and Y-axis directions, i.e., v_ox and v_oy. Alternatively, the positioning parameters may be represented together in matrix (vector) form, for example as [v_ox, v_oy, ω].
In fig. 3, o_l and o_r denote the linear velocities of the left and right tires, respectively, and the length of the line segment corresponding to each arrow indicates the magnitude of that tire's linear velocity; in fig. 3 the linear velocity of the left tire is smaller than that of the right tire.
Alternatively, the relationship between the positioning parameters and the steering parameters can be written as a formula in which the positioning parameters are given by a function g(·) of the steering parameters, where the function g(·) represents the steering model.
Further, after the positioning parameters are obtained, the processor of the robot determines the position and the posture of the robot at the current moment according to the angular speed of the robot when the robot turns and the linear speed of the robot when the robot turns, so that the positioning information of the robot is obtained, and the positioning of the robot is completed.
It is easy to see that the present application obtains the positioning information of the robot by processing the image information collected by the robot; this process combines the robot with visual positioning, so the positioning information of the robot can be determined accurately. In addition, because visual positioning is not affected by the strength of the GPS signal, the scheme provided by the present application can achieve accurate positioning of the robot in scenes with a weak GPS signal or no GPS signal, which broadens the application range of the robot.
Therefore, the scheme provided by the application achieves the purpose of positioning the robot, thereby realizing the technical effect of improving the positioning precision of the robot and further solving the technical problem of inaccurate positioning when the robot turns in the prior art.
Example 3
According to an embodiment of the present application, there is also provided a positioning device of a robot for implementing the above positioning method of a robot. As shown in Fig. 7, the device 70 includes: an acquisition module 701, a processing module 703 and a determination module 705.
The acquiring module 701 is configured to acquire a constraint function corresponding to the robot, where the constraint function includes at least a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; the processing module 703 is configured to obtain a steering parameter of the robot through a constraint function during the steering process of the robot; the determining module 705 is configured to determine a positioning parameter based on a steering parameter and a pre-acquired steering model, and acquire positioning information of the robot according to the positioning parameter, where the steering model is used to represent a relationship between the positioning parameter and the steering parameter.
Here, it should be noted that the above-mentioned acquisition module 701, processing module 703 and determination module 705 correspond to steps S202 to S206 in Embodiment 1; the examples and application scenarios implemented by these three modules are the same as those of the corresponding steps, but are not limited to the disclosure of Embodiment 1. It should also be noted that the above modules may be implemented as part of the apparatus in the computing device 10 provided in Embodiment 1.
Optionally, the robot includes a left tire and a right tire, and the steering parameters include: a first coordinate parameter of an instantaneous center of rotation of the left tire at the time of steering and a second coordinate parameter of an instantaneous center of rotation of the right tire at the time of steering; a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used for representing the coefficient superimposed on the left tire and the second scale parameter is used for representing the coefficient superimposed on the right tire.
In an alternative embodiment, the positioning parameters include: an angular velocity when the robot turns and a linear velocity when the robot turns, wherein the determining module comprises: the first determining module is used for determining the position and the posture of the robot at the current moment according to the angular speed of the robot when the robot turns and the linear speed of the robot when the robot turns.
In an alternative embodiment, the visual re-projection error is expressed in terms of a steering parameter, wherein the processing module comprises: the second determining module and the first processing module. The second determining module is used for determining the constraint function as a minimum objective function; and the first processing module is used for solving the minimum objective function to obtain the steering parameter.
In an alternative embodiment, the acquisition module includes: the first acquisition module and the third determination module. The first acquisition module is used for acquiring image information of continuous multiframes acquired by the robot; and a third determining module for determining the vision re-projection error of the robot based on the image information of the continuous multi-frames.
In an alternative embodiment, the third determining module includes: the device comprises an extraction module, a fourth determination module, a second processing module and a fifth determination module. The extraction module is used for extracting characteristic points in the image information, and tracking the characteristic points in the continuous multi-frame image information to obtain track information corresponding to the characteristic points; the fourth determining module is used for determining the position information of the feature points in the three-dimensional space according to the track information; the second processing module is used for projecting the characteristic points onto a two-dimensional plane corresponding to the image information currently collected by the robot according to the position information of the characteristic points in the three-dimensional space when the image information currently collected by the robot comprises the characteristic points, so as to obtain projection points; and a fifth determining module, configured to determine a visual re-projection error according to the position of the feature point and the position of the projection point in the image information currently acquired by the robot.
In an alternative embodiment, the constraint function further comprises: inertial measurement constraints, wherein the acquisition module comprises: the second acquisition module and the sixth determination module. Wherein, the second acquisition module is used for acquiring the detected value of the inertial measurement unit arranged in the robot, wherein the detected value comprises: acceleration and angular velocity of the robot; and the sixth determining module is used for determining inertial measurement constraint according to the detection value of the inertial measurement instrument.
In an alternative embodiment, the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading the detection value of the odometer.
In an alternative embodiment, the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
In an alternative embodiment, the positioning device of the robot further comprises a creation module for creating the steering model, wherein the creation module comprises: a third acquisition module for acquiring an intermediate variable, wherein the intermediate variable is determined by the difference between the rotation center of the left tire and the rotation center of the right tire in the vertical direction; a fourth acquisition module for acquiring an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by the abscissa parameter of the first coordinate parameter, the ordinate parameter of the first coordinate parameter and the ordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and a seventh determining module for determining, as the steering model, the correspondence between the target matrix formed by the positioning parameters on the one hand and the intermediate variable and the intermediate matrix on the other hand.
Example 4
According to an embodiment of the present application, there is also provided a positioning system of a robot for implementing the positioning method of a robot, the system including: a processor and a memory.
The memory is connected with the processor and is used for providing instructions for the processor to process the following processing steps: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
It should be noted that the processor in this embodiment may execute the positioning method of the robot in Embodiment 1; the related content has been described in Embodiment 1 and will not be repeated here.
Example 5
Embodiments of the present application may provide a computing device, which may be any one of a group of computer terminals. Alternatively, in this embodiment, the above-mentioned computing device may be replaced by a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the computing device may be located in at least one network device of a plurality of network devices of the computer network.
In this embodiment, the above-mentioned computing device may execute the program code of the following steps in the positioning method of the robot: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
Alternatively, FIG. 8 is a block diagram of a computing device according to an embodiment of the application. As shown in fig. 8, the computing device 10 may include: one or more (only one is shown) processors 802, memory 804, and a peripheral interface 806.
The memory may be used to store software programs and modules, such as program instructions/modules corresponding to the positioning method and apparatus of the robot in the embodiments of the present application, and the processor executes the software programs and modules stored in the memory, thereby executing various functional applications and data processing, that is, implementing the positioning method of the robot. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computing device 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
Optionally, the robot includes a left tire and a right tire, and the steering parameters include: a first coordinate parameter of an instantaneous center of rotation of the left tire at the time of steering and a second coordinate parameter of an instantaneous center of rotation of the right tire at the time of steering; a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used for representing the coefficient superimposed on the left tire and the second scale parameter is used for representing the coefficient superimposed on the right tire.
Optionally, the above processor may further execute program code for: and determining the position and the posture of the robot at the current moment according to the angular speed of the robot when the robot turns and the linear speed of the robot when the robot turns. Wherein the positioning parameters include: an angular velocity when the robot turns and a linear velocity when the robot turns.
Optionally, the above processor may further execute program code for: determining the constraint function as a minimum objective function; and solving the minimum objective function to obtain the steering parameter. Wherein the visual re-projection error is expressed according to the steering parameter.
Optionally, the above processor may further execute program code for: acquiring image information of continuous multiframes acquired by a robot; a visual re-projection error of the robot is determined based on the image information of the successive multiframes.
Optionally, the above processor may further execute program code for: extracting characteristic points in the image information, and tracking the characteristic points in the continuous multi-frame image information to obtain track information corresponding to the characteristic points; determining position information of the feature points in the three-dimensional space according to the track information; when the current collected image information of the robot comprises characteristic points, projecting the characteristic points onto a two-dimensional plane corresponding to the current collected image information of the robot according to the position information of the characteristic points in the three-dimensional space, so as to obtain projection points; and determining a visual re-projection error according to the position of the characteristic point and the position of the projection point in the image information currently acquired by the robot.
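A compact sketch of the projection and residual computation described here, using a standard pinhole model, is shown below; the intrinsic matrix, pose representation and numeric values are illustrative assumptions rather than values from this application.

```python
import numpy as np

def reprojection_error(p_world, R_cw, t_cw, K, uv_observed):
    """Visual re-projection error for one tracked feature point.

    p_world: 3D position of the feature (triangulated from its track).
    R_cw, t_cw: rotation and translation taking world points into the camera frame.
    K: 3x3 pinhole intrinsic matrix (assumed known from calibration).
    uv_observed: pixel position of the feature in the current image.
    """
    p_cam = R_cw @ p_world + t_cw           # world -> camera frame
    uv_hom = K @ p_cam                      # project with the pinhole model
    uv_projected = uv_hom[:2] / uv_hom[2]   # normalize to pixel coordinates
    return uv_projected - uv_observed       # residual used in the objective

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
err = reprojection_error(np.array([1.0, 0.2, 5.0]),
                         np.eye(3), np.zeros(3), K,
                         np.array([420.0, 260.0]))   # zero residual in this toy case
```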
Optionally, the above processor may further execute program code for: acquiring the detection values of an inertial measurement unit provided in the robot, wherein the detection values include: the acceleration and angular velocity of the robot; and determining the inertial measurement constraint according to the detection values of the inertial measurement unit. Wherein the constraint function further comprises: the inertial measurement constraint.
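The following is a much-simplified sketch of turning the acceleration and angular-velocity readings into a constraint between two consecutive states; practical systems typically use IMU pre-integration, which is not reproduced here, and the single Euler step and world-frame acceleration are simplifying assumptions.

```python
import numpy as np

def imu_constraint(state_i, state_j, accel_world, gyro, dt):
    """Residual penalizing disagreement between IMU-predicted and estimated motion.

    state_i / state_j: dicts with 'p' (position), 'v' (velocity) and 'yaw'
    at two consecutive image times. accel_world is the measured acceleration
    already expressed in the world frame with gravity removed (a simplifying
    assumption), and gyro is the measured angular rate about the yaw axis.
    """
    p_pred = state_i['p'] + state_i['v'] * dt + 0.5 * accel_world * dt ** 2
    v_pred = state_i['v'] + accel_world * dt
    yaw_pred = state_i['yaw'] + gyro * dt
    return np.concatenate([state_j['p'] - p_pred,
                           state_j['v'] - v_pred,
                           [state_j['yaw'] - yaw_pred]])

s0 = {'p': np.zeros(2), 'v': np.array([0.5, 0.0]), 'yaw': 0.0}
s1 = {'p': np.array([0.05, 0.0]), 'v': np.array([0.5, 0.0]), 'yaw': 0.03}
residual = imu_constraint(s0, s1, accel_world=np.zeros(2), gyro=0.3, dt=0.1)
```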
Optionally, the constraint function further includes: odometer data, wherein the odometer data is obtained by reading the detection value of the odometer.
Optionally, the constraint function further includes: a priori information, wherein the a priori information includes edge information in the historical image information.
Optionally, the above processor may further execute program code for: creating a steering model, wherein creating the steering model comprises: acquiring an intermediate variable, wherein the intermediate variable is determined by a difference value between the rotation center of the left tire and the rotation center of the right tire in the vertical direction; obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and determining the corresponding relation between the target matrix formed by the positioning parameters and the intermediate variables and the intermediate matrix as a steering model.
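The matrix construction described here can be written out directly; the sketch below mirrors the illustrative ICR form of g(·) given earlier and again assumes that the two instantaneous centers of rotation share one abscissa.

```python
import numpy as np

def steering_model(x_icr, y_l, y_r, s_l, s_r, v_l, v_r):
    """Map steering parameters and wheel speeds to (omega, v_ox, v_oy).

    x_icr: shared abscissa of the two instantaneous centers of rotation.
    y_l, y_r: ordinates of the left/right ICR; (y_l - y_r) is the
    intermediate variable. s_l, s_r are the scale parameters and
    v_l, v_r the wheel linear velocities.
    """
    first = np.array([[-1.0, 1.0],
                      [-y_r, y_l],
                      [x_icr, -x_icr]])      # first matrix (ICR coordinates)
    second = np.diag([s_l, s_r])             # second matrix (scale parameters)
    third = np.array([v_l, v_r])             # third matrix (wheel linear velocities)
    omega, v_ox, v_oy = (first @ second @ third) / (y_l - y_r)
    return omega, v_ox, v_oy

# Ideal differential-drive check: y_l = +0.3, y_r = -0.3, unit scales.
print(steering_model(0.0, 0.3, -0.3, 1.0, 1.0, v_l=0.8, v_r=1.0))
# -> omega ~= 0.333, v_ox = 0.9, v_oy = 0.0
```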
It will be appreciated by those skilled in the art that the configuration shown in Fig. 8 is merely illustrative, and the computing device may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, etc. Fig. 8 does not limit the structure of the electronic device. For example, computing device 10 may also include more or fewer components (e.g., a network interface, a display device, etc.) than shown in Fig. 8, or have a configuration different from that shown in Fig. 8.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Example 6
The embodiment of the application also provides a storage medium. Alternatively, in this embodiment, the storage medium may be used to store the program code for executing the positioning method of the robot provided in Embodiment 1.
Alternatively, in this embodiment, the storage medium may be located in any one of the computing devices in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot; in the process of steering the robot, obtaining the steering parameters of the robot through a constraint function; and determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter.
Optionally, the robot includes a left tire and a right tire, and the steering parameters include: a first coordinate parameter of an instantaneous center of rotation of the left tire at the time of steering and a second coordinate parameter of an instantaneous center of rotation of the right tire at the time of steering; a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used for representing the coefficient superimposed on the left tire and the second scale parameter is used for representing the coefficient superimposed on the right tire.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: and determining the position and the posture of the robot at the current moment according to the angular speed of the robot when the robot turns and the linear speed of the robot when the robot turns. Wherein the positioning parameters include: an angular velocity when the robot turns and a linear velocity when the robot turns.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: determining the constraint function as a minimum objective function; and solving the minimum objective function to obtain the steering parameter. Wherein the visual re-projection error is expressed according to the steering parameter.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: acquiring image information of continuous multiframes acquired by a robot; a visual re-projection error of the robot is determined based on the image information of the successive multiframes.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: extracting characteristic points in the image information, and tracking the characteristic points in the continuous multi-frame image information to obtain track information corresponding to the characteristic points; determining position information of the feature points in the three-dimensional space according to the track information; when the current collected image information of the robot comprises characteristic points, projecting the characteristic points onto a two-dimensional plane corresponding to the current collected image information of the robot according to the position information of the characteristic points in the three-dimensional space, so as to obtain projection points; and determining a visual re-projection error according to the position of the characteristic point and the position of the projection point in the image information currently acquired by the robot.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: acquiring the detection values of an inertial measurement unit provided in the robot, wherein the detection values include: the acceleration and angular velocity of the robot; and determining the inertial measurement constraint according to the detection values of the inertial measurement unit. Wherein the constraint function further comprises: the inertial measurement constraint.
Optionally, the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading the detection value of the odometer.
Optionally, the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
Alternatively, in the present embodiment, the storage medium is configured to store program code for performing the steps of: creating a steering model, wherein creating the steering model comprises: acquiring an intermediate variable, wherein the intermediate variable is determined by a difference value between the rotation center of the left tire and the rotation center of the right tire in the vertical direction; obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix formed by the abscissa parameter of the first coordinate parameter, the vertical coordinate parameter of the first coordinate parameter and the vertical coordinate parameter of the second coordinate parameter, a second matrix formed by the first scale parameter and the second scale parameter, and a third matrix formed by the linear velocity of the left tire and the linear velocity of the right tire; and determining the corresponding relation between the target matrix formed by the positioning parameters and the intermediate variables and the intermediate matrix as a steering model.
According to an embodiment of the present application, the application scene of the robot may be determined, and whether to enable the visual re-projection error module is decided according to the application scene; the robot executes the above positioning method of the robot when it is determined, according to the application scene, that the visual re-projection error module should be enabled.
Optionally, the robot may determine whether accurate positioning is required according to the environmental information of the environment in which it is located. For example, the robot may collect an environmental image of its surroundings and analyze the environmental image to obtain the environmental information, where the environmental information includes, but is not limited to, an indoor environment, an outdoor environment, and the like. If analysis of the environmental information determines that the robot is in an indoor restaurant and that its task is to deliver meals to tables, then in this application scene the robot needs to be positioned accurately, and the visual re-projection error module of the robot is enabled. Conversely, if it is determined that the robot does not need to be positioned accurately, for example the robot is outdoors and its task is to collect a soil sample, then in this application scene it is sufficient to ensure that the robot remains in the working area; accurate positioning is not required, and the visual re-projection error module of the robot is not enabled.
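The scene-dependent switching described above amounts to a small decision rule; in the sketch below the environment and task labels are stand-ins for whatever the robot's environment analysis actually produces.

```python
def should_enable_reprojection(environment: str, task: str) -> bool:
    """Decide whether the visual re-projection error module is needed.

    environment and task would come from analyzing the environment image;
    the string labels used here are purely illustrative.
    """
    precise_tasks = {"meal_delivery"}   # tasks that need accurate positioning
    coarse_tasks = {"soil_sampling"}    # staying inside the working area suffices
    if task in precise_tasks:
        return True
    if task in coarse_tasks and environment == "outdoor":
        return False
    return True   # default to the more accurate mode when unsure

assert should_enable_reprojection("indoor", "meal_delivery") is True
assert should_enable_reprojection("outdoor", "soil_sampling") is False
```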
According to another embodiment of the present application, the user may also determine whether the robot enables the visual re-projection error module. For example, the user may control the robot to enable or disable the visual re-projection error module by sending a control instruction to the robot. The user may send the control instruction to the robot by voice, or by means of a control terminal (e.g., a remote controller or an upper computer).
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between the parts may be through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely an alternative embodiment of the application, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the principles of the application, and such modifications and variations should also be considered as being within the scope of the application.

Claims (15)

1. A method of positioning a robot, comprising:
obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
in the process of steering the robot, obtaining the steering parameters of the robot through the constraint function;
And determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter, and is created based on at least an intermediate variable, and the intermediate variable is determined by the difference value between the rotation center of the left tire and the rotation center of the right tire of the robot in the vertical direction.
2. The method of claim 1, wherein the robot includes a left tire and a right tire, and the steering parameters include:
a first coordinate parameter of an instantaneous center of rotation of the left tire when steering and a second coordinate parameter of an instantaneous center of rotation of the right tire when steering;
a first scale parameter of the left tire and a second scale parameter of the right tire, wherein the first scale parameter is used for representing a coefficient superimposed on the left tire and the second scale parameter is used for representing a coefficient superimposed on the right tire.
3. The method of claim 1, wherein the positioning parameters comprise: the angular speed of the robot when steering and the linear speed of the robot when steering, according to the positioning parameters, obtain the positioning information of the robot, including:
And determining the position and the posture of the robot at the current moment according to the angular speed of the robot when the robot turns and the linear speed of the robot when the robot turns.
4. The method of claim 1, wherein the visual re-projection error is expressed in terms of the steering parameter, the steering parameter of the robot being obtained by a constraint function during steering of the robot, comprising:
Determining the constraint function as a minimum objective function;
and solving the minimum objective function to obtain the steering parameter.
5. The method of claim 1, wherein obtaining the constraint function corresponding to the robot comprises:
Acquiring image information of continuous multiframes acquired by the robot;
a visual re-projection error of the robot is determined based on the image information of consecutive multiframes.
6. The method of claim 5, wherein determining a visual re-projection error of the robot based on the image information for consecutive frames comprises:
Extracting characteristic points in the image information, and tracking the characteristic points in the continuous multi-frame image information to obtain track information corresponding to the characteristic points;
Determining the position information of the characteristic points in a three-dimensional space according to the track information;
when the image information currently collected by the robot comprises the feature points, projecting the feature points onto a two-dimensional plane corresponding to the image information currently collected by the robot according to the position information of the feature points in the three-dimensional space, so as to obtain projection points;
And determining the visual re-projection error according to the position of the characteristic point and the position of the projection point in the image information currently acquired by the robot.
7. The method of claim 1, wherein the constraint function further comprises: inertial measurement constraints, wherein obtaining inertial measurement constraints of the robot comprises:
Acquiring a detection value of an inertial measurement unit arranged in the robot, wherein the detection value comprises: acceleration and angular velocity of the robot;
And determining the inertial measurement constraint according to the detection value of the inertial measurement instrument.
8. The method of claim 1, wherein the constraint function further comprises: odometer data, wherein the odometer data is obtained by reading the detection value of the odometer.
9. The method of claim 1, wherein the constraint function further comprises: a priori information, wherein the a priori information includes edge information in the historical image information.
10. The method of claim 2, wherein creating the steering model comprises:
Acquiring the intermediate variable;
Obtaining an intermediate matrix, wherein the intermediate matrix comprises: a first matrix of the abscissa parameters of the first coordinate parameters, the ordinate parameters of the first coordinate parameters, and the ordinate parameters of the second coordinate parameters, a second matrix of the first scale parameters and the second scale parameters, and a third matrix of the linear velocity of the left tire and the linear velocity of the right tire;
And determining the correspondence between a target matrix formed by the positioning parameters and the intermediate variables and the intermediate matrix as the steering model.
11. A method of positioning a robot, comprising:
acquiring image information in the steering process of the robot, and determining a visual re-projection error according to the image information;
determining a minimum objective function corresponding to the robot according to the visual re-projection error;
Estimating steering parameters by solving the minimum objective function to obtain the steering parameters of the robot;
And determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and positioning the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter, and is created based on at least an intermediate variable, and the intermediate variable is determined by the difference value between the rotation center of the left tire and the rotation center of the right tire of the robot in the vertical direction.
12. A positioning device for a robot, comprising:
The acquisition module is used for acquiring a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
the processing module is used for obtaining the steering parameters of the robot through the constraint function in the steering process of the robot;
The determining module is used for determining positioning parameters based on the steering parameters and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameters, wherein the steering model is used for representing the relation between the positioning parameters and the steering parameters, the steering model is created at least based on intermediate variables, and the intermediate variables are determined through differences between the rotation centers of the left tire and the right tire of the robot in the vertical direction.
13. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the method of positioning a robot according to any one of claims 1 to 10.
14. A processor, characterized in that the processor is adapted to run a program, wherein the program when run performs the positioning method of the robot according to any of the claims 1 to 10.
15. A positioning system for a robot, comprising:
A processor; and
A memory, coupled to the processor, for providing instructions to the processor to process the following processing steps:
Obtaining a constraint function corresponding to the robot, wherein the constraint function at least comprises a visual re-projection error, and the visual re-projection error is determined according to image information acquired by the robot;
in the process of steering the robot, obtaining the steering parameters of the robot through the constraint function;
And determining a positioning parameter based on the steering parameter and a pre-acquired steering model, and acquiring positioning information of the robot according to the positioning parameter, wherein the steering model is used for representing the relation between the positioning parameter and the steering parameter, and is created based on at least an intermediate variable, and the intermediate variable is determined by the difference value between the rotation center of the left tire and the rotation center of the right tire of the robot in the vertical direction.
CN201911025754.1A 2019-10-25 2019-10-25 Positioning method, device and system of robot Active CN112710308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911025754.1A CN112710308B (en) 2019-10-25 2019-10-25 Positioning method, device and system of robot

Publications (2)

Publication Number Publication Date
CN112710308A CN112710308A (en) 2021-04-27
CN112710308B true CN112710308B (en) 2024-05-31

Family

ID=75540955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911025754.1A Active CN112710308B (en) 2019-10-25 2019-10-25 Positioning method, device and system of robot

Country Status (1)

Country Link
CN (1) CN112710308B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330356B1 (en) * 1999-09-29 2001-12-11 Rockwell Science Center Llc Dynamic visual registration of a 3-D object with a graphical model
CN101077578A (en) * 2007-07-03 2007-11-28 北京控制工程研究所 Mobile Robot local paths planning method on the basis of binary environmental information
CN101301220A (en) * 2008-07-03 2008-11-12 哈尔滨工程大学 Puncture hole positioning device and positioning method of endoscope-operated surgical robot
CN101619984A (en) * 2009-07-28 2010-01-06 重庆邮电大学 Mobile robot visual navigation method based on colorful road signs
CN102221358A (en) * 2011-03-23 2011-10-19 中国人民解放军国防科学技术大学 Monocular Vision Positioning Method Based on Inverse Perspective Projection Transformation
CN103706568A (en) * 2013-11-26 2014-04-09 中国船舶重工集团公司第七一六研究所 System and method for machine vision-based robot sorting
CN104359464A (en) * 2014-11-02 2015-02-18 天津理工大学 Mobile robot positioning method based on stereoscopic vision
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method based on
JP2015182144A (en) * 2014-03-20 2015-10-22 キヤノン株式会社 Robot system and calibration method of robot system
CN108489482A (en) * 2018-02-13 2018-09-04 视辰信息科技(上海)有限公司 The realization method and system of vision inertia odometer
CN108827315A (en) * 2018-08-17 2018-11-16 华南理工大学 Vision inertia odometer position and orientation estimation method and device based on manifold pre-integration
CN109676604A (en) * 2018-12-26 2019-04-26 清华大学 Robot non-plane motion localization method and its motion locating system
CN109959381A (en) * 2017-12-22 2019-07-02 深圳市优必选科技有限公司 Positioning method, positioning device, robot and computer readable storage medium
WO2019136613A1 (en) * 2018-01-09 2019-07-18 深圳市沃特沃德股份有限公司 Indoor locating method and device for robot
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8698875B2 (en) * 2009-02-20 2014-04-15 Google Inc. Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN101876533B (en) * 2010-06-23 2011-11-30 北京航空航天大学 Microscopic stereovision calibrating method
JP6688521B2 (en) * 2016-12-23 2020-04-28 深▲せん▼前海達闥云端智能科技有限公司Cloudminds (Shenzhen) Robotics Systems Co.,Ltd. Positioning method, terminal and server
CN107747941B (en) * 2017-09-29 2020-05-15 歌尔股份有限公司 Binocular vision positioning method, device and system

Also Published As

Publication number Publication date
CN112710308A (en) 2021-04-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant