CN118990480A - Target identification and positioning method and system for humanoid robot
- Publication number: CN118990480A
- Application number: CN202411143152.7A
- Authority: CN (China)
- Prior art keywords: target, humanoid robot, identified, positioning, gradient
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Abstract
The invention discloses a target identification and positioning method and system for a humanoid robot. The method comprises the following steps: calculating the three-dimensional position of the target to be identified by using a double-view monocular vision positioning method; performing preliminary detection and classification on the target by using directional-gradient image feature extraction and a support vector machine classifier; tracking the identified target by using a random sampling filtering algorithm; positioning the humanoid robot by using a gait-detection-based position algorithm to obtain its motion trajectory; and correcting the motion trajectory of the humanoid robot through a sensor fusion algorithm. The system comprises a target positioning module, a target classification module, a target tracking module, a positioning module and a positioning correction module. The invention accomplishes both target identification and positioning and the self-positioning of the humanoid robot, helps the robot understand and recognize its environment, and yields the relative positions of the robot and the target object, facilitating subsequent work.
Description
Technical Field
The invention relates to the technical field of humanoid robots, and in particular to a target identification and positioning method and system for a humanoid robot.
Background
The visual system occupies a vital place in human cognition. For robots, vision sensors play an equally important role in perceiving the external environment, and as robot technology develops, making robots more intelligent and responsive has become a core subject of robot vision research. Robot vision realizes the identification, tracking and positioning of target objects by analyzing and processing the information acquired by vision sensors. However, relying solely on visual information has limitations; for example, recognition performance degrades under complex or adverse environmental conditions.
Existing target recognition methods mainly comprise the following: segmentation-based methods, which separate the target object from the background by image segmentation to facilitate subsequent processing; learning-based methods, which train classifiers on target features using machine learning algorithms such as support vector machines (SVM) and neural networks; knowledge-based methods, which identify and classify targets by consulting an expert knowledge base; model-based methods, which establish a mathematical model of the target object and match its features against those in the actual scene; and information-fusion-based methods, which fuse the information of multiple sensors to improve the accuracy and robustness of recognition. In a complex environment a single method often cannot meet the requirement of accurate identification, so recognition based on information fusion has become a trend.
In the field of mobile robots, positioning technology is the basis of autonomous navigation. According to the sensors used, robot positioning can be divided into traditional positioning and visual positioning: traditional positioning relies on sensors such as an electronic compass, an IMU, GPS and ultrasonic sensors, fusing the position and attitude information they provide so that the robot can localize and navigate in the environment; visual positioning mainly relies on information provided by vision sensors (such as CMOS and CCD cameras), extracting environmental features through image processing to identify and locate targets.
According to the positioning method, both traditional and visual positioning can be divided into incremental positioning and global positioning: incremental positioning continuously updates the robot's current state, computing the current position from the position and velocity of the previous step; it is easy to implement but accumulates large errors. Global positioning directly computes the robot's absolute position from known feature points in the environment; it is accurate but depends on known environmental information.
Although the above positioning and identification techniques have been widely studied and applied in robotics, the following deficiencies remain: the limitation of visual information, since under insufficient illumination, occlusion, complex backgrounds and similar conditions, recognition and positioning that rely solely on vision sensors easily degrade or fail; the complexity of sensor fusion, since although multi-sensor fusion methods exist in the prior art, effectively fusing information from different sensors remains a challenge, especially given the heterogeneity of sensor data and real-time requirements; and the trade-off between methods, since the traditional incremental positioning method is simple and feasible but its accumulated error is prominent and hard to reconcile with accurate positioning, while global positioning is accurate but strongly dependent on the environment.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method and a system for identifying and positioning a target of a humanoid robot, which can complete identification and positioning of the target and positioning of the humanoid robot itself, help the humanoid robot complete understanding and identification of the environment, and obtain the relative positions of the humanoid robot and the target object, so as to facilitate the development of subsequent work.
Embodiments of the present invention are implemented as follows:
a method for target identification and localization of a humanoid robot, comprising:
Calculating the three-dimensional position of the target to be identified by using a double-view monocular vision positioning method.
Performing preliminary detection and classification on the target to be identified by using directional-gradient image feature extraction and a support vector machine classifier.
Tracking the identified target by using a random sampling filtering algorithm.
Positioning the humanoid robot by using a gait-detection-based position algorithm to obtain the motion trajectory of the humanoid robot.
Correcting the motion trajectory of the humanoid robot through a sensor fusion algorithm.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, calculating the three-dimensional position of the target to be identified using the double-view monocular vision positioning method comprises:
The humanoid robot is provided with two monocular cameras mounted at different heights, the distance between them being d.
The pixel coordinates of the target to be identified on the image plane of the first monocular camera are P1(x1, y1), and on the image plane of the second monocular camera are P2(x2, y2).
For the first monocular camera, a geometric relationship is established between the first plane pixel coordinates P1(x1, y1) of the target to be identified and its actual three-dimensional coordinates (X, Y, Z): x1 = f1·X/D, y1 = f1·Y/D, wherein f1 is the focal length of the first monocular camera, and D is the depth information of the target to be identified.
For the second monocular camera, a geometric relationship is established between the second plane pixel coordinates P2(x2, y2) of the target to be identified and the actual three-dimensional coordinates (X, Y, Z): x2 = f2·X/D, y2 = f2·(Y − d)/D, wherein f2 is the focal length of the second monocular camera.
The vertical parallax of the target to be identified between the plane images of the two monocular cameras is calculated as Δy = y1 − y2.
The depth information of the target to be identified is calculated as D = f·d/Δy, taking a common focal length f = f1 = f2.
The three-dimensional position of the target to be identified is then (X, Y, Z) = (x1·D/f, y1·D/f, D).
The technical effects are as follows: two monocular cameras calculate the depth of the target from parallax information, realizing real-time three-dimensional reconstruction. Because the relative position of the two cameras is a known baseline, the three-dimensional coordinates of the target can be computed directly from a simple geometric relationship; the algorithm is simple, computation is efficient, and the required hardware is inexpensive and easy to set up, making the method suitable for large-scale practical application. Compared with traditional binocular stereoscopic vision, depth information is obtained from a single baseline, simplifying system design and easing implementation. Because the relative position between the two cameras is fixed, the algorithm model is stable and the positioning error is small, ensuring the precision of the system.
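The depth-from-vertical-parallax computation above can be sketched as follows (a minimal Python illustration; the function name, the assumption of a common focal length f, and the sample numbers are the author's, not values from the patent):

```python
def triangulate_vertical_baseline(p1, p2, f, d):
    """Recover (X, Y, Z) of a point seen by two vertically offset
    monocular cameras sharing focal length f (pixels), separated by
    a vertical baseline d (metres).  p1 and p2 are the (x, y) pixel
    coordinates on the two image planes."""
    x1, y1 = p1
    _, y2 = p2
    dy = y1 - y2                       # vertical parallax
    if abs(dy) < 1e-9:
        raise ValueError("zero parallax: target too far or points mismatched")
    depth = f * d / dy                 # similar-triangles depth D = f*d/dy
    return x1 * depth / f, y1 * depth / f, depth

# f = 800 px, baseline 0.10 m, 40 px of vertical parallax -> 2 m depth
X, Y, Z = triangulate_vertical_baseline((160.0, 120.0), (160.0, 80.0), 800.0, 0.10)
print(X, Y, Z)
```

With equal focal lengths the whole reconstruction reduces to one division per coordinate, which is why the embodiment stresses low computational cost.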
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the preliminary detection and classification of the target using directional-gradient image feature extraction and a support vector machine classifier comprises:
Carrying out graying and normalization processing on the color image containing the target to obtain an input image.
Establishing a directional gradient feature extraction model, taking the input image as its input, and outputting a directional gradient feature vector.
Collecting images containing targets as samples, and labeling each sample with its corresponding category label to obtain a dataset.
Designing a support vector machine classifier, training it on the dataset, taking the directional gradient feature vector as the input of the trained classifier, and outputting a class label as the classification result of the target.
The technical effects are as follows: directional gradient feature extraction fully considers the edge-direction information of the image and effectively extracts target features, remarkably improving classification accuracy, while the kernel trick of the SVM classifier handles the high-dimensional feature space and optimizes the classification effect. Organically combining image features with the classifier lets the system classify targets by their appearance with high accuracy and reliability, provides dependable input for subsequent target tracking and positioning, is simple and efficient, extends readily to more target categories, and suits robot vision systems with limited computing capacity.
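The graying and normalization step can be sketched as below (a hedged illustration; the patent does not specify the grayscale formula, so the standard BT.601 luminance weights are an assumption):

```python
import numpy as np

def to_input_image(rgb):
    """Graying + normalization producing the model input Inorm.
    Assumes an HxWx3 uint8 color image; the BT.601 luminance weights
    (0.299, 0.587, 0.114) are an assumption, not from the patent."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray / 255.0                # scale intensities to [0, 1]

inp = to_input_image(np.full((4, 4, 3), 255, dtype=np.uint8))
print(inp.shape, inp.max())
```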
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, establishing the directional gradient feature extraction model, taking the input image as its input and outputting the directional gradient feature vector comprises:
Establishing a directional gradient feature extraction model comprising an input layer, a gradient calculation layer, a directional gradient histogram layer, a feature vector splicing layer and a normalization layer.
The input layer receives the input image that has undergone graying and normalization, wherein each pixel of the input image is denoted Inorm(i, j).
The gradient calculation layer calculates, for each pixel of the input image, the horizontal gradient Gx(i, j) = Inorm(i, j+1) − Inorm(i, j−1) in the x-direction and the vertical gradient Gy(i, j) = Inorm(i+1, j) − Inorm(i−1, j) in the y-direction, then computes the gradient magnitude representing the edge intensity, G(i, j) = √(Gx(i, j)² + Gy(i, j)²), and the gradient direction representing the edge orientation, θ(i, j) = arctan(Gy(i, j)/Gx(i, j)).
The directional gradient histogram layer divides the input image into cell units of size N × N pixels; within each cell unit, the gradient magnitude of every pixel is accumulated into the direction interval containing its gradient direction, so that for each direction interval b, Hb = Σ_{θ(i,j)∈b} G(i, j), yielding the gradient-direction histogram vector H = [H1, H2, ... Hb, ... Hm] of the cell unit.
The feature vector splicing layer sequentially splices the gradient-direction histogram vectors of adjacent cell units to obtain a block feature vector v.
The normalization layer normalizes the spliced block feature vector, vnorm = v/√(‖v‖² + ε²), obtaining the directional gradient feature vector, wherein ε is a small positive constant.
The technical effects are as follows: the gradient calculation layer and the directional gradient histogram layer directly extract the edge information of the image, including edge intensity and direction, effectively capturing the key structural information of the target. Dividing the image into cell units and accumulating a gradient-direction histogram within each unit resists the influence of noise and illumination changes on recognition. The directional gradient feature extraction model thus extracts the edge and structural information of the image efficiently and accurately and, combined with strong classification capability, provides reliable technical support for target identification and positioning of the humanoid robot, with good real-time performance and robustness, enabling stable target recognition and tracking in complex environments.
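A compact NumPy sketch of the layers above (gradients, per-cell histograms, block splicing, normalization) follows; the 8-pixel cell, 2×2-cell blocks and 9 unsigned-orientation bins are illustrative defaults, not values fixed by the patent:

```python
import numpy as np

def hog_features(img, cell=8, bins=9, eps=1e-6):
    """Directional-gradient feature sketch: central-difference
    gradients, per-cell orientation histograms, 2x2-cell block
    concatenation, L2 normalization with small constant eps."""
    # gradient layer (borders left at zero for simplicity)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)                         # edge intensity G
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned direction

    # histogram layer: accumulate magnitude into direction bins per cell
    H, W = img.shape
    ch, cw = H // cell, W // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()

    # splicing + normalization layers: 2x2 cells per block
    feats = []
    for i in range(ch - 1):
        for j in range(cw - 1):
            v = hist[i:i+2, j:j+2].ravel()
            feats.append(v / np.sqrt(np.sum(v**2) + eps**2))
    return np.concatenate(feats)

feat = hog_features(np.random.default_rng(0).random((32, 32)))
print(feat.shape)   # 3x3 blocks * 4 cells * 9 bins = (324,)
```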
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, designing the support vector machine classifier, training it on the dataset, taking the directional gradient feature vector as its input and outputting a class label to obtain the classification result comprises:
Constructing a multi-class support vector machine classifier by training one binary classifier for each pair of classes, giving J(J−1)/2 classifiers in total, wherein J is the number of classes of the target to be identified.
Training each pairwise support vector machine classifier by adjusting the loss function L(w, q) = ½‖w‖² + F·Σᵢ₌₁ᴹ max(0, 1 − yᵢ(w·xᵢ + q)) through an optimization algorithm, wherein F is the penalty coefficient, w is the weight vector, q is the offset term and M is the number of samples; the weight vector w and the offset term q are adjusted until the loss function is minimized to a preset value.
Taking the directional gradient feature vector extracted from one input image as input, feeding it to each trained pairwise support vector machine classifier; each classifier outputs a class label, and the votes yield the classification result y ∈ {1, 2, ..., J} of the target in the input image.
The technical effects are as follows: by introducing a multi-class classification strategy and combining the feature vector features of the directional gradient, various targets can be effectively identified and classified, and the method has higher accuracy when processing multi-class object identification tasks, and is particularly suitable for target classification tasks in complex scenes.
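The one-vs-one voting scheme can be sketched as follows (a toy illustration with hand-set linear classifiers; the (w, q) values and the two-dimensional feature space are invented for the example, not trained models from the patent):

```python
import numpy as np

def ovo_predict(x, classifiers):
    """One-vs-one voting over J(J-1)/2 linear SVMs.  `classifiers`
    maps a class pair (a, b) to its weight vector w and offset q;
    sign(w.x + q) votes for a (non-negative) or b (negative), and
    the class with the most votes wins."""
    votes = {}
    for (a, b), (w, q) in classifiers.items():
        winner = a if np.dot(w, x) + q >= 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# toy setup: J = 3 classes -> 3 pairwise classifiers on 2-D features
clf = {
    (1, 2): (np.array([1.0, 0.0]), 0.0),    # x[0] > 0 favours class 1
    (1, 3): (np.array([0.0, 1.0]), 0.0),    # x[1] > 0 favours class 1
    (2, 3): (np.array([-1.0, 1.0]), 0.0),
}
label = ovo_predict(np.array([2.0, 2.0]), clf)
print(label)   # both tests involving class 1 vote for it -> 1
```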
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the tracking the identified target using a random sampling filtering algorithm includes:
The initial position of the identified target is (X0, Y0, Z0), and an initial search window of radius R0 is defined at that position.
For each frame in the input image set, the distribution of the color histogram in the three-dimensional color space is calculated as h(Ci) = (1/Np)·Σ I(|Ci − C(x, y, z)| ≤ ΔC), wherein Ci is a color value in the color space, C(x, y, z) is the color value of a pixel in the target region, Np is the number of pixels in the target region, I is an indicator function whose value is 1 when the color difference satisfies |Ci − C(x, y, z)| ≤ ΔC and 0 otherwise, and ΔC is the threshold of the color difference.
The center position of the identified target is updated as Xt = Xt−1 + ΔXt, Yt = Yt−1 + ΔYt, Zt = Zt−1 + ΔZt, wherein ΔXt, ΔYt, ΔZt is the displacement of the identified target in the current frame.
The radius of the search window is updated as Rt = Rt−1·√(St/St−1), wherein St is the area or volume of the target in the current frame.
The position of the identified target is updated for every frame in the input image set, yielding the three-dimensional track of the target over the tracking process, Trajectory = {(X1, Y1, Z1), (X2, Y2, Z2), ..., (Xt, Yt, Zt)}.
The technical effects are as follows: the random sampling filtering algorithm rapidly calculates and updates the target position in each frame, suits high-frame-rate video streams, and guarantees real-time tracking; by continuously updating the radius and position of the search window, the algorithm adapts to scale changes and displacement of the target, so the target is tracked continuously and stably even as it changes during motion; using the color-histogram distribution in the three-dimensional color space as the target feature effectively copes with appearance changes of the target and interference from complex backgrounds; the algorithm tracks not only the two-dimensional image position of the target but, combined with position information in three-dimensional space, outputs its three-dimensional trajectory, providing accurate spatial position information for navigation and operation of the humanoid robot in complex environments. The random sampling filtering algorithm is computationally simple with low resource consumption, suits embedded systems and resource-limited applications, and reduces hardware dependence while preserving tracking accuracy.
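One tracking step can be sketched as below (the square-root area scaling of the radius and the scalar color values are assumed readings of the formulas above, shown for illustration only):

```python
import numpy as np

def color_histogram(colors, bin_centers, dc):
    """Fraction of target-region pixel colors lying within dc of
    each bin center Ci (the indicator-function histogram of the
    text, shown on scalar color values for simplicity)."""
    colors = np.asarray(colors, dtype=float)
    return np.array([(np.abs(colors - ci) <= dc).mean() for ci in bin_centers])

def update_window(center, radius, displacement, prev_area, area):
    """Shift the search-window center by the measured displacement
    and rescale the radius with sqrt(area ratio) -- an assumed form
    of the radius-update rule."""
    new_center = tuple(c + d for c, d in zip(center, displacement))
    return new_center, radius * np.sqrt(area / prev_area)

h = color_histogram([0.0, 1.0, 2.0, 10.0], [0.0, 10.0], dc=1.0)
c, r = update_window((0.0, 0.0, 1.0), 10.0, (1.0, 2.0, 0.0), 100.0, 400.0)
print(h, c, r)   # window doubles in radius as the target area quadruples
```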
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the positioning of the humanoid robot using a position algorithm based on gait detection includes:
The current number of walking steps of the humanoid robot during walking is n(t) = Σ 1[θL(t), θR(t) complete a gait cycle], wherein θL(t) is the joint angle of the left leg of the humanoid robot during walking, θR(t) is the joint angle of the right leg, and 1 is an indicator function of gait detection whose value is 1 whenever a complete gait cycle is detected.
Combining with the IMU angle data, the current three-dimensional coordinate position of the humanoid robot is obtained as XR(t) = XR(t−1) + s·n(t)·cos(φ(t)), YR(t) = YR(t−1) + s·n(t)·sin(φ(t)), ZR(t) = ZR(t−1), wherein φ(t) is the angle variation of the torso of the humanoid robot about the Z axis, and s is the stride length spanned by each gait cycle of the humanoid robot.
The technical effects are as follows: combining random sampling filtering with three-dimensional position modeling overcomes the limitation of traditional gait-detection position algorithms, which can only track a target in a two-dimensional plane, so the algorithm applies to more complex three-dimensional environments; fusing multiple kinds of sensor information for trajectory correction effectively reduces the errors introduced by gait detection and IMU measurement and strengthens the robustness of the system in dynamic environments; and using the gait-detection-based position algorithm together with visual information lets the robot localize autonomously and maintain accurate trajectory tracking over complex terrain.
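The step-count dead reckoning can be sketched as follows (a minimal version in which each detected gait cycle advances the position by stride s along the heading φ; detection of gait cycles from θL, θR is abstracted into a list of per-step headings, an assumption of this sketch):

```python
import math

def dead_reckon(x0, y0, z0, step_headings, s):
    """Gait dead reckoning: each detected gait cycle advances the
    robot by stride s along the torso heading phi at that step; Z
    stays constant on flat ground, matching Z_R(t) = Z_R(t-1)."""
    x, y = x0, y0
    for phi in step_headings:
        x += s * math.cos(phi)
        y += s * math.sin(phi)
    return x, y, z0

# two strides along +X, then one along +Y, stride 0.4 m
pos = dead_reckon(0.0, 0.0, 0.9, [0.0, 0.0, math.pi / 2], 0.4)
print(pos)
```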
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the correcting, by a sensor fusion algorithm, a motion trajectory of the humanoid robot includes:
The position of the humanoid robot at the next moment t+Δt is predicted from its current position (XR(t), YR(t), ZR(t)) and its movement speed (vx(t), vy(t), vz(t)) at the current moment t: Xpred(t+Δt) = XR(t) + vx(t)·Δt, Ypred(t+Δt) = YR(t) + vy(t)·Δt, Zpred(t+Δt) = ZR(t) + vz(t)·Δt, where Δt is the time interval.
The sensor fusion measurement value is acquired: the X-axis position Xs(t+Δt), the Y-axis position Ys(t+Δt) and the Z-axis position Zs(t+Δt).
And updating a covariance matrix P (t+delta t |t) =P (t) +Q (t) for the predicted position of the humanoid robot at the next moment, wherein P (t) is a state covariance matrix at the current moment, and Q (t) is a process noise covariance matrix.
The Kalman gain is calculated as K(t+Δt) = P(t+Δt|t)·(P(t+Δt|t) + R(t))⁻¹, wherein R(t) is the measurement noise covariance matrix.
The predicted position of the humanoid robot is corrected: XR(t+Δt) = Xpred(t+Δt) + K(t+Δt)·(Xs(t+Δt) − Xpred(t+Δt)), and likewise for YR(t+Δt) and ZR(t+Δt).
The technical effects are as follows: fusing the measured values of multiple sensors and correcting the motion trajectory in real time with a Kalman filter effectively reduces the influence of single-sensor errors and improves the positioning accuracy of the robot; calculating and updating the covariance matrix and the Kalman gain continuously optimizes error control during real-time processing, reduces system uncertainty, and improves the accuracy and efficiency of trajectory prediction and correction; information provided by different sensors (such as an IMU and vision sensors) is effectively fused, fully exploiting the advantages of multi-source data, improving the reliability and interference resistance of the system, and, through the integration, real-time prediction and correction of multi-sensor data, remarkably improving the trajectory precision of the humanoid robot and the stability of the system.
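The predict/correct cycle can be sketched per axis (scalar covariances for clarity; the patent's P, Q and R are covariance matrices over the three coordinates):

```python
def kalman_axis(x_pred, p_prev, q, r, z):
    """One axis of the trajectory correction: covariance prediction
    P(t+dt|t) = P(t) + Q(t), gain K = P/(P + R), and state
    correction x + K*(z - x) against the fused measurement z."""
    p_pred = p_prev + q                 # predicted covariance
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # fuse measurement into prediction
    p_new = (1.0 - k) * p_pred          # posterior covariance
    return x_new, p_new

x, p = kalman_axis(x_pred=1.0, p_prev=1.0, q=0.0, r=1.0, z=3.0)
print(x, p)   # gain 0.5 puts the estimate midway between 1.0 and 3.0
```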
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the method for acquiring a sensor fusion measurement value includes:
The acceleration (ax(t), ay(t), az(t)) of the IMU measuring unit of the humanoid robot at moment t is acquired; the speed of the humanoid robot is calculated as vx(t+Δt) = vx(t) + ax(t)·Δt, vy(t+Δt) = vy(t) + ay(t)·Δt, vz(t+Δt) = vz(t) + az(t)·Δt, and its position as XIMU(t+Δt) = XR(t) + vx(t)·Δt + ½ax(t)·Δt², YIMU(t+Δt) = YR(t) + vy(t)·Δt + ½ay(t)·Δt², ZIMU(t+Δt) = ZR(t) + vz(t)·Δt + ½az(t)·Δt².
The measured value of the IMU measuring unit is thus obtained.
The distance measured by the ultrasonic sensor of the humanoid robot at moment t is acquired as d(t) = vs·Δt/2, wherein vs is the propagation speed of the sound wave and Δt is the time from transmission to reception of the sound wave. The ultrasonic sensor is located at the front of the humanoid robot with a fixed offset (ΔXu, ΔYu, ΔZu) from the robot position (XR(t), YR(t), ZR(t)); the position of the humanoid robot is calculated as XUS(t) = XR(t) + ΔXu + d(t)·cos(θ(t))·cos(φ(t)), YUS(t) = YR(t) + ΔYu + d(t)·cos(θ(t))·sin(φ(t)), ZUS(t) = ZR(t) + ΔZu + d(t)·sin(θ(t)), wherein θ(t) is the pitch angle of sound-wave propagation and φ(t) is its azimuth angle, giving the measured value of the ultrasonic sensor.
The measured value of the IMU measuring unit and the measured value of the ultrasonic sensor are fused to obtain the sensor fusion measurement value: the X-axis position Xs(t+Δt) = w1·XIMU(t+Δt) + w2·XUS(t+Δt), the Y-axis position Ys(t+Δt) = w1·YIMU(t+Δt) + w2·YUS(t+Δt), and the Z-axis position Zs(t+Δt) = w1·ZIMU(t+Δt) + w2·ZUS(t+Δt), wherein w1 and w2 are weight coefficients.
The technical effects are as follows: the IMU provides dynamic information on acceleration and speed, suiting fast motion tracking over short intervals, while the ultrasonic sensor provides distance measurements, suiting accurate ranging of the environment when static or moving slowly; by using the data of the IMU measuring unit and the ultrasonic sensor together, information across acceleration, speed, position and distance is synthesized, the fusion algorithm effectively suppresses the measurement errors and noise of both sensors, the influence of single-sensor error on the final positioning result is reduced, and the position of the robot can be corrected and predicted more accurately, reducing positioning error and improving trajectory-tracking precision.
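The IMU dead-reckoning and weighted-fusion steps can be sketched as follows (the ½·a·Δt² position term and the example weights w1 = 0.6, w2 = 0.4 are assumptions; the patent leaves w1, w2 as tunable coefficients):

```python
def imu_step(x, v, a, dt):
    """One IMU dead-reckoning interval on a single axis:
    v' = v + a*dt and x' = x + v*dt + 0.5*a*dt^2 (the second-order
    term is an assumed completion of the position formula)."""
    return x + v * dt + 0.5 * a * dt * dt, v + a * dt

def fuse(x_imu, x_us, w1=0.6, w2=0.4):
    """Weighted fusion X_s = w1*X_IMU + w2*X_US for one axis;
    example weights only, normally chosen so w1 + w2 = 1."""
    return w1 * x_imu + w2 * x_us

x_imu, v_new = imu_step(x=0.0, v=1.0, a=2.0, dt=0.5)
x_s = fuse(x_imu, 1.0)
print(x_imu, v_new, x_s)
```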
A target recognition and positioning system for a humanoid robot, comprising:
A target positioning module, for calculating the three-dimensional position of the target to be identified by using a double-view monocular vision positioning method.
A target classification module, for performing preliminary detection and classification on the target by using directional-gradient image feature extraction and a support vector machine classifier.
A target tracking module, for tracking the identified target by using a random sampling filtering algorithm.
A positioning module, for positioning the humanoid robot by using a gait-detection-based position algorithm to obtain the motion trajectory of the humanoid robot.
A positioning correction module, for correcting the motion trajectory of the humanoid robot through a sensor fusion algorithm.
The embodiment of the invention has the beneficial effects that:
According to the invention, the three-dimensional position of the target is calculated by the double-view monocular vision method: two cameras arranged at different heights provide the parallax from which real-time three-dimensional reconstruction is achieved.
According to the invention, the directional gradient feature extraction model effectively extracts the edge and structural information of the image, and target detection and classification with the support vector machine classifier fully exploit that edge information; optimization in the high-dimensional feature space improves classification accuracy, accurate target recognition is achieved in complex environments, and a reliable basis is provided for subsequent target tracking and positioning.
The invention tracks the target through a random sampling filtering algorithm (such as particle filtering), updating the target position in real time in each frame of the input image and adapting to scale changes and displacement of the target; feature matching through the color-histogram distribution in the three-dimensional color space copes effectively with appearance changes and background interference, improving the robustness and accuracy of tracking. The computation is simple with low resource consumption and suits high-frame-rate video stream processing.
According to the invention, combining the gait detection algorithm with IMU data realizes positioning of the humanoid robot in a three-dimensional environment, overcoming the limitation that traditional gait detection can only track in a two-dimensional plane; the IMU data strengthen the robustness of the system in dynamic environments, and combining gait detection with IMU angle data improves the positioning precision and trajectory-tracking capability of the robot over complex terrain.
According to the invention, correcting the motion trajectory of the robot through a sensor fusion algorithm (such as Kalman filtering) effectively reduces single-sensor error and improves positioning accuracy; fusing the data of the IMU and the ultrasonic sensor integrates acceleration, speed, position and distance information, real-time correction optimizes error control, and the reliability, interference resistance, positioning accuracy and stability of the system are enhanced.
According to the invention, advanced target recognition, tracking and positioning technologies are comprehensively combined to achieve high-precision target identification and positioning, so that the robot better understands and interacts with its surroundings, maintains stable and accurate behavior in real-time applications, operates effectively in dynamic environments thanks to real-time tracking capability, navigates and works autonomously in complex environments, interacts more precisely with targets in the environment, and improves user experience in scenarios demanding high-precision operation and feedback.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of the target recognition and positioning method of the humanoid robot.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein can be arranged and designed in a wide variety of different configurations.
Referring to fig. 1, a first embodiment of the present invention provides a method for identifying and positioning a target of a humanoid robot, including: calculating the three-dimensional position of the target to be identified by using a double-view monocular vision positioning method; performing preliminary detection and classification on the recognition targets by using an image feature extraction and support vector machine classifier based on the directional gradient; tracking the identification target by using a random sampling filtering algorithm; positioning the humanoid robot by using a position algorithm based on gait detection to obtain a motion track of the humanoid robot; and correcting the motion trail of the humanoid robot through a sensor fusion algorithm.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the calculating of the three-dimensional position of the target to be identified using the double-view monocular vision positioning method includes: providing the humanoid robot with two monocular cameras mounted at different heights, the vertical distance between the two monocular cameras being d; the pixel coordinate of the target to be identified on the image plane of the first monocular camera is P1(x1, y1), and the pixel coordinate on the image plane of the second monocular camera is P2(x2, y2); for the first monocular camera, establishing the geometric relationship between the first planar pixel coordinates P1(x1, y1) of the target to be identified and the actual three-dimensional coordinates (X, Y, Z), x1 = f1·X/D, y1 = f1·Y/D, wherein f1 is the focal length of the first monocular camera and D is the depth information of the target to be identified; for the second monocular camera, establishing the geometric relationship between the second planar pixel coordinates P2(x2, y2) of the target to be identified and the actual three-dimensional coordinates (X, Y, Z), x2 = f2·X/D, y2 = f2·(Y − d)/D, wherein f2 is the focal length of the second monocular camera; calculating the vertical parallax of the target to be identified in the plane images of the two monocular cameras, Δy = y1 − y2; with equal focal lengths f1 = f2 = f, calculating the depth information of the target to be identified, D = f·d/Δy; the three-dimensional position of the target to be identified is (X, Y, Z) = (x1·D/f1, y1·D/f1, D).
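As an illustration only, the parallax-to-depth geometry described above can be sketched as follows; the function name, the centred pixel coordinates, and the assumption of equal focal lengths for both cameras are hypothetical conveniences, not part of the claimed method:

```python
import numpy as np

def triangulate_vertical_stereo(p1, p2, f, d):
    """Estimate the 3-D position of a target from two vertically offset
    monocular cameras (vertical baseline d, shared focal length f).

    p1, p2: (x, y) pixel coordinates in camera 1 and camera 2; image
    coordinates are assumed centred on the principal point and expressed
    in the same units as f."""
    x1, y1 = p1
    _, y2 = p2
    dy = y1 - y2                # vertical parallax Δy = y1 - y2
    if abs(dy) < 1e-9:
        raise ValueError("zero parallax: target at infinity or mismatched points")
    D = f * d / dy              # depth from parallax: D = f·d/Δy
    X = x1 * D / f              # back-project through the pinhole model
    Y = y1 * D / f
    return np.array([X, Y, D])
```

For example, with f = 500 px, baseline d = 0.1 m, and a target 5 m away, a 10 px vertical parallax recovers the depth directly from the single known baseline.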
The technical effects are as follows: the method has the advantages that the depth of the target is calculated by adopting two monocular cameras through parallax information, real-time three-dimensional reconstruction is realized, the relative positions of the two monocular cameras are known base lines, the three-dimensional coordinates of the target can be directly calculated through a simple geometric relationship, the algorithm is simple, the calculation efficiency is high, the required hardware is simple to set, the cost is low, the method is suitable for large-scale practical application, and compared with the traditional binocular stereoscopic vision, the depth information can be obtained only by one base line, the system design is simplified, and the implementation is easy; because the relative position between the two cameras is fixed, the algorithm model is stable, the positioning error is small, and the precision of the system is ensured.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the preliminary detection and classification of the identified target using a direction gradient-based image feature extraction and support vector machine classifier includes: carrying out graying treatment and normalization treatment on the color image containing the identification target to obtain an input image; establishing a directional gradient feature extraction model, taking the input image as the input of the directional gradient feature extraction model, and outputting to obtain a directional gradient feature vector; collecting images containing identification targets as samples, and respectively labeling corresponding category labels for each sample to obtain a data set; designing a support vector machine classifier, training the support vector machine classifier by using the data set, taking the direction gradient feature vector as the input of the trained support vector machine classifier, and outputting a class label to obtain the classification result of the identification target.
The technical effects are as follows: according to the method, the image edge direction information can be fully considered through the direction gradient feature extraction, the target feature is effectively extracted, the accuracy of classification and identification is remarkably improved, the high-dimensional feature space is processed through the kernel skills of the SVM classifier, and the classification effect is effectively optimized. The image features and the classifier are organically combined, so that the system can intelligently classify and identify the appearance features of the targets, has high identification accuracy and high reliability, provides reliable input for follow-up target tracking and positioning, is simple and efficient, is easy to use, can be expanded to target identification of more categories, and is suitable for a robot vision system with lower computing capacity.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the establishing of a directional gradient feature extraction model, taking the input image as the input of the directional gradient feature extraction model, and outputting a directional gradient feature vector includes: establishing a directional gradient feature extraction model comprising an input layer, a gradient calculation layer, a directional gradient histogram layer, a feature vector splicing layer and a normalization layer; the input layer receives the input image subjected to grayscale and normalization processing, wherein each pixel in the input image is denoted I norm (i, j); the gradient calculation layer calculates, for each pixel in the input image, the horizontal gradient in the x-direction, G x(i,j)=I norm(i,j+1)−I norm(i,j−1), and the vertical gradient in the y-direction, G y(i,j)=I norm(i+1,j)−I norm(i−1,j), and calculates the gradient magnitude representing the edge intensity, G(i,j) = √(G x(i,j)² + G y(i,j)²), and the gradient direction representing the edge direction, θ(i,j) = arctan(G y(i,j)/G x(i,j)); the directional gradient histogram layer divides the input image into a plurality of cell units of size N×N, where N is the cell size in pixels; in each cell unit, the gradient magnitude is added to the corresponding direction interval based on the calculated gradient direction; assuming the direction range is divided into 9 intervals, [0°, 20°), [20°, 40°), …, [160°, 180°], the accumulated gradient magnitude is calculated for each direction interval b, H b = Σ G(i,j) over all pixels (i,j) whose gradient direction falls in interval b, obtaining the gradient-direction histogram vector of the cell unit, H = [H 1, H 2, …, H b, …, H m]; the feature vector splicing layer splices the gradient-direction histogram vectors of adjacent cell units in sequence to obtain a block feature vector v; for a block of 2×2 cell units, each cell unit containing 9 direction intervals, the block feature vector is 36-dimensional; the normalization layer normalizes the spliced block feature vector, v norm = v / √(‖v‖² + ε²), obtaining the directional gradient feature vector, wherein ε is a small positive constant.
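A minimal sketch of the five-layer directional gradient feature extraction model described above, assuming a grayscale input, 8×8-pixel cells, 9 direction intervals, and 2×2-cell blocks; the function name and parameter defaults are illustrative only:

```python
import numpy as np

def hog_block_features(img, cell=8, bins=9, eps=1e-6):
    """Directional-gradient (HOG-style) features: central-difference
    gradients, per-cell orientation histograms over [0°, 180°),
    2x2-cell blocks concatenated and normalized."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]      # Gx(i,j) = I(i,j+1) - I(i,j-1)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]      # Gy(i,j) = I(i+1,j) - I(i-1,j)
    mag = np.hypot(gx, gy)                      # edge intensity G(i,j)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned edge direction

    h, w = img.shape
    cy, cx = h // cell, w // cell
    hist = np.zeros((cy, cx, bins))
    for i in range(cy):
        for j in range(cx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((a / (180 / bins)).astype(int), bins - 1)
            for b in range(bins):               # accumulate magnitude per interval
                hist[i, j, b] = m[idx == b].sum()

    blocks = []
    for i in range(cy - 1):
        for j in range(cx - 1):
            v = hist[i:i+2, j:j+2].ravel()      # 2x2 cells -> 36-dim block vector
            blocks.append(v / np.sqrt(np.sum(v**2) + eps**2))
    return np.concatenate(blocks) if blocks else np.zeros(0)
```

On a 16×16 image with 8×8 cells this yields one 2×2 block, i.e. a 36-dimensional feature vector, matching the dimensionality stated in the text.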
The technical effects are as follows: according to the invention, the edge information in the image, including the edge intensity and the direction, can be directly extracted through the gradient calculation layer and the direction gradient histogram layer, the key structure information of the target is effectively captured, the image is divided into a plurality of cell units, the histograms of the gradient directions are constructed in each unit, and the influence of noise and illumination change on the identification effect is effectively resisted through accumulation of the histograms. The directional gradient feature extraction model provided by the invention can be used for efficiently and accurately extracting the edge and structure information of the image, and combining strong classification capability, so that reliable technical support is provided for target identification and positioning of the human-shaped robot, and meanwhile, good real-time performance and robustness are realized, and stable target identification and tracking can be realized in a complex environment.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the designing of a support vector machine classifier, training the support vector machine classifier using the dataset, taking the directional gradient feature vector as the input of the trained support vector machine classifier, and outputting a class label to obtain the classification result of the identified target includes: constructing multi-class support vector machine classifiers by training one support vector machine classifier for each pair of classes, requiring J(J−1)/2 classifiers in total, wherein J is the number of classes of the identification targets; training each support vector machine classifier by minimizing the loss function through an optimization algorithm, L(w, q) = ½‖w‖² + F·Σ max(0, 1 − y i(w·x i + q)) summed over the M samples, wherein F is the penalty coefficient, w is the weight vector, q is the offset term, and M is the number of samples, adjusting the weight vector w and the offset term q until the loss function reaches a preset value; and taking the directional gradient feature vector extracted from one input image as input, inputting it into each pair of trained support vector machine classifiers, each classifier outputting a class label, and taking the majority among the output labels to obtain the classification result of the identification target of the input image, y ∈ {1, 2, …, J}.
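The one-vs-one scheme can be illustrated with a toy subgradient-descent solver for the hinge loss above; this sketch assumes linear kernels and simple separable data, and is not the optimizer the embodiment would use in practice:

```python
import numpy as np
from itertools import combinations

def train_pairwise_svm(X, y, F=1.0, lr=0.01, epochs=300):
    """One-vs-one linear SVMs: J(J-1)/2 classifiers, each minimizing
    0.5*||w||^2 + F * sum_i max(0, 1 - y_i(w.x_i + q))
    by subgradient descent (illustrative, not a production solver)."""
    models = {}
    for a, b in combinations(sorted(set(y)), 2):
        mask = (y == a) | (y == b)
        Xp, yp = X[mask], np.where(y[mask] == a, 1.0, -1.0)
        w, q = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            margins = yp * (Xp @ w + q)
            viol = margins < 1                       # hinge-loss violators
            grad_w = w - F * (yp[viol, None] * Xp[viol]).sum(axis=0)
            grad_q = -F * yp[viol].sum()
            w -= lr * grad_w; q -= lr * grad_q
        models[(a, b)] = (w, q)
    return models

def predict(models, x):
    """Majority vote over the J(J-1)/2 pairwise classifiers."""
    votes = {}
    for (a, b), (w, q) in models.items():
        winner = a if x @ w + q >= 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

For J = 3 classes this trains exactly 3 pairwise classifiers, consistent with the J(J−1)/2 count in the text.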
The technical effects are as follows: by introducing a multi-class classification strategy and combining the feature vector features of the directional gradient, various targets can be effectively identified and classified, and the method has higher accuracy when processing multi-class object identification tasks, and is particularly suitable for target classification tasks in complex scenes.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the tracking of the identification target using a random sampling filtering algorithm includes: the initial position of the identification target is (X 0, Y 0, Z 0); an initial search window with radius R 0 is defined at the initial position of the identification target; for each frame in the input image set, calculating the color histogram distribution in the three-dimensional color space, H(C i) = Σ I(|C i − C(x,y,z)| ≤ ΔC) over the pixels of the target area, wherein C i is a color value in the color space, C(x,y,z) is a pixel color value in the target area, and I is an indicator function whose value is 1 when the color difference satisfies |C i − C(x,y,z)| ≤ ΔC, meaning the pixel color value belongs to the corresponding interval of the color histogram, and 0 otherwise, ΔC being the threshold of the color difference; updating the center position of the identification target, (X t = X t−1 + ΔX t, Y t = Y t−1 + ΔY t, Z t = Z t−1 + ΔZ t), wherein ΔX t, ΔY t, ΔZ t is the displacement of the identification target in the current frame; updating the radius of the search window, R t = R t−1·√(S t/S t−1), wherein S t is the area or volume of the target in the current frame; and updating the position of the identification target in each frame of the input image set to obtain the three-dimensional trajectory of the identification target during tracking, Trajectory = {(X 1, Y 1, Z 1), (X 2, Y 2, Z 2), …, (X t, Y t, Z t)}.
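The indicator-function histogram and the window-update step might be sketched as follows; the helper names, the Euclidean color distance, and the per-frame displacement input are assumptions for illustration:

```python
import numpy as np

def color_histogram(pixels, centers, dC):
    """Histogram over 3-D color space: a pixel is counted in bin C_i
    when |C_i - C(x,y,z)| <= dC (the indicator function in the text)."""
    pixels = np.asarray(pixels, float)
    centers = np.asarray(centers, float)
    d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    return (d <= dC).sum(axis=0).astype(float)

def update_track(pos, radius, dpos, area_prev, area_now):
    """One tracking step: shift the window center by the measured
    displacement and rescale the radius with the target's apparent size."""
    new_pos = tuple(p + d for p, d in zip(pos, dpos))
    new_radius = radius * np.sqrt(area_now / area_prev)  # R_t ∝ sqrt(S_t)
    return new_pos, new_radius
```

Growing the window radius with the square root of the target area lets the search region follow the target through scale changes without manual retuning.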
The technical effects are as follows: the random sampling filtering algorithm can rapidly calculate and update the target position in each frame of the input image, is suitable for processing video streams with high frame rate, and ensures the real-time performance of target tracking; by continuously updating the radius and the position of the search window, the algorithm can adapt to the scale change and displacement of the target, so that even if the target changes in the motion process, the target can be continuously and stably tracked; the color histogram distribution in the three-dimensional color space is used as a target feature, so that the appearance change of the target and the interference of a complex background can be effectively caused; the algorithm not only tracks the two-dimensional plane position of the target, but also combines the position information in the three-dimensional space to output the three-dimensional track of the target, and provides accurate space position information for navigation and operation of the artificial robot in a complex environment. The random sampling filtering algorithm is simple in calculation and low in resource consumption, is suitable for an embedded system or an application scene with limited resources, and reduces dependence on hardware resources while ensuring tracking accuracy.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the positioning of the humanoid robot using a position algorithm based on gait detection includes: the current number of walking steps of the humanoid robot during walking is n(t) = Σ 1(θ L(τ), θ R(τ)) accumulated over τ ≤ t, wherein θ L(t) is the joint angle of the left leg of the humanoid robot during walking, θ R(t) is the joint angle of the right leg during walking, and 1 is the indicator function of gait detection, whose value is 1 when a complete gait cycle is detected; and combining the IMU angle data to obtain the current three-dimensional coordinate position of the humanoid robot, X R(t)=X R(t−1)+s·n(t)·cos(φ(t)), Y R(t)=Y R(t−1)+s·n(t)·sin(φ(t)), Z R(t)=Z R(t−1), wherein φ(t) is the angle variation of the trunk of the humanoid robot around the Z axis and s is the step length spanned in each gait cycle of the humanoid robot.
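A dead-reckoning sketch of the gait-based position algorithm; for simplicity it advances the position by one step length per detected gait cycle (rather than by the cumulative count n(t)), and the per-sample input format is an assumption:

```python
import math

def gait_dead_reckoning(events, s, headings, start=(0.0, 0.0, 0.0)):
    """Gait-based position update, following the text: each detected
    gait cycle advances the robot by step length s along the torso
    heading phi(t) (rotation about the Z axis); Z is unchanged on flat
    ground. `events` is a per-sample 0/1 gait-cycle indicator and
    `headings` the matching phi(t) values in radians (from the IMU)."""
    x, y, z = start
    track = [(x, y, z)]
    for detected, phi in zip(events, headings):
        if detected:                       # gait-detection indicator = 1
            x += s * math.cos(phi)
            y += s * math.sin(phi)
        track.append((x, y, z))
    return track
```

Two detected cycles, one heading east and one north, trace the L-shaped path one would expect from the update equations above.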
The technical effects are as follows: by combining random sampling filtering with three-dimensional position modeling, the limitation that the traditional gait detection position algorithm can only track a target in a two-dimensional plane is overcome, so that the algorithm can be applied to a more complex three-dimensional environment; by fusing various sensor information to perform track correction, errors caused by gait detection and IMU detection are effectively reduced, and robustness of the system in a dynamic environment is enhanced; the position algorithm based on gait detection is used in combination with visual information, so that the robot can be autonomously positioned and can keep accurate track tracking in complex terrains.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the correcting of the motion trajectory of the humanoid robot through a sensor fusion algorithm includes: predicting the position of the humanoid robot at the next moment t+Δt according to the current position (X R(t), Y R(t), Z R(t)) of the humanoid robot and the movement speed (v x(t), v y(t), v z(t)) at the current moment t, X̂(t+Δt) = X R(t) + v x(t)·Δt, Ŷ(t+Δt) = Y R(t) + v y(t)·Δt, Ẑ(t+Δt) = Z R(t) + v z(t)·Δt, wherein Δt is the time interval; collecting the sensor fusion measurement value, wherein the X-axis position is X s(t+Δt), the Y-axis position is Y s(t+Δt), and the Z-axis position is Z s(t+Δt); for the predicted position of the humanoid robot at the next moment, updating the covariance matrix, P(t+Δt|t) = P(t) + Q(t), wherein P(t) is the state covariance matrix at the current moment and Q(t) is the process noise covariance matrix; calculating the Kalman gain, K(t+Δt) = P(t+Δt|t)·(P(t+Δt|t) + R(t))⁻¹, wherein R(t) is the measurement noise covariance matrix; and correcting the predicted position of the humanoid robot, X(t+Δt) = X̂(t+Δt) + K(t+Δt)·(X s(t+Δt) − X̂(t+Δt)), and likewise for the Y-axis and Z-axis positions.
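The predict-update cycle reduces, per axis, to the following sketch; treating P, Q and R as scalars is a simplification of the matrix form used in the text:

```python
def kalman_axis_step(x_prev, v, dt, P, Q, R, z):
    """Single-axis constant-velocity Kalman step: predict with the
    current velocity, inflate the covariance with process noise Q,
    then correct toward the fused measurement z."""
    x_pred = x_prev + v * dt               # position prediction
    P_pred = P + Q                         # covariance update P(t+dt|t)
    K = P_pred / (P_pred + R)              # Kalman gain
    x_new = x_pred + K * (z - x_pred)      # corrected position
    P_new = (1.0 - K) * P_pred             # posterior covariance
    return x_new, P_new
```

With equal prediction and measurement uncertainty the gain is 0.5 and the corrected position lands midway between prediction and measurement, which is the error-balancing behaviour the technical-effects paragraph describes.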
The technical effects are as follows: according to the invention, the measured values of various sensors are fused and the motion trail is corrected in real time by combining with the Kalman filter, so that the influence of errors of a single sensor is effectively reduced, and the positioning accuracy of the robot is improved; by calculating and updating the covariance matrix and the Kalman gain, the error control can be continuously optimized in real-time processing, the uncertainty of a system is reduced, and the accuracy and the efficiency of track prediction and correction are improved; the information provided by different sensors (such as an IMU (inertial measurement unit), a vision sensor and the like) is effectively fused, the advantages of multi-source data are fully utilized, the reliability and the anti-interference capability of the system are improved, and the motion track precision of the humanoid robot and the stability of the system are remarkably improved through integration, real-time prediction and correction of the multi-sensor data.
In a preferred embodiment of the present invention, in the method for identifying and positioning a target of a humanoid robot, the method for acquiring the sensor fusion measurement value includes: acquiring the acceleration (a x(t), a y(t), a z(t)) measured by the IMU measuring unit of the humanoid robot at moment t, calculating the speed of the humanoid robot, v x(t+Δt)=v x(t)+a x(t)·Δt, v y(t+Δt)=v y(t)+a y(t)·Δt, v z(t+Δt)=v z(t)+a z(t)·Δt, and calculating the position of the humanoid robot, X IMU(t+Δt) = X R(t) + v x(t)·Δt + ½·a x(t)·Δt², Y IMU(t+Δt) = Y R(t) + v y(t)·Δt + ½·a y(t)·Δt², Z IMU(t+Δt) = Z R(t) + v z(t)·Δt + ½·a z(t)·Δt², obtaining the IMU measuring unit measurement value; acquiring the distance measured by the ultrasonic sensor of the humanoid robot at moment t, d(t) = v s·Δt s/2, wherein v s is the propagation speed of the sound wave and Δt s is the time from transmission to reception of the sound wave; the ultrasonic sensor is positioned directly in front of the humanoid robot with a fixed offset (ΔX u, ΔY u, ΔZ u) from the position (X R(t), Y R(t), Z R(t)) of the humanoid robot, and the position of the humanoid robot is calculated as X US(t)=X R(t)+ΔX u+d(t)·cos(θ(t))·cos(φ(t)), Y US(t)=Y R(t)+ΔY u+d(t)·cos(θ(t))·sin(φ(t)), Z US(t)=Z R(t)+ΔZ u+d(t)·sin(θ(t)), wherein θ(t) is the pitch angle of sound-wave propagation and φ(t) is the azimuth angle of sound-wave propagation, obtaining the ultrasonic sensor measurement value; and fusing the IMU measuring unit measurement value and the ultrasonic sensor measurement value to obtain the sensor fusion measurement value, wherein the X-axis position is X s(t+Δt)=w 1·X IMU(t+Δt)+w 2·X US(t+Δt), the Y-axis position is Y s(t+Δt)=w 1·Y IMU(t+Δt)+w 2·Y US(t+Δt), and the Z-axis position is Z s(t+Δt)=w 1·Z IMU(t+Δt)+w 2·Z US(t+Δt), w 1 and w 2 being weight coefficients.
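The two measurement paths and their weighted fusion can be sketched as below; the weight values and the speed of sound are illustrative assumptions, not values fixed by the embodiment:

```python
def imu_position(p, v, a, dt):
    """IMU dead reckoning over one interval: constant-acceleration
    kinematics x + v*dt + 0.5*a*dt^2 applied per axis."""
    return tuple(pi + vi * dt + 0.5 * ai * dt * dt
                 for pi, vi, ai in zip(p, v, a))

def ultrasonic_range(t_flight, v_sound=343.0):
    """Round-trip time of flight to distance: d = v_s * t / 2."""
    return v_sound * t_flight / 2.0

def fuse_positions(p_imu, p_us, w1=0.6, w2=0.4):
    """Weighted fusion of the IMU and ultrasonic position estimates
    (w1, w2 are assumed to sum to 1): X_s = w1*X_IMU + w2*X_US,
    and likewise for the Y and Z axes."""
    return tuple(w1 * a + w2 * b for a, b in zip(p_imu, p_us))
```

Weighting lets the fast but drifting IMU estimate dominate during motion while the slower ultrasonic range pulls the fused position back toward an absolute reference, which is the error-suppression effect described in the following paragraph.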
The technical effects are as follows: the IMU provides dynamic information of acceleration and speed, is suitable for rapid motion tracking in a short time, and the ultrasonic sensor provides a distance measurement value, so that the IMU is suitable for accurate distance measurement of the environment in static or slow motion; by simultaneously utilizing the data of the IMU measuring unit and the ultrasonic sensor, the information of multiple dimensions such as acceleration, speed, position, distance and the like is synthesized, the measuring errors and noise of the IMU and the ultrasonic sensor can be effectively restrained through a fusion algorithm, the influence of the errors of a single sensor on a final positioning result is reduced, the position of the robot can be corrected and predicted more accurately, the positioning errors are reduced, and the track tracking precision is improved.
A second embodiment of the present invention provides a target recognition and localization system of a humanoid robot, including: the target positioning module is used for calculating the three-dimensional position of the target to be identified by using a double-view monocular vision positioning method; the target classification module is used for carrying out preliminary detection and classification on the identification targets by using the image feature extraction and support vector machine classifier based on the directional gradient; the target tracking module is used for tracking the identification target by using a random sampling filtering algorithm; the positioning module is used for positioning the humanoid robot by using a position algorithm based on gait detection to obtain the motion trail of the humanoid robot; and the positioning correction module is used for correcting the motion trail of the humanoid robot through a sensor fusion algorithm.
The computer program product of the target recognition and positioning method and device of the humanoid robot provided by the embodiments of the present invention comprises a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method in the foregoing method embodiment, and for the specific implementation reference may be made to the method embodiment, which is not repeated here.
Specifically, the storage medium can be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above target recognition and positioning method of the humanoid robot can be executed, completing target recognition and positioning for the humanoid robot, helping the humanoid robot understand and recognize its environment, and obtaining the relative position of the humanoid robot and the target object, thereby facilitating subsequent work.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of the technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411143152.7A CN118990480B (en) | 2024-08-20 | 2024-08-20 | A humanoid robot target recognition and positioning method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118990480A true CN118990480A (en) | 2024-11-22 |
| CN118990480B CN118990480B (en) | 2025-05-09 |
Family
ID=93490089
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411143152.7A Active CN118990480B (en) | 2024-08-20 | 2024-08-20 | A humanoid robot target recognition and positioning method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118990480B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120347783A (en) * | 2025-06-24 | 2025-07-22 | 四川译企科技有限公司 | Multi-degree-of-freedom robot control method and system |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109344768A (en) * | 2018-09-29 | 2019-02-15 | 南京理工大学 | Pointer breaker recognition methods based on crusing robot |
| CN111368755A (en) * | 2020-03-09 | 2020-07-03 | 山东大学 | A vision-based approach to autonomous pedestrian following for a quadruped robot |
| EP3879435A1 (en) * | 2019-06-28 | 2021-09-15 | Cubelizer S.L. | Method for analysing the behaviour of people in physical spaces and system for said method |
| CN118034267A (en) * | 2023-12-27 | 2024-05-14 | 李志刚 | Intelligent robot obstacle avoidance fine positioning method based on face following |
Non-Patent Citations (1)
| Title |
|---|
| Li Chao: "Research on Multi-Sensor-Based Localization and Tracking Methods for the NAO Robot", China Master's Theses Full-text Database, Information Science and Technology Series, 15 February 2018 (2018-02-15), pages 2-69 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118990480B (en) | 2025-05-09 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |