CN115871657A - Anti-collision and early warning method and system based on fusion of laser ranging and image ranging
Abstract
The embodiment of the application provides an anti-collision and early warning method based on the fusion of laser ranging and image ranging. The method comprises the following steps: acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle by a camera according to a binocular vision ranging method; acquiring a third distance between the object to be measured and the target vehicle according to the first distance and the second distance; and performing collision early warning on the target vehicle when the third distance is less than or equal to the safe distance of the target vehicle. The method addresses the prior-art problem of relying on a single ranging method: combining multiple ranging methods yields more accurate distance measurement and prevents the early warning system from raising false alarms.
Description
Technical Field
The application relates to the field of vehicle safety, in particular to an anti-collision and early warning method and system based on fusion of laser ranging and image ranging.
Background
Forklifts are heavy-duty loading equipment used in large numbers by enterprises of all kinds for transferring, loading, unloading and repositioning goods. As heavy goods transport tools, forklifts perform a special category of operation, often working in crowded environments such as factories and warehouses stacked with large amounts of goods; the working environment is therefore characterized by obstructed sight lines, narrow spaces, heavy loads and noise. In addition, drivers have blind spots with respect to the safety distance around and above the vehicle. For this special type of operation, anti-collision and early warning equipment on the vehicle is therefore particularly important.
At present, panoramic imaging and radar ranging are mature technologies on passenger vehicles, but applying them to a forklift requires optimization for the forklift's usage scenarios and structural characteristics. In the prior art, forklift systems provide only panoramic imaging or only radar ranging; with a single ranging method they are prone to false alarms and suffer from large ranging errors.
Disclosure of Invention
Based on this, it is necessary to provide a collision avoidance and early warning method and system based on the fusion of laser ranging and image ranging, which can achieve diverse and accurate ranging.
In a first aspect, the application provides an anti-collision and early warning method based on fusion of laser ranging and image ranging. The method comprises the following steps:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, carrying out collision early warning on the target vehicle.
In one embodiment, the performing collision warning on the target vehicle when the third distance is less than or equal to the safe distance of the target vehicle includes: acquiring the time for which the third distance remains unchanged when the third distance is less than or equal to the safe distance of the target vehicle; and performing collision warning on the target vehicle according to the time for which the third distance remains unchanged and a holding time threshold.
In one embodiment, the performing collision warning on the target vehicle according to the time for which the third distance remains unchanged and the holding time threshold includes: stopping the collision warning for the target vehicle when the time for which the third distance remains unchanged is greater than or equal to the holding time threshold; and performing collision warning on the target vehicle when the time for which the third distance remains unchanged is less than the holding time threshold.
In one embodiment, the method further comprises: acquiring vehicle safety information of the target vehicle; the vehicle safety information comprises vehicle speed, brake performance information, load information and road condition information; and determining the safe distance of the target vehicle according to the vehicle safety information.
In one embodiment, the method further comprises: when the target vehicle is transporting in reverse, acquiring image information behind the target vehicle through a camera; and displaying the image information.
In one embodiment, the obtaining, by the lidar, a first distance between the object to be measured and the target vehicle includes: emitting a laser beam to the object to be detected through the laser radar, and acquiring emission time; the object to be measured is used for receiving the laser beam and reflecting the laser beam to the laser radar; under the condition that the laser radar receives the laser beam reflected by the object to be detected, acquiring receiving time; determining a difference between the receive time and the transmit time as a time of flight; and calculating the first distance according to the flight time.
In one embodiment, the calculating, by a camera according to a binocular vision ranging method, a second distance between the object to be measured and the target vehicle includes: acquiring focal lengths and base lines of a first camera and a second camera and acquiring parallax of the first camera and the second camera; the focal lengths of the first camera and the second camera are consistent; the baseline is a distance between a first focus of the first camera and a second focus of the second camera; the parallax is used for indicating the difference of the first camera and the second camera shooting the same object; and acquiring the second distance according to the focal length, the baseline and the parallax.
In one embodiment, the obtaining the parallax of the first camera and the second camera includes: acquiring a first image and a second image, wherein the first image is an image of the object to be measured shot by the first camera and the second image is an image of the object to be measured shot by the second camera; correcting the first image and the second image, the correction being used to ensure that the first image and the second image lie in the same plane and are parallel to each other; and matching the first image with the second image to obtain the parallax, the parallax indicating the correspondence between each first pixel point on the first image and the matched second pixel point on the second image.
In one embodiment, the obtaining a third distance between the object to be measured and the target vehicle according to the first distance and the second distance includes: extracting features from the first distance to obtain a first feature vector corresponding to the first distance; extracting features from the second distance to obtain a second feature vector corresponding to the second distance; performing feature fusion and similarity comparison on the first feature vector and the second feature vector to obtain a fusion feature vector; and correcting the first distance and the second distance based on the fusion feature vector to obtain the third distance.
In a second aspect, the application further provides an anti-collision and early warning device based on the fusion of laser ranging and image ranging. The device comprises:
the acquisition module is used for acquiring a first distance between an object to be measured and a target vehicle through a laser radar and calculating a second distance between the object to be measured and the target vehicle by a camera according to a binocular vision ranging method;
the fusion module is used for acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and the early warning module is used for carrying out collision early warning on the target vehicle under the condition that the third distance is less than or equal to the safe distance of the target vehicle.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, carrying out collision early warning on the target vehicle.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, carrying out collision early warning on the target vehicle.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, carrying out collision early warning on the target vehicle.
In the anti-collision and early warning method and system based on the fusion of laser ranging and image ranging, laser radar ranging and image ranging are combined: the first distance between the object to be measured and the target vehicle is obtained through the laser radar, and the second distance between the object to be measured and the target vehicle is calculated by the camera according to the binocular vision ranging method; a third distance between the object to be measured and the target vehicle is then acquired according to the first distance and the second distance; and collision warning is performed on the target vehicle when the third distance is less than or equal to the safe distance. The embodiment of the application thus solves the prior-art problem of a single ranging method and, by combining multiple ranging methods, achieves more accurate distance measurement, preventing the early warning system from raising false alarms.
Drawings
FIG. 1 is a diagram of an application environment of a collision avoidance and early warning method based on the fusion of laser ranging and image ranging in one embodiment;
FIG. 2 is a schematic flow chart of an embodiment of a collision avoidance and early warning method based on fusion of laser ranging and image ranging;
FIG. 3 is a schematic flow chart illustrating a process of obtaining a first distance between an object to be measured and a target vehicle by using a lidar in one embodiment;
FIG. 4 is a schematic flowchart illustrating a process of calculating a second distance between the object to be measured and the target vehicle by the camera according to a binocular vision distance measuring method according to an embodiment;
FIG. 5 is a schematic flow chart illustrating an embodiment of obtaining a third distance between the object to be measured and the target vehicle according to the first distance and the second distance;
FIG. 6 is a schematic flow chart of a collision avoidance and early warning method based on the fusion of laser ranging and image ranging in another embodiment;
FIG. 7 is a schematic diagram illustrating a process of obtaining a first distance between an object to be measured and a target vehicle by using a lidar in another embodiment;
FIG. 8 is a schematic view illustrating a process of calculating a second distance between the object to be measured and the target vehicle by the camera according to a binocular vision distance measuring method in another embodiment;
FIG. 9 is a schematic flow chart illustrating a process of obtaining a third distance between the object to be measured and the target vehicle according to the first distance and the second distance in another embodiment;
FIG. 10 is a block diagram of an embodiment of a collision avoidance and early warning apparatus based on the fusion of laser ranging and image ranging;
FIG. 11 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The anti-collision and early warning method based on the fusion of laser ranging and image ranging provided by the embodiment of the application can be applied to the application environment shown in FIG. 1. The ranging terminal 101 and the intelligent control system 102 may communicate in a wired or wireless manner. The intelligent control system 102 may be a backend server. A data storage system may store the data that the intelligent control system 102 needs to process; it may be integrated on the intelligent control system 102, or placed on a cloud server or another network server.
In the embodiment of the present application, the ranging terminal 101 may include, but is not limited to, a plurality of sensors and a plurality of cameras. The plurality of sensors, i.e., the lidar, may acquire laser measurement information; the cameras can acquire 360-degree holographic images of the target vehicle and can comprise binocular cameras which can acquire image measurement information; the ranging terminal 101 may transmit the obtained laser measurement information and image measurement information to the intelligent control system 102.
The intelligent control system 102 may obtain the first distance and the second distance from the laser measurement information and the image measurement information respectively, and may acquire a third distance between the object to be measured and the target vehicle according to the first distance and the second distance. The intelligent control system 102 may also calculate a safe distance from target vehicle device information, which may include, but is not limited to, the vehicle speed and brake performance of the target vehicle as well as its load information and road condition information; the safe distance indicates the threshold distance below which the target vehicle risks colliding with the object to be measured. In addition, when the third distance is less than or equal to the safe distance, the intelligent control system 102 may send an early warning signal and display it to the driver through the on-board screen.
The distance measuring terminal 101 can be but is not limited to various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices can be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, cameras, sensors and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like.
In one embodiment, as shown in fig. 2, a collision avoidance and early warning method based on laser ranging and image ranging fusion is provided, which may include the following steps:
step S201, a first distance between the object to be measured and the target vehicle is obtained through the laser radar, and a second distance between the object to be measured and the target vehicle is obtained through calculation of the camera according to a binocular vision ranging method.
The first distance is the distance between the object to be measured and the target vehicle obtained through laser radar measurement; the second distance is the distance between the object to be measured and the target vehicle, which is calculated by the camera according to a binocular vision ranging method.
In the embodiment of the application, laser radar ranging and image ranging are combined to achieve more accurate collision warning. In some possible implementations, laser radar devices can be arranged at the front, rear, left, right and top of the forklift; the laser radar measures by emitting laser beams, obtaining laser measurement information from which the distance between the forklift and the object to be measured, i.e., the first distance, is derived. In addition, a 360-degree holographic imaging device can be mounted on top of the forklift, which can include a binocular camera. On the one hand, this enables all-around shooting of the forklift's surroundings, with image information for each direction transmitted to and displayed on the on-board display screen, so that the driver can check the environment in each direction in real time. On the other hand, the binocular camera can measure using the binocular vision ranging method to obtain image measurement information, from which the distance between the forklift and the object to be measured, i.e., the second distance, is obtained.
In some possible implementations, the safe distance may be calculated from the target vehicle device information.
The target vehicle device information may include, but is not limited to, vehicle speed and braking performance of the forklift.
In general, many factors affect the braking and stopping distance, such as vehicle speed, road condition, weather, load and driver response, so the determination of the safe distance also needs to take these factors into account.
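The patent does not give a concrete formula for the safe distance. A minimal sketch, assuming a standard stopping-distance model (reaction distance plus braking distance) with illustrative load and road-condition correction factors; all parameter names and defaults here are assumptions, not values from the patent:

```python
def safe_distance_m(speed_mps: float, decel_mps2: float, reaction_s: float,
                    load_factor: float = 1.0, road_factor: float = 1.0,
                    margin_m: float = 0.5) -> float:
    """Stopping-distance estimate: reaction distance plus braking distance.

    speed_mps   -- current vehicle speed (m/s)
    decel_mps2  -- achievable braking deceleration (m/s^2)
    reaction_s  -- driver/system reaction time (s)
    load_factor -- >= 1.0 when heavily loaded (braking takes longer)
    road_factor -- >= 1.0 on wet or slippery ground
    margin_m    -- fixed safety margin (m)
    """
    reaction_dist = speed_mps * reaction_s
    braking_dist = speed_mps ** 2 / (2.0 * decel_mps2)
    return reaction_dist + braking_dist * load_factor * road_factor + margin_m
```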
Step S202, acquiring a third distance between the object to be measured and the target vehicle according to the first distance and the second distance.
The descriptions of the first distance and the second distance may refer to the related description in step S201, and are not repeated herein.
The third distance combines the characteristics of the first distance and the second distance and, after correction by the algorithm, is closer to the actual distance.
In the embodiment of the present application, the third distance may be displayed on the on-board display screen and updated in real time, for example once every 100 milliseconds.
Step S203, performing collision early warning on the target vehicle when the third distance is less than or equal to the safe distance of the target vehicle.
When the third distance is less than or equal to the safe distance, the intelligent control system judges that the target vehicle, such as a forklift, is at risk of collision, and an early warning signal can be issued through the on-board speaker and the on-board display screen.
In some possible implementations, the driver may also set a minimum safe distance according to the working environment: in a relatively narrow space the minimum safe distance may be correspondingly smaller, while in an open space it may be appropriately larger. When the distance between the target vehicle and the object to be measured is less than or equal to the minimum safe distance, the intelligent control system can send out an early warning signal; when the third distance is less than or equal to the safe distance of the target vehicle, collision warning is performed on the target vehicle.
In the anti-collision and early warning method and system based on the fusion of laser ranging and image ranging, laser radar ranging and image ranging are combined: the first distance between the object to be measured and the target vehicle is obtained through the laser radar, and the second distance between the object to be measured and the target vehicle is calculated by the camera according to the binocular vision ranging method; a third distance between the object to be measured and the target vehicle is then acquired according to the first distance and the second distance; and collision warning is performed on the target vehicle when the third distance is less than or equal to the safe distance. The embodiment of the application thus solves the prior-art problem of a single ranging method and, by combining multiple ranging methods, achieves more accurate distance measurement, preventing the early warning system from raising false alarms.
In some embodiments, in the case that the third distance is less than or equal to the safe distance of the target vehicle, performing collision warning on the target vehicle may include:
step 1, under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, obtaining the time for which the third distance is kept unchanged.
In the embodiment of the present application, the third distance may be displayed on the on-board display screen and updated in real time, for example once every 100 milliseconds. The driver can therefore view the third distance in real time, and the intelligent control system can record the time information carried with each third distance, namely the ranging time at which it was measured. The time for which the third distance remains unchanged can then be obtained from the ranging times of the successive third distances.
Step 2, performing collision warning on the target vehicle according to the time for which the third distance remains unchanged and a holding time threshold.
The holding time threshold may be a preset minimum time for which the third distance must remain unchanged before the collision warning for the target vehicle is stopped.
In some embodiments, performing collision warning on the target vehicle according to the time for which the third distance remains unchanged and the holding time threshold may include:
Step 1, stopping the collision warning for the target vehicle when the time for which the third distance remains unchanged is greater than or equal to the holding time threshold.
In some possible implementations, the target vehicle may be a forklift. Considering forklift operating conditions, the loaded goods may sit directly in front of the forklift; when the time for which the third distance remains unchanged is greater than or equal to the holding time threshold, the object to be measured is likely the carried goods, and the collision warning for the target vehicle may be stopped.
Step 2, performing collision warning on the target vehicle when the time for which the third distance remains unchanged is less than the holding time threshold.
When the time for which the third distance remains unchanged is less than the holding time threshold, the object to be measured can be identified as an obstacle, and collision warning can be performed on the target vehicle. A sketch of this hold-time logic follows.
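A minimal sketch of the hold-time filter, assuming the third distance arrives as periodic timestamped readings; the class name and the tolerance band are illustrative assumptions, not taken from the patent:

```python
class HoldTimeFilter:
    """Suppress warnings for an object (e.g. the carried goods) whose
    distance stays constant for at least the holding time threshold."""

    def __init__(self, safe_distance_m: float, hold_threshold_s: float,
                 tolerance_m: float = 0.05):
        self.safe_distance_m = safe_distance_m
        self.hold_threshold_s = hold_threshold_s
        self.tolerance_m = tolerance_m   # readings within this band count as "unchanged"
        self._last_distance = None
        self._unchanged_since = None

    def update(self, third_distance_m: float, timestamp_s: float) -> bool:
        """Return True if a collision warning should be raised for this reading."""
        if third_distance_m > self.safe_distance_m:
            self._last_distance = None      # outside the safe distance: no warning,
            self._unchanged_since = None    # and the hold timer is reset
            return False
        if (self._last_distance is not None
                and abs(third_distance_m - self._last_distance) <= self.tolerance_m):
            if timestamp_s - self._unchanged_since >= self.hold_threshold_s:
                self._last_distance = third_distance_m
                return False                # held long enough: treat as carried goods
        else:
            self._unchanged_since = timestamp_s  # distance changed: restart the clock
        self._last_distance = third_distance_m
        return True                         # within safe distance, not yet held long enough
```

For example, with `HoldTimeFilter(safe_distance_m=2.0, hold_threshold_s=3.0)` and readings every 100 milliseconds, a pallet held 1.5 m in front of the forklift would trigger warnings for the first 3 seconds and then be treated as carried goods.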
In some embodiments, the collision avoidance and early warning method based on laser ranging and image ranging fusion provided in the embodiment of the present application may further include:
step 1, vehicle safety information of a target vehicle is obtained.
The vehicle safety information may include, but is not limited to, vehicle speed, brake performance information, load information, road condition information, and the like.
And 2, determining the safe distance of the target vehicle according to the vehicle safety information.
In some possible implementations, the safe distance may be calculated from vehicle safety information of the target vehicle.
The vehicle safety information may include, but is not limited to, vehicle speed, brake performance information, load information, road condition information, and the like.
In general, many factors affect the braking and stopping distance, such as vehicle speed, road condition, weather, load and driver response, so the determination of the safe distance also needs to take these factors into account.
In some embodiments, the collision avoidance and early warning method based on laser ranging and image ranging fusion provided in the embodiment of the present application may further include:
step 1, acquiring image information behind the target vehicle through a camera under the condition that the target vehicle carries out backward transportation; and displaying the image information.
In some embodiments, as shown in fig. 3, the obtaining the first distance between the object to be measured and the target vehicle by the lidar may include:
step S301, emitting a laser beam to an object to be measured through a laser radar, and acquiring emission time; the object to be measured is used for receiving the laser beam and reflecting the laser beam to the laser radar.
In the embodiment of the application, the target vehicle can be a forklift, and laser radar devices can be arranged at the front, rear, left, right and top of the forklift. In addition, a 360-degree holographic imaging device can be installed on top of the forklift, enabling all-around shooting of the forklift's surroundings; image information for each direction is transmitted to and displayed on the on-board display screen, so that the driver can check the environment in each direction in real time. When an object to be measured is found in a certain direction of the forklift, the intelligent control system can control the laser radar for that direction to emit a laser beam toward the object to be measured and record the emission time of the beam. The laser beam is reflected back to the laser radar upon reaching the object to be measured.
Step S302, under the condition that the laser radar receives the laser beam reflected by the object to be measured, the receiving time is obtained.
When the laser radar receives the beam reflected by the object to be measured in response to the emitted laser beam, the intelligent control system can record the receiving time.
In step S303, the difference between the reception time and the transmission time is determined as the flight time.
And step S304, calculating to obtain a first distance according to the flight time.
The distance between the forklift and the object to be measured as measured by the laser radar is defined as the first distance. The flight time is the time the laser beam takes to travel twice the first distance (out to the object and back), so half the flight time corresponds to one traversal of the first distance; the first distance can then be calculated from the flight time and the speed of light.
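This is the standard time-of-flight relation d = c * Δt / 2; a minimal sketch:

```python
SPEED_OF_LIGHT_MPS = 299_792_458.0

def first_distance_m(emit_time_s: float, receive_time_s: float) -> float:
    """Time-of-flight ranging: the beam covers the distance twice,
    so d = c * (t_receive - t_emit) / 2."""
    time_of_flight_s = receive_time_s - emit_time_s
    return SPEED_OF_LIGHT_MPS * time_of_flight_s / 2.0
```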
In some embodiments, as shown in fig. 4, the calculating, by the camera according to the binocular vision distance measuring method, the second distance between the object to be measured and the target vehicle may include:
step S401, acquiring focal lengths and base lines of the first camera and the second camera, and acquiring a parallax of the first camera and the second camera.
The focal lengths of the first camera and the second camera are consistent; the baseline is the distance between the first focus of the first camera and the second focus of the second camera; the parallax is used for indicating the difference of the first camera and the second camera shooting the same object.
The 360-degree holographic imaging device on top of the forklift can comprise several cameras and a binocular camera pair; the binocular camera can measure using the binocular vision ranging method to obtain image measurement information, from which the distance between the forklift and the object to be measured, i.e., the second distance, is obtained.
The binocular camera can include a first camera and a second camera. The binocular vision ranging method is introduced in the following steps:
The principle of a binocular camera is similar to that of the human eye. Human eyes can perceive the distance of an object because the two eyes form slightly different images of the same object; this difference is called parallax. The farther the object, the smaller the parallax; the nearer, the greater the parallax. Likewise, the images the first camera and the second camera form of the same object differ, i.e., there is a parallax. In some possible implementations, the method may include:
step 1, calibrating the first camera and the second camera.
The calibration can be used for obtaining internal parameters, external parameters and distortion parameters of the first camera and the second camera, and determining the mapping relation between the three-dimensional coordinate system and the camera image coordinate system. In the embodiment of the application, the focal length and the baseline of the camera can be obtained through calibration.
And 2, respectively shooting the object to be detected through the first camera and the second camera to obtain a first image and a second image.
And 3, carrying out binocular correction on the first image and the second image.
The correction is used to ensure that the first image and the second image lie in the same plane and are parallel to each other; in this case, any pixel point on the first image and its corresponding pixel point on the second image are guaranteed to share the same row index, which facilitates the subsequent acquisition of parallax.
And 4, matching the first image with the second image to obtain the parallax.
The parallax may also be used to indicate the correspondence between each first pixel point on the first image and the matched second pixel point on the second image.
And step 5, acquiring a second distance according to the focal length, the baseline and the parallax.
Step S402, acquiring a second distance according to the focal length, the base line and the parallax.
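The underlying relation is the standard stereo depth formula Z = f * B / d: depth equals the focal length (in pixels) times the baseline, divided by the disparity, for a rectified image pair. A minimal sketch; the choice of OpenCV's SGBM matcher is an illustrative assumption, since the patent does not name a matching algorithm:

```python
import cv2

def second_distance_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo depth: Z = f * B / d, for a rectified image pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Sketch of the matching step (images assumed already rectified).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # SGBM output is fixed-point, scaled by 16
```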
In some embodiments, as shown in fig. 5, the obtaining a third distance between the object to be measured and the target vehicle according to the first distance and the second distance may include:
step S501, extracting features from the first distance to obtain a first feature vector corresponding to the first distance.
In this embodiment of the application, the first feature vector may be obtained through a deep residual network (ResNet). The first distance may comprise a plurality of first distances.
The ResNet may include at least one convolutional layer and at least one pooling layer; the convolutional layer is used to extract features of the input data and the pooling layer is used to sample the input data. Both the convolutional layer and the pooling layer include activation functions.
Specifically, the convolutional layer may be used to extract initial features from the plurality of first distances. First, the plurality of first distances are converted into first distance vectors, which can be combined into a distance vector matrix. The matrix is then input to the convolutional layer and convolved with a convolution kernel, i.e., an inner product is taken between the distance vector matrix and the kernel, giving the convolution result for the matrix. The convolution result is nonlinearly transformed by an activation function and a bias vector is added, yielding an initial feature vector. The initial feature vector is input to the pooling layer for feature sampling; the sampling result is again nonlinearly transformed by the activation function and a bias vector is added, yielding the first feature vector.
The feature vectors can also be obtained through other network models, such as a recurrent neural network or a long short-term memory network, which is not limited here.
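A minimal PyTorch sketch of such a conv-pool extractor; all layer sizes are illustrative assumptions, since the patent specifies no architecture details:

```python
import torch
import torch.nn as nn

class DistanceFeatureExtractor(nn.Module):
    """Conv -> activation -> pool -> activation over a window of recent
    distance readings, following the scheme described above."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(1, feat_dim, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, distances: torch.Tensor) -> torch.Tensor:
        x = distances.unsqueeze(1)   # (batch, window) -> (batch, 1, window)
        x = self.act(self.conv(x))   # convolution + activation (bias added by the layer)
        x = self.act(self.pool(x))   # pooling + activation
        return x.squeeze(-1)         # (batch, feat_dim) first feature vector
```

A window of, say, the last 16 first-distance readings would be fed in as a (batch, 16) tensor; the second feature vector is obtained the same way from the second-distance readings.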
Step S502, extracting features from the second distance to obtain a second feature vector corresponding to the second distance.
For the method for obtaining the second feature vector, reference may be made to the related description of the first feature vector obtaining method in step S501, and details are not repeated here.
Step S503, feature fusion and similarity comparison are carried out on the first feature vector and the second feature vector to obtain a fusion feature vector.
In some embodiments, step S503 may include:
inputting the first feature vector and the second feature vector into a fusion module to obtain a fusion feature vector;
the fusion module comprises at least one convolution layer and at least one pooling layer; wherein,
the convolutional layer is used to extract the features of the input data, and the pooling layer is used to sample the input data.
The fusion module may be a convolutional neural network, which may include at least one convolutional layer and at least one pooling layer. The convolutional layer is used for extracting the characteristics of the multi-modal feature vectors to obtain initial feature vectors, and the pooling layer is used for sampling the initial feature vectors to obtain more accurate fusion feature vectors. Both the convolutional and pooling layers include activation functions.
Specifically, the server inputs the first feature vector and the second feature vector to the convolutional layer and convolves them with a convolution kernel, i.e., takes the inner product of the first feature vector, the second feature vector and the kernel, to obtain the corresponding convolution result. The convolution result is nonlinearly transformed by an activation function and a bias vector is added, yielding an initial feature vector. The initial feature vector is input to the pooling layer for feature sampling; the sampling result is then nonlinearly transformed by the activation function and a bias vector is added, yielding the fused feature vector.
In other embodiments, the server may also obtain the fusion feature vector in other manners, and the fusion module may also be a recurrent neural network, a deep residual error network, or other network models. This is not limited by the present application.
And step S504, correcting the first distance and the second distance based on the fusion feature vector to obtain a third distance.
In some embodiments, step S504 may include:
inputting the fusion feature vector into a correction module to obtain a third distance;
the correction module comprises at least one full connection layer.
In addition, the fully connected layer includes an activation function with a weight matrix and a bias constant.
Specifically, the server may input the fused feature vector to the fully connected layer and nonlinearly transform it based on the weight matrix and bias of the activation function, thereby correcting the distance features and obtaining a more accurate third distance.
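A minimal PyTorch sketch of the fusion and correction stages together; the explicit similarity-comparison step is omitted, and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusionCorrection(nn.Module):
    """Fuse the two distance feature vectors with a conv + pool module and
    regress a corrected third distance with a fully connected correction layer."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv1d(2, feat_dim, kernel_size=1),  # convolve the stacked feature vectors
            nn.ReLU(),                              # activation (bias learned in the conv)
            nn.AdaptiveAvgPool1d(1),                # pooling layer
            nn.ReLU(),
        )
        self.correct = nn.Linear(feat_dim, 1)       # fully connected correction layer

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        x = torch.stack([f1, f2], dim=1)    # (batch, 2, feat_dim)
        x = self.fuse(x).squeeze(-1)        # (batch, feat_dim) fused feature vector
        return self.correct(x).squeeze(-1)  # corrected third distance per sample
```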
In the anti-collision and early warning method based on the fusion of laser ranging and image ranging, laser radar ranging and image ranging are combined: the first distance between the object to be measured and the target vehicle is obtained through the laser radar, and the second distance between the object to be measured and the target vehicle is calculated by the camera according to the binocular vision ranging method; a third distance between the object to be measured and the target vehicle is then acquired according to the first distance and the second distance; and collision warning is performed on the target vehicle when the third distance is less than or equal to the safe distance. The embodiment of the application thus solves the prior-art problem of a single ranging method and, by combining multiple ranging methods, achieves more accurate distance measurement, preventing the early warning system from raising false alarms.
In one embodiment, as shown in fig. 6, there is provided a collision avoidance and early warning method based on laser ranging and image ranging fusion, the method comprising the steps of:
step S601, the intelligent control system sends a measurement instruction to the measurement terminal.
Step S602, the measuring terminal respectively obtains laser measuring information and image measuring information according to the received measuring instruction, and sends the laser measuring information and the image measuring information to the intelligent control system.
Step S603, the intelligent control system respectively obtains the first distance and the second distance according to the laser measurement information and the image measurement information.
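Before detailing steps S603 to S605, a structural sketch of the whole S601-S605 cycle may help; every object and method name here is an illustrative stand-in, not an interface defined by the patent:

```python
def measurement_cycle(lidar, stereo_camera, control):
    """One S601-S605 round trip: request measurements, derive both
    distances, fuse them, and decide whether to warn."""
    laser_info = lidar.measure()              # S602: laser measurement information
    image_info = stereo_camera.capture()      # S602: image measurement information
    d1 = control.first_distance(laser_info)   # S603: lidar-based first distance
    d2 = control.second_distance(image_info)  # S603: binocular second distance
    d3 = control.fuse(d1, d2)                 # S604: corrected third distance
    if d3 <= control.safe_distance():         # S605: compare with the safe distance
        control.raise_collision_warning()
    return d3
```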
In some embodiments, as shown in fig. 7, step S603 may include:
step S701, transmitting a laser beam to an object to be detected through a laser radar, and acquiring transmitting time; and the object to be measured is used for receiving the laser beam and reflecting the laser beam to the laser radar.
In the embodiment of the application, the target vehicle can be a forklift, and laser radar devices can be arranged at the front, rear, left, right and top of the forklift. In addition, a 360-degree holographic imaging device can be installed on top of the forklift, enabling all-around shooting of the forklift's surroundings; image information for each direction is transmitted to and displayed on the on-board display screen, so that the driver can check the environment in each direction in real time. When an object to be measured is found in a certain direction of the forklift, the intelligent control system can control the laser radar for that direction to emit a laser beam toward the object to be measured and record the emission time of the beam. The laser beam is reflected back to the laser radar upon reaching the object to be measured.
In some embodiments, when the target vehicle is transporting in reverse, image information behind the target vehicle is acquired through the camera and displayed.
Step S702, under the condition that the laser radar receives the laser beam reflected by the object to be measured, the receiving time is obtained.
When the laser radar receives the beam reflected by the object to be measured in response to the emitted laser beam, the intelligent control system can record the receiving time.
In step S703, the difference between the reception time and the transmission time is determined as the flight time.
Step S704, a first distance is calculated according to the flight time.
The distance between the forklift and the object to be measured as measured by the laser radar is defined as the first distance. The flight time is the time the laser beam takes to travel twice the first distance (out to the object and back), so half the flight time corresponds to one traversal of the first distance; the first distance can then be calculated from the flight time and the speed of light.
In some embodiments, as shown in fig. 8, step S603 may include:
step S801, acquiring focal lengths and baselines of the first camera and the second camera, and acquiring parallax of the first camera and the second camera.
The focal lengths of the first camera and the second camera are consistent; the baseline is the distance between a first focus of the first camera and a second focus of the second camera; the parallax is used for indicating the difference of the first camera and the second camera shooting the same object.
The 360-degree holographic imaging device on top of the forklift comprises several cameras and a binocular camera pair; the binocular camera can measure using the binocular vision ranging method to obtain image measurement information, from which the distance between the forklift and the object to be measured, i.e., the second distance, is obtained.
The binocular camera can include a first camera and a second camera. The binocular vision ranging method is introduced in the following steps:
The principle of a binocular camera is similar to that of the human eye. Human eyes can perceive the distance of an object because the two eyes form slightly different images of the same object; this difference is called parallax. The farther the object, the smaller the parallax; the nearer, the greater the parallax. Likewise, the images the first camera and the second camera form of the same object differ, i.e., there is a parallax. In some possible implementations, the method may include:
step 1, calibrating the first camera and the second camera.
The calibration can be used for obtaining internal parameters, external parameters and distortion parameters of the first camera and the second camera and determining the mapping relation between the three-dimensional coordinate system and the camera image coordinate system. In the embodiment of the application, the focal length and the baseline of the camera can be obtained through calibration.
And 2, shooting the object to be detected through the first camera and the second camera respectively to obtain a first image and a second image.
And 3, performing binocular correction on the first image and the second image.
The correction is used to ensure that the first image and the second image lie in the same plane and are parallel to each other; in this case, any pixel point on the first image and its corresponding pixel point on the second image are guaranteed to share the same row index, which facilitates the subsequent acquisition of parallax.
And 4, matching the first image with the second image to obtain the parallax.
The parallax may also be used to indicate the correspondence between each first pixel point on the first image and the matched second pixel point on the second image.
And step 5, acquiring a second distance according to the focal length, the baseline and the parallax.
Step S802, a second distance is obtained according to the focal length, the baseline and the parallax.
Step S604, the intelligent control system obtains a third distance between the object to be measured and the target vehicle according to the first distance and the second distance.
In some embodiments, as shown in fig. 9, step S604 may include:
step S901, extracting features from the first distance to obtain a first feature vector corresponding to the first distance.
In this embodiment of the application, the first feature vector may be obtained through a deep residual network (ResNet). The first distance may comprise a plurality of first distances.
The ResNet may include at least one convolutional layer and at least one pooling layer; the convolutional layer is used to extract features of the input data and the pooling layer is used to sample the input data. Both the convolutional layer and the pooling layer include activation functions.
Specifically, the convolutional layer may be used to extract initial features from the plurality of first distances. First, the plurality of first distances are converted into first distance vectors, which can be combined into a distance vector matrix. The matrix is then input to the convolutional layer and convolved with a convolution kernel, i.e., an inner product is taken between the distance vector matrix and the kernel, giving the convolution result for the matrix. The convolution result is nonlinearly transformed by an activation function and a bias vector is added, yielding an initial feature vector. The initial feature vector is input to the pooling layer for feature sampling; the sampling result is again nonlinearly transformed by the activation function and a bias vector is added, yielding the first feature vector.
The feature vectors can also be obtained through other network models, such as a recurrent neural network or a long short-term memory network, which is not limited here.
Step S902, extracting features from the second distance to obtain a second feature vector corresponding to the second distance.
The second feature vector obtaining method may refer to the related description of the first feature vector obtaining method in step S901, and is not described herein again.
And step S903, performing feature fusion and similarity comparison on the first feature vector and the second feature vector to obtain a fused feature vector.
In some embodiments, step S903 may include:
inputting the first feature vector and the second feature vector into a fusion module to obtain a fusion feature vector;
the fusion module comprises at least one convolution layer and at least one pooling layer; wherein,
the convolutional layer is used to extract the features of the input data, and the pooling layer is used to sample the input data.
The fusion module may be a convolutional neural network, which may include at least one convolutional layer and at least one pooling layer. The convolutional layer is used for extracting the characteristics of the multi-modal feature vectors to obtain initial feature vectors, and the pooling layer is used for sampling the initial feature vectors to obtain more accurate fusion feature vectors. Both the convolutional layer and the pooling layer include activation functions.
Specifically, the server inputs the first feature vector and the second feature vector to the convolutional layer and convolves them with a convolution kernel, i.e., takes the inner product of the first feature vector, the second feature vector and the kernel, to obtain the corresponding convolution result. The convolution result is nonlinearly transformed by an activation function and a bias vector is added, yielding an initial feature vector. The initial feature vector is input to the pooling layer for feature sampling; the sampling result is then nonlinearly transformed by the activation function and a bias vector is added, yielding the fused feature vector.
In other embodiments, the server may also obtain the fusion feature vector in other manners, and the fusion module may also be a recurrent neural network, a deep residual error network, or other network models. This is not limited by the present application.
Step S904, based on the fusion feature vector, corrects the first distance and the second distance to obtain a third distance.
In some embodiments, step S904 may include:
inputting the fusion feature vector into a correction module to obtain a third distance;
the correction module comprises at least one full connection layer.
In addition, the fully connected layer includes an activation function with a weight matrix and a bias constant.
Specifically, the server may input the fused feature vector to the fully connected layer and nonlinearly transform it based on the weight matrix and bias of the activation function, thereby correcting the distance features and obtaining a more accurate third distance.
Step S605, performing collision early warning on the target vehicle when the third distance is less than or equal to the safe distance of the target vehicle.
In some possible implementations, obtaining the safe distance of the target vehicle may include:
step 1, obtaining the speed, the brake performance, the load information and the road condition information of a target vehicle.
And step 2, determining the safe distance of the target vehicle according to the vehicle speed, the brake performance, the load information and the road condition information.
In some embodiments, in the case that the third distance is less than or equal to the safe distance of the target vehicle, performing collision warning on the target vehicle may include:
and 1, acquiring the time for which the third distance is kept unchanged under the condition that the third distance is less than or equal to the safe distance of the target vehicle.
In the embodiment of the present application, the third distance may be displayed through the in-vehicle display screen and may be updated in real time, for example, the update rate may be updated once every 100 milliseconds. Therefore, the driver can view the third distance in real time, and the intelligent control system can also record the time information carried by the third distance, wherein the time information can include the ranging time corresponding to the third distance. Further, the time during which the third distance remains unchanged may be obtained according to the ranging time corresponding to each third distance.
Step 2: perform collision early warning on the target vehicle according to the time for which the third distance remains unchanged and a holding time threshold.
The holding time threshold may be a preset minimum time for which the third distance must remain unchanged before the collision warning for the target vehicle is stopped.
In some embodiments, performing collision early warning on the target vehicle according to the time for which the third distance remains unchanged and the holding time threshold may include:
Step 1: stop the collision early warning for the target vehicle in the case that the time for which the third distance remains unchanged is greater than or equal to the holding time threshold.
In some possible implementations, the target vehicle may be a forklift. Under typical forklift operating conditions, the loaded goods may sit directly in front of the forklift; when the time for which the third distance remains unchanged is greater than or equal to the holding time threshold, the object to be measured is likely those goods, and the collision warning for the target vehicle may be stopped.
Step 2: perform collision early warning on the target vehicle in the case that the time for which the third distance remains unchanged is less than the holding time threshold.
When the time for which the third distance remains unchanged is less than the holding time threshold, the object to be detected can be identified as an obstacle, and collision early warning can be performed on the target vehicle.
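A minimal sketch of this decision rule, assuming an example 3-second holding time threshold (the patent does not fix a value):

```python
def should_warn(third_distance_m: float, safe_distance_m: float,
                hold_time_s: float, hold_threshold_s: float = 3.0) -> bool:
    """Illustrative warning decision. If the distance has stayed unchanged for at
    least the holding time threshold, the object is treated as carried goods
    (e.g. on a forklift) and the warning is suppressed; otherwise it is treated
    as an obstacle. The 3 s threshold is an assumed value, not from the patent."""
    if third_distance_m > safe_distance_m:
        return False                        # outside the safe distance: no warning
    return hold_time_s < hold_threshold_s   # unchanged long enough -> likely cargo

# should_warn(1.2, 2.0, hold_time_s=5.0) -> False (stable: treated as goods)
# should_warn(1.2, 2.0, hold_time_s=0.4) -> True  (changing: treated as obstacle)
```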
In some embodiments, obtaining the safe distance may include:
Step 1: acquire vehicle safety information of the target vehicle.
The vehicle safety information may include, but is not limited to, vehicle speed, brake performance information, load information, road condition information, and the like.
Step 2: determine the safe distance of the target vehicle according to the vehicle safety information.
In some possible implementations, the safe distance may be calculated from vehicle safety information of the target vehicle.
In general, many factors affect the braking and stopping distance, such as vehicle speed, road conditions, weather, load and driver response time; the determination of the safe distance therefore needs to take these factors into account.
In some possible implementations, the driver may also set a minimum safe distance according to the working environment: in a relatively narrow space the minimum safe distance may be set correspondingly smaller, while in an open space it may be set appropriately larger. When the distance between the target vehicle and the object to be measured is less than or equal to the minimum safe distance, the intelligent control system can issue an early warning signal.
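Purely as a sketch of this reasoning — the patent does not specify a formula — the safe distance could be estimated as reaction distance plus braking distance, scaled by assumed load and road-condition factors and floored at the driver-set minimum:

```python
def safe_distance_m(speed_mps: float, decel_mps2: float,
                    reaction_s: float = 1.0, load_factor: float = 1.0,
                    road_factor: float = 1.0, min_safe_m: float = 0.5) -> float:
    """Illustrative safe-distance estimate. All coefficients here are example
    assumptions, not values given in the patent."""
    reaction_d = speed_mps * reaction_s               # distance covered before braking starts
    braking_d = speed_mps ** 2 / (2.0 * decel_mps2)   # v^2 / (2a) stopping distance
    return max(min_safe_m, (reaction_d + braking_d) * load_factor * road_factor)

# A loaded forklift at 3 m/s braking at 2 m/s^2 on a wet floor:
# safe_distance_m(3.0, 2.0, load_factor=1.3, road_factor=1.2) ≈ 8.2 m
```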
In the above anti-collision and early warning method based on the fusion of laser ranging and image ranging, laser radar ranging and image ranging are combined: the first distance between the object to be measured and the target vehicle is obtained through the laser radar, and the second distance is calculated through the camera according to the binocular vision ranging method; the third distance between the object to be measured and the target vehicle is then obtained from the first distance and the second distance; and collision early warning is performed on the target vehicle when the third distance is less than or equal to the safe distance. The embodiment of the present application thus overcomes the single ranging method of the prior art and achieves more accurate distance measurement by combining multiple ranging methods, preventing the early warning system from raising false alarms.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same time but may be performed at different times, and whose execution order need not be sequential: they may be performed in turn or alternately with other steps, or with sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides an anti-collision and early warning device based on the fusion of laser ranging and image ranging, for implementing the anti-collision and early warning method described above. The solution provided by the device is similar to that described for the method, so for the specific limitations in the one or more device embodiments provided below, reference may be made to the limitations of the method above; they are not repeated here.
In one embodiment, as shown in fig. 10, an anti-collision and early warning device based on the fusion of laser ranging and image ranging is provided, comprising: an acquisition module 1010, a fusion module 1020 and an early warning module 1030, wherein:
the acquisition module 1010 is used for acquiring a first distance between an object to be detected and a target vehicle through a laser radar and calculating a second distance between the object to be detected and the target vehicle through a camera according to a binocular vision ranging method;
a fusion module 1020, configured to obtain a third distance between the object to be measured and the target vehicle according to the first distance and the second distance;
and the early warning module 1030 is configured to perform collision early warning on the target vehicle when the third distance is less than or equal to the safe distance of the target vehicle.
Each module in the above anti-collision and early warning device based on the fusion of laser ranging and image ranging may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke them and perform the corresponding operations.
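As a structural illustration only, the three modules of fig. 10 might be wired together as in the following sketch; the module implementations passed in are placeholders, not the patent's.

```python
class CollisionAvoidanceDevice:
    """Illustrative wiring of the acquisition, fusion and early warning modules.
    The sensor-reading and fusion callables are stand-ins for this sketch."""
    def __init__(self, acquisition, fusion, warning) -> None:
        self.acquisition = acquisition   # returns (first_distance, second_distance)
        self.fusion = fusion             # (d1, d2) -> third_distance
        self.warning = warning           # (third_distance, safe_distance) -> None

    def step(self, safe_distance_m: float) -> None:
        d1, d2 = self.acquisition()              # lidar + binocular camera ranging
        d3 = self.fusion(d1, d2)                 # fused, corrected third distance
        if d3 <= safe_distance_m:
            self.warning(d3, safe_distance_m)    # trigger collision early warning

# device = CollisionAvoidanceDevice(
#     acquisition=lambda: (2.41, 2.35),
#     fusion=lambda d1, d2: 0.5 * (d1 + d2),   # stand-in for the neural fusion/correction
#     warning=lambda d, s: print(f"warning: {d:.2f} m <= safe {s:.2f} m"))
# device.step(safe_distance_m=3.0)
```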
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in fig. 11. The computer device comprises a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be realized through Wi-Fi, a mobile cellular network, NFC (near field communication) or other technologies. The computer program, when executed by the processor, implements the anti-collision and early warning method based on the fusion of laser ranging and image ranging. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is less than or equal to the safe distance of the target vehicle, performing collision early warning on the target vehicle.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is less than or equal to the safe distance of the target vehicle, performing collision early warning on the target vehicle.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is less than or equal to the safe distance of the target vehicle, performing collision early warning on the target vehicle.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database or other media used in the embodiments provided in the present application can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided in the present application may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided in the present application may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
For the sake of brevity, not all possible combinations of the technical features in the above embodiments have been described; however, as long as there is no contradiction between them, such combinations should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. An anti-collision and early warning method based on laser ranging and image ranging fusion is characterized in that the method comprises the following steps:
acquiring a first distance between an object to be measured and a target vehicle through a laser radar, and calculating a second distance between the object to be measured and the target vehicle through a camera according to a binocular vision ranging method;
acquiring a third distance between the object to be measured and the target vehicle according to the first distance and the second distance;
and under the condition that the third distance is smaller than or equal to the safe distance of the target vehicle, carrying out collision early warning on the target vehicle.
2. The method of claim 1, wherein the performing collision warning on the target vehicle if the third distance is less than or equal to a safe distance of the target vehicle comprises:
acquiring the time for which the third distance remains unchanged under the condition that the third distance is less than or equal to the safe distance of the target vehicle;
and performing collision early warning on the target vehicle according to the time for which the third distance remains unchanged and a holding time threshold.
3. The method of claim 2, wherein the performing collision early warning on the target vehicle according to the time for which the third distance remains unchanged and the holding time threshold comprises:
stopping the collision early warning on the target vehicle under the condition that the time for which the third distance remains unchanged is greater than or equal to the holding time threshold;
and performing collision early warning on the target vehicle under the condition that the time for which the third distance remains unchanged is less than the holding time threshold.
4. The method of claim 1, further comprising:
acquiring vehicle safety information of the target vehicle; the vehicle safety information comprises vehicle speed, brake performance information, load information and road condition information;
and determining the safe distance of the target vehicle according to the vehicle safety information.
5. The method of claim 1, further comprising:
acquiring, under the condition that the target vehicle is transporting in reverse, image information behind the target vehicle through a camera; and displaying the image information.
6. The method of claim 1, wherein the obtaining a first distance between the object to be measured and the target vehicle by the lidar comprises:
transmitting a laser beam to the object to be detected through the laser radar, and acquiring transmitting time; the object to be measured is used for receiving the laser beam and reflecting the laser beam to the laser radar;
under the condition that the laser radar receives the laser beam reflected by the object to be detected, acquiring receiving time;
determining a difference between the receive time and the transmit time as a time of flight;
and calculating the first distance according to the flight time.
7. The method of claim 1, wherein the calculating a second distance between the object to be measured and the target vehicle by the camera according to a binocular vision ranging method comprises:
acquiring focal lengths and a baseline of a first camera and a second camera, and acquiring a parallax of the first camera and the second camera; wherein the focal lengths of the first camera and the second camera are the same; the baseline is the distance between a first focus of the first camera and a second focus of the second camera; and the parallax is used for indicating the difference between the images of the same object captured by the first camera and the second camera;
and acquiring the second distance according to the focal length, the baseline and the parallax.
8. The method of claim 7, wherein the obtaining the disparity of the first camera and the second camera comprises:
acquiring a first image and a second image; the first image is an image of the object to be detected, which is shot by the first camera; the second image is an image of the object to be detected, which is shot by the second camera;
correcting the first image and the second image; the correction is used to ensure that the first image and the second image are in the same plane and parallel to each other;
matching the first image with the second image to obtain the parallax; the parallax is used for indicating the correspondence between each first pixel point on the first image and the matched second pixel point on the second image.
9. The method of claim 1, wherein the obtaining a third distance between the object to be measured and the target vehicle based on the first distance and the second distance comprises:
extracting features from the first distance to obtain a first feature vector corresponding to the first distance;
extracting features from the second distance to obtain a second feature vector corresponding to the second distance;
performing feature fusion and similarity comparison on the first feature vector and the second feature vector to obtain a fusion feature vector;
and correcting the first distance and the second distance based on the fusion feature vector to obtain the third distance.
10. An anti-collision and early warning device based on the fusion of laser ranging and image ranging, characterized in that the device comprises:
an acquisition module, configured to acquire a first distance between an object to be detected and a target vehicle through a laser radar, and to calculate a second distance between the object to be detected and the target vehicle through a camera according to a binocular vision ranging method;
the fusion module is used for acquiring a third distance between the object to be detected and the target vehicle according to the first distance and the second distance;
and the early warning module is used for carrying out collision early warning on the target vehicle under the condition that the third distance is less than or equal to the safe distance of the target vehicle.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211507134.3A | 2022-11-29 | 2022-11-29 | Anti-collision and early warning method and system based on fusion of laser ranging and image ranging |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115871657A (en) | 2023-03-31 |
Family
ID=85764506
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211507134.3A (CN115871657A, pending) | Anti-collision and early warning method and system based on fusion of laser ranging and image ranging | 2022-11-29 | 2022-11-29 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115871657A (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118053260A (en) * | 2024-04-02 | 2024-05-17 | 惠州市新益鸿科技有限公司 | Production workshop construction operation safety early warning system and method based on Internet of things |
| CN118053260B (en) * | 2024-04-02 | 2025-02-25 | 惠州市新益鸿科技有限公司 | Production workshop construction operation safety early warning system and method based on the Internet of Things |
| KR102858765B1 (en) | 2025-05-08 | 2025-09-11 | 주식회사 이편한자동화기술 | Forklift safety operation system based on artificial intelligence |
Similar Documents
| Publication | Title |
|---|---|
| EP3568334B1 (en) | System, method and non-transitory computer readable storage medium for parking vehicle |
| EP3754448B1 (en) | Data fusion method and related device |
| CN110044371B (en) | Vehicle positioning method and vehicle positioning device |
| CN109188457B (en) | Object detection frame generation method, device, equipment, storage medium and vehicle |
| JP2021515939A (en) | Monocular depth estimation method and its devices, equipment and storage media |
| CN115871657A (en) | Anti-collision and early warning method and system based on fusion of laser ranging and image ranging |
| JP2017134814A (en) | Vehicle contour detection method and apparatus based on point cloud data |
| WO2021072709A1 (en) | Method for detecting and tracking target, system, device, and storage medium |
| CN111308415B (en) | Online pose estimation method and equipment based on time delay |
| US11443184B2 (en) | Methods and systems for predicting a trajectory of a road agent based on an intermediate space |
| CN116740669B (en) | Multi-view image detection method, device, computer equipment and storage medium |
| US11092690B1 (en) | Predicting lidar data using machine learning |
| CN116740668B (en) | Three-dimensional target detection method, device, computer equipment and storage medium |
| CN114217303B (en) | Target positioning and tracking method and device, underwater robot and storage medium |
| CN114612572B (en) | A laser radar and camera extrinsic parameter calibration method and device based on deep learning |
| KR20220039101A (en) | Robot and controlling method thereof |
| WO2022217988A1 (en) | Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program |
| CN118233842A (en) | Indoor positioning method, device, computer equipment and storage medium |
| CN115294280A (en) | Three-dimensional reconstruction method, apparatus, device, storage medium, and program product |
| EP4296615A1 (en) | Distance measuring method and device |
| CN112200130B (en) | Three-dimensional target detection method and device and terminal equipment |
| CN118279873A (en) | Environment sensing method and device and unmanned vehicle |
| Erke et al. | A fast calibration approach for onboard LiDAR-camera systems |
| CN117250956A (en) | Mobile robot obstacle avoidance method and obstacle avoidance device with multiple observation sources fused |
| KR102536096B1 (en) | Learning data generation method and computing device therefor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |