CN106570487A - Method and device for predicting collision between objects
- Publication number
- CN106570487A CN106570487A CN201610990048.0A CN201610990048A CN106570487A CN 106570487 A CN106570487 A CN 106570487A CN 201610990048 A CN201610990048 A CN 201610990048A CN 106570487 A CN106570487 A CN 106570487A
- Authority
- CN
- China
- Prior art keywords
- image
- feature point
- collision
- time
- feature
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
- B60W30/0956—Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Automation & Control Theory (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method and a device for predicting the collision between objects. The method comprises the following steps: shooting two images of a second object at the position of a first object; determining an identical feature point pair on the second object in the two images; determining the distance between the two feature points of the feature point pair in each image; and determining the time of collision between the first object and the second object according to the distance between the two feature points of the feature point pair in each image and the shooting time interval of the two images. The predicted collision time can visually represent the collision risk at the current moment, and thus, drivers can be warned in a timely manner to perform a corresponding operation. Moreover, the method and the device have the advantages of high accuracy, good reliability, convenient use, low cost, low power consumption, and the like.
Description
Technical Field
The invention relates to a method and a device for predicting collision between objects.
Background
With the rapid development of the automobile industry, the number of vehicles on the road has increased markedly, and automobile collision accidents occur frequently. Driver misoperation is one of the main causes of such accidents. If a collision accident can be predicted in advance and the driver warned appropriately, traffic accidents can be reduced and driving safety improved. In recent years, image-based Advanced Driving Assistance Systems (ADAS) have attracted much attention. An ADAS generally includes a collision warning system, which uses a camera mounted on the host vehicle to collect a video stream and then predicts the possible collision time between the host vehicle and a preceding vehicle, so as to anticipate potential collision accidents and reduce their occurrence.
There are currently some technical solutions for how to predict the time to collision.
Prior art 1: the invention patent application with the application publication number of CN105574552A and the name of 'a vehicle distance measurement and collision early warning method based on monocular vision' discloses the following technical scheme: after the position of the vehicle in the image is obtained, the distance of the front vehicle is calculated by utilizing the center point coordinate of the bottom edge of the vehicle target frame and the geometric distance measuring principle, and the collision time is obtained by dividing the distance by the relative speed of the two vehicles. However, this method depends on the installation position of the camera, and when the user changes the installation height or angle of the camera, there will be a large error in the calculated vehicle distance. Furthermore, due to environmental factors (e.g., cloudy days, rainy days, fog, etc.), the ground point of the vehicle target is difficult to obtain, which in turn results in failure to obtain an accurate collision time.
Prior art 2: the invention patent with the publication number CN103287372B, entitled 'an automobile anti-collision safety protection method based on image processing', discloses the following technical scheme: the distance between the two vehicles is calculated from images acquired by two cameras at the same time, the speed of the host vehicle is acquired by a vehicle speed acquisition module, and the collision time of the two vehicles is then calculated. This scheme requires images to be acquired by two cameras and the distance between the two vehicles to be calculated before the collision time can be obtained from the vehicle speed. Moreover, acquiring the vehicle speed through the vehicle speed acquisition module is difficult and the acquired speed is not highly accurate, and since the vehicle speed changes continuously while the vehicle is driving, the determined collision time is not accurate enough.
Prior art 3: the invention patent with the publication number CN102642510B, entitled 'an image-based vehicle anti-collision early warning method', also describes a method of determining the collision time. Unlike prior art 1, prior art 3 derives the collision time of the two vehicles from the relationship between the pixel width of the vehicle in the image and the vehicle distance. Although judging by the vehicle width avoids the restriction on the installation position of the camera, in actual driving the preceding vehicle is often not positioned directly ahead of the current vehicle, so the camera captures not only the tail of the preceding vehicle but also its side. The system then has difficulty clearly defining the boundary of the tail of the preceding vehicle and cannot determine the actual width of the tail, so the determined collision time is inaccurate.
Prior art 4: the scheme disclosed in the invention patent with the authorized notice number CN100386595C, entitled 'an automobile anti-collision early warning method and device', can only obtain the rate of change of the distance between the same feature points in two or more images. However, this rate of change cannot intuitively express the risk of collision during driving, so the scheme of prior art 4 cannot by itself predict the collision risk; additional operation rules or models are required to achieve that purpose.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides a collision prediction method and a collision prediction device that obtain the collision time from the result of image processing. The obtained collision time can intuitively represent the collision risk at the current moment and has high accuracy; moreover, there is no strict requirement on the installation position of the camera, and there is no need to measure the distance between vehicles, the vehicle speed, or the like.
According to one aspect of the present invention, a method of predicting a collision between objects is provided.
The method for predicting the collision between the objects comprises the following steps:
step 1, shooting two images of a second object at a first object in sequence;
step 2, determining the same characteristic point pairs in the areas where the second object is located in the two images;
step 3, determining the distance between two characteristic points of the characteristic point pair in each image;
and 4, determining the collision time of the first object and the second object according to the distance between the two characteristic points of the characteristic point pair in each image and the shooting time interval of the two images.
According to one embodiment of the invention, the collision time for a first object to collide with a second object is determined by:
TTC = Δt / (d_2^AB / d_1^AB − 1)    (Formula 1)

wherein TTC is the collision time of the first object and the second object, d_1^AB is the distance between the feature point A and the feature point B of the feature point pair in the first image, d_2^AB is the distance between the feature point A and the feature point B of the feature point pair in the second image, and Δt is the shooting time interval of the first image and the second image.
Further, the characteristic point pairs determined in the area where the second object is located in each image include a plurality of characteristic point pairs, and the collision time at which the first object collides with the second object is determined based on the plurality of characteristic point pairs by:
TTC = (1/N) · Σ_{n=1..N} Δt / (d_2^n(i,j) / d_1^n(i,j) − 1)    (Formula 8)

wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_1^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the first image, and d_2^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the second image.
In one embodiment, in the case where the number of second objects in the two images captured before and after is plural, the steps 2 to 4 are performed for each second object, and the collision time at which the first object collides with the second object is determined.
Further, determining the same feature point pairs on the second object in the two successively taken images includes:
determining feature point pairs in the areas where the second object is located in the first image (taken earlier) and in the second image (taken later), respectively, by a predetermined corner point determination method;
for the first image, determining whether the feature points in the first image and the feature points in the second image satisfy a first matching relationship by a forward optical flow tracking method; and, for the second image, determining whether the feature points in the second image and the feature points in the first image satisfy a second matching relationship by a backward (inverse) optical flow tracking method;
and, the identical pairs of feature points determined in step 2 are composed of feature points satisfying the first matching relationship and the second matching relationship.
Optionally, the two images are distortion-corrected or distortion-uncorrected images.
Optionally, image regions of the two images that do not contain gradient information are removed in advance.
Furthermore, in one embodiment, the collision prediction method according to the present invention may further include:
comparing the determined time of collision to a predetermined time threshold;
and alarming when the determined collision time is smaller than a time threshold value.
According to another aspect of the present invention, there is provided an inter-object collision prediction apparatus.
The collision prediction device between objects according to the present invention includes:
the characteristic point determining module is used for determining the same characteristic point pairs in the areas where the second object is located in the two images, wherein the two images are obtained by shooting the second object at the first object in sequence;
the distance determining module is used for determining the distance between two characteristic points of the characteristic point pair in each image;
and the time determining module is used for determining the collision time of the first object and the second object according to the distance between the two characteristic points of the characteristic point pair in each image and the shooting time interval of the two images.
In one embodiment, the time determination module is configured to determine the collision time of the first object colliding with the second object by:
TTC = Δt / (d_2^AB / d_1^AB − 1)    (Formula 1)

wherein TTC is the collision time of the first object and the second object, d_1^AB is the distance between the feature point A and the feature point B of the feature point pair in the first image, d_2^AB is the distance between the feature point A and the feature point B of the feature point pair in the second image, and Δt is the shooting time interval of the first image and the second image.
In addition, the characteristic point determining module determines that the characteristic point pairs in the area where the second object is located in each image include a plurality of characteristic point pairs; based on the plurality of pairs of characteristic points, the time determination module is configured to determine a collision time for the first object to collide with the second object by:
TTC = (1/N) · Σ_{n=1..N} Δt / (d_2^n(i,j) / d_1^n(i,j) − 1)    (Formula 8)

wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_1^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the first image, and d_2^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the second image.
In addition, under the condition that the number of the second objects in the two images shot in sequence is multiple, the time determining module is used for respectively determining the collision time of the first object and the second object for each second object.
In addition, the feature point determining module can be used for determining feature point pairs in the areas where the second object is located in the first image (taken earlier) and in the second image (taken later), respectively, by a predetermined corner point determination method; for the first image, the feature point determining module is used for determining whether a first matching relationship is satisfied between the feature points in the first image and the feature points in the second image by a forward optical flow tracking method; and, for the second image, the feature point determining module is further used for determining whether the feature points in the second image and the feature points in the first image satisfy a second matching relationship by a backward (inverse) optical flow tracking method;
and, the identical pairs of feature points determined by the feature point determination module are composed of feature points satisfying the first matching relationship and the second matching relationship.
Alternatively, the collision predicting apparatus according to the present invention may further include:
the correction module is used for carrying out distortion correction on the two shot images in advance; and/or
And the removing module is used for removing image areas which do not contain gradient information in the two shot images in advance.
Alternatively, the collision predicting apparatus according to the present invention may further include:
an analysis module for comparing the determined time of collision with a predetermined time threshold;
and the alarm module is used for giving an alarm under the condition that the determined collision time is less than the time threshold.
The invention can realize the following beneficial effects:
(1) according to the invention, the collision time is obtained through the image processing result, and the collision time can visually represent the collision risk at the current moment and timely warn the driver to perform corresponding operation; moreover, even if the visibility of the current environment is poor (for example, when the vehicle is driven on a road, features such as the tail contour of the front vehicle cannot be judged due to rain or fog) or the angle of the front object deviates to a certain extent, the contour of the front object cannot be accurately recognized, but because the collision time is determined according to the distance between the feature points in the image, the accurate prediction of the collision time can be ensured as long as the obvious feature points of the front object can be obtained (for example, when the front object is a vehicle, the obvious features such as the tail lamp, the license plate and the brand mark of the vehicle of the front vehicle can be obtained), so that the accurate prediction of the collision time has higher accuracy and reliability, and the installation position of the camera has no strict requirement (as long as the front object can be shot); in addition, the method does not need to measure parameters such as the distance between objects, the running speed of the objects and the like through specific equipment (such as radars, infrared sensing equipment and the like), is simple to realize, and has the advantages of convenience in use, low cost and low power consumption;
(2) in one embodiment, the collision time can be accurately obtained through a simple calculation formula (formula 1), the formula does not contain parameters such as speed and distance, but only contains the distance between characteristic points in the image and the image shooting time, so that the scheme of the invention does not need to carry out complex measurement, does not need to install measuring equipment, reduces the cost, and avoids the influence of other measuring equipment on the result accuracy of the collision time; moreover, the collision time can be obtained only by simple image processing and operation, so that the scheme of the invention is very easy to realize and has lower complexity;
(3) in one embodiment, the scheme of the invention can determine a plurality of groups of feature points on an object, then obtain corresponding collision time according to each group of feature points, and finally obtain an average result according to a plurality of collision times, so that the problem that the determination result is influenced or even the scheme cannot be executed due to the fact that part of the feature points cannot be identified due to reflection, shielding and the like can be avoided, and the stability is better;
(4) in one embodiment, the scheme of the invention can respectively determine the collision time of a plurality of shot objects, and because the determination of the collision time is based on the result of image processing, the determination can be completed only by respectively determining the characteristic points of the plurality of objects in the image without adding other additional equipment, the implementation in vehicle-mounted equipment is easy, and the driving safety can be improved;
(5) in one embodiment, the image is subjected to distortion correction in advance, and then the characteristic points in the image are determined, so that the prediction accuracy can be improved, and the problem of inaccurate prediction results caused by shooting angles and the like is avoided;
(6) in one embodiment, by removing the image area which does not contain the gradient information in the image in advance and then determining the characteristic points in the image area, the useless area of the image can be avoided being calculated, the overall calculation amount is reduced, and the real-time performance of the processing is ensured;
(7) in one embodiment, the matching relationship of the feature points is determined by respectively carrying out an optical flow tracking method on two successively shot images, so that the interference caused by noise in the images can be eliminated, the feature points in the images can be ensured to be correctly corresponding, and the problem of inaccurate result caused by mismatching is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a flow chart of a method of predicting a collision between objects according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the establishment of a world coordinate system and a pixel coordinate system during collision prediction;
FIG. 3 is a flowchart of a practical implementation of a method for predicting collisions between objects according to an embodiment of the present invention;
fig. 4 is a block diagram of a collision prediction apparatus according to an embodiment of the present invention.
Detailed Description
This description of the illustrative embodiments should be taken in conjunction with the accompanying drawings, which are to be considered part of the complete specification. In the drawings, the shape or thickness of the embodiments may be exaggerated and simplified or conveniently indicated. Further, the components of the structures in the drawings are described separately, and it should be noted that the components not shown or described in the drawings are well known to those skilled in the art.
Any reference to directions and orientations to the description of the embodiments herein is merely for convenience of description and should not be construed as limiting the scope of the invention in any way. The following description of the preferred embodiments refers to combinations of features which may be present independently or in combination, and the present invention is not particularly limited to the preferred embodiments. The scope of the invention is defined by the claims.
According to an embodiment of the present invention, there is provided a method of predicting a collision between objects.
As shown in fig. 1, a method for predicting a collision between objects according to an embodiment of the present invention includes:
step S101, shooting two images of a second object at a first object in sequence;
step S103, determining the same characteristic point pairs in the area where the second object is located in the two images; here, the same feature point pair means that, if the second object is a vehicle, one feature point a determined in the first image is a vehicle brand identifier of the tail of the leading vehicle, and the feature point B is an exhaust pipe of the tail of the leading vehicle; then, in the second image, the feature point the same as the feature point a is located at the brand identifier at the tail of the preceding vehicle in the area where the preceding vehicle is located in the second image, and the feature point a' is assumed; the feature point which is the same as the feature point B is located at an exhaust pipe at the tail part of the front vehicle in the area where the front vehicle is located in the second image, and the feature point B' is assumed; thus, the characteristic point pairs a and B are the same characteristic point pairs as the characteristic points a 'and B'.
Step S105, determining the distance between two characteristic points of the characteristic point pair in each image;
step S107, determining the collision time of the first object and the second object according to the distance between the two characteristic points of the characteristic point pair in each image and the shooting time interval of the two images.
In one embodiment, after the collision time is determined, it may be compared with a predetermined time threshold. When the determined collision time is smaller than the time threshold, the current collision risk is high and an alarm prompt can be given; for example, when the scheme of the invention is applied to a vehicle, the driver can be prompted in time to decelerate or take similar action, thereby ensuring driving safety.
Specifically, in one embodiment, the time to collision TTC of a first object colliding with a second object may be determined by the following formula:
TTC = Δt / (d_2^AB / d_1^AB − 1)    (Formula 1)

wherein TTC is the collision time of the first object and the second object, d_1^AB is the distance (in the pixel coordinate system) between the feature point A and the feature point B of the feature point pair in the first image, d_2^AB is the distance (in the pixel coordinate system) between the feature point A and the feature point B of the feature point pair in the second image, and Δt is the shooting time interval of the first image and the second image. The positions of the feature points A and B in the world coordinate system should satisfy the following requirement: the line connecting feature point A and feature point B in the world coordinate system is parallel to the XY plane of the world coordinate system, wherein the origin of the world coordinate system is located at a camera lens mounted on the first object, the optical axis of the camera lens is coaxial with the Z axis of the world coordinate system, the X axis of the world coordinate system extends in the horizontal direction, and the Y axis extends in the vertical direction.
Therefore, when the distance between the first object and the second object is different, the second object is shot from the first object, the distance between the same characteristic points on the shot picture is changed, and the collision time is obtained by using the change.
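Purely for illustration (this is not part of the patent text), Formula 1 can be sketched as a small Python function; the function and parameter names below are assumptions chosen only for readability:

```python
def time_to_collision(d1, d2, dt):
    """Formula 1: TTC = dt / (d2 / d1 - 1).

    d1 -- pixel distance between feature points A and B in the first image
    d2 -- pixel distance between the same two feature points in the second image
    dt -- shooting time interval between the two images, in seconds
    Returns the predicted time to collision in seconds, or None when the
    feature points do not move apart between the two images (d2 <= d1),
    i.e. the second object is not getting closer.
    """
    if d1 <= 0 or d2 <= d1:
        return None
    return dt / (d2 / d1 - 1.0)
```

For example, d1 = 100 px, d2 = 110 px and dt = 0.1 s give a predicted collision time of 0.1 / 0.1 = 1 second.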
The derivation process of the above equation 1 will be described below.
First, a world coordinate system is established based on the camera position of the first object, and as shown in fig. 2, the origin of the world coordinate system is located at a camera lens mounted on the first object, the optical axis of the lens is coaxial with the Z-axis of the world coordinate system, the X-axis of the world coordinate system extends in the horizontal direction, the Y-axis extends in the vertical direction, and the XY-plane is perpendicular to the optical axis of the lens. In addition, a pixel coordinate system may be established separately for each image taken from the first object.
Based on the world coordinate system, the time to collision TTC can be expressed by equation 2:
TTC = Z_{t+1} / V,  with  V = (Z_t − Z_{t+1}) / Δt    (Formula 2)

wherein Z_t is the Z-axis coordinate, in the world coordinate system, of the feature points A and B on the second object at the moment the first image is captured (time t in FIG. 2), Z_{t+1} is the Z-axis coordinate of feature points A and B in the world coordinate system at the moment the second image is captured (time t+1 in FIG. 2) (in the world coordinate system, the line connecting feature point A and feature point B is parallel to the XY plane), V is the relative movement speed between the camera capturing the two images and the second object (i.e., the relative movement speed between the first object and the second object), and Δt is the capture time interval between the first image and the second image, i.e., the time difference between time t+1 and time t.
In addition, for each feature point in the first image and the second image, the relationship described by the following formula 3 is satisfied between the pixel coordinate of the feature point in the pixel coordinate system and the world coordinate of the feature point on the second object in the world coordinate system:
x = f·X / Z,    y = f·Y / Z    (Formula 3)

wherein x and y are the pixel coordinates of the feature point in the image in the pixel coordinate system, X, Y and Z are the world coordinates of the feature point on the second object in the world coordinate system, and f is the focal length of the camera. That is, formula 3 describes the mapping relationship between the pixel coordinates (x, y) of a feature point in the image and its corresponding world coordinates (X, Y, Z).
Suppose that the pixel coordinates of the two feature points A and B in the first image, taken at time t, are (x_t^A, y_t^A) and (x_t^B, y_t^B) respectively, and that the world coordinates of the two feature points A and B in the world coordinate system are (X^A, Y^A, Z^A) and (X^B, Y^B, Z^B) respectively. Since the line connecting feature point A and feature point B is parallel to the XY plane, Z^A = Z^B = Z_t. Therefore, according to formula 3, the distance d_1^AB, in the pixel coordinate system, between the feature point A and the feature point B of the feature point pair in the first image can be expressed by the following formula 4:

d_1^AB = sqrt( (x_t^A − x_t^B)^2 + (y_t^A − y_t^B)^2 ) = sqrt( (f·X^A/Z_t − f·X^B/Z_t)^2 + (f·Y^A/Z_t − f·Y^B/Z_t)^2 )    (Formula 4)
equation 4 can be simplified to equation 5:
d_1^AB = f · C / Z_t    (Formula 5)

wherein (X^A, Y^A, Z_t) are the coordinates of feature point A on the second object in the world coordinate system when the first image is taken, (X^B, Y^B, Z_t) are the coordinates of feature point B on the second object in the world coordinate system when the first image is taken, Z_t is the Z-axis coordinate of feature points A and B on the second object in the world coordinate system when the first image is taken, and C is the distance between feature point A and feature point B on the second object in the world coordinate system; since the distance between the same feature points in the world coordinate system does not change over time, C is a constant, equal to sqrt( (X^A − X^B)^2 + (Y^A − Y^B)^2 ).
Similarly, the distance d_2^AB between the feature point A and the feature point B in the second image, in the pixel coordinate system, can be expressed by the following formula 6:
d_2^AB = f · C / Z_{t+1}    (Formula 6)

wherein d_2^AB is the distance between feature point A and feature point B in the second image in the pixel coordinate system, Z_{t+1} is the Z-axis coordinate of feature points A and B on the second object in the world coordinate system when the second image is taken, and f is the focal length of the camera.
From formulas 5 and 6, the following formula 7 is obtained:

d_1^AB / d_2^AB = Z_{t+1} / Z_t    (Formula 7)
from equations 2 and 7, equation 1 above can be derived.
Specifically, starting from formula 2, the numerator and denominator are both multiplied by Δt and then divided by Z_t, and formula 7 is substituted, which yields formula 1.
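Spelled out step by step, the substitution reads:

```latex
\begin{aligned}
\mathrm{TTC} &= \frac{Z_{t+1}}{V}
             = \frac{Z_{t+1}\,\Delta t}{Z_t - Z_{t+1}}
             && \text{(Formula 2, numerator and denominator multiplied by } \Delta t\text{)}\\[4pt]
             &= \frac{\dfrac{Z_{t+1}}{Z_t}\,\Delta t}{1 - \dfrac{Z_{t+1}}{Z_t}}
             && \text{(numerator and denominator divided by } Z_t\text{)}\\[4pt]
             &= \frac{\dfrac{d_1^{AB}}{d_2^{AB}}\,\Delta t}{1 - \dfrac{d_1^{AB}}{d_2^{AB}}}
             = \frac{\Delta t}{\dfrac{d_2^{AB}}{d_1^{AB}} - 1}
             && \text{(Formula 7: } Z_{t+1}/Z_t = d_1^{AB}/d_2^{AB}\text{)}
\end{aligned}
```

which is Formula 1.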
Since formula 2 expresses how the collision time is calculated, and formula 7 follows from formula 3 and the calculation of the distance between two points, the collision time can be accurately obtained through formula 1, which is derived from formulas 2 and 7. No distance measurement, speed measurement or similar means are needed, so the method is easy to implement, low in cost, low in energy consumption and convenient to use, and the result is not affected by the accuracy of distance-measuring or speed-measuring equipment. In addition, the collision time obtained in this way intuitively represents the collision risk at the current moment; when the first object is the vehicle driven by the driver, the driver can be warned in time to perform the corresponding operation. Furthermore, the prediction of the collision time can be completed as long as any feature point pair of the preceding vehicle can be obtained: even if the visibility of the current driving environment is poor (so that features such as the tail contour of the preceding vehicle cannot be judged), the collision time can still be predicted accurately as long as other feature points of the tail of the preceding vehicle (for example, obvious parts such as the tail lights or the license plate) can be obtained, because the collision time is determined from the distance between feature points in the image. The method therefore has high accuracy, high reliability and strong adaptability to the environment. The invention also places no strict requirement on the mounting position of the camera (as long as the preceding vehicle can be photographed). Although the technical solution of the invention is described above in connection with vehicle collision prediction, the application scenario is not limited to this; the collision time between other rigid objects can also be predicted with the same technical solution, achieving similar effects.
Further, in another embodiment, the characteristic point pairs determined on the second object in each image may include a plurality of characteristic point pairs. In the first image and the second image, for each feature point pair, one collision time may be obtained according to the above formula 1, that is, if the number of feature point pairs is multiple, multiple collision times may be obtained, and at this time, the multiple collision times may be averaged (an average value may be directly calculated, or a weighted average may be obtained, and a larger weight may be assigned to a more obvious feature point pair on the second object), and the obtained result is used as the collision time when the first object collides with the second object.
Specifically, assuming that the first object is a vehicle currently driven by the driver and the second object is a preceding vehicle, the number of pairs of characteristic points determined for the preceding vehicle is two pairs in the present embodiment. The first feature point pair is located on two tail lights in the area where the preceding vehicle is located in the first and second images (one feature point in the first feature point pair is located on the left tail light, and the other feature point is located on the right tail light), and the second feature point pair is located on two left and right edges of the license plate in the area where the preceding vehicle is located in the first and second images (one feature point in the second feature point pair is located on the left edge, and the other feature point is located on the right edge). On the one hand, for the first image, the distance D1 between two feature points located at the tail lights in the area of the leading vehicle can be determined; with respect to the second image, the distance D2 between two feature points that are also located at the tail lights in the area where the preceding vehicle is located is determined, and from these two distances D1 and D2 and the capturing time intervals of the first image and the second image (for example, substituted into formula 1), the first time to collision TTC1 at which the current vehicle collides with the preceding vehicle can be determined, assuming that TTC1 is 5 seconds. On the other hand, for the first image, the distance D3 between two feature points located at the edge of the license plate in the area of the preceding vehicle can be determined; for the second image, the distance D4 between two feature points at the edge of the license plate in the area where the preceding vehicle is located is determined, and from these two distances D3 and D4 and the capturing time interval of the first image and the second image, the second time to collision TTC2 of the current vehicle with the preceding vehicle can be determined, assuming that TTC2 is 7 seconds. Then, calculate (TTC1+ TTC2)/2, and then get the collision time of 6 seconds.
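As a purely illustrative check (reusing the hypothetical time_to_collision sketch given earlier), the numbers of this example can be reproduced as follows; the pixel distances are placeholders chosen only so that the two per-pair results come out to 5 s and 7 s:

```python
dt = 0.1  # assumed shooting interval in seconds (placeholder)
ttc1 = time_to_collision(d1=100.0, d2=102.0, dt=dt)  # tail-light pair    -> 5.0 s
ttc2 = time_to_collision(d1=70.0, d2=71.0, dt=dt)    # license-plate pair -> 7.0 s
print((ttc1 + ttc2) / 2)                             # averaged collision time: 6.0 s
```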
In fact, the number of pairs of feature points determined in the region where the same object is located in the first and second images is not limited to two (two sets of feature points), but three or more pairs of feature points may be determined. Thus, the above equation 1 can be modified to obtain the following equation 8:
TTC = (1/N) · Σ_{n=1..N} Δt / (d_2^n(i,j) / d_1^n(i,j) − 1)    (Formula 8)

wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_1^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the first image, and d_2^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the second image. Formula 8 can be understood as follows: for the two images, a corresponding collision time is determined for each feature point pair in the region where the same object is located, and the collision times corresponding to the plurality of feature point pairs are then averaged to obtain the final collision time.
Through determining a plurality of pairs of feature points on the same object, the finally determined collision time can be determined by the distance between the plurality of pairs of feature points on the same object, so that the problem that the determination result is influenced and even the scheme cannot be executed due to the fact that part of the feature points cannot be identified due to reflection, shielding and the like is avoided, and the method has better stability.
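A minimal sketch of Formula 8, again only an illustration building on the earlier hypothetical helper:

```python
def time_to_collision_multi(pair_distances, dt):
    """Formula 8: average the per-pair collision times.

    pair_distances -- list of (d1_n, d2_n) tuples, one per feature point pair,
    holding the pixel distance of the n-th pair in the first and second image.
    """
    ttcs = [time_to_collision(d1, d2, dt) for d1, d2 in pair_distances]
    ttcs = [t for t in ttcs if t is not None]  # drop pairs that yield no estimate
    return sum(ttcs) / len(ttcs) if ttcs else None
```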
Further, in another embodiment, the two successively taken images may each contain a plurality of second objects. In this case, the above steps S103 to S107 are performed for the region where each second object is located in the images, so that the collision time between the first object and each second object can be determined.
For example, assume that the first object is a currently driven vehicle M on which a camera is mounted. At time t, the camera mounted on the vehicle M captures a first image including the vehicle M1 and the vehicle M2; at time t +1, the second image captured by the camera mounted on the vehicle M also includes the vehicle M1 and the vehicle M2. According to the embodiment of the invention, the time of collision of the vehicle M currently driven with the vehicle M1 can be determined, while the time of collision of the vehicle M with the vehicle M2 can also be determined.
Specifically, in the first image, pairs of feature points, including feature points E and F, can be determined in the area where the vehicle M1 is located; meanwhile, for the first image, pairs of feature points, including feature points G and H, are also determined in the area where the vehicle M2 is located.
In the second image, a feature point pair including feature points E1 and F1 may be determined in the area where the vehicle M1 is located, where the feature point E1 and the feature point E are the same (matching) feature points, and the feature point F1 and the feature point F are the same feature points; meanwhile, in the second image, a feature point pair including the feature point G1 and the feature point H1 is also determined in the area where the vehicle M2 is located, where the feature point G1 and the feature point G are the same feature point, and the feature point H1 and the feature point H are the same feature point.
Thus, the collision time of the current vehicle M colliding with the vehicle M1 can be obtained from the distance between the feature point E and the feature point F, the distance between the feature point E1 and the feature point F1, and the time interval between the times t and t + 1. In addition, the collision time of the current vehicle M colliding with the vehicle M2 can be obtained according to the distance between the feature point G and the feature point H, the distance between the feature point G1 and the feature point H1, and the time interval between the times t and t + 1.
It can be seen that, in the above example, it is equivalent to predicting the collision time between the current vehicle and a plurality of vehicles ahead from two images. In fact, in other examples, if two images taken successively include more objects, the time when the current object collides with each object in the images can be respectively predicted in a similar manner. Because the collision time is determined based on the result of image processing, the collision time of the first object and each second object can be obtained only by respectively determining the characteristic points of the plurality of objects in the image without adding other additional equipment, the method is easy to realize in a scene with limited installation space such as a vehicle and the like, and the driving safety can be effectively improved.
Further, in another embodiment, when the first image and the second image each include the vehicles M1 and M2, a plurality of pairs of characteristic points may be determined in the region where the vehicle M1 is located in the two images, assuming that the first pair of characteristic points a and the second pair of characteristic points B are included; meanwhile, a plurality of characteristic point pairs are determined in the area where the vehicle M2 is located in the two images, and it is assumed that the third characteristic point pair P, the fourth characteristic point pair Q, and the fifth characteristic point pair R are included. In this way, for the current vehicle M and the vehicle M1, the time to collision TTC _ a can be obtained from the first characteristic point pair a in the first image and the second image, the time to collision TTC _ B can be obtained from the second characteristic point pair B in the first image and the second image, and the time to collision of the current vehicle M with the vehicle M1 can be obtained based on (TTC _ a + TTC _ B)/2.
On the other hand, for the current vehicle M and the vehicle M2, the time to collision TTC _ P can be obtained from the third characteristic point pair P in the first image and the second image, the time to collision TTC _ Q can be obtained from the fourth characteristic point pair Q in the first image and the second image, the time to collision TTC _ R can be obtained from the fifth characteristic point pair R in the first image and the second image, and the time to collision between the current vehicle M and the vehicle M2 can be obtained based on (TTC _ P + TTC _ Q + TTC _ R)/3.
In this way, the collision times of the current vehicle with a plurality of vehicles in the image are obtained from just the two images, and each collision time is determined from a plurality of feature point pairs. Reliable, all-around collision time information is thus provided to the driver, which improves driving safety while the accuracy of the determined collision times is ensured.
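The per-object bookkeeping described above could be sketched as follows; the grouping of feature point pairs by detected vehicle, the labels and all numeric values are illustrative assumptions, not part of the patent:

```python
def collision_times_per_object(objects, dt):
    """objects -- dict mapping an object label (e.g. 'M1', 'M2') to the list of
    (d1, d2) pixel-distance tuples of its matched feature point pairs.
    Returns a dict mapping each label to its averaged collision time."""
    return {label: time_to_collision_multi(pairs, dt)
            for label, pairs in objects.items()}

# e.g. vehicle M1 tracked with pairs A and B, vehicle M2 with pairs P, Q and R
ttcs = collision_times_per_object(
    {"M1": [(100.0, 102.0), (70.0, 71.0)],
     "M2": [(50.0, 50.5), (80.0, 80.8), (40.0, 40.4)]},
    dt=0.1)
```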
In addition, various methods may be employed to determine the same feature point pairs in the area where the second object is located in the two successively taken images. Specifically, in the first image (taken earlier) and the second image (taken later), the positions of pixels with significant features, such as corners, are obtained by a predetermined corner determination method (for example, the Harris corner extraction algorithm); that is, the feature point pairs of the second object in the first image and the second image are determined;
for the first image, the matching relation between the feature points in the first image and the feature points in the second image is determined through an optical flow tracking method. Namely, whether a first matching relation is satisfied between the feature points in the image of the previous frame and the feature points in the image of the current frame is calculated by a forward optical flow tracking method;
Next, for the second image, the matching relationship between the feature points in the second image and the feature points in the first image is determined by a backward optical flow tracking method; that is, a backward (inverse) optical flow is used to calculate whether a second matching relationship is satisfied between the feature points in the current frame image and the feature points in the previous frame image.
In step S103, each feature point in the determined identical feature point pairs satisfies the first matching relationship and the second matching relationship. That is, in step S103 the same feature points between the first image and the second image are determined: assuming one feature point A is determined in the first image and one feature point A' is determined in the second image, if feature point A and feature point A' satisfy the above first matching relationship and second matching relationship, feature point A and feature point A' are determined to be the same feature point. Similarly, for the other feature points in the first image and the second image, whether the first matching relationship and the second matching relationship are satisfied is determined by the forward optical flow tracking method and the inverse optical flow tracking method described above. Then, for the feature point pairs composed of feature points satisfying the first matching relationship and the second matching relationship, the subsequent steps of determining the distance and the collision time are performed.
Therefore, the interference caused by the noise in the image can be eliminated, the correct correspondence between the characteristic points in the image can be ensured, and the problem of inaccurate result caused by mismatching is avoided.
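Assuming OpenCV is available, the forward-backward matching described above could be sketched as follows; the corner step uses cv2.goodFeaturesToTrack with the Harris detector and both optical flow passes use cv2.calcOpticalFlowPyrLK, and all parameter values and thresholds are illustrative assumptions rather than values prescribed by the invention:

```python
import cv2
import numpy as np

def match_feature_points(gray1, gray2, max_fb_error=1.0):
    """Return feature points of the first image and their matches in the
    second image that survive the forward-backward optical flow check."""
    # corner extraction (Harris response), ideally restricted to the region of the second object
    pts1 = cv2.goodFeaturesToTrack(gray1, maxCorners=200, qualityLevel=0.01,
                                   minDistance=5, useHarrisDetector=True)
    if pts1 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # forward optical flow: first image -> second image (first matching relationship)
    pts2, st_fwd, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, pts1, None)
    # backward optical flow: second image -> first image (second matching relationship)
    pts1_back, st_bwd, _ = cv2.calcOpticalFlowPyrLK(gray2, gray1, pts2, None)
    # keep only points whose backward-tracked position returns close to where they started
    fb_error = np.linalg.norm(pts1 - pts1_back, axis=2).reshape(-1)
    good = (st_fwd.reshape(-1) == 1) & (st_bwd.reshape(-1) == 1) & (fb_error < max_fb_error)
    return pts1.reshape(-1, 2)[good], pts2.reshape(-1, 2)[good]
```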
Further, in one embodiment, the two images used to determine the collision time are distortion-corrected images. That is, for each captured image, the distortion effect can be removed with a distortion-correction algorithm to obtain a corrected image; then, based on the corrected images, the feature point pairs and the distance between the two feature points of each pair are determined, and finally the collision time is determined.
By carrying out distortion correction on the image in advance and then determining the characteristic points, the accuracy of prediction can be improved, and the problem that the prediction result is inaccurate due to shooting angles and the like is solved.
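For example, with OpenCV the correction step might look like the following sketch; camera_matrix and dist_coeffs are assumed to come from a prior camera calibration and are not specified by this description:

```python
import cv2

def undistort_image(image, camera_matrix, dist_coeffs):
    """Remove lens distortion so that the pinhole model of formula 3 holds
    more accurately before feature points are extracted."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```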
In another embodiment, the distortion correction step can be omitted and the feature points can be determined directly based on the original image.
In addition, in one embodiment, for two images used for determining the collision time, after the two images are captured, image regions not containing gradient information in the two images are removed, and then the feature point pairs and the distance between two feature points in the feature point pairs are determined. Specifically, when removing these image regions, the image regions not containing gradient information may be removed with reference to image gradient information.
By removing the image area which does not contain the gradient information in the image in advance and then determining the characteristic points (and the distance between the characteristic points and the like) in the image area, the useless area of the image can be avoided from being calculated, the whole calculation amount is reduced, and the real-time performance of the processing is ensured.
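One possible way to build such a mask from the image gradient is sketched below (OpenCV/NumPy assumed; the threshold value is an arbitrary illustrative choice):

```python
import cv2
import numpy as np

def gradient_mask(gray, threshold=10.0):
    """Return a boolean mask that is False where the image carries no usable
    gradient information; such regions can be skipped when extracting feature
    points, reducing the overall amount of computation."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    magnitude = np.sqrt(gx * gx + gy * gy)
    return magnitude > threshold
```

The resulting mask could, for instance, be converted to an 8-bit image and passed as the mask argument of cv2.goodFeaturesToTrack so that corners are only searched where gradient information exists.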
In addition, in one embodiment, for two images used for determining the collision time, both the processing for removing the distortion effect and the processing for removing the image area not containing the gradient information in the image may be performed, and the order of the two processing may be determined according to actual needs.
In a specific application, as shown in fig. 3, the method for predicting collision between objects according to the present invention may be performed as follows:
firstly, image acquisition is carried out, namely, a series of images of a front vehicle are shot through a camera;
next, the captured image is subjected to preprocessing, for example, removing distortion in the image and removing an image region in the image that does not contain gradient information;
then, detecting the position (area) of the front vehicle in the image, extracting a characteristic point pair from the area of the front vehicle, and completing the detection of the front vehicle;
then, determining the matching relation between the characteristic point pairs in the series of images, thereby determining the same characteristic point pair in the series of images, namely tracking the front vehicle;
after the matched feature points are obtained, the collision time of the current vehicle colliding with the front vehicle can be calculated according to the distance between the feature points in the two images and the shooting time interval of the two images;
and finally, comparing the collision time obtained by calculation with a preset collision time threshold value to realize collision early warning analysis.
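The steps above can be strung together roughly as in the following sketch. It reuses the hypothetical helpers introduced earlier (undistort_image, match_feature_points, time_to_collision_multi), treats a single detected vehicle region, and for brevity pairs every tracked feature point with every other one rather than selecting only pairs whose connecting line is parallel to the XY plane; it is an assumption-laden illustration, not the patented implementation:

```python
import cv2
import numpy as np

def collision_warning(frame1, frame2, dt, camera_matrix, dist_coeffs, ttc_threshold=2.0):
    # 1. preprocessing: distortion correction and grayscale conversion
    g1 = cv2.cvtColor(undistort_image(frame1, camera_matrix, dist_coeffs), cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(undistort_image(frame2, camera_matrix, dist_coeffs), cv2.COLOR_BGR2GRAY)
    # 2. feature point extraction and tracking (forward/backward optical flow check)
    pts1, pts2 = match_feature_points(g1, g2)
    # 3. build feature point pairs and their distances in each image
    pairs = []
    for i in range(len(pts1)):
        for j in range(i + 1, len(pts1)):
            d1 = float(np.linalg.norm(pts1[i] - pts1[j]))
            d2 = float(np.linalg.norm(pts2[i] - pts2[j]))
            pairs.append((d1, d2))
    # 4. collision time (formula 8) and early-warning analysis
    ttc = time_to_collision_multi(pairs, dt)
    if ttc is not None and ttc < ttc_threshold:
        print("collision warning: predicted TTC = %.1f s" % ttc)
    return ttc
```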
Further, according to an embodiment of the present invention, there is provided an inter-object collision prediction apparatus.
As shown in fig. 4, the collision predicting apparatus between objects according to the embodiment of the present invention includes:
a feature point determining module 41, configured to determine the same feature point pair in an area where a second object is located in two images, where the two images are obtained by sequentially shooting the second object at the first object;
a distance determining module 42, configured to determine, for each image, a distance between two feature points of a pair of feature points in the image;
the time determination module 43 determines the collision time when the first object collides with the second object according to the distance between the two feature points of the feature point pair in each image and the capturing time interval of the two images.
In one embodiment, the time determination module 43 is configured to determine the collision time of the first object colliding with the second object by:
TTC = Δt / (d_2^AB / d_1^AB − 1)    (Formula 1)

wherein TTC is the collision time of the first object and the second object, d_1^AB is the distance between the feature point A and the feature point B of the feature point pair in the first image, d_2^AB is the distance between the feature point A and the feature point B of the feature point pair in the second image, and Δt is the shooting time interval of the first image and the second image.
Because the collision time can be accurately obtained through a simple calculation formula (formula 1), the formula does not contain parameters such as speed, distance and the like, but only contains the distance between characteristic points in the image and the image shooting time, the scheme of the invention does not need to carry out complex measurement, does not need to install measuring equipment, reduces the cost and avoids the influence of other measuring equipment on the result accuracy of the collision time; moreover, the collision time can be obtained only through simple image processing and operation, so that the scheme of the invention is very easy to realize and has lower complexity.
The derivation process of equation 1 has been described previously and is not described here.
Further, in one embodiment, the feature point determination module 41 may determine the feature point pairs in the area where the second object is located in each image, including a plurality of feature point pairs; accordingly, the distance determination module 42 needs to determine the distance between two feature points in each feature point pair; based on the plurality of pairs of feature points, the time determination module 43 is configured to determine a collision time for the first object to collide with the second object by:
TTC = (1/N) · Σ_{n=1..N} Δt / (d_2^n(i,j) / d_1^n(i,j) − 1)    (Formula 8)

wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_1^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the first image, and d_2^n(i,j) represents the distance between the feature point i and the feature point j of the n-th pair among the N feature point pairs of the second image.
Therefore, the problem that the determination result is influenced and even the scheme cannot be executed due to the fact that part of feature points cannot be identified due to reflection, shielding and the like can be solved, and the method has better stability.
Further, in one embodiment, in the case where the two successively captured images each contain a plurality of second objects, the feature point determining module 41 is configured to determine feature point pairs in the region where each second object is located in the two images; the distance determining module 42 is configured to determine the distance between the two feature points of each pair; and the time determination module 43 is configured to determine, for each second object, the collision time at which the first object collides with that second object.
In the embodiment, the collision time can be determined for each of the plurality of shot objects, and since the determination of the collision time is based on the result of the image processing, the determination can be completed only by determining the characteristic points for each of the plurality of shot objects in the image, without adding other additional devices, which is easy to implement in the vehicle-mounted device and can improve the driving safety.
Further, in one embodiment, the feature point determining module 41 is configured to determine feature point pairs in the areas where the second object is located in the first image (captured earlier) and the second image (captured later), respectively, by a predetermined corner point determination method; for the first image, the feature point determining module 41 is configured to determine whether a first matching relationship is satisfied between the feature points in the first image and the feature points in the second image by a forward optical flow tracking method; and, for the second image, the feature point determining module 41 is further configured to determine whether a second matching relationship is satisfied between the feature points in the second image and the feature points in the first image by a backward (inverse) optical flow tracking method. For the feature point pairs composed of feature points satisfying the first matching relationship and the second matching relationship, the distance between the feature points is determined by the distance determination module 42, and the collision time between the objects is then determined by the time determination module 43.
The matching relation of the feature points is determined by respectively carrying out an optical flow tracking method on two successively shot images, so that the interference caused by noise in the images can be eliminated, the correct correspondence between the feature points in the images can be ensured, and the problem of inaccurate result caused by mismatching is avoided.
In addition, in an alternative embodiment, the inter-object collision prediction apparatus according to the present invention may further include a correction module (not shown) for performing distortion correction on the two captured images in advance. Thus, the two images processed by the feature point determination module 41 are distortion-corrected images.
By performing distortion correction on the images in advance and only then determining the feature points, the accuracy of the prediction can be improved, and inaccurate prediction results caused by the shooting angle and the like are avoided.
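A minimal example of such a correction step with OpenCV is sketched below; the camera matrix and distortion coefficients are placeholder values that would in practice come from a prior calibration of the camera.

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from camera calibration.
CAMERA_MATRIX = np.array([[800.0,   0.0, 640.0],
                          [  0.0, 800.0, 360.0],
                          [  0.0,   0.0,   1.0]])
DIST_COEFFS = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_pair(img1, img2):
    """Apply the same lens-distortion correction to both captured images, so that
    feature-point distances are measured in an undistorted image plane."""
    return (cv2.undistort(img1, CAMERA_MATRIX, DIST_COEFFS),
            cv2.undistort(img2, CAMERA_MATRIX, DIST_COEFFS))
```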
In addition, in an alternative embodiment, the inter-object collision prediction apparatus according to the present invention may further include a removing module (not shown) for removing in advance the image regions that contain no gradient information in the two captured images. In this way, the image regions that do not contain gradient information have already been removed from the two images processed by the feature point determination module 41.
By removing, in advance, the image regions that contain no gradient information and determining the feature points only within the remaining regions, computation on useless image areas is avoided, the overall amount of calculation is reduced, and the real-time performance of the processing is ensured.
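One way such a removal step could be implemented is to mask out pixels whose gradient magnitude falls below a threshold before running corner detection; the Sobel-based measure and the threshold value below are illustrative assumptions.

```python
import cv2
import numpy as np

def gradient_mask(gray, threshold=10.0):
    """Return a binary mask that keeps only regions containing gradient
    information, so that corner detection is not run on flat image areas."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    mask = (magnitude > threshold).astype(np.uint8) * 255
    # Dilate slightly so that corners on the border of textured regions survive.
    return cv2.dilate(mask, np.ones((5, 5), np.uint8))
```

The resulting mask can be passed to cv2.goodFeaturesToTrack through its mask parameter so that feature points are only sought in regions containing gradient information.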
Furthermore, in an alternative embodiment, the above apparatus according to the present invention may further comprise:
an analysis module (not shown) for comparing the determined time of collision with a predetermined time threshold;
and an alarm module (not shown) for issuing an alarm in a case where the determined collision time is less than the time threshold.
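A minimal sketch of this comparison and alarm logic is given below; the 2.0-second threshold and the warning action are illustrative assumptions, since the disclosure only requires comparing the collision time with a predetermined threshold and alarming when it is lower.

```python
def check_and_alarm(ttc_seconds, time_threshold=2.0):
    """Compare the estimated collision time with a predetermined threshold and
    warn the driver when it falls below that threshold."""
    if ttc_seconds < time_threshold:
        print(f"Collision warning: estimated time to collision {ttc_seconds:.2f} s")
        return True
    return False
```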
In conclusion, the collision time is obtained from the image processing result; it visually represents the collision risk at the current moment and allows the driver to be warned in time to take a corresponding action. Moreover, even if visibility in the current environment is poor (for example, when the vehicle is driven on a road and features such as the tail contour of the vehicle ahead cannot be judged because of rain or fog), or the object ahead is seen at an angle so that its contour cannot be accurately recognized, the collision time is determined from the distance between feature points in the image, so an accurate prediction can still be guaranteed as long as distinct feature points of the object ahead can be obtained (for example, when the object ahead is a vehicle, distinct features such as its tail lamps, license plate and brand logo can be used). The method therefore has high accuracy and reliability, and places no strict requirement on the installation position of the camera (as long as the object ahead can be photographed). In addition, the method and the device do not need to measure parameters such as the distance between the objects or their running speeds with dedicated equipment (such as radar or infrared sensing equipment); they are simple to implement, convenient to use, and have low cost and low power consumption.
The technical scheme of the invention is not limited to road driving scenarios; it can also be used to predict collisions between other rigid objects, for example collisions between ships.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (14)
1. A method for predicting a collision between objects, comprising:
step 1, successively shooting, at the position of a first object, two images of a second object;
step 2, determining the same characteristic point pairs in the areas of the two images where the second object is located;
step 3, determining the distance between two characteristic points of the characteristic point pair in each image;
step 4, determining the collision time at which the first object collides with the second object according to the distance between the two feature points of the feature point pair in each image and the shooting time interval of the two images.
2. The collision prediction method according to claim 1, characterized in that the collision time at which the first object collides with the second object is determined by:
wherein TTC is the collision time at which the first object collides with the second object, d_AB^1 is the distance between feature point A and feature point B of the feature point pair in the first image, d_AB^2 is the distance between feature point A and feature point B of the feature point pair in the second image, and Δt is the shooting time interval between the first image and the second image.
3. The collision prediction method according to claim 1, characterized in that the feature point pairs determined in the area where the second object is located in each image include a plurality of feature point pairs, and based on the plurality of feature point pairs, the collision time at which the first object collides with the second object is determined by:
wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_ij^1(n) represents the distance between feature point i and feature point j of the n-th feature point pair among the N feature point pairs of the first image, and d_ij^2(n) represents the distance between feature point i and feature point j of the n-th feature point pair among the N feature point pairs of the second image.
4. The collision prediction method according to claim 1, wherein, in a case where a plurality of second objects appear in both of the two successively captured images, steps 2 to 4 are performed separately for each second object, and the collision time at which the first object collides with that second object is determined.
5. The collision prediction method according to claim 1, wherein determining the same feature point pairs on the second object in the two successively captured images comprises:
in a first image captured earlier and a second image captured later, respectively determining, by a predetermined corner point determination method, the feature point pairs in the areas where the second object is located in the first image and the second image;
for the first image, determining, by an optical flow tracking method, whether a first matching relationship is satisfied between the feature points in the first image and the feature points in the second image; and, for the second image, determining, by a backward optical flow tracking method, whether a second matching relationship is satisfied between the feature points in the second image and the feature points in the first image;
and, the identical pairs of feature points determined in step 2 are composed of feature points satisfying the first matching relationship and the second matching relationship.
6. The collision prediction method according to claim 1, characterized in that the two images are distortion-corrected or distortion-uncorrected images.
7. The collision prediction method according to claim 1, characterized in that image regions of the two images that do not contain gradient information are removed in advance.
8. The collision prediction method according to claim 1, characterized by further comprising:
comparing the determined time to collision with a predetermined time threshold;
and alarming when the determined collision time is smaller than the time threshold.
9. An inter-object collision prediction apparatus, comprising:
the characteristic point determining module is used for determining the same characteristic point pairs in the areas where the second object is located in the two images, wherein the two images are obtained by shooting the second object at the first object in sequence;
the distance determining module is used for determining the distance between two characteristic points of the characteristic point pair in each image;
and the time determining module is used for determining the collision time of the first object and the second object according to the distance between the two characteristic points of the characteristic point pair in each image and the shooting time interval of the two images.
10. The collision prediction apparatus according to claim 9, wherein the time determination module is configured to determine the collision time of the first object colliding with the second object by:
wherein TTC is the collision time at which the first object collides with the second object, d_AB^1 is the distance between feature point A and feature point B of the feature point pair in the first image, d_AB^2 is the distance between feature point A and feature point B of the feature point pair in the second image, and Δt is the shooting time interval between the first image and the second image.
11. The collision prediction apparatus according to claim 9, wherein the feature point pairs determined by the feature point determination module in the area where the second object is located in each image include a plurality of feature point pairs; based on the plurality of feature point pairs, the time determination module is configured to determine the collision time at which the first object collides with the second object by:
wherein N is the number of feature point pairs in each image, n is the serial number of a feature point pair in each image, d_ij^1(n) represents the distance between feature point i and feature point j of the n-th feature point pair among the N feature point pairs of the first image, and d_ij^2(n) represents the distance between feature point i and feature point j of the n-th feature point pair among the N feature point pairs of the second image.
12. The collision prediction apparatus according to claim 9, wherein, when a plurality of second objects appear in both of the two successively captured images, the time determination module is configured to determine, for each second object, the collision time at which the first object collides with that second object.
13. The collision prediction apparatus according to claim 9, wherein the feature point determination module is configured to determine, in a first image captured earlier and a second image captured later, the feature point pairs in the area where the second object is located in the first image and the second image respectively, by a predetermined corner point determination method; for the first image, the feature point determination module is configured to determine, by an optical flow tracking method, whether a first matching relationship is satisfied between the feature points in the first image and the feature points in the second image; and, for the second image, the feature point determination module is further configured to determine, by a backward optical flow tracking method, whether a second matching relationship is satisfied between the feature points in the second image and the feature points in the first image;
and the same pairs of feature points determined by the feature point determination module are composed of feature points satisfying the first matching relationship and the second matching relationship.
14. The collision predicting device according to claim 9, further comprising:
the correction module is used for carrying out distortion correction on the two shot images in advance; and/or
the removing module is used for removing, in advance, image areas that do not contain gradient information in the two captured images;
and, the collision predicting apparatus further includes:
an analysis module for comparing the determined time of collision with a predetermined time threshold;
and the alarm module is used for giving an alarm under the condition that the determined collision time is less than the time threshold.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610990048.0A CN106570487A (en) | 2016-11-10 | 2016-11-10 | Method and device for predicting collision between objects |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610990048.0A CN106570487A (en) | 2016-11-10 | 2016-11-10 | Method and device for predicting collision between objects |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106570487A true CN106570487A (en) | 2017-04-19 |
Family
ID=58541227
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610990048.0A Pending CN106570487A (en) | 2016-11-10 | 2016-11-10 | Method and device for predicting collision between objects |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106570487A (en) |
- 2016-11-10 CN CN201610990048.0A patent/CN106570487A/en active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1916562A (en) * | 2005-08-18 | 2007-02-21 | 中国科学院半导体研究所 | Early-warning method and apparatus for anticollision of cars |
| CN102642510A (en) * | 2011-02-17 | 2012-08-22 | 汽车零部件研究及发展中心有限公司 | Image-based vehicle anti-collision early warning method |
| CN102303563A (en) * | 2011-06-16 | 2012-01-04 | 广东铁将军防盗设备有限公司 | Front vehicle collision warning system and method |
| CN103871079A (en) * | 2014-03-18 | 2014-06-18 | 南京金智视讯技术有限公司 | Vehicle tracking method based on machine learning and optical flow |
| CN105718888A (en) * | 2016-01-22 | 2016-06-29 | 北京中科慧眼科技有限公司 | Obstacle prewarning method and obstacle prewarning device |
| CN105844222A (en) * | 2016-03-18 | 2016-08-10 | 上海欧菲智能车联科技有限公司 | System and method for front vehicle collision early warning based on visual sense |
Non-Patent Citations (1)
| Title |
|---|
| 吴伟仁 (Wu Weiren) et al.: 《深空探测器自主导航原理与技术》 (Principles and Technologies of Autonomous Navigation for Deep-Space Probes), 31 May 2011, 中国宇航出版社 (China Astronautic Publishing House) * |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108674313A (en) * | 2018-06-05 | 2018-10-19 | 浙江零跑科技有限公司 | A kind of blind area early warning system and method based on vehicle-mounted back vision wide angle camera |
| CN111457431A (en) * | 2019-01-18 | 2020-07-28 | 宁波方太厨具有限公司 | Anti-collision range hood and control method thereof |
| CN111457431B (en) * | 2019-01-18 | 2023-02-28 | 宁波方太厨具有限公司 | Anti-collision range hood and control method thereof |
| CN110309755A (en) * | 2019-06-25 | 2019-10-08 | 广州文远知行科技有限公司 | Time correction method, device, equipment and storage medium for traffic signal lamp |
| CN110309755B (en) * | 2019-06-25 | 2021-11-02 | 广州文远知行科技有限公司 | Time correction method, device, equipment and storage medium for traffic signal lamp |
| CN112700474A (en) * | 2020-12-31 | 2021-04-23 | 广东美的白色家电技术创新中心有限公司 | Collision detection method, device and computer-readable storage medium |
| CN113188521A (en) * | 2021-05-11 | 2021-07-30 | 江晓东 | Monocular vision-based vehicle collision early warning method |
| CN113705501A (en) * | 2021-09-02 | 2021-11-26 | 浙江索思科技有限公司 | Offshore target detection method and system based on image recognition technology |
| CN113705501B (en) * | 2021-09-02 | 2024-04-26 | 浙江索思科技有限公司 | Marine target detection method and system based on image recognition technology |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11348266B2 (en) | Estimating distance to an object using a sequence of images recorded by a monocular camera | |
| US9323992B2 (en) | Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications | |
| CN110443225B (en) | Virtual and real lane line identification method and device based on feature pixel statistics | |
| CN106570487A (en) | Method and device for predicting collision between objects | |
| CN112580456B (en) | System and method for curb detection and pedestrian hazard assessment | |
| CN110065494B (en) | Vehicle anti-collision method based on wheel detection | |
| EP1671216B1 (en) | Moving object detection using low illumination depth capable computer vision | |
| CN106289159B (en) | Vehicle distance measurement method and device based on distance measurement compensation | |
| EP3403216B1 (en) | Systems and methods for augmenting upright object detection | |
| US10832428B2 (en) | Method and apparatus for estimating a range of a moving object | |
| US20190180121A1 (en) | Detection of Objects from Images of a Camera | |
| US11087150B2 (en) | Detection and validation of objects from sequential images of a camera by using homographies | |
| CN107458308B (en) | Driving assisting method and system | |
| CN107229906A (en) | A kind of automobile overtaking's method for early warning based on units of variance model algorithm | |
| CN113435237A (en) | Object state recognition device, recognition method, recognition program, and control device | |
| JP2018092596A (en) | Information processing device, imaging device, apparatus control system, mobile body, information processing method, and program | |
| US10970870B2 (en) | Object detection apparatus | |
| CN110780287A (en) | Distance measurement method and distance measurement system based on monocular camera | |
| CN111144415A (en) | Method for detecting micro pedestrian target | |
| CN110809228B (en) | Speed measurement method, device, equipment and computer readable storage medium | |
| CN108399357A (en) | A kind of Face detection method and device | |
| CN111192290A (en) | Blocking processing method for pedestrian image detection | |
| JP2011090490A (en) | Obstacle recognition device | |
| Li et al. | Lane departure estimation by side fisheye camera | |
| Wei et al. | Research on preceding vehicle distance measurement with monocular vision based on lane plane geometric model |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Address after: 201203 Shanghai Pudong New Area free trade trial area, 1 spring 3, 400 Fang Chun road. Applicant after: Shanghai Sen Sen vehicle sensor technology Co., Ltd. Address before: 201210 301B room 560, midsummer Road, Pudong New Area Free Trade Zone, Shanghai. Applicant before: New software technology (Shanghai) Co., Ltd. |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20170419 |