CN114782916A - ADAS rear vehicle identification system carried by rearview mirror and based on multi-sensor fusion - Google Patents


Info

Publication number
CN114782916A
CN114782916A (application CN202210374465.8A)
Authority
CN
China
Prior art keywords
image, determining, module, limb, rear vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210374465.8A
Other languages
Chinese (zh)
Other versions
CN114782916B (en)
Inventor
苏泳
谭小球
刘柏林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ULTRONIX PRODUCTS Ltd
Original Assignee
ULTRONIX PRODUCTS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ULTRONIX PRODUCTS Ltd
Priority to CN202210374465.8A
Publication of CN114782916A
Application granted
Publication of CN114782916B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an ADAS rear vehicle identification system carried by a rearview mirror and based on multi-sensor fusion, which comprises: a shooting module, arranged on a rearview mirror of the target vehicle, for shooting the scene behind the target vehicle to obtain a video data stream; a de-framing processing module for performing de-framing processing on the video data stream to obtain a plurality of frames of video images; a determining module for performing image segmentation on the video images and determining the vehicle area of each frame of video image; and a display module for determining the driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data. The environment behind the vehicle is comprehensively sensed, the driving data of the rear vehicle is acquired in time, and the driving state of the rear vehicle is discovered, so that corresponding countermeasures can be taken according to that state and potential safety hazards eliminated.

Description

ADAS rear vehicle identification system carried by rearview mirror and based on multi-sensor fusion
Technical Field
The invention relates to the technical field of environment perception, in particular to an ADAS rear vehicle identification system carried by a rearview mirror and based on multi-sensor fusion.
Background
During driving, the ADAS system uses various sensors installed on the automobile to sense the surrounding environment at all times, collect data, and identify, detect and track static and dynamic objects, and it performs systematic computation and analysis in combination with the map data of a navigator, so that the driver can perceive possible dangers in advance, effectively improving the comfort and safety of driving. In the prior art, perception is mainly aimed at the environment in front of the vehicle and on its two sides, while the sensing data for the environment behind the vehicle is sparse and incomplete; meanwhile, the driving data of the rear vehicle is not effectively sensed during driving, the driving state of the rear vehicle cannot be discovered in time, and corresponding countermeasures cannot be taken accordingly, which poses certain potential safety hazards to the vehicle and the driver.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art described above. Therefore, the invention aims to provide an ADAS rear vehicle identification system carried by a rearview mirror and based on multi-sensor fusion, which can comprehensively sense the environment behind the vehicle, acquire the driving data of the rear vehicle in time, discover the driving state of the rear vehicle, facilitate taking corresponding countermeasures according to that state, and eliminate potential safety hazards.
In order to achieve the above object, an embodiment of the present invention provides an ADAS rear vehicle identification system based on multi-sensor fusion and carried by a rear view mirror, including: the shooting module is arranged on a rearview mirror of the target vehicle and used for shooting a scene behind the target vehicle to obtain a video data stream; the de-framing processing module is used for performing de-framing processing on the video data stream to acquire a plurality of frames of video images; the determining module is used for carrying out image segmentation on the video images and determining the vehicle area of each frame of video image; and the display module is used for determining the driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
According to some embodiments of the invention, the shooting module is an ADAS smart camera.
According to some embodiments of the invention, further comprising:
the first identification module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line identification according to the binary image to obtain first identification data;
the second identification module is used for carrying out graying processing on the video image to obtain a grayscale image, and carrying out obstacle identification according to the grayscale image to obtain second identification data;
the display module is further configured to display the first identification data and the second identification data.
According to some embodiments of the invention, the driving data comprises high beam and low beam light states, vehicle speed, steering state, braking state, and historical driving trajectory.
According to some embodiments of the invention, further comprising:
the prediction module is used for predicting the driving path of a rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
According to some embodiments of the invention, further comprising:
a driving behavior determination module to:
determining a driving image in a local image corresponding to the vehicle area;
inputting the driving image into a human body recognition model trained in advance, and determining head key points and limb key points;
determining a first head image according to the head key points;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on preset rules that infrared information of different parts of a human body is different;
matching the first head image with the second head image, and after the first head image is successfully matched with the second head image, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing so as to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with a real face on the first fused image, and determining overlapping information;
calculating SIFT features of pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of cluster sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of cluster sets, and determining the face features according to the corresponding relation;
matching the first limb image with the second limb image, and after the first limb image and the second limb image are successfully matched, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing so as to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion track of the limb based on the plurality of limb characteristics;
determining driving behaviors according to the human face features and the motion tracks of the limbs;
and the alarm module is used for matching the driving behaviors with the abnormal driving behaviors in the preset abnormal driving behavior database and sending out an alarm prompt when the matching is determined to be successful.
According to some embodiments of the invention, further comprising:
a positioning module to:
acquiring Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module to:
calling a planning map according to the positioning information, and determining a preset road scene image;
carrying out image segmentation on the video image, and determining a scene area of each frame of video image;
performing space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image which are in the same time and space, and mutually correcting the preset road scene image and the road scene image according to a matching result.
According to some embodiments of the invention, further comprising: and the preprocessing module is used for preprocessing the video image before the video image is subjected to image segmentation by the determining module, and the preprocessing comprises noise point removing processing and white balance adjusting processing.
According to some embodiments of the invention, further comprising:
the distance sensor is used for sensing distance information between the target vehicle and a rear vehicle;
and the warning module is used for sending warning information when the distance information is determined to be smaller than the preset distance information.
According to some embodiments of the invention, the facial features include the emotional state of the driver and the degree of opening and closing of the eyes.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of a rearview mirror mounted multi-sensor fusion based ADAS rear vehicle identification system according to a first embodiment of the present invention;
FIG. 2 is a block diagram of a rearview mirror mounted multi-sensor fusion based ADAS rear vehicle identification system according to a second embodiment of the present invention;
fig. 3 is a block diagram of a rear view mirror mounted ADAS rear vehicle identification system based on multi-sensor fusion according to a third embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in fig. 1, an embodiment of the present invention provides a rear view mirror-mounted ADAS rear vehicle identification system based on multi-sensor fusion, including: the shooting module is arranged on a rearview mirror of the target vehicle and used for shooting a scene behind the target vehicle to obtain a video data stream; the de-framing processing module is used for performing de-framing processing on the video data stream to acquire a plurality of frames of video images; the determining module is used for carrying out image segmentation on the video images and determining the vehicle area of each frame of video image; and the display module is used for determining the driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
The working principle of the technical scheme is as follows: the shooting module is arranged on a rearview mirror of the target vehicle and used for shooting a scene behind the target vehicle to obtain a video data stream; the de-framing processing module is used for performing de-framing processing on the video data stream to acquire a plurality of frames of video images; the determining module is used for carrying out image segmentation on the video images and determining the vehicle area of each frame of video image; and the display module is used for determining the driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
The beneficial effects of the above technical scheme are as follows: the environment behind the vehicle is comprehensively sensed, the driving data of the rear vehicle is acquired in time, and the driving state of the rear vehicle is discovered, so that corresponding countermeasures can be taken according to that state and potential safety hazards eliminated.
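By way of illustration only, the following is a minimal sketch of the de-framing step, assuming an OpenCV-readable video source; the helper name, the frame stride and the capture API are assumptions for illustration and are not part of the claimed system.

```python
import cv2

def deframe(video_source, frame_stride=1):
    """Split the video data stream into individual frames (hypothetical helper)."""
    cap = cv2.VideoCapture(video_source)
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()           # ok becomes False once the stream ends
        if not ok:
            break
        if index % frame_stride == 0:    # optionally subsample the stream
            frames.append(frame)         # each frame is a BGR image (numpy array)
        index += 1
    cap.release()
    return frames

# Example: frames = deframe("rear_camera.mp4") yields one video image per frame.
```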
According to some embodiments of the invention, the shooting module is an ADAS smart camera.
According to some embodiments of the invention, further comprising:
the first identification module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line identification according to the binary image to obtain first identification data;
the second identification module is used for carrying out graying processing on the video image to obtain a grayscale image, and carrying out obstacle identification according to the grayscale image to obtain second identification data;
the display module is further configured to display the first identification data and the second identification data.
The working principle of the technical scheme is as follows: the first identification module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line identification according to the binary image to obtain first identification data; the second identification module is used for carrying out graying processing on the video image to obtain a grayscale image, and carrying out obstacle identification according to the grayscale image to obtain second identification data; the display module is further configured to display the first identification data and the second identification data.
The beneficial effects of the above technical scheme are as follows: information such as whether the rear vehicle is driving within the lane lines or is encountering an obstacle can be acquired in time, the state of the rear vehicle is determined accurately and comprehensively, and comprehensive perception of the environment behind the target vehicle is achieved.
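For illustration, a minimal sketch of the two recognition branches follows, assuming OpenCV; Otsu thresholding is an assumed choice of binarization, and the downstream lane line and obstacle detectors are not shown.

```python
import cv2

def lane_line_input(frame_bgr):
    """Binarize a video image as input for lane line identification."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method selects the threshold automatically (assumed, not specified in the text).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def obstacle_input(frame_bgr):
    """Convert a video image to grayscale as input for obstacle identification."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```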
According to some embodiments of the invention, the driving data comprises high beam and low beam light states, vehicle speed, steering state, braking state, and historical driving trajectory.
According to some embodiments of the invention, further comprising:
the prediction module is used for predicting the driving path of a rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
The working principle of the technical scheme is as follows: the prediction module is used for predicting the driving path of the rear vehicle according to the driving data; and the display module is used for displaying the running path.
The beneficial effects of the above technical scheme are as follows: the driving path of the rear vehicle is predicted from its driving data, so that the driver of the target vehicle can clearly perceive that path and, when the path is determined to be abnormal, adjust the driving path of the target vehicle at any time, reducing the occurrence of traffic accidents.
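As an illustrative sketch only, one simple way to predict a driving path from the historical driving trajectory is constant-velocity extrapolation; the actual prediction model used by the prediction module is not specified in the text, so the function below is an assumption.

```python
import numpy as np

def predict_path(history_xy, steps=10, dt=0.1):
    """Extrapolate future (x, y) positions from the last observed displacement.

    history_xy: array of shape (N, 2) with the rear vehicle's past positions, N >= 2.
    """
    history_xy = np.asarray(history_xy, dtype=float)
    velocity = (history_xy[-1] - history_xy[-2]) / dt          # most recent velocity estimate
    return np.stack([history_xy[-1] + velocity * dt * k for k in range(1, steps + 1)])

# Example: predict_path([[0.0, 0.0], [0.5, 0.1]]) returns ten extrapolated positions.
```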
As shown in fig. 2, according to some embodiments of the invention, further comprising:
a driving behavior determination module to:
determining a driving image in a local image corresponding to the vehicle area;
inputting the driving image into a human body recognition model trained in advance, and determining head key points and limb key points;
determining a first head image according to the head key points;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on preset rules that the infrared information of different parts of the human body is different;
matching the first head image with the second head image, and after the first head image and the second head image are successfully matched, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with a real face on the first fused image, and determining overlapping information;
calculating SIFT features of pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of cluster sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of cluster sets, and determining the face features according to the corresponding relation;
matching the first limb image with the second limb image, and after the first limb image and the second limb image are successfully matched, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing so as to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion track of the limb based on the plurality of limb characteristics;
determining driving behaviors according to the human face features and the motion tracks of the limbs;
and the alarm module is used for matching the driving behaviors with the abnormal driving behaviors in a preset abnormal driving behavior database and sending out an alarm prompt when the matching is determined to be successful.
The working principle of the technical scheme is as follows: a driving behavior determination module to: determining a driving image in a local image corresponding to the vehicle area; inputting the driving image into a human body recognition model trained in advance, and determining head key points and limb key points; determining a first head image according to the head key points; determining a first limb image according to the limb key points; acquiring an infrared image of a driver on a rear vehicle; dividing the infrared image into a second head image and a second limb image based on preset rules that infrared information of different parts of a human body is different; matching the first head image with the second head image, and after the first head image and the second head image are successfully matched, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing to obtain a first fusion image; normalizing the first fusion image and determining an estimated face; overlapping the estimated face with a real face on the first fused image, and determining overlapping information; calculating SIFT features of pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of cluster sets; determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of cluster sets, and determining the face characteristics according to the corresponding relation; matching the first limb image with the second limb image, and after the first limb image and the second limb image are successfully matched, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing so as to obtain a second fusion image; inputting the second fusion image into a limb feature extraction model to determine limb features; constructing a motion track of the limb based on the plurality of limb characteristics; determining driving behaviors according to the human face features and the motion tracks of the limbs; and the alarm module is used for matching the driving behaviors with the abnormal driving behaviors in a preset abnormal driving behavior database and sending out an alarm prompt when the matching is determined to be successful. Several cluster sets are for different parts of the face.
The beneficial effects of the above technical scheme are as follows: the driving image is a visible-light image. Image segmentation of the driving image is performed by the pre-trained human body recognition model, and the first head image and the first limb image are determined. An infrared image of the driver of the rear vehicle is acquired by an infrared image acquisition device, and the infrared image is divided into a second head image and a second limb image based on the preset rule that the infrared information of different parts of the human body differs. The first head image and the second head image are then fused, and the face features are accurately extracted from the first fused image. This avoids the loss and inaccuracy of feature information that would result from extracting features from the first head image and the second head image separately; by processing the first fused image, the two images supplement each other's information. When the face features are determined, the overlapping information and the plurality of cluster sets are determined accurately, the correspondence between the estimated face and the real face is determined, the operating parameters of the algorithm are determined, the face in the first fused image is located accurately, and the face features are obtained accurately. The second fused image is input into the limb feature extraction model to determine the limb features, and the motion track of the limbs is constructed from the plurality of limb features. The driving behavior is then determined accurately from the face features and the motion track of the limbs, and it is judged whether this is abnormal (dangerous) driving behavior; when abnormal behavior of the driver of the rear vehicle is determined in time, an alarm prompt is issued to remind the driver of the target vehicle to pay attention, so that corresponding measures can be taken and potential safety hazards eliminated. The plurality of cluster sets correspond to different parts of the face.
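For illustration, a minimal sketch of the pixel-weighted fusion and the SIFT-based clustering follows, assuming OpenCV; the fusion weight, the number of clusters, and the use of k-means are assumptions, since the text does not fix these parameters or the clustering algorithm.

```python
import cv2
import numpy as np

def fuse_by_pixel_weighting(visible, infrared, w_visible=0.6):
    """Pixel-weighted fusion of matched visible-light and infrared images."""
    infrared = cv2.resize(infrared, (visible.shape[1], visible.shape[0]))
    if infrared.ndim == 2 and visible.ndim == 3:
        infrared = cv2.cvtColor(infrared, cv2.COLOR_GRAY2BGR)   # match channel count
    return cv2.addWeighted(visible, w_visible, infrared, 1.0 - w_visible, 0)

def cluster_sift_points(fused, n_clusters=5):
    """Compute SIFT features on the fused image and group keypoints into cluster sets."""
    if fused.ndim == 3:
        fused = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(fused, None)
    if descriptors is None or len(keypoints) < n_clusters:
        return []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(descriptors.astype(np.float32), n_clusters, None,
                              criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
    clusters = [[] for _ in range(n_clusters)]
    for kp, label in zip(keypoints, labels.ravel()):
        clusters[label].append(kp.pt)       # pixel coordinates grouped per cluster set
    return clusters
```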
As shown in fig. 3, according to some embodiments of the invention, further comprising:
a positioning module to:
obtaining Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of the target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module to:
calling a planning map according to the positioning information, and determining a preset road scene image;
carrying out image segmentation on the video image, and determining a scene area of each frame of video image;
performing space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image in the same time and space, and mutually correcting the preset road scene image and the road scene image according to a matching result.
The working principle of the technical scheme is as follows: a positioning module to: acquiring Beidou positioning information and GPS positioning information of the target vehicle; determining positioning information of the target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information; a matching module to: calling a planning map according to the positioning information, and determining a preset road scene image; carrying out image segmentation on the video image, and determining a scene area of each frame of video image; performing space-time alignment on the preset road scene image and the road scene image corresponding to the scene area; and matching the preset road scene image and the road scene image in the same time and space, and mutually correcting the preset road scene image and the road scene image according to a matching result. Spatiotemporal alignment means alignment in time and space.
The beneficial effects of the above technical scheme are as follows: the preset road scene image can be determined accurately based on the positioning information of the target vehicle, accurate perception of the actual road scene is achieved based on the preset road scene image, and at the same time the preset road scene image can be corrected according to the road scene image, ensuring the precision of the planned map and achieving accurate perception of the scene behind the target vehicle.
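By way of illustration, the sketch below fuses paired BeiDou and GPS position fixes with a simple Kalman filter under a random-walk (static-position) state model; the noise variances and the state model are assumptions, as the text only names the Kalman filtering algorithm.

```python
import numpy as np

def kalman_fuse(beidou_xy, gps_xy, var_beidou=4.0, var_gps=9.0, var_process=1.0):
    """Sequentially fuse paired (x, y) fixes from the BeiDou and GPS receivers."""
    I = np.eye(2)
    x = np.asarray(beidou_xy[0], dtype=float)   # initial state: first BeiDou fix
    P = I * var_beidou                          # initial state covariance
    fused = []
    for z_bd, z_gps in zip(beidou_xy, gps_xy):
        P = P + I * var_process                 # predict step (random-walk model)
        for z, r in ((np.asarray(z_bd, dtype=float), var_beidou),
                     (np.asarray(z_gps, dtype=float), var_gps)):
            K = P @ np.linalg.inv(P + I * r)    # Kalman gain
            x = x + K @ (z - x)                 # measurement update
            P = (I - K) @ P
        fused.append(x.copy())
    return fused

# Example: kalman_fuse([(0.0, 0.0)], [(0.4, -0.2)]) returns one fused position estimate.
```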
In one embodiment, before overlapping the estimated face with the real face on the first fused image, the method further comprises:
determining a center region of the estimated face;
dividing the real face on the first fused image into a plurality of standard areas;
respectively calculating the matching degrees between the central region and the plurality of standard areas according to a formula that is given only as an image in the original document and is not reproduced here, in which P_i is the matching degree between the central region and the i-th standard area, A is the length of the central region, B is the width of the central region, Q_(s,t) is the pixel value of the pixel point in the s-th row and t-th column of the central region, the remaining symbols (shown only as images) denote the pixel mean of the pixels of the central region, the pixel value of the pixel point in the s-th row and t-th column of the i-th standard area, and the pixel mean of the pixel points of the i-th standard area, i = 1, 2, 3, ..., M, and M is the number of standard areas;
screening out a standard area corresponding to the maximum matching degree as a target standard area;
and determining an overlapping mode according to the central region and the target standard region, and overlapping the estimated face and the real face on the first fused image according to the overlapping mode.
The working principle and beneficial effects of the technical scheme are as follows: when the first fused image is recognized, the estimated face is determined, the central region of the estimated face is then determined, the real face on the first fused image is divided into a plurality of standard areas, and the matching degrees between the central region and the plurality of standard areas are calculated respectively; the standard area corresponding to the maximum matching degree is screened out as the target standard area; an overlapping mode is determined from the central region and the target standard area, and the estimated face is overlapped with the real face on the first fused image according to that mode. Based on the estimated face, the real face on the first fused image can be recognized accurately. The central region has the same size as each standard area. The overlapping mode is to overlap the central region with the target standard area, which achieves accurate overlapping of the estimated face with the real face on the first fused image and improves the image recognition rate. Based on the formula, the matching degrees between the central region and the plurality of standard areas can be calculated conveniently and accurately, and the target standard area determined accurately.
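Since the matching-degree formula appears only as an image in the source text, the sketch below uses a zero-mean normalized cross-correlation over the A x B region as a stand-in; the exact expression in the patent may differ, so this is an assumption for illustration only.

```python
import numpy as np

def matching_degree(central, standard):
    """Correlation-style similarity between the central region and one standard area (same size)."""
    c = central.astype(float) - central.mean()
    s = standard.astype(float) - standard.mean()
    denom = np.sqrt((c ** 2).sum() * (s ** 2).sum())
    return float((c * s).sum() / denom) if denom > 0 else 0.0

def target_standard_area(central, standard_areas):
    """Screen out the standard area with the maximum matching degree."""
    scores = [matching_degree(central, area) for area in standard_areas]
    return int(np.argmax(scores)), scores
```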
According to some embodiments of the invention, further comprising: and the preprocessing module is used for preprocessing the video image before the determining module performs image segmentation on the video image, and the preprocessing comprises noise point removing processing and white balance adjusting processing.
The beneficial effects of the above technical scheme are as follows: the signal-to-noise ratio and clarity of the video image are improved, which in turn improves the accuracy of image segmentation.
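For illustration, a minimal sketch of the preprocessing module follows, assuming OpenCV non-local-means denoising and a gray-world white balance; the text names the two operations but not the algorithms or parameters, so these choices are assumptions.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Remove noise and adjust white balance before image segmentation."""
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 5, 5, 7, 21)
    # Gray-world white balance: scale each channel so its mean matches the global mean.
    channel_means = denoised.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    balanced = np.clip(denoised.astype(float) * gains, 0, 255).astype(np.uint8)
    return balanced
```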
According to some embodiments of the invention, further comprising:
the distance sensor is used for sensing distance information between the target vehicle and the rear vehicle;
and the warning module is used for sending warning information when the distance information is determined to be smaller than the preset distance information.
The beneficial effects of the above technical scheme are as follows: the driver of the target vehicle can determine the distance to the rear vehicle in time, and the information behind the vehicle is sensed accurately.
According to some embodiments of the invention, the facial features include the emotional state of the driver and the degree of opening and closing of the eyes.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A rearview mirror-mounted ADAS rear vehicle identification system based on multi-sensor fusion, characterized by comprising: a shooting module, arranged on a rearview mirror of the target vehicle, for shooting the scene behind the target vehicle to obtain a video data stream; a de-framing processing module for performing de-framing processing on the video data stream to obtain a plurality of frames of video images; a determining module for performing image segmentation on the video images and determining the vehicle area of each frame of video image; and a display module for determining the driving data of the rear vehicle according to the pixel information of the vehicle area and displaying the driving data.
2. The rearview mirror-mounted ADAS rear vehicle identification system based on multi-sensor fusion as claimed in claim 1, wherein said shooting module is an ADAS smart camera.
3. A rearview mirror-mounted, multi-sensor fusion based ADAS rear vehicle identification system as claimed in claim 1 further comprising:
the first identification module is used for carrying out binarization processing on the video image to obtain a binary image, and carrying out lane line identification according to the binary image to obtain first identification data;
the second identification module is used for carrying out graying processing on the video image to obtain a grayscale image, and carrying out obstacle identification according to the grayscale image to obtain second identification data;
the display module is further configured to display the first identification data and the second identification data.
4. A rearview mirror-mounted ADAS rear vehicle identification system based on multi-sensor fusion as claimed in claim 1 wherein said driving data includes high beam and low beam light status, vehicle speed, steering, braking status, historical driving trajectory.
5. A rearview mirror-mounted, multi-sensor fusion based ADAS rear vehicle identification system as claimed in claim 1 further comprising:
the prediction module is used for predicting the driving path of a rear vehicle according to the driving data;
and the display module is used for displaying the driving path.
6. A rearview mirror-mounted, multi-sensor fusion based ADAS rear vehicle identification system as claimed in claim 1 further comprising:
a driving behavior determination module to:
determining a driving image in a local image corresponding to the vehicle area;
inputting the driving image into a human body recognition model trained in advance, and determining head key points and limb key points;
determining a first head image according to the head key points;
determining a first limb image according to the limb key points;
acquiring an infrared image of a driver on a rear vehicle;
dividing the infrared image into a second head image and a second limb image based on preset rules that the infrared information of different parts of the human body is different;
matching the first head image with the second head image, and after the first head image is successfully matched with the second head image, carrying out pixel weighting on the first head image and the second head image to realize image fusion processing so as to obtain a first fusion image;
normalizing the first fusion image and determining an estimated face;
overlapping the estimated face with a real face on the first fused image and determining overlapping information;
calculating SIFT features of pixel points on the estimated face, and clustering the pixel points according to the SIFT features to obtain a plurality of cluster sets;
determining the corresponding relation between the estimated face and the real face according to the overlapping information and the plurality of cluster sets, and determining the face characteristics according to the corresponding relation;
matching the first limb image with the second limb image, and after the first limb image and the second limb image are successfully matched, carrying out pixel weighting on the first limb image and the second limb image to realize image fusion processing so as to obtain a second fusion image;
inputting the second fusion image into a limb feature extraction model to determine limb features;
constructing a motion track of the limb based on the plurality of limb characteristics;
determining driving behaviors according to the human face features and the motion tracks of the limbs;
and the alarm module is used for matching the driving behaviors with the abnormal driving behaviors in a preset abnormal driving behavior database and sending out an alarm prompt when the matching is determined to be successful.
7. The rearview mirror-mounted multiple sensor fusion-based ADAS rear vehicle identification system as claimed in claim 1, further comprising:
a positioning module to:
acquiring Beidou positioning information and GPS positioning information of the target vehicle;
determining positioning information of a target vehicle based on a Kalman filtering algorithm according to the Beidou positioning information and the GPS positioning information;
a matching module to:
calling a planning map according to the positioning information, and determining a preset road scene image;
performing image segmentation on the video image, and determining a scene area of each frame of video image;
performing space-time alignment on the preset road scene image and the road scene image corresponding to the scene area;
and matching the preset road scene image and the road scene image which are in the same time and space, and mutually correcting the preset road scene image and the road scene image according to a matching result.
8. The rearview mirror-mounted multiple sensor fusion-based ADAS rear vehicle identification system as claimed in claim 1, further comprising: and the preprocessing module is used for preprocessing the video image before the video image is subjected to image segmentation by the determining module, and the preprocessing comprises noise point removing processing and white balance adjusting processing.
9. The rearview mirror-mounted multiple sensor fusion-based ADAS rear vehicle identification system as claimed in claim 1, further comprising:
the distance sensor is used for sensing distance information between the target vehicle and a rear vehicle;
and the warning module is used for sending warning information when the distance information is determined to be smaller than the preset distance information.
10. The rearview mirror-mounted ADAS rear vehicle identification system based on multi-sensor fusion as claimed in claim 6, wherein said facial features include emotional state of the driver, degree of eye openness.
CN202210374465.8A 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror Active CN114782916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210374465.8A CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210374465.8A CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Publications (2)

Publication Number Publication Date
CN114782916A (en) 2022-07-22
CN114782916B (en) 2024-03-29

Family

ID=82429151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210374465.8A Active CN114782916B (en) 2022-04-11 2022-04-11 ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror

Country Status (1)

Country Link
CN (1) CN114782916B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130088600A1 (en) * 2011-10-05 2013-04-11 Xerox Corporation Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems
US20150002745A1 (en) * 2013-07-01 2015-01-01 Xerox Corporation System and method for enhancing images and video frames
CN105676253A (en) * 2016-01-15 2016-06-15 武汉光庭科技有限公司 Longitudinal positioning system and method based on city road marking map in automatic driving
CN105912984A (en) * 2016-03-31 2016-08-31 大连楼兰科技股份有限公司 A method for assisted driving by fusing polymorphic information
CN106599832A (en) * 2016-12-09 2017-04-26 重庆邮电大学 Method for detecting and recognizing various types of obstacles based on convolution neural network
CN109318799A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Automobile, automobile ADAS system and control method thereof
CN109325388A (en) * 2017-07-31 2019-02-12 比亚迪股份有限公司 Lane line recognition method, system and vehicle
CN108596064A (en) * 2018-04-13 2018-09-28 长安大学 Driver based on Multi-information acquisition bows operating handset behavioral value method
CN111178161A (en) * 2019-12-12 2020-05-19 重庆邮电大学 A vehicle tracking method and system based on FCOS
CN112130153A (en) * 2020-09-23 2020-12-25 的卢技术有限公司 Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera
CN113205686A (en) * 2021-06-04 2021-08-03 华中科技大学 A 360-degree panoramic wireless safety assistance system for the rear of a motor vehicle

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012795A (en) * 2023-01-05 2023-04-25 智道网联科技(北京)有限公司 Lane line recognition method, device and electronic equipment based on roadside equipment

Also Published As

Publication number Publication date
CN114782916B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
US11315026B2 (en) Systems and methods for classifying driver behavior
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
KR102098140B1 (en) Method for monotoring blind spot of vehicle and blind spot monitor using the same
CN105324275B (en) Movement pattern device and movement pattern method
EP2544449B1 (en) Vehicle perimeter monitoring device
US20030083790A1 (en) Vehicle information providing apparatus
EP2949534A2 (en) Driver assistance apparatus capable of diagnosing vehicle parts and vehicle including the same
EP1583035A2 (en) Eye tracking method based on correlation and detected eye movement
CN112700470A (en) Target detection and track extraction method based on traffic video stream
CN112349144A (en) Monocular vision-based vehicle collision early warning method and system
CN102792314A (en) Cross traffic collision alert system
EP2833096B1 (en) Method for determining a current distance and/or a current speed of a target object based on a reference point in a camera image, camera system and motor vehicle
EP3286056B1 (en) System and method for a full lane change aid system with augmented reality technology
Yuan et al. Adaptive forward vehicle collision warning based on driving behavior
KR102388806B1 (en) System for deciding driving situation of vehicle
US20090010499A1 (en) Advertising impact measuring system and method
CN109145805B (en) Moving target detection method and system under vehicle-mounted environment
EP1640937B1 (en) Collision time estimation apparatus and method for vehicles
DE102019109491A1 (en) DATA PROCESSING DEVICE, MONITORING SYSTEM, WECKSYSTEM, DATA PROCESSING METHOD AND DATA PROCESSING PROGRAM
US20200223429A1 (en) Vehicular control system
CN117367438A (en) Intelligent driving method and system based on binocular vision
CN114782916B (en) ADAS rear-car recognition system based on multi-sensor fusion and carried on rearview mirror
JP4967758B2 (en) Object movement detection method and detection apparatus
CN113071500A (en) Method and device for acquiring lane line, computer equipment and storage medium
CN118545081A (en) Lane departure warning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant