
CN113066263A - Method and device for preventing fatigue driving - Google Patents

Method and device for preventing fatigue driving

Info

Publication number
CN113066263A
CN113066263A (application CN202010002665.1A)
Authority
CN
China
Prior art keywords
driver
behavior
test
voice
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010002665.1A
Other languages
Chinese (zh)
Inventor
熊群芳
胡云卿
林军
岳伟
刘悦
肖罡
袁浩
游俊
丁驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRRC Zhuzhou Institute Co Ltd
Original Assignee
CRRC Zhuzhou Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRRC Zhuzhou Institute Co Ltd filed Critical CRRC Zhuzhou Institute Co Ltd
Priority to CN202010002665.1A
Publication of CN113066263A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/06 - Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques specially adapted for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Ophthalmology & Optometry (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a method and a device for preventing fatigue driving, applied to rail transit. The method comprises the following steps: in response to the rail train running, outputting a voice instruction at a preset frequency to instruct the driver of the rail train to perform a test behavior corresponding to the voice instruction; in response to the voice instruction being output, acquiring, from a corresponding acquisition device according to the test behavior corresponding to the voice instruction, driver behavior data within a preset time period after the voice instruction is output; extracting key features from the driver behavior data based on the test behavior; judging whether the driver has performed the test behavior based on the key features in the driver behavior data; and outputting an alarm signal in response to the driver not having performed the test behavior. The invention also provides a device for implementing the method. The method and device for preventing fatigue driving provided by the invention make no contact with the driver and can prevent fatigue driving safely, reliably, effectively, and in real time.

Description

Method and device for preventing fatigue driving
Technical Field
The invention relates to an interaction method applied to rail transit, in particular to a method and a device for preventing fatigue driving.
Background
Fatigue driving refers to the decline in a driver's reaction level after a period of driving; slowed reactions, delayed judgment, and disrupted rhythm are its main manifestations, and fatigue driving is a major cause of traffic accidents. As living standards continuously improve, railway transportation grows increasingly busy and trains run at ever higher density, so that even a momentary lapse can cause accidents involving vehicle damage and loss of life. According to statistics from the relevant departments, among train accidents occurring nationwide in China, those caused by driver fatigue account for more than 40% of the total. Although every locomotive depot has strengthened safety education for, and penalties against, its drivers, some drivers still drive while fatigued, seriously affecting the order of railway transportation and the safety of people's lives and property.
At present, methods for preventing fatigue driving at home and abroad mainly rely on traditional devices such as dedicated anti-dozing safety instruments, smart glasses, vibrating headbands, and smart bracelets. Most of these methods require direct contact with the driver's body; at best they cause the driver physical discomfort, and at worst they interfere with driving and thereby cause traffic accidents.
Therefore, there is a need for a novel method and device for preventing driver fatigue that makes no physical contact with the driver, so that the driver can drive in a comfortable state while the fatigue-prevention function is still guaranteed, thereby improving driving safety.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In order to solve the problems in the prior art, the invention provides a method for preventing fatigue driving, which is applied to rail transit and specifically comprises the following steps:
in response to the rail train running, outputting a voice instruction at a preset frequency to instruct a driver of the rail train to perform a test behavior corresponding to the voice instruction;
in response to the voice instruction being output, acquiring, from a corresponding acquisition device according to the test behavior corresponding to the voice instruction, driver behavior data within a preset time period after the voice instruction is output;
extracting key features from the driver behavior data based on the test behavior;
determining whether the driver has performed the test behavior based on the key features in the driver behavior data; and
in response to the driver not having performed the test behavior, outputting an alarm signal.
In an embodiment of the above method, optionally, the test behavior is an action behavior;
acquiring the driver behavior data further comprises: acquiring continuous multi-frame images collected by a camera module within the preset time period as the driver behavior data.
In an embodiment of the above method, optionally, the action behavior further includes a blinking behavior;
performing key feature extraction further comprises: judging, for each frame of image, whether the image contains face features, and in response to detecting that the image contains face features, extracting eye feature points as the key features;
determining whether the driver has performed the test behavior further comprises: judging whether the driver has blinked based on the aspect ratio of the eye feature points in the continuous multi-frame images.
In an embodiment of the above method, optionally, the action behavior further includes a mouth-opening behavior;
performing key feature extraction further comprises: judging, for each frame of image, whether the image contains face features, and in response to detecting that the image contains face features, extracting mouth feature points as the key features;
determining whether the driver has performed the test behavior further comprises: judging whether the driver has opened the mouth based on the inner-lip contour curve fitted to the mouth feature points of the continuous multi-frame images.
In an embodiment of the above method, optionally, the action behavior further includes a gesture behavior;
performing key feature extraction further comprises: judging and extracting, for each frame of image, the hand feature points in the image as the key features;
determining whether the driver has performed the test behavior further comprises: comparing the hand feature points in the continuous multi-frame images against a preset gesture model to judge whether the driver has performed the gesture behavior.
In an embodiment of the above method, optionally, the test behavior is a read-after behavior;
acquiring the driver behavior data further comprises: acquiring voice audio collected by a sound pickup module within the preset time period as the driver behavior data;
performing key feature extraction further comprises: judging and extracting the voice signal in the voice audio as the key feature;
determining whether the driver has performed the test behavior further comprises: performing voice recognition on the voice signal to judge whether the driver has performed the read-after behavior.
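The read-after check above ends with speech recognition on the captured audio. Actual speech recognition is outside the scope of a short sketch; assuming some recognizer has already produced a transcript, the following hypothetical helper (not part of the patent) illustrates the final comparison step with a simple similarity threshold:

```python
from difflib import SequenceMatcher

# Hypothetical stand-in for the read-after verification step: compare the
# recognized transcript against the prompted phrase. The 0.8 threshold and
# the function name are assumptions made for this sketch.
def read_after_ok(prompt, transcript, threshold=0.8):
    ratio = SequenceMatcher(None, prompt.lower(), transcript.lower()).ratio()
    return ratio >= threshold
```

A near-exact repetition passes, while silence (an empty transcript) fails and would trigger the alarm step.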
The invention also provides a device for preventing fatigue driving, which is applied to rail transit and specifically comprises the following components:
a memory; and
a processor coupled to the memory, the processor configured to:
in response to the rail train running, output a voice instruction at a preset frequency to instruct a driver of the rail train to perform a test behavior corresponding to the voice instruction;
in response to the voice instruction being output, acquire, from a corresponding acquisition device according to the test behavior corresponding to the voice instruction, driver behavior data within a preset time period after the voice instruction is output;
extract key features from the driver behavior data based on the test behavior;
determine whether the driver has performed the test behavior based on the key features in the driver behavior data; and
in response to the driver not having performed the test behavior, output an alarm signal.
In an embodiment of the above apparatus, optionally, the test behavior is an action behavior;
the processor acquiring the driver behavior data further comprises: acquiring continuous multi-frame images collected by a camera module within the preset time period as the driver behavior data.
In an embodiment of the above apparatus, optionally, the action behavior further includes a blinking behavior;
the processor performing key feature extraction further comprises: judging, for each frame of image, whether the image contains face features, and in response to detecting that the image contains face features, extracting eye feature points as the key features;
the processor determining whether the driver has performed the test behavior further comprises: judging whether the driver has blinked based on the aspect ratio of the eye feature points in the continuous multi-frame images.
In an embodiment of the above apparatus, optionally, the action behavior further includes a mouth-opening behavior;
the processor performing key feature extraction further comprises: judging, for each frame of image, whether the image contains face features, and in response to detecting that the image contains face features, extracting mouth feature points as the key features;
the processor determining whether the driver has performed the test behavior further comprises: judging whether the driver has opened the mouth based on the inner-lip contour curve fitted to the mouth feature points of the continuous multi-frame images.
In an embodiment of the above apparatus, optionally, the action behavior further includes a gesture behavior;
the processor performing key feature extraction further comprises: judging and extracting, for each frame of image, the hand feature points in the image as the key features;
the processor determining whether the driver has performed the test behavior further comprises: comparing the hand feature points in the continuous multi-frame images against a preset gesture model to judge whether the driver has performed the gesture behavior.
In an embodiment of the above apparatus, optionally, the test behavior is a read-after behavior;
the processor acquiring the driver behavior data further comprises: acquiring voice audio collected by a sound pickup module within the preset time period as the driver behavior data;
the processor performing key feature extraction further comprises: judging and extracting the voice signal in the voice audio as the key feature;
the processor determining whether the driver has performed the test behavior further comprises: performing voice recognition on the voice signal to judge whether the driver has performed the read-after behavior.
The invention also provides a computer readable medium having stored thereon computer readable instructions which, when executed by a processor, perform the steps in any of the embodiments of the method of preventing fatigue driving as described above.
The method and device for preventing fatigue driving provided by the invention require no contact with the driver's body, ensuring that the driver can drive in a comfortable state. At the same time, human-machine interaction with the driver is carried out through a voice system, which guarantees the fatigue-prevention effect; and since the driver only needs to listen with the ears rather than look with the eyes, driving safety is further improved.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 shows a general flowchart of a method for preventing fatigue driving according to the present invention.
Fig. 2 shows a partial flowchart of an embodiment of a method for preventing fatigue driving according to the present invention.
Fig. 3 shows a partial flowchart of another embodiment of the method for preventing fatigue driving according to the present invention.
Fig. 4 shows a partial flowchart of another embodiment of the method for preventing fatigue driving according to the present invention.
Fig. 5 shows a partial flowchart of another embodiment of the method for preventing fatigue driving according to the present invention.
Fig. 6 shows a schematic diagram of the device for preventing fatigue driving provided by the invention.
Reference numerals
600 device
610 processor
620 memory
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the invention and is incorporated in the context of a particular application. Various modifications, as well as various uses in different applications will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the practice of the invention may not necessarily be limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
Note that where used, the designations left, right, front, back, top, bottom, positive, negative, clockwise, and counterclockwise are used for convenience only and do not imply any particular fixed orientation. In fact, they are used to reflect the relative position and/or orientation between the various parts of the object. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is noted that, where used, the terms "further", "preferably", "still further", and "more preferably" introduce an alternative embodiment built upon the preceding embodiment; the content following such a term is combined with the preceding embodiment to form a complete alternative embodiment. Several such arrangements following the same embodiment may be combined in any combination to form additional embodiments.
In the description of the present invention, it should be noted that, unless explicitly stated or limited otherwise, the terms "mounted" and "coupled" are to be construed broadly, e.g., as meaning fixedly attached, detachably attached, or integrally attached; mechanically or electrically connected; connected directly or indirectly through intervening media; or denoting internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
First, please refer to fig. 1 to understand the steps of the method for preventing fatigue driving provided by the present invention. As shown in fig. 1, the method specifically includes: step S100: judging whether the train is in a running state. In response to the train being in the running state, step S200 is performed: outputting a voice instruction at a preset frequency. Step S300: acquiring, from a corresponding acquisition device according to the test behavior corresponding to the voice instruction, driver behavior data within a preset time period after the voice instruction is output. Step S400: extracting key features from the driver behavior data based on the test behavior. Step S500: judging whether the driver has performed the test behavior based on the key features in the driver behavior data. In response to the driver having performed the test behavior, steps S100-S500 are repeated; in response to the driver not having performed the test behavior, step S600 is performed: outputting an alarm signal, after which steps S100-S500 are repeated.
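The S100-S600 flow described above can be sketched as a simple control loop. Every callable name below is an illustrative assumption made for this sketch, not an API defined by the patent; each stage is passed in so that different detectors (blink, mouth, gesture, voice) can be plugged into the same loop:

```python
# Minimal sketch of the S100-S600 control flow, assuming each stage is
# supplied as a callable. None of these names come from the patent itself.
def anti_fatigue_loop(is_running, issue_instruction, collect_data,
                      extract_features, performed, alarm):
    while is_running():                              # S100: train running?
        behavior = issue_instruction()               # S200: voice instruction
        data = collect_data(behavior)                # S300: camera / microphone
        features = extract_features(behavior, data)  # S400: key features
        if not performed(behavior, features):        # S500: behavior check
            alarm()                                  # S600: alarm signal
```

In use, `collect_data` would return image frames for action behaviors and audio for the read-after behavior, and `performed` would dispatch to the matching judgment described in the embodiments below.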
As for step S100: as described above, in order to ensure the order of railway transportation and the safety of people's lives and property, the method for preventing fatigue driving provided by the present invention runs throughout the operation of the train, so as to ensure that the driver stays awake the whole time the train is running. The method therefore includes step S100, judging whether the train is in a running state.
Responding to the train being in the running state, executing the step S200: and outputting a voice command according to a preset frequency so as to instruct a driver of the rail train to execute a test behavior corresponding to the voice command. Specifically, in the step S200, as described above, the voice command is output throughout the operation of the train, and therefore, the driver needs to be reminded at a preset frequency. The preset frequency can be set according to actual needs, for example, in an embodiment, a voice broadcast is performed every 3 minutes.
Further, in step S200, the test behavior that each voice instruction (played at the preset frequency) instructs the driver to perform is selected at random. In the present invention, the test behaviors corresponding to the output voice instructions fall into two main categories: action behaviors and voice behaviors. The action behaviors further include blinking, mouth-opening, and gesture behaviors; the voice behavior mainly comprises a read-after behavior. In step S200, a voice instruction is played at the preset frequency so that the driver randomly performs one of the above behaviors each time, which both keeps the driver awake and makes it possible to determine whether the driver is in a fatigue-driving state.
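The cadence and random selection of step S200 can be sketched as follows. The behavior names and the 3-minute default interval are assumptions for illustration (the document gives 3 minutes only as one example of the preset frequency):

```python
import random

# Illustrative sketch of step S200: at each preset interval while the train
# is running, one test behavior is chosen at random and announced by voice.
TEST_BEHAVIORS = ["blink", "open_mouth", "gesture", "read_after"]

def next_instruction(rng=random):
    """Randomly pick the test behavior for the next voice announcement."""
    return rng.choice(TEST_BEHAVIORS)

def schedule(duration_s, interval_s=180):
    """Announcement times (in seconds) over one stretch of running time."""
    return list(range(interval_s, duration_s + 1, interval_s))
```

For example, a 10-minute stretch at the 3-minute cadence yields announcements at 180 s, 360 s, and 540 s, each paired with a randomly chosen behavior.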
Please refer to fig. 2-5 together to understand different embodiments of the present invention for preventing fatigue driving corresponding to different testing behaviors.
First, referring to fig. 2, in fig. 2, step S200 is further detailed as step S210: and outputting the voice command corresponding to the blinking behavior. Therefore, step S300 is further specifically step S310: and acquiring continuous multi-frame images collected by the camera module in a preset time period as driver behavior data.
Specifically, for the action-type behavior, the behavior data of the driver may be collected by the camera module, so that it can be determined whether the driver has performed a predetermined behavior according to the collected behavior data of the driver. Furthermore, in response to the output of the voice command, the camera module may be controlled to collect behavior data of the driver, and in order to enable the whole method for preventing fatigue driving to be smoothly performed, an image within a preset time after the voice command is output may be collected, for example, the preset time may be 30s, that is, after the voice command is output, the driver needs to execute a behavior corresponding to the voice command in time to be considered as not being in a fatigue driving state.
It is understood that, in order to acquire behavior data of the driver, the camera module needs to be installed at a position where the front face image and the hand gesture image of the driver can be captured, and the installation position of the camera module can be determined according to the situation. Meanwhile, the camera module mentioned in the present invention can be realized by the existing or future technologies, and the specific arrangement and implementation method of the camera module should not unduly limit the scope of the present invention.
After the behavior data of the driver is collected, step S400 is executed, and key features are extracted from the behavior data of the driver, and further, corresponding to the blinking behavior, step S400 is further specifically the step S410: and judging whether the image contains the human face features or not for each frame of image, and extracting eye feature points as key features in response to the detection that the image contains the human face features.
More specifically, in an embodiment, an SSD face detection algorithm is applied to each frame acquired by the camera, and the network model is trained with the RGB image of the frame to be detected as the network input. During training, a complete picture is fed into the network to obtain each feature layer, target windows are regressed on the feature layers, and the network judges whether a face is present in each window. Once face features are detected, eye feature points are extracted with a landmark algorithm, the eye region is cropped according to the feature-point coordinates, the cropped eye-region image is fed into an eye classification model trained with a convolutional neural network, and the state of the current eye region is finally judged.
Therefore, in the embodiment as shown in fig. 2, the step S500 of determining whether the driver performs the corresponding test behavior based on the key feature is further embodied as the step S510: and judging whether the driver performs the blinking action or not based on the aspect ratio of the eye feature points in the continuous multi-frame images.
Specifically, in step S510, blink determination is implemented using the eye aspect ratio (EAR). It will be appreciated that an open eye has a larger aspect ratio than a closed eye. That is, if the aspect ratio of the eye region in the multi-frame images goes from large to small and then from small to large, the driver can be considered to have blinked within those consecutive frames. Conversely, if no face is detected, no eye feature points are extracted, or the aspect ratio of the extracted eye region remains essentially unchanged, the driver is considered not to have blinked; the driver may then be driving while fatigued, and step S600 needs to be performed: outputting an alarm signal.
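The EAR test above can be sketched as follows. The six-landmark eye convention and the 0.2/0.3 thresholds are common choices assumed for this sketch, not values given by the patent:

```python
import math

def eye_aspect_ratio(pts):
    """EAR over six eye landmarks p1..p6: horizontal corners p1 and p4,
    upper lid p2 and p3, lower lid p5 and p6 (a common EAR formulation)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blinked(ear_series, closed=0.2, open_=0.3):
    """True if the per-frame EAR series goes open -> closed -> open,
    i.e. the large -> small -> large pattern described in the text."""
    state = 0  # 0: wait for open eye, 1: wait for closure, 2: wait to reopen
    for ear in ear_series:
        if state == 0 and ear >= open_:
            state = 1
        elif state == 1 and ear <= closed:
            state = 2
        elif state == 2 and ear >= open_:
            return True
    return False
```

A flat EAR series never satisfies the large-small-large pattern and therefore, as in the text, is treated as "no blink performed".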
Please refer to fig. 3 to understand another embodiment of the method for preventing fatigue driving according to the present invention. In fig. 3, step S200 is further specifically step S220: and outputting a voice command corresponding to the mouth opening behavior. Therefore, step S300 is further specifically step S310: and acquiring continuous multi-frame images collected by the camera module in a preset time period as driver behavior data.
Specifically, for the action-type behavior, the behavior data of the driver may be collected by the camera module, so that it can be determined whether the driver has performed a predetermined behavior according to the collected behavior data of the driver. Furthermore, in response to the output of the voice command, the camera module may be controlled to collect behavior data of the driver, and in order to enable the whole method for preventing fatigue driving to be smoothly performed, an image within a preset time after the voice command is output may be collected, for example, the preset time may be 30s, that is, after the voice command is output, the driver needs to execute a behavior corresponding to the voice command in time to be considered as not being in a fatigue driving state.
It is understood that, in order to acquire behavior data of the driver, the camera module needs to be installed at a position where the front face image and the hand gesture image of the driver can be captured, and the installation position of the camera module can be determined according to the situation. Meanwhile, the camera module mentioned in the present invention can be realized by the existing or future technologies, and the specific arrangement and implementation method of the camera module should not unduly limit the scope of the present invention.
After the behavior data of the driver is collected, step S400 is executed to extract key features from the behavior data of the driver, and further, step S400 is further specifically a step S420 corresponding to the mouth opening behavior: and judging whether the image contains the human face features or not for each frame of image, and extracting the characteristic points of the mouth as key features in response to the detection that the image contains the human face features.
More specifically, in an embodiment, an SSD face detection algorithm is applied to each frame acquired by the camera, and the network model is trained with the RGB image of the frame to be detected as the network input. During training, a complete picture is fed into the network to obtain each feature layer, target windows are regressed on the feature layers, and the network judges whether a face is present in each window. Once face features are detected, the mouth feature points are extracted with a landmark algorithm, the mouth region is cropped according to the feature-point coordinates, the cropped mouth-region image is fed into a mouth classification model trained with a convolutional neural network, and the state of the current mouth region is finally judged.
Therefore, in the embodiment as shown in fig. 3, the step S500 of determining whether the driver performs the corresponding test behavior based on the key feature is further embodied as the step S520: and judging whether the driver performs the mouth opening action or not based on the inner lip contour curve fitted by the mouth feature points of the continuous multi-frame images.
Specifically, in step S520, the degree of mouth opening is calculated by detecting the inner lip contour through feature-point curve fitting, so as to determine whether the mouth is open. It will be appreciated that the curvature of the inner lip contour curve is greater for an open mouth than for a closed one. That is, if the curvature of the fitted inner lip contour in the consecutive multi-frame images goes from small to large and then back from large to small, the driver can be considered to have performed the mouth opening behavior within those frames. Conversely, if no face is detected, or the mouth feature points cannot be extracted, or the fitted inner lip contour remains substantially unchanged throughout, the driver is considered not to have performed the mouth opening behavior and may be driving fatigued, and step S600 needs to be performed: outputting an alarm signal.
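The small-to-large-to-small curvature test can be sketched as follows. The quadratic fit as a curvature proxy and the two thresholds are illustrative assumptions; the patent only requires that the inner lip contour's curvature rise and then fall across consecutive frames.

```python
import numpy as np

def inner_lip_curvature(points: np.ndarray) -> float:
    """Fit y = a*x^2 + b*x + c to the inner-lip points; |2a| (the second
    derivative of the fit) serves as a simple curvature proxy."""
    a, _, _ = np.polyfit(points[:, 0], points[:, 1], 2)
    return abs(2 * a)

def mouth_opened(curvatures, open_thresh=0.5, closed_thresh=0.2):
    """True if the per-frame curvature rises above `open_thresh` and later
    falls back below `closed_thresh`: a small -> large -> small sequence,
    taken here as one completed mouth-opening behavior."""
    opened = False
    for c in curvatures:
        if not opened and c >= open_thresh:
            opened = True            # mouth reached the open state
        elif opened and c <= closed_thresh:
            return True              # and closed again afterwards
    return False
```

A flat curvature sequence, as when the driver never opens the mouth, yields `False` and would trigger the alarm of step S600.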
Please refer to fig. 4 to understand another embodiment of the method for preventing fatigue driving according to the present invention. In fig. 4, step S200 is further embodied as step S230: outputting the voice instruction corresponding to a gesture behavior. Accordingly, step S300 is further embodied as step S310: acquiring consecutive multi-frame images collected by the camera module within the preset time period as the driver behavior data.
As in the previous embodiment, for action-type behaviors the driver's behavior data is collected by the camera module in response to the output of the voice instruction, within a preset time period (for example, 30 s) after the instruction is output; the driver must perform the corresponding behavior in time to be considered not in a fatigue driving state. The camera module again needs to be installed where the driver's frontal face and hand gestures can be captured, and its specific arrangement and implementation should not unduly limit the scope of the present invention.
After the driver's behavior data is collected, step S400 is executed to extract key features from it. Corresponding to the gesture behavior, step S400 is further embodied as step S430: for each frame of image, detect and extract the hand feature points as the key features.
More specifically, in one embodiment, the hand position is detected with the YOLOv3 algorithm for each frame of image captured by the camera. It should be understood that hand detection is not limited to YOLOv3; it may also be implemented with, for example, SSD or OpenPose detection methods, or with any other suitable algorithm, and the specific algorithm used to extract the hand feature points should not limit the scope of the present invention.
Therefore, in the embodiment as shown in fig. 4, the step S500 of determining whether the driver performs the corresponding test behavior based on the key feature is further embodied as the step S530: and comparing hand characteristic points in the continuous multi-frame images based on a preset gesture model to judge whether the driver executes gesture behaviors.
Specifically, in step S530, a standard gesture model of the standard gesture behavior corresponding to the voice instruction is first acquired; the standard gesture models can be preset in a gesture feature database. It will be appreciated that the gesture behaviors may include static gestures, such as posing the number "5" with the fingers, as well as dynamic gestures, such as making a fist. Accordingly, standard gesture models may include static models as well as dynamic models, and a static model can also be represented as a dynamic model whose frames are all identical. Whether the driver has performed the gesture behavior can be judged by comparing the hand feature points of the standard frames in the dynamic model with those of each frame in the consecutive multi-frame images against a preset comparison threshold. It is to be understood that the above description of comparing against standard gesture models is merely illustrative and should not unduly limit the scope of the present invention. If the driver is deemed not to have performed the gesture behavior, step S600 needs to be performed: outputting an alarm signal.
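The ordered frame-by-frame comparison against a standard gesture model can be sketched as below. The mean-keypoint-distance metric and the pixel tolerance are illustrative assumptions; the patent leaves the comparison threshold and metric open.

```python
import numpy as np

def matches_gesture(frames, template, tol=10.0):
    """`frames` and `template` are sequences of (N, 2) hand keypoint
    arrays.  A frame matches a standard frame when the mean keypoint
    distance is within `tol` pixels; the gesture counts as performed
    when every standard frame is matched, in order."""
    t = 0
    for f in frames:
        if t == len(template):
            break
        d = np.linalg.norm(np.asarray(f) - np.asarray(template[t]), axis=1).mean()
        if d <= tol:
            t += 1          # advance to the next standard frame
    return t == len(template)
```

A static gesture is simply a template of identical frames, matching the note above that a static model can be expressed as a dynamic one.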
Please refer to fig. 5 to understand another embodiment of the method for preventing fatigue driving according to the present invention. In fig. 5, step S200 is further embodied as step S240: outputting the voice instruction corresponding to a read-after behavior. Accordingly, step S300 is further embodied as step S320: acquiring the voice audio collected by the sound pickup module within the preset time period as the driver behavior data.
Specifically, for voice-type behaviors, the driver's behavior data may be collected by a sound pickup module (microphone), so that whether the driver has performed the predetermined behavior can be determined from the collected data. Further, in response to the output of the voice instruction, the sound pickup module may be controlled to collect the driver's behavior data; to keep the whole fatigue-prevention method running smoothly, only the voice audio within a preset time period after the voice instruction is output may be collected, for example a period of 30 s. In other words, after the voice instruction is output, the driver must perform the corresponding behavior in time to be considered not in a fatigue driving state.
It will be appreciated that the sound pickup module may be implemented using existing or future technologies, and its specific arrangement and implementation should not unduly limit the scope of the present invention.
After the driver's behavior data is collected, step S400 is executed to extract key features from it. Corresponding to the read-after behavior, step S400 is further embodied as step S440: detect and extract the speech signal in the voice audio as the key feature.
It can be understood that within a segment of voice audio the driver is not speaking the whole time, so not all of the acquired audio is a signal that can be used effectively for speech recognition. Effective speech signals therefore need to be extracted to suppress noise and the influence of different speakers, so that the processed signal reflects the essential characteristics of the speech. This can be achieved by endpoint detection and speech enhancement. Endpoint detection separates the speech segments from the non-speech segments so that the starting point of the speech signal can be determined accurately; restricting subsequent processing to the speech segments plays an important role in improving both model accuracy and recognition accuracy. The main task of speech enhancement is to remove the influence of environmental noise on the speech; a common approach is Wiener filtering, which performs better than other filters when the noise is strong. It is understood that the extraction of the speech signal may also be performed by other existing or future methods, and the extraction method should not unduly limit the scope of the present invention.
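A toy version of endpoint detection by short-time energy is sketched below; the frame length and threshold are illustrative assumptions, and real systems typically combine energy with zero-crossing rate and a noise-adaptive threshold, plus the Wiener-filter enhancement mentioned above.

```python
import numpy as np

def detect_endpoints(signal, frame_len=160, energy_thresh=0.01):
    """Return the (start, end) sample indices of the region whose
    short-time frame energy exceeds the threshold, or None when no
    speech-like frames are found."""
    n = len(signal) // frame_len
    energies = [np.mean(np.square(signal[i * frame_len:(i + 1) * frame_len]))
                for i in range(n)]
    voiced = [i for i, e in enumerate(energies) if e > energy_thresh]
    if not voiced:
        return None            # nothing to recognize in this window
    return voiced[0] * frame_len, (voiced[-1] + 1) * frame_len
```

Only the samples between the detected endpoints would then be passed on to speech recognition in step S540.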
Therefore, in the embodiment shown in fig. 5, the step S500 of determining whether the driver performs the corresponding test behavior based on the key feature is further embodied as step S540: performing speech recognition on the speech signal to judge whether the driver has performed the read-after behavior. The speech recognition itself can be implemented with existing or future methods, and the specific implementation should not unduly limit the scope of the present invention. If, after recognition, it is determined that the driver has not performed the read-after behavior, step S600 needs to be performed: outputting an alarm signal.
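Since the patent leaves the recognition engine open, the final read-after judgment can be reduced to comparing the recognized transcript with the prompted phrase. The similarity measure and the 0.8 ratio below are illustrative assumptions, not part of the patent.

```python
import difflib

def followed_reading(prompt: str, recognized: str, min_ratio: float = 0.8) -> bool:
    """Judge the read-after behavior by character-level similarity between
    the prompted phrase and the ASR transcript of the driver's reply."""
    ratio = difflib.SequenceMatcher(None, prompt, recognized).ratio()
    return ratio >= min_ratio
```

A return value of `False`, as when the driver says nothing within the window, would trigger the alarm of step S600.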
In step S600, in response to determining that the driver has not performed the behavior corresponding to the voice instruction, the driver is considered possibly to be driving fatigued; therefore, an alarm signal may be output to remind the driver to stay awake.
It is understood that the alarm signal may be output through a speaker as a warning sound, or as vibration from a vibrator provided in the seat, to remind the driver to stay awake. Those skilled in the art should understand that the alarm signal, including but not limited to the above warning sound or vibration signal, can be adjusted according to actual requirements and the available hardware.
In a preferred embodiment, step S600 further includes judging the driver's state over a period of time: for example, counting how many of the prompted behaviors (blinking, opening the mouth, making a fist, drawing a number, and so on) the driver performs within a certain time window. If the driver performs at least 60% of the prompted behaviors, the driver is judged to have responded and not to be in a fatigue driving state. Correspondingly, if the proportion of prompted behaviors the driver performs falls below a certain threshold, the driver is considered to be in a very serious fatigue driving state; in that case the output alarm signal may also include mandatory measures such as notifying the background monitoring staff and limiting the speed of the train, so as to give priority to the safe operation of the train.
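The windowed decision above can be sketched in a few lines. The 60% pass ratio comes from the text; the severe-fatigue threshold is left unspecified in the patent ("a certain threshold"), so the 0.2 below is purely an illustrative assumption.

```python
def fatigue_level(performed: int, total: int,
                  pass_ratio: float = 0.6, severe_ratio: float = 0.2) -> str:
    """With `total` voice instructions issued in the window and `performed`
    of them answered, classify the driver's state.  `severe_ratio` is an
    assumed value for the unspecified severe-fatigue threshold."""
    ratio = performed / total if total else 0.0
    if ratio >= pass_ratio:
        return "awake"            # >= 60 % compliance
    if ratio < severe_ratio:
        return "severe_fatigue"   # mandatory measures apply
    return "fatigued"             # ordinary alarm signal
```

The "severe_fatigue" result corresponds to the mandatory measures (notifying monitoring staff, limiting train speed) described above.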
This concludes the description of the method for preventing fatigue driving provided by the invention. In the invention, voice instructions are issued continuously by the voice system, so the driver must remain in a human-computer interaction state at all times, which effectively prevents fatigue driving. Meanwhile, whether the driver is in a fatigue driving state is judged in advance by detecting whether the driver performs the requested behaviors, so that an alarm signal can be output to remind the driver to stay awake and driving safety can be ensured.
Furthermore, the method for preventing fatigue driving provided by the invention requires no physical contact with the driver, so the driver can drive in a comfortable state. And because a voice system is used, the human-computer interaction requires only the driver's hearing and no visual attention, which further ensures driving safety.
The invention also provides a device for preventing fatigue driving, please refer to fig. 6, and fig. 6 shows a schematic diagram of the device for preventing fatigue driving. As shown in fig. 6, the apparatus 600 for preventing fatigue driving includes a processor 610 and a memory 620. The processor 610 of the apparatus 600 for preventing fatigue driving can implement the method for preventing fatigue driving described above when executing the computer program stored in the memory 620, for which reference is specifically made to the description of the method for preventing fatigue driving, which is not repeated herein.
The invention also provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps as described in any of the embodiments of the method of preventing fatigue driving as described above.
According to the method and device for preventing fatigue driving provided by the invention, no physical contact with the driver's body is required, so the driver can be ensured to drive in a comfortable state. Meanwhile, human-computer interaction with the driver is realized through a voice system, which ensures the effect of preventing fatigue driving; and since the driver only needs to listen and never needs to look away from the road, driving safety can be further improved.
The various illustrative logical modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk (disk) and disc (disc), as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks (disks) usually reproduce data magnetically, while discs (discs) reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. It is to be understood that the scope of the invention is to be defined by the appended claims and not by the specific constructions and components of the embodiments illustrated above. Those skilled in the art can make various changes and modifications to the embodiments within the spirit and scope of the present invention, and these changes and modifications also fall within the scope of the present invention.

Claims (13)

1. A method for preventing fatigue driving is applied to rail transit and is characterized by comprising the following steps:
responding to the running of the rail train, outputting a voice instruction according to a preset frequency so as to instruct a driver of the rail train to execute a test behavior corresponding to the voice instruction;
responding to the output of the voice instruction, and acquiring driver behavior data within a preset time period after the voice instruction is output from a corresponding acquisition device according to the test behavior corresponding to the voice instruction;
extracting key features of the driver behavior data based on the test behavior;
determining whether the driver performs the test behavior based on key features in the driver behavior data; and
outputting an alarm signal in response to the driver not performing the test action.
2. The method of claim 1, wherein the test behavior is an action behavior;
acquiring driver behavior data further comprises: and acquiring continuous multi-frame images collected by a camera module in the preset time period as the driver behavior data.
3. The method of claim 2, wherein the action behavior further comprises a blinking behavior;
performing key feature extraction further comprises: judging whether the image contains human face features or not for each frame of image, and extracting eye feature points as the key features in response to the detection that the image contains the human face features;
determining whether the driver performed the test behavior further comprises: and judging whether the driver carries out blinking behavior or not based on the aspect ratio of the eye feature points in the continuous multi-frame images.
4. The method of claim 2, wherein the action behavior further comprises a mouth opening behavior;
performing key feature extraction further comprises: judging whether the image contains human face features or not for each frame of image, and extracting the feature points of the mouth as the key features in response to the detection that the image contains the human face features;
determining whether the driver performed the test behavior further comprises: and judging whether the driver performs mouth opening behavior or not based on the inner lip contour curve fitted to the mouth feature points of the continuous multiframe images.
5. The method of claim 2, wherein the action behavior further comprises a gesture behavior;
performing key feature extraction further comprises: for each frame of image, judging and extracting the hand feature points in the image as the key features;
determining whether the driver performed the test behavior further comprises: and comparing the hand characteristic points in the continuous multi-frame images based on a preset gesture model to judge whether the driver executes gesture behaviors.
6. The method of claim 1, wherein the test behavior is a read-after behavior;
acquiring driver behavior data further comprises: acquiring voice audio collected by a sound pickup module within the preset time period as the driver behavior data;
performing key feature extraction further comprises: judging and extracting a voice signal in the voice audio as the key feature;
determining whether the driver performed the test behavior further comprises: and performing voice recognition on the voice signal to judge whether the driver executes the reading following behavior.
7. A device for preventing fatigue driving is applied to rail transit and is characterized by comprising:
a memory; and
a processor coupled with the memory, the processor configured to:
responding to the running of the rail train, outputting a voice instruction according to a preset frequency so as to instruct a driver of the rail train to execute a test behavior corresponding to the voice instruction;
responding to the output of the voice instruction, and acquiring driver behavior data within a preset time period after the voice instruction is output from a corresponding acquisition device according to the test behavior corresponding to the voice instruction;
extracting key features of the driver behavior data based on the test behavior;
determining whether the driver performs the test behavior based on key features in the driver behavior data; and
outputting an alarm signal in response to the driver not performing the test action.
8. The apparatus of claim 7, in which the test behavior is an action behavior;
the processor obtaining driver behavior data further comprises: and acquiring continuous multi-frame images collected by a camera module in the preset time period as the driver behavior data.
9. The apparatus of claim 8, wherein the action behavior further comprises a blinking behavior;
the processor performing key feature extraction further comprises: judging whether the image contains human face features or not for each frame of image, and extracting eye feature points as the key features in response to the detection that the image contains the human face features;
the processor determining whether the driver performed the test behavior further comprises: and judging whether the driver carries out blinking behavior or not based on the aspect ratio of the eye feature points in the continuous multi-frame images.
10. The apparatus of claim 8, wherein the act of acting further comprises an act of mouth opening;
the processor performing key feature extraction further comprises: judging whether the image contains human face features or not for each frame of image, and extracting the feature points of the mouth as the key features in response to the detection that the image contains the human face features;
the processor determining whether the driver performed the test behavior further comprises: and judging whether the driver performs mouth opening behavior or not based on the inner lip contour curve fitted to the mouth feature points of the continuous multiframe images.
11. The apparatus of claim 8, wherein the action behavior further comprises a gesture behavior;
the processor performing key feature extraction further comprises: for each frame of image, judging and extracting the hand feature points in the image as the key features;
the processor determining whether the driver performed the test behavior further comprises: and comparing the hand characteristic points in the continuous multi-frame images based on a preset gesture model to judge whether the driver executes gesture behaviors.
12. The apparatus of claim 7, wherein the test behavior is a read-after behavior;
the processor obtaining driver behavior data further comprises: acquiring voice audio collected by a sound pickup module within the preset time period as the driver behavior data;
the processor performing key feature extraction further comprises: judging and extracting a voice signal in the voice audio as the key feature;
the processor determining whether the driver performed the test behavior further comprises: and performing voice recognition on the voice signal to judge whether the driver executes the reading following behavior.
13. A computer readable medium having stored thereon computer readable instructions which, when executed by a processor, carry out the steps of the method of preventing fatigue driving according to any one of claims 1 to 6.
CN202010002665.1A 2020-01-02 2020-01-02 Method and device for preventing fatigue driving Pending CN113066263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010002665.1A CN113066263A (en) 2020-01-02 2020-01-02 Method and device for preventing fatigue driving

Publications (1)

Publication Number Publication Date
CN113066263A true CN113066263A (en) 2021-07-02

Family

ID=76559346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010002665.1A Pending CN113066263A (en) 2020-01-02 2020-01-02 Method and device for preventing fatigue driving

Country Status (1)

Country Link
CN (1) CN113066263A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256706A (en) * 2007-02-26 2008-09-03 株式会社电装 Sleep warning apparatus
US20130188838A1 (en) * 2012-01-19 2013-07-25 Utechzone Co., Ltd. Attention detection method based on driver's reflex actions
CN103247149A (en) * 2012-02-08 2013-08-14 由田信息技术(上海)有限公司 Attention detection device and attention detection method according to reflex action of driver
CN103253275A (en) * 2012-02-17 2013-08-21 由田新技股份有限公司 Driving attention detection device and method for interactive voice question
CN104828095A (en) * 2014-09-02 2015-08-12 北汽福田汽车股份有限公司 Method, device and system of detecting driving status of driver
CN107126224A (en) * 2017-06-20 2017-09-05 中南大学 A kind of real-time monitoring of track train driver status based on Kinect and method for early warning and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792663A (en) * 2021-09-15 2021-12-14 东北大学 A detection method, device and storage medium for driver's drunk driving and fatigue driving
CN113792663B (en) * 2021-09-15 2024-05-14 东北大学 A method, device and storage medium for detecting drunk driving and fatigue driving of a driver


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210702