
CN110991277B - Multi-dimensional multi-task learning evaluation system based on deep learning - Google Patents


Info

Publication number
CN110991277B
CN110991277B
Authority
CN
China
Prior art keywords
user
learning
recognition
recognition module
module
Prior art date
Legal status (assumed; Google has not performed a legal analysis)
Active
Application number
CN201911139266.3A
Other languages
Chinese (zh)
Other versions
CN110991277A
Inventor
李剑峰 (Li Jianfeng)
张进 (Zhang Jin)
宋志远 (Song Zhiyuan)
史吉光 (Shi Jiguang)
王洪波 (Wang Hongbo)
Current Assignee (the listed assignees may be inaccurate)
Hunan Jianxin Intelligent Technology Co ltd
Original Assignee
Hunan Jianxin Intelligent Technology Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hunan Jianxin Intelligent Technology Co ltd filed Critical Hunan Jianxin Intelligent Technology Co ltd
Priority to CN201911139266.3A
Publication of CN110991277A
Application granted
Publication of CN110991277B
Legal status: Active
Anticipated expiration

Classifications

    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06Q10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q50/205: Education administration or guidance
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/174: Facial expression recognition
    • G06V40/193: Eye characteristics: Preprocessing; Feature extraction
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a multi-dimensional multi-task learning evaluation system based on deep learning. The system comprises, among other modules, a first drowsiness-tiredness recognition module with eye opening-and-closing recognition and eye-movement track recognition: the opening-and-closing recognition identifies the user's tired and sleepy state and, combined with the eye-movement track, judges the user's attention, while head-posture recognition judges whether the user's reading and learning posture is correct and, combined with the eye movements, judges the tired and sleepy state. The application provides face recognition, drowsiness and tiredness recognition, learning-emotion evaluation, automatic paper scoring, myopia-risk recognition and other functions, and can evaluate learning across multiple dimensions.

Description

Multi-dimensional multi-task learning evaluation system based on deep learning
Technical Field
The application relates to the technical field of intelligent equipment, and in particular to a multi-dimensional multi-task learning evaluation system based on deep learning.
Background
The prior art has the following defects:
The prior art can control lamp brightness by voice recognition, but because it does not combine recognition of head and sitting posture, eye-movement track, and eye opening-and-closing actions, it suffers from a low degree of intelligence, a poor myopia-prevention effect, and little help in promoting the user's learning.
Disclosure of Invention
The application aims to overcome the defects of the prior art by providing a multi-dimensional multi-task learning evaluation system based on deep learning. The system provides face recognition, drowsiness and tiredness recognition, learning-emotion evaluation, automatic paper scoring and myopia-risk recognition, can evaluate learning across multiple dimensions, raises the degree of learning-aid intelligence, improves the myopia-prevention effect, and substantially helps and promotes the user's learning.
The aim of the application is realized by the following technical scheme:
a multi-dimensional multi-task learning evaluation system based on deep learning comprises a first drowsiness tiredness recognition module, an eye movement recognition module, a second drowsiness tiredness recognition module and a third drowsiness tiredness recognition module, wherein the eye movement recognition module is used for recognizing the movements of eyes through opening and closing; the opening and closing motion recognition is used for recognizing the tired and sleepy state of the user and judging the attention of the user by combining the eye movement track; the method comprises the steps of combining head gestures to identify a user to judge whether the reading and learning gestures of the user are correct or incorrect, and combining the actions of eyes to judge the tired and sleepy state of the user; the facial expression analysis module is used for judging the happy, tension and excitation states of the user in the learning process and carrying out specific evaluation on the learning process; the second drowsiness tiredness recognition module judges the learning drowsiness tiredness state of the user through eye opening and closing and head gesture recognition, and establishes a data set by collecting different drowsiness gestures for training the data set and testing the data set; the learning course subject recognition module is used for establishing a learning course subject data set through the reading and writing contents of the user and is used for training the data set and testing the data set; the digital camera confirms the classification of the contents of the reading and writing subjects through the collected reading and writing images and the identification of the corresponding training set and test set; the learning emotion evaluation module is used for confirming a specific course learned by the user through the learning course subject recognition module, and simultaneously recognizing and learning the expression index value of the 
subject by combining with the facial expression recognition module, and is used for evaluating the interest degree and the grasping capability of the user on the course content when learning different subjects in a multi-dimensional manner; and the marking module is used for marking the paper, and inputting standard answers to the background management system through the user control terminal when the user writes the task, scanning the image input with the answers, inputting the standard answer of each small question according to the content structure of the task, collecting the actual result of the answer of the user, and comparing and identifying the actual result with the standard answer of each small question for marking the paper.
Further, the system comprises a myopia-prevention recognition module, which gives a myopia-prevention early warning by comparing a computed straight-line distance against a threshold.
Further, the myopia-prevention recognition comprises the following steps:
S1, determine the initial positions of the two end points of the line segment;
S2, confirm, through image recognition, the position of the plane being observed and read, confirm the centre line of the reading plane, and find the shortest-distance point between the eyes and the reading plane by straight-line detection with the Hough transform;
S3, compare the computed minimum reading distance with a designed threshold; if it is smaller than the threshold, warn the user through the loudspeaker; if it is greater than or equal to the threshold, treat the user as reading normally.
Further, in step S1, the centre point between the two binocular eye-axis centres is set as the starting point.
Further, in step S2, the end point is the contact point between the pen tip and the assignment text; the distance between the midpoint of the line connecting the two eye centres and the point where the pen tip touches the assignment text is detected by straight-line detection with the Hough transform.
Further, the system comprises a management module for managing user identity information.
Further, the intelligent desk lamp comprises a face recognition module, which builds personal face recognition data from collected face data; when the user uses the intelligent desk lamp, the digital camera collects face data to identify the user's identity information.
Further, a cloud server is used to distribute updated firmware and to back up data.
The beneficial effects of the application are as follows:
(1) The application provides face recognition, drowsiness and tiredness recognition, emotion assessment, automatic paper scoring and myopia-risk recognition, can evaluate learning across multiple dimensions, raises the degree of intelligence, improves the myopia-prevention effect, and greatly helps and promotes the user's learning. Specifically, besides the common voice-controlled brightness and working modes, the desk lamp can recognize and judge myopia risk in the user's two working modes, reading and writing answers. In this embodiment, the intelligent desk lamp not only provides illumination for learning; while the lamp is in use it also evaluates the user's learning state from the head posture and the eye opening-and-closing state. The centre-point position of the user's eyes is found by straight-line detection with the Hough transform, and a set threshold triggers an early warning that reminds the user to mind their eye-use habits, achieving an optimal eye-use state for preventing myopia.
(2) The application evaluates the user during learning from the head posture, the eye opening-and-closing state and the eye-movement track, and detects drowsy, tired behaviour, such as the head repeatedly swaying at a certain frequency or the eyes being in a sleeping state; the system then automatically identifies this and reminds the user to rest and refresh, achieving intelligent recognition.
(3) The communication module communicates with the smartphone control terminal. When an error is detected in the user's image, the loudspeaker sounds, or the light brightness is adjusted, to remind the user to mind their posture and so prevent the improper postures that cause myopia, or to remind the user to raise their attention and avoid drowsy, tired habits while studying.
Drawings
To illustrate the embodiments of the application or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a block diagram of the structure of the present application.
Detailed Description
The technical solution of the present application will be described in further detail with reference to the accompanying drawings, but the scope of the application is not limited to the following description. All of the features disclosed in this specification, and all of the steps in any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. That is, each feature is one example only of a generic series of equivalent or similar features, unless expressly stated otherwise.
Specific embodiments of the application are described in detail below; the embodiments described here are for illustration only and are not intended to limit the application. In the following description, numerous specific details are set forth to provide a thorough understanding of the application. However, it will be apparent to one of ordinary skill in the art that such specific details are not necessary to practice the application. In other instances, well-known circuits, software or methods have not been described in detail so as not to obscure the application.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Before describing the embodiments, some necessary terms need to be explained. For example:
if the terms "first," "second," etc. are used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. Accordingly, a "first" element discussed below could also be termed a "second" element without departing from the teachings of the present application. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present.
The various terms used in this disclosure serve solely to describe particular embodiments and are not intended to limit the disclosure; singular forms are intended to include plural forms as well, unless the context clearly indicates otherwise.
When the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence and/or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As shown in fig. 1, a multi-dimensional multi-task learning evaluation system based on deep learning comprises the following modules.
A first drowsiness-tiredness recognition module performs eye opening-and-closing recognition and eye-movement track recognition: the opening-and-closing recognition identifies the user's tired and sleepy state and, combined with the eye-movement track, judges the user's attention; combined with head-posture recognition, the module judges whether the user's reading and learning posture is correct and, together with the eye movements, judges the tired and sleepy state.
A facial expression analysis module judges the user's happy, tense and excited states during learning and evaluates the learning process concretely.
A second drowsiness-tiredness recognition module judges the user's drowsy, tired learning state through eye opening-and-closing and head-posture recognition; a data set is built by collecting different drowsy postures and is divided into a training set and a test set.
A learning-course subject recognition module builds a subject data set from the user's reading and writing content, likewise divided into a training set and a test set; from the collected reading and writing images, recognition against the corresponding training and test sets lets the digital camera confirm the classification of the subject being read or written.
A learning-emotion evaluation module confirms the specific course being studied through the subject recognition module and, combined with the facial expression recognition module, derives an expression index for that subject, so as to evaluate, in multiple dimensions, the user's interest in and grasp of the course content across different subjects.
A paper-scoring module scores the user's papers: when the user writes an assignment, standard answers are entered into the background management system through the user control terminal, either by scanning an image containing the answers or by entering the standard answer of each sub-question according to the structure of the assignment; the user's actual answers are collected and compared with the standard answer of each sub-question to score the paper.
Further, the system comprises a myopia-prevention recognition module, which gives a myopia-prevention early warning by comparing a computed straight-line distance against a threshold.
Further, the myopia-prevention recognition comprises the following steps:
S1, determine the initial positions of the two end points of the line segment;
S2, confirm, through image recognition, the position of the plane being observed and read, confirm the centre line of the reading plane, and find the shortest-distance point between the eyes and the reading plane by straight-line detection with the Hough transform;
S3, compare the computed minimum reading distance with a designed threshold; if it is smaller than the threshold, warn the user through the loudspeaker; if it is greater than or equal to the threshold, treat the user as reading normally.
Further, in step S1, the centre point between the two binocular eye-axis centres is set as the starting point.
Further, in step S2, the end point is the contact point between the pen tip and the assignment text; the distance between the midpoint of the line connecting the two eye centres and the point where the pen tip touches the assignment text is detected by straight-line detection with the Hough transform.
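The straight-line detection named above can be sketched with a bare-bones Hough transform. The angular discretisation (180 steps), integer rho binning, and the toy pixel coordinates in the usage example are illustrative assumptions; a production system would use a library implementation on real edge images.

```python
import math
from collections import Counter

def hough_dominant_line(points, theta_steps=180):
    """Vote in (theta, rho) space and return the line parameters of the
    most-voted cell -- a bare-bones Hough line detector over (x, y) pixels."""
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] += 1
    (t_best, rho_best), _ = votes.most_common(1)[0]
    return math.pi * t_best / theta_steps, float(rho_best)

def point_line_distance(point, theta, rho):
    """Perpendicular distance from a point (e.g. the midpoint of the line
    joining the two eye centres) to the detected line (e.g. along the pen
    tip's contact with the assignment text)."""
    x, y = point
    return abs(x * math.cos(theta) + y * math.sin(theta) - rho)
```

For instance, four samples on the horizontal line y = 5 yield theta = pi/2 and rho = 5, and a point at (15, 0) lies 5 units from that line.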
Further, the system comprises a management module for managing user identity information.
Further, the intelligent desk lamp comprises a face recognition module, which builds personal face recognition data from collected face data; when the user uses the intelligent desk lamp, the digital camera collects face data to identify the user's identity information.
Further, a cloud server is used to distribute updated firmware and to back up data.
Example 1
As shown in fig. 1, a multi-dimensional multi-task learning evaluation system based on deep learning comprises the following modules.
A first drowsiness-tiredness recognition module performs eye opening-and-closing recognition and eye-movement track recognition: the opening-and-closing recognition identifies the user's tired and sleepy state and, combined with the eye-movement track, judges the user's attention; combined with head-posture recognition, the module judges whether the user's reading and learning posture is correct and, together with the eye movements, judges the tired and sleepy state.
A facial expression analysis module judges the user's happy, tense and excited states during learning and evaluates the learning process concretely.
A second drowsiness-tiredness recognition module judges the user's drowsy, tired learning state through eye opening-and-closing and head-posture recognition; a data set is built by collecting different drowsy postures and is divided into a training set and a test set.
A learning-course subject recognition module builds a subject data set from the user's reading and writing content, likewise divided into a training set and a test set; from the collected reading and writing images, recognition against the corresponding training and test sets lets the digital camera confirm the classification of the subject being read or written.
A learning-emotion evaluation module confirms the specific course being studied through the subject recognition module and, combined with the facial expression recognition module, derives an expression index for that subject, so as to evaluate, in multiple dimensions, the user's interest in and grasp of the course content across different subjects.
A paper-scoring module scores the user's papers: when the user writes an assignment, standard answers are entered into the background management system through the user control terminal, either by scanning an image containing the answers or by entering the standard answer of each sub-question according to the structure of the assignment; the user's actual answers are collected and compared with the standard answer of each sub-question to score the paper.
In this embodiment, user identity authentication works as follows: the face recognition module builds personal face recognition data from collected face data, and when the user uses the intelligent desk lamp the digital camera collects face data and identifies the user's identity information. As for the drowsiness-tiredness recognition module, it performs eye opening-and-closing recognition and eye-movement track recognition: the opening-and-closing recognition identifies the user's tired and sleepy state and, combined with the eye-movement track, judges the user's attention; head-posture recognition judges whether the user's reading and learning posture is correct and, combined with the eye movements, judges the tired and sleepy state.
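The identity-recognition step above can be sketched as a nearest-embedding lookup. Everything here is an illustrative assumption: in a real system the embedding vectors would come from a deep face-recognition model, and the 0.8 cosine-similarity threshold is an invented value.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_user(probe, enrolled, threshold=0.8):
    """Return the enrolled identity whose stored embedding is most similar
    to the probe embedding, or None when nothing clears the threshold
    (i.e. the camera sees an unknown person)."""
    best_name, best_sim = None, threshold
    for name, emb in enrolled.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

A probe close to an enrolled vector matches that identity; a probe far from every enrolled vector returns None.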
Facial expression analysis judges the user's happy, tense and excited states during learning and evaluates the learning process concretely. For example, if a mathematics test paper must be completed today, analysing the process of completing it reveals the user's emotional changes, and the user's learning tension can be ranked.
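One way the "learning tension" ranking above could work is to average per-frame expression labels into a tension score per study session. The label set and weights below are assumptions for illustration, not values specified by the patent.

```python
# Assumed mapping from per-frame expression labels to a tension weight.
TENSION_WEIGHT = {"happy": 0.0, "neutral": 0.2, "excited": 0.5, "tense": 1.0}

def tension_score(frame_labels):
    """Average the per-frame tension weights over one work session
    (e.g. while the user completes a mathematics test paper)."""
    if not frame_labels:
        return 0.0
    return sum(TENSION_WEIGHT[l] for l in frame_labels) / len(frame_labels)

def rank_sessions(sessions):
    """Rank sessions (name -> list of frame labels) from most to least
    tense, giving the 'learning tension' ranking the text describes."""
    return sorted(sessions, key=lambda s: tension_score(sessions[s]),
                  reverse=True)
```

A session dominated by "tense" frames ranks above one dominated by "happy" frames.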
The drowsiness-tiredness recognition module judges the user's drowsy, tired learning state from eye opening-and-closing and head-posture recognition, for example when, during learning, the eyes are open but motionless while the head keeps reciprocating, or the head posture is unchanged while the pupils barely move. A data set is built by collecting different drowsy postures and is divided into a training set and a test set.
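Two of the signals described here can be approximated cheaply: sustained eye closure, from the eye opening-and-closing recognition, via an eye-aspect-ratio check, and the "head keeps reciprocating" cue via a direction-reversal counter. The thresholds (0.2 aspect ratio, 50% closed frames, 5-unit swing amplitude) are common heuristics, not values from the patent.

```python
# Heuristic sketch; thresholds are assumed values, not patent parameters.
EAR_CLOSED = 0.2       # below this the eye is treated as closed
DROWSY_RATIO = 0.5     # closed in more than 50% of frames => drowsy

def eye_aspect_ratio(eye_height, eye_width):
    """Eyelid opening height over eye width; small when the eye closes."""
    return eye_height / eye_width

def looks_drowsy(ear_series):
    """Flag a window of per-frame eye-aspect-ratio values as drowsy when
    the eyes stay closed for most of the window."""
    closed = sum(1 for ear in ear_series if ear < EAR_CLOSED)
    return closed / len(ear_series) > DROWSY_RATIO

def head_reciprocating(head_y, min_swings=3, amplitude=5.0):
    """Crude cue for 'head repeatedly swaying': count direction reversals
    of the vertical head position, ignoring sub-amplitude jitter."""
    swings, direction, prev = 0, 0, head_y[0]
    for y in head_y[1:]:
        if abs(y - prev) < amplitude:
            continue                      # ignore small jitter
        d = 1 if y > prev else -1
        if direction and d != direction:  # a reversal counts as one swing
            swings += 1
        direction, prev = d, y
    return swings >= min_swings
```

A head trace oscillating between two levels trips the reciprocation check, while small jitter does not.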
The learning-course subject recognition module builds a subject data set from the content the user reads (or writes), divided into a training set and a test set; from the collected reading (writing) images, and using models trained on the corresponding training and test sets, the digital camera confirms whether the subject content is Chinese, mathematics, physics or chemistry.
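Building the training set and test set from collected samples might look like the sketch below. The 80/20 split ratio, the fixed seed, and the sample names are illustrative assumptions; the classifier itself (a deep network in the patent's framing) is out of scope here.

```python
import random

def split_dataset(samples, test_ratio=0.2, seed=0):
    """Shuffle collected (image, subject_label) samples and divide them
    into the training set and test set the text describes."""
    rng = random.Random(seed)  # fixed seed: reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_ratio))
    return shuffled[n_test:], shuffled[:n_test]
```

The split is disjoint and preserves every sample, so evaluation on the test set never sees training images.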
The learning-emotion evaluation module identifies the specific course through the subject recognition module and, combined with the facial expression recognition module, derives an expression index for the subject being learned, evaluating dimensions such as the user's interest in different subjects and grasp of the course content. Automatic paper-scoring module: when image recognition confirms that the user is writing an assignment, the user enters standard answers into the background management system through the user control terminal; an image containing the answers can be scanned, or the standard answer of each sub-question can be entered according to the structure of the assignment; the user's actual answers are then collected and compared with the standard answer of each sub-question. Management module: manages user identity information; the cloud server distributes desk-lamp firmware updates and backs up data.
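The per-sub-question comparison in the scoring module reduces to matching each recognized answer against its standard answer and totalling the marks. The question ids, answers and mark values below are invented for illustration, and the case/whitespace normalisation is an assumption about how recognized text would be cleaned.

```python
def score_paper(standard, recognized, marks):
    """Compare the recognized answer of each sub-question with its standard
    answer (ignoring case and surrounding whitespace) and total the marks
    for the correct ones. Returns (total, per-question correctness)."""
    total, per_question = 0, {}
    for qid, answer in standard.items():
        got = recognized.get(qid, "").strip().lower()
        correct = got == answer.strip().lower()
        per_question[qid] = correct
        if correct:
            total += marks.get(qid, 0)
    return total, per_question
```

A missing or wrong answer simply scores zero for that sub-question; the per-question map can drive feedback to the user.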
As for the other technical features of this embodiment, those skilled in the art can select among them flexibly according to the actual situation to meet different specific practical requirements. It will be apparent to one of ordinary skill in the art that such specific details are not necessary to practice the application; in other instances, well-known algorithms, methods, systems and the like have not been described in detail so as not to obscure the application, which remains within the scope defined by the appended claims.
For simplicity of explanation, the foregoing method embodiments are presented as a series of acts, but those skilled in the art should understand that the application is not limited by the order of acts described, since some steps may be performed in other orders or concurrently. Further, the embodiments described in the specification are all preferred embodiments, and the acts and elements involved are not necessarily required by the application.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The disclosed systems, modules, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units may be merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed with each other may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may carry out the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a ROM, a RAM, etc.
The foregoing is merely a preferred embodiment of the application. It is to be understood that the application is not limited to the form disclosed herein, nor is it to be construed as excluding other embodiments; rather, it is capable of use in various other combinations, modifications, and environments, and is capable of changes within the scope of the inventive concept described herein, whether guided by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the application are intended to fall within the scope of the appended claims.

Claims (2)

1. A deep-learning-based multi-dimensional multi-task learning evaluation system, applied to an intelligent desk lamp, characterized by comprising:
a first drowsiness and fatigue recognition module for recognizing eye opening-and-closing movements and eye movement tracks; the opening-and-closing movement recognition is used to identify the user's fatigued and drowsy state, and the eye movement track is combined to judge the user's attention; head posture recognition is further combined to judge whether the user's reading and learning posture is correct, and the eye movements are combined to judge the user's fatigued and drowsy state;
a facial expression analysis module for judging the user's states of happiness, tension, and excitement during learning, and for making a specific evaluation of the learning process;
a second drowsiness and fatigue recognition module, which judges the user's drowsy and fatigued state during learning through eye opening-and-closing and head posture recognition, and establishes a data set by collecting different drowsy postures, for use as a training data set and a test data set;
a learning course subject recognition module for establishing a learning course subject data set from the user's reading and writing content, for use as a training data set and a test data set; a digital camera confirms the subject classification of the reading and writing content from the collected reading and writing images and recognition against the corresponding training and test sets;
a learning emotion evaluation module for confirming the specific course being studied by the user through the learning course subject recognition module and, in combination with the facial expression recognition module, recognizing expression index values for the subject being studied, so as to evaluate in multiple dimensions the user's degree of interest in different subjects and grasp of the course content;
an automatic paper-scoring module: when image recognition confirms that the user is writing an assignment, the user enters standard answers into the background management system through the user control terminal; an image bearing the answers may be scanned in, or the standard answer for each sub-question may be entered according to the content structure of the assignment; the actual results of the user's answers are then collected and compared against the entered standard answer for each sub-question to score the paper;
a myopia prevention recognition module for issuing a myopia prevention early warning through threshold calculation of a straight-line distance;
the myopia prevention early warning method comprises the following steps:
s1, determining initial positions of two points of a line segment;
s2, confirming the position of a plane observed and read by eyes through image recognition, confirming the center line of the reading plane, and finding the shortest distance point between the eyes and the reading plane through straight line detection by using Hough transformation;
s3, comparing the data of the calculated minimum reading distance with a designed threshold value, if the data is smaller than the threshold value, warning and reminding through a loudspeaker, and if the data is larger than or equal to the threshold value, confirming that the user belongs to a normal reading mode;
in step S1, the midpoint of the line connecting the center points of the two eyes is taken as the starting point;
in step S2, the end point is the contact point between the pen tip and the assignment text; the distance between the midpoint of the line connecting the two eye centers and the pen-tip contact point on the assignment text is detected using the Hough transform;
the management module is used for managing the user identity information;
the face recognition module is used for establishing personal identity face recognition data from the collected face data, and for recognizing the user's identity information through the digital camera when the user uses the intelligent desk lamp.
2. The deep-learning-based multi-dimensional multi-task learning evaluation system applied to an intelligent desk lamp according to claim 1, further comprising a cloud server for distributing updated firmware programs and for data backup.
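The myopia-prevention steps S1–S3 of claim 1 reduce to a distance-threshold check between the midpoint of the two eye centers and the pen-tip contact point. The following is a minimal sketch of that check, assuming the eye centers and contact point have already been located in 3-D camera coordinates (the Hough-transform detection itself is upstream); the coordinate values and threshold are illustrative, not taken from the patent.

```python
import math

# Hypothetical sketch of steps S1-S3 from claim 1: compute the
# straight-line distance from the midpoint between the two eye centers
# (S1) to the pen-tip contact point on the assignment text (S2), then
# compare it against the early-warning threshold (S3). The coordinates
# and threshold are illustrative; the patent's Hough-based detection is
# assumed to have run upstream.

THRESHOLD_CM = 30.0  # assumed minimum healthy reading distance

def reading_distance(left_eye, right_eye, pen_tip):
    # S1: midpoint of the line connecting the two eye centers.
    mid = tuple((l + r) / 2.0 for l, r in zip(left_eye, right_eye))
    # S2: straight-line distance to the pen-tip contact point.
    return math.dist(mid, pen_tip)

def myopia_warning(left_eye, right_eye, pen_tip, threshold=THRESHOLD_CM):
    # S3: warn (e.g. via the loudspeaker) when the measured distance
    # falls below the threshold; otherwise reading posture is normal.
    return reading_distance(left_eye, right_eye, pen_tip) < threshold

# Eyes ~40 cm above the page: no warning; ~20 cm: warning.
print(myopia_warning((-3, 0, 40), (3, 0, 40), (0, 10, 0)))  # False
print(myopia_warning((-3, 0, 20), (3, 0, 20), (0, 10, 0)))  # True
```

Keeping the threshold as a parameter mirrors the claim's "designed threshold", which would be tuned per device and user.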
CN201911139266.3A 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning Active CN110991277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911139266.3A CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911139266.3A CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Publications (2)

Publication Number Publication Date
CN110991277A CN110991277A (en) 2020-04-10
CN110991277B true CN110991277B (en) 2023-09-22

Family

ID=70085109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911139266.3A Active CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Country Status (1)

Country Link
CN (1) CN110991277B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797324A (en) * 2020-08-07 2020-10-20 广州驰兴通用技术研究有限公司 Distance education method and system for intelligent education
CN112132922B (en) * 2020-09-24 2024-10-15 扬州大学 Method for cartoon image and video in online class
CN114648808A (en) * 2021-11-29 2022-06-21 杭州好学童科技有限公司 Method for detecting learning concentration degree of children
CN116453384A (en) * 2023-06-19 2023-07-18 江西德瑞光电技术有限责任公司 Immersion type intelligent learning system based on TOF technology and control method

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6419638B1 (en) * 1993-07-20 2002-07-16 Sam H. Hay Optical recognition methods for locating eyes
WO2006081505A1 (en) * 2005-01-26 2006-08-03 Honeywell International Inc. A distance iris recognition system
KR20100016696A (en) * 2008-08-05 2010-02-16 주식회사 리얼맨토스 Student learning attitude analysis systems in virtual lecture
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN106127139A (en) * 2016-06-21 2016-11-16 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
KR20170025245A (en) * 2015-08-28 2017-03-08 주식회사 코코넛네트웍스 Method For Providing Smart Lighting Service Based On Face Expression Recognition, Apparatus and System therefor
CN106599881A (en) * 2016-12-30 2017-04-26 首都师范大学 Student state determination method, device and system
WO2017092526A1 (en) * 2015-11-30 2017-06-08 广东百事泰电子商务股份有限公司 Smart table lamp with face distance measurement and near light reminder functions
CN108647657A (en) * 2017-05-12 2018-10-12 华中师范大学 A kind of high in the clouds instruction process evaluation method based on pluralistic behavior data
CN108664932A (en) * 2017-05-12 2018-10-16 华中师范大学 A kind of Latent abilities state identification method based on Multi-source Information Fusion
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN108826071A (en) * 2018-07-12 2018-11-16 太仓煜和网络科技有限公司 A kind of reading desk lamp based on artificial intelligence
WO2019050074A1 (en) * 2017-09-08 2019-03-14 주식회사 듀코젠 Studying system capable of providing cloud-based digital question writing solution and implementing distribution service platform, and control method thereof
WO2019075820A1 (en) * 2017-10-20 2019-04-25 深圳市鹰硕技术有限公司 Test paper reviewing system
CN110333774A (en) * 2019-03-20 2019-10-15 中国科学院自动化研究所 A remote user attention assessment method and system based on multimodal interaction

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4285012B2 (en) * 2003-01-31 2009-06-24 株式会社日立製作所 Learning situation judgment program and user situation judgment system
US7680357B2 (en) * 2003-09-09 2010-03-16 Fujifilm Corporation Method and apparatus for detecting positions of center points of circular patterns
US8320708B2 (en) * 2004-04-02 2012-11-27 K-Nfb Reading Technology, Inc. Tilt adjustment for optical character recognition in portable reading machine
JP4659631B2 (en) * 2005-04-26 2011-03-30 富士重工業株式会社 Lane recognition device
JP5181704B2 (en) * 2008-02-07 2013-04-10 日本電気株式会社 Data processing apparatus, posture estimation system, posture estimation method and program
US20120208166A1 (en) * 2011-02-16 2012-08-16 Steve Ernst System and Method for Adaptive Knowledge Assessment And Learning
EP2733671B1 (en) * 2011-07-14 2019-08-21 MegaChips Corporation Straight line detection device and straight line detection method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
左国才; 王海东; 吴小平; 苏秀芝. Research on the application of deep-learning-based face recognition technology in learning effect evaluation. 智能计算机与应用 (Intelligent Computer and Applications). 2019, (03), full text. *
赵帅; 黄晓婷. Still on the road: the development and limitations of artificial intelligence in teaching. 北京大学教育评论 (Peking University Education Review). 2019, (04), full text. *
陈靓影; 罗珍珍; 徐如意. Intelligent analysis of students' learning interest in classroom teaching environments. 电化教育研究 (e-Education Research). 2018, (08), full text. *

Also Published As

Publication number Publication date
CN110991277A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110991277B (en) Multi-dimensional multi-task learning evaluation system based on deep learning
Fridman et al. ‘Owl’ and ‘Lizard’: Patterns of head pose and eye pose in driver gaze classification
US20200242416A1 (en) Systems and methods for machine learning enhanced by human measurements
Maxfield et al. Effects of target typicality on categorical search
CN106599881A (en) Student state determination method, device and system
CN110969099A (en) Threshold value calculation method for myopia prevention and early warning linear distance and intelligent desk lamp
EP3370181B1 (en) Segment-block-based handwritten signature authentication system and method
CN110674664A (en) Visual attention recognition method and system, storage medium and processor
US20160104385A1 (en) Behavior recognition and analysis device and methods employed thereof
CN115607156B (en) Multi-mode-based psychological cognitive screening evaluation method, system and storage medium
CN114219224A (en) Teaching quality detection method and system for intelligent classroom
Shukla et al. An efficient approach of face detection and prediction of drowsiness using SVM
Baray et al. Eog-based reading detection in the wild using spectrograms and nested classification approach
CN119863738A (en) User learning concentration evaluation method and device, electronic equipment and storage medium
CN115132027A (en) Smart programming learning system and learning method based on multimodal deep learning
Roy et al. Students attention monitoring and alert system for online classes using face landmarks
Boels et al. Automated gaze-based identification of students’ strategies in histogram tasks through an interpretable mathematical model and a machine learning algorithm
KR102092633B1 (en) Method and apparatus for modeling based on cognitive response of smart senior
CN112163462A (en) Face-based juvenile recognition method and device and computer equipment
Madake et al. Vision-based monitoring of student attentiveness in an e-learning environment
CN111507555B (en) Human body state detection method, classroom teaching quality evaluation method and related device
Kavitha et al. Framework for Detecting Student Behaviour (Nail Biting, Sleep, and Yawn) Using Deep Learning Algorithm
JP2020115175A (en) Information processing apparatus, information processing method, and program
Prome et al. LDNet: A Robust Hybrid Approach for Lie Detection Using Deep Learning Techniques.
CN115690867A (en) Classroom concentration detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant