
CN101887513A - Expression detection device and expression detection method thereof - Google Patents

Expression detection device and expression detection method thereof

Info

Publication number
CN101887513A
Authority
CN
China
Prior art keywords
eyes
reference point
expression
face
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2009101412991A
Other languages
Chinese (zh)
Other versions
CN101887513B (en)
Inventor
宋开泰
韩孟儒
王仕杰
林家合
林季谊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Novatek Microelectronics Corp
Original Assignee
Novatek Microelectronics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novatek Microelectronics Corp
Priority to CN2009101412991A
Publication of CN101887513A
Application granted
Publication of CN101887513B
Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an expression detection device and an expression detection method thereof. The expression detection device comprises a gray-scale image generation unit, a face edge detection unit, a dynamic skin color capturing unit, a facial contour generation unit, and an expression detection unit. The gray-scale image generation unit generates a gray-scale image from an original image. The face edge detection unit outputs a face edge detection result according to the gray-scale image. The dynamic skin color capturing unit generates a dynamic skin color capturing result according to the original image, and generates a face-and-background segmentation result according to the dynamic skin color capturing result. The facial contour generation unit outputs a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result. The expression detection unit outputs an expression detection result according to the facial contour.

Description

Expression detection device and expression detection method thereof
Technical field
The present invention relates to an expression detection device and an expression detection method thereof, and more particularly to a low-computation expression detection device and expression detection method thereof.
Background
In daily life, people often express their inner emotions through facial expressions. Emotional expression mainly involves the mouth, eyes, eyebrows, and cheeks. When expressing an inner emotion, a person may change only a local facial feature (such as raising the corners of the mouth) to convey a state of mind. With rapid advances in technology, it is expected that expression recognition can be applied to electronic devices to significantly improve convenience of use.
Smile detection is one of the major tasks in facial expression detection, and its processing methods can be divided into two parts: facial feature detection techniques and classifiers. Traditional facial feature detection techniques mostly set fixed frames for the eyes, nose, and mouth on the face region, and then locate the facial feature positions statistically (Taiwan Patent No. 00445434; TW226589B; U.S. Pat. No. 6,526,161). As for classifier techniques, U.S. Pat. No. 6,430,307 puts newly added samples into PCA to compute orthogonal bases and then compares whether they match the originals.
However, traditional facial expression detection techniques not only require massive computation but are also difficult to deploy on embedded platforms (such as digital cameras). In addition, traditional facial expression detection techniques are easily affected by the light source; when brightness is uneven, the correctness of the expression detection result is directly affected.
Summary of the invention
The purpose of the present invention is to provide an expression detection device and an expression detection method thereof, which offer at least the following advantages:
1. Feature positions can still be captured when the pose of the face changes.
2. The influence of light-source variation is reduced.
3. Facial feature positions are computed quickly.
4. The expression detection result is obtained quickly.
5. The computational load is low, making the invention well suited to embedded systems.
According to an aspect of the present invention, an expression detection device is proposed. The expression detection device comprises a gray-scale image generation unit, a face edge detection unit, a dynamic skin color capturing unit, a facial contour generation unit, and an expression detection unit. The gray-scale image generation unit generates a gray-scale image from an original image. The face edge detection unit outputs a face edge detection result according to the gray-scale image. The dynamic skin color capturing unit generates a dynamic skin color capturing result according to the original image, and generates a face-and-background segmentation result according to the dynamic skin color capturing result. The facial contour generation unit outputs a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result. The expression detection unit outputs an expression detection result according to the facial contour.
According to a further aspect of the present invention, an expression detection method is proposed. The expression detection method comprises: generating a gray-scale image according to an original image; outputting a face edge detection result according to the gray-scale image; generating a dynamic skin color capturing result according to the original image, and generating a face-and-background segmentation result according to the dynamic skin color capturing result; outputting a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result; and outputting an expression detection result according to the facial contour.
Description of drawings
In order that the foregoing content of the present invention may be more readily understood, preferred embodiments are described in detail below in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an expression detection device according to an embodiment of the invention.
Fig. 2 is a schematic diagram of the expression detection unit.
Fig. 3 is a schematic diagram of the feature capturing unit.
Fig. 4 is a schematic diagram of the mouth region.
Fig. 5 is a schematic diagram of the mouth region divided into 32 equal blocks.
Fig. 6 is a schematic diagram of the reference point capturing unit.
Fig. 7 is a schematic diagram of the frame selection unit.
Fig. 8 is a flowchart of an expression detection method according to an embodiment of the invention.
Detailed description
The following embodiments provide an expression detection device and an expression detection method thereof. The expression detection device comprises a gray-scale image generation unit, a face edge detection unit, a dynamic skin color capturing unit, a facial contour generation unit, and an expression detection unit. The gray-scale image generation unit generates a gray-scale image from an original image. The face edge detection unit outputs a face edge detection result according to the gray-scale image. The dynamic skin color capturing unit generates a dynamic skin color capturing result according to the original image, and generates a face-and-background segmentation result according to the dynamic skin color capturing result. The facial contour generation unit outputs a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result. The expression detection unit outputs an expression detection result according to the facial contour.
Please refer to Fig. 1, a schematic diagram of an expression detection device according to an embodiment of the invention. The expression detection device 10 comprises a gray-scale image generation unit 110, a face edge detection unit 120, a dynamic skin color capturing unit 130, a facial contour generation unit 140, and an expression detection unit 150. The gray-scale image generation unit 110 generates a gray-scale image S2 from an original image S1. The face edge detection unit 120 outputs a face edge detection result S3 according to the gray-scale image S2; for example, it performs horizontal edge detection on the gray-scale image S2 to output the face edge detection result S3. The dynamic skin color capturing unit 130 generates a dynamic skin color capturing result according to the original image S1, and generates a face-and-background segmentation result S4 according to the dynamic skin color capturing result. The facial contour generation unit 140 outputs a facial contour S5 according to the gray-scale image S2, the face edge detection result S3, and the face-and-background segmentation result S4. The expression detection unit 150 outputs an expression detection result S6 according to the facial contour S5.
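For concreteness, a minimal Python/NumPy sketch of what units 110 and 120 might compute is given below; the standard luma weights and the row-difference edge operator are assumptions, since the patent only states that horizontal edge detection is performed on the gray-scale image.

```python
import numpy as np

def to_grayscale(rgb):
    """Gray-scale image generation (unit 110): standard luma weights
    (an assumption; the patent does not specify the conversion)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def horizontal_edges(gray, thresh=30):
    """Face edge detection (unit 120): horizontal edges found as large
    row-to-row gray-level differences (assumed operator and threshold)."""
    g = gray.astype(np.int32)
    dy = np.abs(g[1:, :] - g[:-1, :])
    edges = np.zeros(gray.shape, dtype=np.uint8)
    edges[1:, :] = np.where(dy > thresh, 255, 0)
    return edges
```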
It should be noted that an image's pixels are not evenly distributed across the 0-255 gray-level range; most pixels fall within a certain interval. For example, in a face image, 80% to 90% of the gray-level values may fall between 50 and 100. The so-called dynamic skin color means that a different threshold is set according to the current face image. Because the threshold is set according to the gray-level distribution of the whole face image, it adapts very well and reduces the influence of light-source variation. It follows that the dynamic skin color capturing unit 130, by adaptively generating the face-and-background segmentation result S4 from the dynamic skin color capturing result, significantly reduces the influence of light-source variation. In addition, because the expression detection device 10 does not use fixed frames, it can still correctly capture feature positions when the pose of the face changes. Moreover, the facial contour generation unit 140 computes the facial contour S5, which helps subsequently locate facial feature positions quickly. Finally, the computational load of the expression detection device 10 is low, making it well suited to embedded systems.
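The passage above does not state the exact thresholding rule, so the following sketch assumes one plausible reading: take the threshold at a fixed percentile of the face image's gray-level histogram, so that it adapts to the overall brightness of the current image.

```python
import numpy as np

def dynamic_skin_threshold(face_gray, percentile=15):
    """Dynamic skin-color capture (unit 130), sketched: derive the threshold
    from the gray-level distribution of the current face image, so it tracks
    scene brightness instead of using a fixed constant (percentile assumed)."""
    t = np.percentile(face_gray, percentile)
    skin_mask = face_gray > t  # coarse face-vs-background segmentation (S4)
    return skin_mask, t
```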
Please refer to Fig. 2, a schematic diagram of the expression detection unit. The expression detection unit 150 further comprises a feature capturing unit 152, a specified-expression and non-specified-expression database 156, and a classifier 154. The feature capturing unit 152 outputs a feature vector S7 according to the facial contour S5. The specified-expression and non-specified-expression database 156 stores a plurality of specified expression images and non-specified expression images, and outputs a feature vector S8 according to the specified and non-specified expression images. The classifier 154 outputs the expression detection result S6 according to the feature vectors S7 and S8.
The classifier 154 is, for example, a Support Vector Machine (SVM) classifier. The images stored in the specified-expression and non-specified-expression database 156 are divided into two classes, specified expression images and non-specified expression images. Training the SVM yields the support vectors (SVs) and the separating hyperplane between the two classes of data, such that the distance from the two classes to the separating hyperplane is maximal.
The classifier 154 determines, for example, from the inner product of the feature vectors S7 and S8 whether the expression detection result S6 belongs to a specified expression image or a non-specified expression image. For instance, an inner product greater than 0 indicates that the expression detection result S6 belongs to a specified expression image; conversely, an inner product less than 0 indicates that it belongs to a non-specified expression image.
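A minimal sketch of this decision rule follows; it assumes a linear SVM whose weight vector and bias were obtained offline from the database images, which is one common realization of the inner-product test described above.

```python
import numpy as np

def classify_expression(s7, weight, bias=0.0):
    """Classifier 154, sketched as a trained linear SVM decision: a positive
    inner product means the specified expression (e.g. a smile), a negative
    one means a non-specified expression. `weight` and `bias` come from
    offline training on the database images (training omitted here)."""
    score = float(np.dot(s7, weight)) + bias
    return "specified" if score > 0 else "non-specified"
```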
Please refer to Figs. 3 to 5. Fig. 3 is a schematic diagram of the feature capturing unit, Fig. 4 is a schematic diagram of the mouth region, and Fig. 5 is a schematic diagram of the mouth region divided into 32 equal blocks. The feature capturing unit 152 further comprises a reference point capturing unit 1522, a frame selection unit 1524, and a feature value capturing unit 1526. The reference point capturing unit 1522 outputs feature point data S9 according to the facial contour S5 and the gray-scale image S2. The feature point data S9 can be the reference point of any facial feature, such as an eye reference point or a mouth reference point. The frame selection unit 1524 frames a feature region S10 according to the feature point data S9. The feature region S10 can be the region of any facial feature, such as the mouth region or the eye region. The feature value capturing unit 1526 divides the feature region S10 into several equal blocks and calculates the mean value of each block, to output the feature vector S7. For instance, the feature region S10 is the mouth region illustrated in Fig. 4. The feature value capturing unit 1526 divides the mouth region into 4x8 equal blocks and calculates the mean gray level of each block. In practice, because the upper-left, lower-left, upper-right, and lower-right blocks of the mouth region usually fall outside the lips, these four values are discarded, and the remaining 28 mean gray levels are taken as the feature vector S7 representing the mouth region, which is used to train the classifier 154 of Fig. 2. Because the gray levels of the feature region S10 can be regarded as a feature vector, the classifier 154 of Fig. 2 obtains the expression detection result S6 quickly.
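A minimal sketch of the 4x8 block-averaging step follows; it assumes the mouth region is simply cropped to dimensions divisible by 4 and 8, and discards the four corner blocks as described to yield the 28-dimensional feature vector.

```python
import numpy as np

def mouth_feature_vector(mouth_gray):
    """Feature value capture (unit 1526): split the mouth region into 4x8
    equal blocks, take each block's mean gray level, then drop the four
    corner blocks (usually outside the lips), leaving 28 values (S7)."""
    rows, cols = 4, 8
    h, w = mouth_gray.shape
    cropped = mouth_gray[: h - h % rows, : w - w % cols]  # make divisible
    blocks = cropped.reshape(rows, cropped.shape[0] // rows,
                             cols, cropped.shape[1] // cols)
    means = blocks.mean(axis=(1, 3))  # 4x8 grid of block means
    corners = {(0, 0), (0, cols - 1), (rows - 1, 0), (rows - 1, cols - 1)}
    return np.array([means[r, c]
                     for r in range(rows) for c in range(cols)
                     if (r, c) not in corners])
```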
Please refer to Fig. 6, a schematic diagram of the reference point capturing unit. The reference point capturing unit 1522 further comprises an upper-half integrated optical density (IOD) computing unit 15222, a binarization unit 15224, an eye reference point generation unit 15226, and a mouth reference point generation unit 15228. The upper-half IOD computing unit 15222 calculates the upper-half integrated optical density S11 of the upper half of the gray-scale image S2. The binarization unit 15224 outputs a binarization result S12 according to the upper-half integrated optical density S11. The eye reference point generation unit 15226 finds the two eye reference points S91 of the feature point data S9 according to the binarization result S12. The mouth reference point generation unit 15228 finds the mouth reference point S92 of the feature point data S9 according to the two eye reference points S91 and the facial contour S5.
For instance, the upper-half IOD computing unit 15222 finds the darkest 5% of the upper half of the gray-scale image S2 according to the upper-half integrated optical density, so as to locate the eyebrow positions. The binarization unit 15224 binarizes the upper-half integrated optical density S11 according to a threshold: parts of S11 greater than the threshold are set to 255, and parts less than the threshold are set to 0. The two eye reference points S91 comprise a left eye reference point and a right eye reference point. The eye reference point generation unit 15226 takes the first breakpoint, scanning from bottom to top, on the left side of the binarization result S12 as the location of the left eye reference point; similarly, the first breakpoint, scanning from bottom to top, on the right side of the binarization result S12 is the location of the right eye reference point. The mouth reference point generation unit 15228 takes the horizontal coordinate of the midpoint between the left and right eye reference points as the horizontal coordinate of the mouth reference point S92, and selects the position of lowest mean brightness in the lower half of the facial contour S5 as the vertical coordinate of the mouth reference point S92.
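A minimal sketch of the reference-point computation follows; reading "first breakpoint from bottom to top" as the lowest dark row on each side of the binarized upper half is an assumption, as is applying the 5% rule directly per image.

```python
import numpy as np

def eye_and_mouth_reference_points(gray):
    """Reference point capture (unit 1522), sketched with the assumptions
    stated above: threshold the upper half at its darkest 5%, take the
    lowest dark row on each side as the eye reference points (S91); the
    mouth reference point (S92) sits midway between the eyes horizontally,
    at the darkest row of the lower half vertically."""
    h, w = gray.shape
    upper = gray[: h // 2]
    t = np.percentile(upper, 5)      # darkest 5%: eyebrow/eye pixels
    binary = upper <= t              # binarization result (S12)

    def lowest_dark_point(half, x_offset):
        rows = np.where(half.any(axis=1))[0]
        r = rows.max()               # first dark row, scanning bottom-up
        c = np.where(half[r])[0].mean() + x_offset
        return int(r), int(c)

    left_eye = lowest_dark_point(binary[:, : w // 2], 0)
    right_eye = lowest_dark_point(binary[:, w // 2 :], w // 2)
    mouth_x = (left_eye[1] + right_eye[1]) // 2
    lower = gray[h // 2 :]
    mouth_y = h // 2 + int(np.argmin(lower.mean(axis=1)))
    return left_eye, right_eye, (mouth_y, mouth_x)
```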
Please refer to Fig. 7, a schematic diagram of the frame selection unit. The frame selection unit 1524 further comprises an estimated-range frame selection unit 15241, an eye edge detection unit 15242, an eye integrated optical density (IOD) computing unit 15243, a logic operation unit 15244, and a feature position frame selection unit 15245. The estimated-range frame selection unit 15241 first frames a rough estimated eye range S93 according to the two eye reference points S91. The eye edge detection unit 15242 outputs an eye edge detection result S94 according to the estimated eye range S93. The eye IOD computing unit 15243 outputs an eye integrated optical density S95 according to the estimated eye range S93. The logic operation unit 15244 outputs a logic operation result S96 according to the eye edge detection result S94 and the eye integrated optical density S95; the logic operation result S96 is, for example, the intersection of the eye edge detection result S94 and the eye integrated optical density S95. The feature position frame selection unit 15245 frames the eye region of the feature region S10 according to the logic operation result S96, and frames the mouth region of the feature region S10 according to the mouth reference point S92. After the frame selection unit 1524 frames the mouth region, the classifier 154 of Fig. 2 can detect whether a smile occurs according to the mean gray levels in the mouth region. Similarly, after the frame selection unit 1524 frames the eye region, the classifier 154 of Fig. 2 can detect whether a blink occurs according to the mean gray levels in the eye region.
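A minimal sketch of the intersection and framing steps follows; the bounding-box rule in frame_region is an assumed framing rule, since the patent only specifies that the eye region is framed according to the logic operation result.

```python
import numpy as np

def eye_region_mask(eye_edges, eye_dark_mask):
    """Logic operation (unit 15244): keep only pixels where the eye edge map
    and the dark-pixel (integrated optical density) map agree -- their
    intersection (S96)."""
    return np.logical_and(eye_edges > 0, eye_dark_mask)

def frame_region(mask):
    """Feature position framing (unit 15245), sketched as the tightest
    bounding box around the mask (the patent does not give the exact rule)."""
    ys, xs = np.where(mask)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()
```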
Please refer to Figs. 1 and 8 together; Fig. 8 is a flowchart of the expression detection method according to an embodiment of the invention. The expression detection method can be applied to the expression detection device 10 of the foregoing embodiment and comprises at least the following steps. First, as shown in step 810, the gray-scale image generation unit 110 generates the gray-scale image S2 from the original image S1. Then, as shown in step 820, the face edge detection unit 120 outputs the face edge detection result S3 according to the gray-scale image S2. Next, as shown in step 830, the dynamic skin color capturing unit 130 generates the dynamic skin color capturing result according to the original image S1 and generates the face-and-background segmentation result S4 according to the dynamic skin color capturing result. Then, as shown in step 840, the facial contour generation unit 140 outputs the facial contour S5 according to the gray-scale image S2, the face edge detection result S3, and the face-and-background segmentation result S4. Finally, as shown in step 850, the expression detection unit 150 outputs the expression detection result S6 according to the facial contour S5.
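Chaining the sketches above gives an end-to-end illustration of steps 810 through 850; the way the facial contour is fused in step 840 and the fixed mouth window size are assumptions for illustration only.

```python
import numpy as np

def detect_expression(rgb, weight, bias=0.0):
    """End-to-end flow of Fig. 8 (steps 810-850), chaining the sketches
    above; the contour fusion and the 32x64 mouth window are assumed."""
    gray = to_grayscale(rgb)                            # step 810
    edges = horizontal_edges(gray)                      # step 820
    skin_mask, _ = dynamic_skin_threshold(gray)         # step 830
    face_mask = np.logical_and(skin_mask, edges == 0)   # step 840 (assumed
                                                        # fusion; unused below)
    _, _, (my, mx) = eye_and_mouth_reference_points(gray)
    mouth = gray[my - 16 : my + 16, mx - 32 : mx + 32]  # assumed window
    s7 = mouth_feature_vector(mouth)                    # feature vector S7
    return classify_expression(s7, weight, bias)        # step 850
```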
The expression detection device and expression detection method disclosed in the above embodiments of the invention have numerous advantages, some of which are listed below:
1. Feature positions can still be captured when the pose of the face changes.
2. The influence of light-source variation is reduced.
3. Facial feature positions are computed quickly.
4. The expression detection result is obtained quickly.
5. The computational load is low, making the invention well suited to embedded systems.
In summary, although the invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Those with ordinary knowledge in the technical field of the invention may make various equivalent changes and substitutions without departing from the spirit and scope of the invention. Therefore, the protection scope of the invention shall be defined by the appended claims.

Claims (20)

  1. An expression detection method, comprising:
    generating a gray-scale image according to an original image;
    outputting a face edge detection result according to the gray-scale image;
    generating a dynamic skin color capturing result according to the original image, and generating a face-and-background segmentation result according to the dynamic skin color capturing result;
    outputting a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result; and
    outputting an expression detection result according to the facial contour.
  2. The expression detection method according to claim 1, wherein the step of outputting an expression detection result comprises:
    outputting a first feature vector according to the facial contour;
    storing a plurality of specified expression images and non-specified expression images, and outputting at least one second feature vector according to the specified expression images and non-specified expression images; and
    outputting the expression detection result according to the first feature vector and the second feature vector.
  3. The expression detection method according to claim 2, wherein the step of outputting a first feature vector comprises:
    outputting feature point data according to the facial contour and the gray-scale image;
    framing a feature region according to the feature point data; and
    dividing the feature region into a plurality of equal blocks and calculating the mean value of each of the equal blocks, to output the first feature vector.
  4. The expression detection method according to claim 3, wherein the step of outputting feature point data comprises:
    calculating an upper-half integrated optical density of the upper half of the gray-scale image;
    outputting a binarization result according to the upper-half integrated optical density;
    finding a first eye reference point and a second eye reference point of the feature point data according to the binarization result; and
    finding a mouth reference point of the feature point data according to the first eye reference point, the second eye reference point, and the facial contour.
  5. The expression detection method according to claim 4, wherein the step of framing a feature region comprises:
    framing an estimated eye range according to the eye reference points;
    outputting an eye edge detection result according to the estimated eye range;
    outputting an eye integrated optical density according to the estimated eye range;
    outputting a logic operation result according to the eye edge detection result and the eye integrated optical density; and
    framing an eye region of the feature region according to the logic operation result, and framing a mouth region of the feature region according to the mouth reference point.
  6. The expression detection method according to claim 5, wherein the step of outputting a logic operation result outputs the intersection of the eye edge detection result and the eye integrated optical density.
  7. The expression detection method according to claim 4, wherein the step of finding a mouth reference point of the feature point data comprises:
    calculating the horizontal coordinate of the mouth reference point according to the first eye reference point and the second eye reference point; and
    finding the vertical coordinate of the mouth reference point according to the mean brightness of the lower half of the facial contour.
  8. The expression detection method according to claim 7, wherein the step of calculating the horizontal coordinate of the mouth reference point takes the horizontal coordinate of the midpoint between the first eye reference point and the second eye reference point as the horizontal coordinate of the mouth reference point.
  9. The expression detection method according to claim 7, wherein the step of finding the vertical coordinate of the mouth reference point selects the position of lowest mean brightness in the lower half of the facial contour as the vertical coordinate of the mouth reference point.
  10. The expression detection method according to claim 1, wherein the step of outputting a face edge detection result performs horizontal edge detection on the gray-scale image to output the face edge detection result.
  11. An expression detection device, comprising:
    a gray-scale image generation unit, configured to generate a gray-scale image according to an original image;
    a face edge detection unit, configured to output a face edge detection result according to the gray-scale image;
    a dynamic skin color capturing unit, configured to generate a dynamic skin color capturing result according to the original image, and to generate a face-and-background segmentation result according to the dynamic skin color capturing result;
    a facial contour generation unit, configured to output a facial contour according to the gray-scale image, the face edge detection result, and the face-and-background segmentation result; and
    an expression detection unit, configured to output an expression detection result according to the facial contour.
  12. The expression detection device according to claim 11, wherein the expression detection unit comprises:
    a feature capturing unit, configured to output a first feature vector according to the facial contour;
    a specified-expression and non-specified-expression database, configured to store a plurality of specified expression images and non-specified expression images, and to output at least one second feature vector according to the specified expression images and non-specified expression images; and
    a classifier, configured to output the expression detection result according to the first feature vector and the second feature vector.
  13. The expression detection device according to claim 12, wherein the feature capturing unit comprises:
    a reference point capturing unit, configured to output feature point data according to the facial contour and the gray-scale image;
    a frame selection unit, configured to frame a feature region according to the feature point data; and
    a feature value capturing unit, configured to divide the feature region into a plurality of equal blocks and to calculate the mean value of each of the equal blocks, to output the first feature vector.
  14. The expression detection device according to claim 13, wherein the reference point capturing unit comprises:
    an upper-half integrated optical density (IOD) computing unit, configured to calculate an upper-half integrated optical density of the upper half of the gray-scale image;
    a binarization unit, configured to output a binarization result according to the upper-half integrated optical density;
    an eye reference point generation unit, configured to find a first eye reference point and a second eye reference point of the feature point data according to the binarization result; and
    a mouth reference point generation unit, configured to find a mouth reference point of the feature point data according to the first eye reference point, the second eye reference point, and the facial contour.
  15. The expression detection device according to claim 14, wherein the frame selection unit comprises:
    an estimated-range frame selection unit, configured to frame an estimated eye range according to the eye reference points;
    an eye edge detection unit, configured to output an eye edge detection result according to the estimated eye range;
    an eye integrated optical density (IOD) computing unit, configured to output an eye integrated optical density according to the estimated eye range;
    a logic operation unit, configured to output a logic operation result according to the eye edge detection result and the eye integrated optical density; and
    a feature position frame selection unit, configured to frame an eye region of the feature region according to the logic operation result, and to frame a mouth region of the feature region according to the mouth reference point.
  16. The expression detection device according to claim 15, wherein the logic operation unit outputs the intersection of the eye edge detection result and the eye integrated optical density.
  17. The expression detection device according to claim 14, wherein the mouth reference point generation unit calculates the horizontal coordinate of the mouth reference point according to the first eye reference point and the second eye reference point, and finds the vertical coordinate of the mouth reference point according to the mean brightness of the lower half of the facial contour.
  18. The expression detection device according to claim 17, wherein the mouth reference point generation unit takes the horizontal coordinate of the midpoint between the first eye reference point and the second eye reference point as the horizontal coordinate of the mouth reference point.
  19. The expression detection device according to claim 17, wherein the mouth reference point generation unit selects the position of lowest mean brightness in the lower half of the facial contour as the vertical coordinate of the mouth reference point.
  20. The expression detection device according to claim 11, wherein the face edge detection unit performs horizontal edge detection on the gray-scale image to output the face edge detection result.
CN2009101412991A 2009-05-12 2009-05-12 Expression detection device and expression detection method thereof Expired - Fee Related CN101887513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101412991A CN101887513B (en) 2009-05-12 2009-05-12 Expression detection device and expression detection method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101412991A CN101887513B (en) 2009-05-12 2009-05-12 Expression detection device and expression detection method thereof

Publications (2)

Publication Number Publication Date
CN101887513A (en) 2010-11-17
CN101887513B CN101887513B (en) 2012-11-07

Family

ID=43073429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101412991A Expired - Fee Related CN101887513B (en) 2009-05-12 2009-05-12 Expression detection device and expression detection method thereof

Country Status (1)

Country Link
CN (1) CN101887513B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335691A (en) * 2014-08-14 2016-02-17 南京普爱射线影像设备有限公司 Smiling face identification and encouragement system
CN105354527A (en) * 2014-08-20 2016-02-24 南京普爱射线影像设备有限公司 Negative expression recognizing and encouraging system
CN106339658A (en) * 2015-07-09 2017-01-18 阿里巴巴集团控股有限公司 Data processing method and device
CN106446753A (en) * 2015-08-06 2017-02-22 南京普爱医疗设备股份有限公司 Negative expression identifying and encouraging system
CN107833177A (en) * 2017-10-31 2018-03-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN113011386A (en) * 2021-04-13 2021-06-22 重庆大学 Expression recognition method and system based on equally divided characteristic graphs

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4799104B2 (en) * 2005-09-26 2011-10-26 キヤノン株式会社 Information processing apparatus and control method therefor, computer program, and storage medium
CN100397410C (en) * 2005-12-31 2008-06-25 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
JP2007213378A (en) * 2006-02-10 2007-08-23 Fujifilm Corp Specific facial expression detection method, imaging control method and apparatus, and program
JP4999570B2 (en) * 2007-06-18 2012-08-15 キヤノン株式会社 Facial expression recognition apparatus and method, and imaging apparatus

Also Published As

Publication number Publication date
CN101887513B (en) 2012-11-07

Similar Documents

Publication Publication Date Title
US8437516B2 (en) Facial expression recognition apparatus and facial expression recognition method thereof
US8379920B2 (en) Real-time clothing recognition in surveillance videos
US9471831B2 (en) Apparatus and method for face recognition
JP2020522807A (en) System and method for guiding a user to take a selfie
JP6351243B2 (en) Image processing apparatus and image processing method
KR101727438B1 (en) Deformable expression detector
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
Asi et al. A coarse-to-fine approach for layout analysis of ancient manuscripts
CN101887513A (en) Expression detection device and expression detection method thereof
CN102184016B (en) Noncontact type mouse control method based on video sequence recognition
CN101853397A (en) A bionic face detection method based on human visual characteristics
Chalup et al. Simulating pareidolia of faces for architectural image analysis
CN110334631B (en) A Sitting Posture Detection Method Based on Face Detection and Binary Operation
CN113673378B (en) Face recognition method and device based on binocular camera and storage medium
JP2009289210A (en) Device and method for recognizing important object and program thereof
KR20200072238A (en) Apparatus of character area extraction in video
KR100910754B1 (en) Skin region detection method using grid based approach in real time input image including human body
KR101385373B1 (en) Method for face detection-based hand gesture recognition
US12342098B2 (en) System and method for generating virtual background for people frames
CN104866825B (en) A kind of sign language video frame sequence classification method based on Hu square
Puri et al. Coarse head pose estimation using image abstraction
Chaw et al. Facial expression recognition using correlation of eyes regions
Alzubaydi et al. Face Clip Detection System Using HSV Color Model
Khaliluzzaman et al. Human facial feature detection based on skin color and edge labeling
Hirata et al. Recognizing facial expression for man-machine interaction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121107

Termination date: 20140512