CN111027483A - A clothing detection method for kitchen staff based on HSV color space processing - Google Patents
A clothing detection method for kitchen staff based on HSV color space processing
- Publication number
- CN111027483A CN111027483A CN201911260996.9A CN201911260996A CN111027483A CN 111027483 A CN111027483 A CN 111027483A CN 201911260996 A CN201911260996 A CN 201911260996A CN 111027483 A CN111027483 A CN 111027483A
- Authority
- CN
- China
- Prior art keywords
- human body
- max
- key points
- points
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a clothing detection method for kitchen staff based on HSV color space processing. Building on an existing person detection model and a 2D human pose estimation model, the method can detect the clothing of kitchen staff with an HSV color space approach even when labeled data for kitchen scenes is unavailable. Moreover, detection for different clothing color requirements can be achieved simply by adjusting the HSV color space processing parameters. The method first uses the person detection model to propose candidate boxes for the target persons, then feeds each candidate box into the 2D human pose estimation model to estimate the human body key points, and locates the head region and body trunk region of the target person from the estimated key points combined with human joint proportions. HSV color space processing is then applied to the two regions separately, the proportion of white pixels in each resulting binary image is counted, and finally it is judged whether the person's clothing complies with the standard.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to a method for detecting the clothing of kitchen staff based on HSV color space processing.
Background
At present, the field of clothing detection for kitchen staff in China is almost blank. The prior art related to the present invention includes:
Chinese invention patent CN201310745896.1, entitled "Dressing safety detection method for personnel at electric power facility work sites";
Chinese invention patent CN201811475125.4, entitled "Safe dressing detection method for electric power operators based on YOLOv3 object detection";
Chinese invention patent CN201810366469.5, entitled "Construction site personnel uniform wearing identification method based on deep learning".
The prior art above follows the mainstream approach: a neural network model capable of detecting the relevant features is trained on a large amount of labeled data to carry out the detection task. However, this approach is difficult to apply when no relevant labeled data set is available. Meanwhile, traditional image processing algorithms alone struggle to detect a specific target reliably in complex scenes.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a clothing detection method for kitchen staff based on HSV color space processing.
To achieve this purpose, the technical solution adopted by the invention is as follows:
A clothing detection method for kitchen staff based on HSV color space processing comprises the following steps: a person detection model proposes candidate boxes for the target persons; each candidate box is fed into a 2D human pose estimation model to estimate the human body key points; the head region and body trunk region of the target person are located from the estimated key points combined with human joint proportions. HSV color space processing is then applied to the two regions separately, the proportion of white pixels in each resulting binary image is counted, and finally it is judged whether the person's clothing complies with the standard.
Further, the clothing detection method for kitchen staff based on HSV color space processing comprises the following steps:
step S1: and carrying out personnel detection on the given kitchen scene image by using a character detection model, and outputting a target personnel candidate frame.
Step S2: feed the image region contained in each candidate box into a 2D human pose estimation model and estimate the person's human body key points.
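The 2D pose estimation model is likewise left unspecified. The sketches that follow assume only the hypothetical interface below: one candidate-box crop in, a dictionary of named key points out, with pixel coordinates given in the crop's frame and missing joints simply absent. The function name and dictionary layout are assumptions for illustration, not part of the patent.

```python
# Assumed key point interface for step S2. Any 2D pose model (e.g. an
# OpenPose-style COCO output) can be adapted to return this dictionary:
# {"l_ear": (x, y), "r_ear": (x, y), "l_eye": ..., "r_eye": ..., "nose": ...,
#  "neck": ..., "l_shoulder": ..., "r_shoulder": ..., "l_hip": ..., "r_hip": ...}
from typing import Dict, Tuple

Keypoints = Dict[str, Tuple[float, float]]

def estimate_keypoints(person_crop) -> Keypoints:
    """Placeholder: run the chosen 2D pose estimation model on one crop."""
    raise NotImplementedError("plug the 2D pose estimation model in here")
```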
Step S3: select six human body key points, namely the left ear, right ear, left eye, right eye, nose and neck, as the key points for assisting in locating the head region. If the neck key point exists in the output of the pose estimation model, select the two farthest-apart points (denoted A and B) among the four key points l_ear, r_ear, l_eye and r_eye, join them into a line AB, and compute the perpendicular distance h from the neck key point to line AB. Let H = h + δ, where δ is an offset added to account for the height of the hat shape to be recognized, and translate the two points A and B upward by the distance H along the normal direction of line AB, obtaining two new points C and D.
Step S4: let the pixel coordinates of A, B, C, D from step S3 be (x1, y1), (x2, y2), (x3, y3), (x4, y4). Compute x_min = min(x1, x2, x3, x4), x_max = max(x1, x2, x3, x4), y_min = min(y1, y2, y3, y4), y_max = max(y1, y2, y3, y4). The axis-aligned rectangle with (x_min, y_min) and (x_max, y_max) as diagonal corners is the head region part1 used for hat detection.
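A minimal sketch of steps S3 and S4 under the assumed key point dictionary described above; the offset delta (δ) is a tuning parameter whose value the patent does not fix.

```python
import numpy as np

def head_region(kp, delta):
    """Steps S3-S4: locate the hat region part1 from face and neck key points.

    kp maps key point names to (x, y) pixel coordinates (see the sketch above).
    Returns (x_min, y_min, x_max, y_max) or None if required points are missing.
    """
    face = [kp[n] for n in ("l_ear", "r_ear", "l_eye", "r_eye") if n in kp]
    if "neck" not in kp or len(face) < 2:
        return None
    # A, B: the two farthest-apart face points, joined by line AB.
    pts = np.array(face, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    A, B = pts[i], pts[j]
    neck = np.array(kp["neck"], dtype=float)
    ab = B - A
    ab_len = np.linalg.norm(ab) + 1e-9
    # Perpendicular distance h from the neck point to line AB (2-D cross product).
    h = abs(ab[0] * (neck[1] - A[1]) - ab[1] * (neck[0] - A[0])) / ab_len
    H = h + delta
    # Unit normal of AB pointing away from the neck, i.e. towards the hat.
    normal = np.array([-ab[1], ab[0]]) / ab_len
    if np.dot(normal, neck - A) > 0:
        normal = -normal
    C, D = A + H * normal, B + H * normal
    xs = [A[0], B[0], C[0], D[0]]
    ys = [A[1], B[1], C[1], D[1]]
    return int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys))
```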
Step S5: select the four key points of the left and right shoulders (l_shoulder, r_shoulder) and the left and right hips (l_hip, r_hip) as the key points for assisting in locating the body trunk region. When at least one of the two key point pairs (l_shoulder, r_shoulder) and (l_hip, r_hip) exists, compare the pixel coordinates of the existing key points to obtain the minimum and maximum abscissas x_min and x_max and the minimum and maximum ordinates y_min and y_max. The axis-aligned rectangle with (x_min, y_min) and (x_max, y_max) as diagonal corners is the body trunk region part2 used for clothing detection.
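A corresponding sketch of step S5, using the same assumed key point dictionary:

```python
def torso_region(kp):
    """Step S5: locate the clothing region part2 from shoulder and hip key points.

    Requires at least one complete pair (both shoulders or both hips);
    returns (x_min, y_min, x_max, y_max) or None otherwise.
    """
    have_shoulders = "l_shoulder" in kp and "r_shoulder" in kp
    have_hips = "l_hip" in kp and "r_hip" in kp
    if not (have_shoulders or have_hips):
        return None
    pts = [kp[n] for n in ("l_shoulder", "r_shoulder", "l_hip", "r_hip") if n in kp]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys))
```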
Step S6: apply HSV color space processing to the obtained target detection regions part1 and part2 separately; each processing result is a binary image.
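A minimal sketch of step S6 using OpenCV's HSV conversion and inRange thresholding. The patent does not disclose concrete HSV bounds; the white range below (any hue, low saturation, high value) is only an illustrative assumption and would be tuned to the camera and lighting, or changed for other required clothing colors.

```python
import cv2
import numpy as np

# Illustrative HSV bounds for "white"; these values are an assumption, not
# taken from the patent, and are the knobs to adjust for other target colors.
WHITE_LOW = np.array([0, 0, 180], dtype=np.uint8)
WHITE_HIGH = np.array([180, 50, 255], dtype=np.uint8)

def white_mask(bgr_region):
    """Step S6: binarise a region so pixels of the target color become 255."""
    if bgr_region.size == 0:
        return np.zeros((0, 0), dtype=np.uint8)
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, WHITE_LOW, WHITE_HIGH)
```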
Step S7: calculate the proportion P of white pixels in each binary image. For region part1, if P1 ≥ 40%, the person is wearing a hat of the specified color; otherwise the regulation is violated. For region part2, if P2 ≥ 60%, the person is wearing clothes of the specified color; otherwise the regulation is violated.
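A sketch of step S7, with the 40% and 60% thresholds of this step exposed as parameters (the embodiment described later uses 0.4 and 0.5):

```python
import numpy as np

def white_ratio(mask):
    """Step S7: fraction of white (255) pixels in a binary mask."""
    return float(np.count_nonzero(mask)) / mask.size if mask.size else 0.0

def is_compliant(hat_mask, coat_mask, hat_thresh=0.4, coat_thresh=0.6):
    """Both the hat region and the clothing region must pass their thresholds."""
    return (white_ratio(hat_mask) >= hat_thresh
            and white_ratio(coat_mask) >= coat_thresh)
```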
Compared with the prior art, the invention has the following advantages. Existing neural network models are combined with traditional image processing, so that clothing detection of kitchen staff against a specific color requirement can be realized even without labeled data. The person detection model directly frames the image region where the target person is located, which narrows the subsequent image processing range, reduces interference from the complex scene, and improves detection accuracy. The 2D human pose estimation model predicts the main key point positions of the target person to help locate the target detection regions, improving localization accuracy and further reducing scene interference. The key points that are finally connected are selected according to the number of key points actually output by the pose estimation model and the distances between them, so the target region localization has a degree of adaptability and still works when some key points are missing. The localization of the target regions follows the distribution of human body key points and human joint proportions, so it adapts to different body postures with high localization accuracy and strong adaptability. The HSV color space processing makes the colors of the target regions easier to distinguish and improves the color recognition of the target regions.
Drawings
FIG. 1 is a schematic diagram of the detection process and result when the method of an embodiment of the present invention is applied to a kitchen worker whose clothing complies with the specification (white clothes and white hat);
FIG. 2 is a schematic diagram of the detection process and result when the method of an embodiment of the present invention is applied to a kitchen worker whose clothing does not comply with the specification (black clothes and white hat);
FIG. 3 is a schematic diagram of the detection process and result when the method of an embodiment of the present invention is applied to kitchen workers whose clothing does not comply with the specification (non-white work clothes, no white hat);
FIG. 4 is a flowchart of the clothing detection method for kitchen staff according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings by way of examples.
As shown in FIG. 4, the clothing detection method for kitchen staff based on HSV color space processing includes the following steps:
step S1: and carrying out personnel detection on the given kitchen scene image by using a character detection model, and outputting a target personnel candidate frame. Referring to fig. 1, 2 and 3, reference numeral 1 is a kitchen staff scene image of an input staff detection model, reference numeral 2 is a detection result of the staff detection model, a red rectangular frame contains target detection staff, and an image area in each candidate frame is cut out, such as reference numeral 3 in fig. 1 and 2, and reference numerals 3 and 11 in fig. 3.
Step S2: feed the image region contained in each candidate box into a 2D human pose estimation model and estimate the person's human body key points. Reference numeral 4 in FIG. 1 and FIG. 2, and reference numerals 4 and 12 in FIG. 3, show the human body key points detected by the 2D pose estimation model.
step S3: six key points of the human body, namely the left ear, the right ear, the left eye, the right eye, the nose and the neck are selected as key points for assisting in positioning the head region. If the neck key point exists in the output of the attitude estimation model, two points (marked as A, B) with the farthest distance are selected from the four key points (l _ ear, r _ ear, l _ eye and r _ eye) to be connected into a straight line AB, and the vertical distance h from the neck key point to the straight line AB is obtained. Let H be H + δ (where δ is the offset added according to the height of the shape of the identified hat), turn A, B two points upward by H distance along the normal direction of the straight line AB, resulting in two new points C, D. With respect to fig. 1, the kitchen worker loses the key point of the left ear as shown by reference numeral 4, and selects the key point of the right ear and the key point of the left eye to connect. With respect to fig. 2, the kitchen worker loses the key point of the left ear as shown by reference numeral 4, and selects the key point of the right ear and the key point of the left eye to connect.
Step S4: let the pixel coordinates of A, B, C, D from step S3 be (x1, y1), (x2, y2), (x3, y3), (x4, y4). Compute x_min = min(x1, x2, x3, x4), x_max = max(x1, x2, x3, x4), y_min = min(y1, y2, y3, y4), y_max = max(y1, y2, y3, y4). The axis-aligned rectangle with (x_min, y_min) and (x_max, y_max) as diagonal corners is the head region part1 used for hat detection. Reference numerals 5 and 7 in FIG. 1 and FIG. 2 denote the located head regions.
Step S5: select the four key points of the left and right shoulders (l_shoulder, r_shoulder) and the left and right hips (l_hip, r_hip) as the key points for assisting in locating the body trunk region. When at least one of the two key point pairs (l_shoulder, r_shoulder) and (l_hip, r_hip) exists, compare the pixel coordinates of the existing key points to obtain the minimum and maximum abscissas x_min and x_max and the minimum and maximum ordinates y_min and y_max. The axis-aligned rectangle with (x_min, y_min) and (x_max, y_max) as diagonal corners is the body trunk region part2 used for clothing detection. Reference numerals 6 and 8 in FIG. 1 and FIG. 2 denote the located body trunk regions.
Step S6: apply HSV color space processing to the obtained target detection regions part1 and part2 separately; each processing result is a binary image, as shown by reference numerals 9 and 10 in FIG. 1 and FIG. 2.
Step S7: calculate the proportion P of white pixels in each binary image. For region part1, if P1 ≥ 0.4, the person is wearing a hat of the specified color; otherwise the regulation is violated. For region part2, if P2 ≥ 0.5, the person is wearing clothes of the specified color; otherwise the regulation is violated. In FIG. 1, P1 = 0.79 and P2 = 1, so the person is wearing a white hat and white clothes and the clothing complies with the specification. In FIG. 2, P1 = 0.57 and P2 = 0, so the person is wearing a white hat but no white clothes, and the clothing does not comply with the specification. In FIG. 3, P1 and P2 of both kitchen workers are below the thresholds, and the detection result is that neither white clothes nor a white hat is worn, so the clothing does not comply with the specification. The detection results of FIG. 1, FIG. 2 and FIG. 3 are consistent with the actual situations and correct, showing that the method can adapt to complex kitchen scenes and changes in human posture.
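Tying the sketches above together, a hypothetical end-to-end driver for this embodiment could look as follows. It assumes the helper functions defined in the earlier sketches, key point coordinates expressed in each crop's pixel frame, and the 0.4/0.5 thresholds used in this embodiment; the δ value of 20 pixels is only a placeholder.

```python
def check_kitchen_staff(bgr_image, delta=20.0):
    """Return one compliance verdict per detected person (None if undecidable)."""
    verdicts = []
    for (x, y, w, h) in detect_person_boxes(bgr_image):
        crop = bgr_image[y:y + h, x:x + w]
        kp = estimate_keypoints(crop)          # step S2
        head = head_region(kp, delta)          # steps S3-S4
        torso = torso_region(kp)               # step S5
        if head is None or torso is None:
            verdicts.append(None)              # not enough key points to decide
            continue
        hx0, hy0, hx1, hy1 = (max(v, 0) for v in head)
        tx0, ty0, tx1, ty1 = (max(v, 0) for v in torso)
        hat_mask = white_mask(crop[hy0:hy1, hx0:hx1])   # step S6
        coat_mask = white_mask(crop[ty0:ty1, tx0:tx1])
        verdicts.append(is_compliant(hat_mask, coat_mask,
                                     hat_thresh=0.4, coat_thresh=0.5))  # step S7
    return verdicts
```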
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand how the invention is practiced, and that the scope of the invention is not limited to these specifically described statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations still fall within the scope of the invention.
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911260996.9A CN111027483A (en) | 2019-12-10 | 2019-12-10 | A clothing detection method for kitchen staff based on HSV color space processing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911260996.9A CN111027483A (en) | 2019-12-10 | 2019-12-10 | A clothing detection method for kitchen staff based on HSV color space processing |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111027483A (en) | 2020-04-17 |
Family
ID=70208728
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911260996.9A Pending CN111027483A (en) | 2019-12-10 | 2019-12-10 | A clothing detection method for kitchen staff based on HSV color space processing |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111027483A (en) |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110190055A1 (en) * | 2010-01-29 | 2011-08-04 | Microsoft Corporation | Visual based identitiy tracking |
| CN102486816A (en) * | 2010-12-02 | 2012-06-06 | 三星电子株式会社 | Apparatus and method for calculating human body shape parameters |
| CN110096983A (en) * | 2019-04-22 | 2019-08-06 | 苏州海赛人工智能有限公司 | The safe dress ornament detection method of construction worker in a kind of image neural network based |
| CN110135290A (en) * | 2019-04-28 | 2019-08-16 | 中国地质大学(武汉) | A safety helmet wearing detection method and system based on SSD and AlphaPose |
| CN110502965A (en) * | 2019-06-26 | 2019-11-26 | 哈尔滨工业大学 | A Construction Helmet Wearing Monitoring Method Based on Computer Vision Human Pose Estimation |
| CN110288531A (en) * | 2019-07-01 | 2019-09-27 | 山东浪潮人工智能研究院有限公司 | A method and tool for assisting operators in making standard ID card photos |
Non-Patent Citations (2)
| Title |
|---|
| Pan Jianyue et al.: "Application of human body and clothing feature recognition in electric power facility monitoring", Electronic Design Engineering * |
| Luo Hao et al.: "Research progress on person re-identification based on deep learning", Acta Automatica Sinica * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20200417 |