
CN111027483A - A clothing detection method for kitchen staff based on HSV color space processing - Google Patents

A clothing detection method for kitchen staff based on HSV color space processing

Info

Publication number
CN111027483A
CN111027483A
Authority
CN
China
Prior art keywords
human body
max
key points
points
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911260996.9A
Other languages
Chinese (zh)
Inventor
常永鑫
刘高铭
罗喆
舒豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN201911260996.9A priority Critical patent/CN111027483A/en
Publication of CN111027483A publication Critical patent/CN111027483A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a kitchen staff dressing detection method based on HSV color space processing. Building on an existing person detection model and a human body 2D pose estimation model, the method can detect the dress of kitchen staff with the HSV color space method even when labeled data for the kitchen scene is lacking. Moreover, detection for dressing requirements of different colors can be realized simply by adjusting the HSV color space processing parameters. The method first uses the person detection model to produce candidate frames for target personnel, then sends each candidate frame into the human body 2D pose estimation model to estimate the human body key points, and locates the head region and body trunk region of the target person from the given key points combined with the joint proportions of the human body. HSV processing is then applied to each of the two regions, the proportion of white pixels in the resulting binary image regions is counted, and it is finally judged whether the person's dress complies with the standard.

Description

Kitchen worker dressing detection method based on HSV color space processing
Technical Field
The invention relates to the technical field of computer vision, in particular to a method for detecting dressing of kitchen staff based on HSV color space processing.
Background
At present, the field of dressing detection for domestic kitchen staff is almost blank. The prior art related to the present invention includes:
Chinese invention patent CN201310745896.1, entitled "A method for detecting dressing safety of personnel on an electric power facility operation site";
Chinese invention patent CN201811475125.4, entitled "A method for detecting the safe dressing of power operators based on YOLOv3 target detection";
Chinese invention patent CN201810366469.5, entitled "A construction site personnel uniform wearing identification method based on deep learning".
The prior art disclosed above follows the mainstream approach: a neural network model capable of detecting the relevant features is trained on a large amount of labeled data to perform the detection task. This approach, however, is difficult to apply when a relevant labeled data set is lacking. Meanwhile, traditional image processing algorithms struggle to detect a specific target effectively in complex scenes.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method for detecting the dressing of kitchen staff based on HSV color space processing, which overcomes those defects.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
a kitchen staff dressing detection method based on HSV color space processing comprises the following steps: and providing character candidate frames of the target person by using a character detection model, then sending each candidate frame into a human body 2D posture estimation model, estimating human body key points, and positioning the head area and the body trunk area of the target person according to the given human body key points and by combining the joint proportion of the human body. And then, respectively applying HSV (hue, saturation and value) processing methods to the two areas, counting the proportion of white pixel points in the obtained binary image area, and finally judging whether the person wears the binary image area according with the standard.
Further, a kitchen staff dressing detection method based on HSV color space processing comprises the following steps:
step S1: and carrying out personnel detection on the given kitchen scene image by using a character detection model, and outputting a target personnel candidate frame.
Step S2: and sending the image area contained in each candidate frame into a human body 2D posture estimation model, and estimating the human body key points of the personnel.
Step S3: six key points of the human body, namely the left ear, the right ear, the left eye, the right eye, the nose and the neck are selected as key points for assisting in positioning the head region. If the neck key point exists in the output of the attitude estimation model, two points (marked as A, B) with the farthest distance are selected from the four key points (l _ ear, r _ ear, l _ eye and r _ eye) to be connected into a straight line AB, and the vertical distance h from the neck key point to the straight line AB is obtained. Let H be H + δ (where δ is the offset added according to the height of the shape of the identified hat), turn A, B two points upward by H distance along the normal direction of the straight line AB, resulting in two new points C, D.
Step S4: let A, B, C, D in S3 correspond to a pixel coordinate of (x)1,y1)、(x2,y2)、(x3,y3)、(x4,y4) Calculate xmin=min(x1,x2,x3,x4),xmax=max(x1,x2,x3,x4),ymin=min(y1,y2,y3,y4),ymax=max(y1,y2,y3,y4) Then with (x)min,ymin)、(xmax,ymax) A horizontal rectangle which is a diagonal point, is the head region part1 of the detection hat.
Step S5: the four key points of the left and right shoulders (l _ shoulder and r _ shoulder) and the left and right buttocks (l _ hip and r _ hip) are selected as key points for assisting in positioning the trunk region of the body. When at least one of the two sets of key points (l _ egress, r _ egress), (l _ hip, r _ hip) exists, the rootBased on the pixel coordinates of the several key points, the minimum value x of the abscissa is obtained by comparisonminAnd maximum value xmaxMinimum value y of the ordinateminAnd maximum value ymaxThen with (x)min,ymin)、(xmax,ymax) A horizontal rectangle with diagonal corners, namely the body trunk region part2 of the test garment.
Step S6: and (3) performing HSV color space processing on the obtained target detection regions part1 and part2 respectively, wherein the obtained processing results are binary images.
Step S7: calculating the proportion P of white pixel points in each binary image, and for the region part1, P1More than or equal to 40 percent, the person wears the cap with the regulated color, otherwise, the regulation is violated. For region part2, P2More than or equal to 60 percent, the person wears clothes with specified colors, otherwise, the person violates the regulations.
Compared with the prior art, the invention has the following advantages. By combining an existing neural network model with traditional image processing, it can detect kitchen staff dress against specific color requirements even without labeled data. The person detection model directly frames the image region where the target person is located, which narrows the subsequent image processing range, reduces interference from complex scenes, and improves detection precision. The human body 2D pose estimation model predicts the main key point positions of the target person to help locate the target detection regions, improving positioning precision and reducing scene interference. The final connecting key points are selected according to the number of key points actually output by the pose estimation model and the lengths of the connections between them, so the target region positioning has a degree of adaptability and still works when some key points are missing. The positioning of the target regions follows the distribution pattern and joint proportions of human body key points, so it adapts to different posture changes with high positioning precision and strong adaptability. Applying the HSV color space processing method makes the colors of the target regions easier to distinguish and improves the color identification of the target regions.
Drawings
FIG. 1 is a schematic diagram of the detection process and results of the present invention for a kitchen worker wearing compliant dress (white clothes and white hat);
FIG. 2 is a schematic diagram of the detection process and results of the present invention for a kitchen worker wearing non-compliant dress (black clothes and white hat);
FIG. 3 is a schematic diagram of the detection process and results of the present invention for kitchen workers wearing non-compliant dress (non-white work wear, no white hat), in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart of a method for detecting the dress of kitchen staff in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings by way of examples.
As shown in fig. 4, a kitchen staff dressing detection method based on HSV color space processing includes the following steps:
step S1: and carrying out personnel detection on the given kitchen scene image by using a character detection model, and outputting a target personnel candidate frame. Referring to fig. 1, 2 and 3, reference numeral 1 is a kitchen staff scene image of an input staff detection model, reference numeral 2 is a detection result of the staff detection model, a red rectangular frame contains target detection staff, and an image area in each candidate frame is cut out, such as reference numeral 3 in fig. 1 and 2, and reference numerals 3 and 11 in fig. 3.
Step S2: and sending the image area contained in each candidate frame into a human body 2D posture estimation model, and estimating the human body key points of the personnel. Reference numerals 4 in figures 1 and 2 and 4 and 12 in figure 3 show the human body key points detected by the 2D pose estimation model,
step S3: six key points of the human body, namely the left ear, the right ear, the left eye, the right eye, the nose and the neck are selected as key points for assisting in positioning the head region. If the neck key point exists in the output of the attitude estimation model, two points (marked as A, B) with the farthest distance are selected from the four key points (l _ ear, r _ ear, l _ eye and r _ eye) to be connected into a straight line AB, and the vertical distance h from the neck key point to the straight line AB is obtained. Let H be H + δ (where δ is the offset added according to the height of the shape of the identified hat), turn A, B two points upward by H distance along the normal direction of the straight line AB, resulting in two new points C, D. With respect to fig. 1, the kitchen worker loses the key point of the left ear as shown by reference numeral 4, and selects the key point of the right ear and the key point of the left eye to connect. With respect to fig. 2, the kitchen worker loses the key point of the left ear as shown by reference numeral 4, and selects the key point of the right ear and the key point of the left eye to connect.
Step S4: note that A, B, C, D in S3 corresponds to a pixel coordinate of (x)1,y1)、(x2,y2)、(x3,y3)、(x4,y4) Calculate xmin=min(x1,x2,x3,x4),xmax=max(x1,x2,x3,x4),ymin=min(y1,y2,y3,y4),ymax=max(y1,y2,y3,y4) Then with (x)min,ymin)、(xmax,ymax) A horizontal rectangle which is a diagonal point, is the head region part1 of the detection hat. Reference numerals 5 and 7 in fig. 1 and 2 denote the head regions located.
Step S5: the four key points of the left and right shoulders (l _ shoulder and r _ shoulder) and the left and right buttocks (l _ hip and r _ hip) are selected as key points for assisting in positioning the trunk region of the body. When at least one of the two groups of key points (l _ outputting, r _ outputting), (l _ hip, r _ hip) exists, comparing the minimum value x of the abscissa according to the pixel coordinates of the existing key pointsminAnd maximum value xmaxMinimum value y of the ordinateminAnd maximum value ymaxThen with (x)min,ymin)、(xmax,ymax) A horizontal rectangle with diagonal corners, namely the body trunk region part2 of the test garment. FIG. 1 and FIG. 1Reference numerals 6 and 8 in fig. 2 denote respective trunk regions of the body.
Step S6: and (3) performing HSV color space processing on the obtained target detection regions part1 and part2 respectively, wherein the obtained processing results are binary images. Such as reference numerals 9 and 10 in fig. 1 and 2.
Step S7: calculating the proportion P of white pixel points in each binary image, and for the region part1, P1More than or equal to 0.4, the person wears the cap with the regulated color, otherwise, the regulation is violated. For region part2, P2More than or equal to 0.5, the person wears clothes with the regulated color, otherwise, the regulation is violated. In fig. 1, P1 is 0.79 and P2 is 1, so the person wears a white hat and white clothes, and the wearing clothes meet the specification. In fig. 2, P1 is 0.57 and P2 is 0, so the person wears a white hat and no white clothes, and the garment does not meet the standard. In fig. 3, both P1 and P2 of the two kitchen staff are smaller than the threshold value, and the detection result shows that the white clothes and caps are not worn and the clothes are not in accordance with the specification. The detection results of fig. 1, 2 and 3 are consistent with the actual situation and correct. Therefore, the method can adapt to complex kitchen scenes and human posture changes.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding how the invention is practiced, and that the scope of the invention is not limited to the specifically recited statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations fall within the scope of the invention.

Claims (2)

1. A kitchen staff dressing detection method based on HSV color space processing, characterized by comprising: using a person detection model to produce candidate frames for target personnel; sending each candidate frame into a human body 2D pose estimation model to estimate the human body key points; locating the head region and body trunk region of the target person according to the given human body key points combined with the joint proportions of the human body; then applying HSV processing to each of the two regions, counting the proportion of white pixels in the resulting binary image regions, and finally judging whether the person's dress complies with the specification.

2. The method according to claim 1, characterized in that the specific steps of the detection method are as follows:

Step S1: use a person detection model to perform person detection on the given kitchen scene image, and output target person candidate frames.

Step S2: send the image region contained in each candidate frame into a human body 2D pose estimation model, and estimate the human body key points of the person.

Step S3: select the six human body key points of the left ear, right ear, left eye, right eye, nose and neck as the key points for assisting in locating the head region. If the neck key point exists in the output of the pose estimation model, select the two points farthest apart among the four key points of the left ear, right ear, left eye and right eye, mark them A and B, and connect them into a straight line AB; obtain the perpendicular distance h from the neck key point to the line AB, and let H = h + δ, where δ is a compensation value added according to the shape and height of the hat to be identified. Translate the points A and B upward by distance H along the normal direction of the line AB to obtain two new points C and D.

Step S4: denote the pixel coordinates of A, B, C, D in S3 as (x1, y1), (x2, y2), (x3, y3), (x4, y4); calculate xmin = min(x1, x2, x3, x4), xmax = max(x1, x2, x3, x4), ymin = min(y1, y2, y3, y4), ymax = max(y1, y2, y3, y4). The horizontal rectangle with (xmin, ymin) and (xmax, ymax) as diagonal corners is the head region part1 for hat detection.

Step S5: select the four human body key points of the left shoulder, right shoulder, left hip and right hip as the key points for assisting in locating the body trunk region. When at least one of the left and right shoulder key points exists and at least one of the left and right hip key points exists, compare the pixel coordinates of the existing key points to obtain the minimum abscissa xmin, maximum abscissa xmax, minimum ordinate ymin and maximum ordinate ymax; the horizontal rectangle with (xmin, ymin) and (xmax, ymax) as diagonal corners is the body trunk region part2 for clothing detection.

Step S6: perform HSV color space processing on the obtained target detection regions part1 and part2 respectively; the processing results are binary images.

Step S7: calculate the proportion P of white pixels in each binary image. For region part1, P1 ≥ 40% indicates that the person is wearing a hat of the specified color; otherwise the rule is violated. For region part2, P2 ≥ 60% indicates that the person is wearing clothes of the specified color; otherwise the rule is violated.
CN201911260996.9A 2019-12-10 2019-12-10 A clothing detection method for kitchen staff based on HSV color space processing Pending CN111027483A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911260996.9A CN111027483A (en) 2019-12-10 2019-12-10 A clothing detection method for kitchen staff based on HSV color space processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911260996.9A CN111027483A (en) 2019-12-10 2019-12-10 A clothing detection method for kitchen staff based on HSV color space processing

Publications (1)

Publication Number Publication Date
CN111027483A true CN111027483A (en) 2020-04-17

Family

ID=70208728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911260996.9A Pending CN111027483A (en) 2019-12-10 2019-12-10 A clothing detection method for kitchen staff based on HSV color space processing

Country Status (1)

Country Link
CN (1) CN111027483A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110190055A1 (en) * 2010-01-29 2011-08-04 Microsoft Corporation Visual based identitiy tracking
CN102486816A (en) * 2010-12-02 2012-06-06 三星电子株式会社 Apparatus and method for calculating human body shape parameters
CN110096983A (en) * 2019-04-22 2019-08-06 苏州海赛人工智能有限公司 The safe dress ornament detection method of construction worker in a kind of image neural network based
CN110135290A (en) * 2019-04-28 2019-08-16 中国地质大学(武汉) A safety helmet wearing detection method and system based on SSD and AlphaPose
CN110288531A (en) * 2019-07-01 2019-09-27 山东浪潮人工智能研究院有限公司 A method and tool for assisting operators in making standard ID card photos
CN110502965A (en) * 2019-06-26 2019-11-26 哈尔滨工业大学 A Construction Helmet Wearing Monitoring Method Based on Computer Vision Human Pose Estimation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pan Jianyue et al., "Application of human body and wearing feature recognition in power facility monitoring", Electronic Design Engineering *
Luo Hao et al., "Research progress of person re-identification based on deep learning", Acta Automatica Sinica *

Similar Documents

Publication Publication Date Title
CN105447466B (en) A kind of identity integrated recognition method based on Kinect sensor
CN103745226B (en) Dressing safety detection method for worker on working site of electric power facility
CN106548165B (en) A kind of face identification method of the convolutional neural networks based on image block weighting
CN101996407B (en) A multi-camera color calibration method
CN105488490A (en) Judge dressing detection method based on video
JP5569990B2 (en) Attribute determination method, attribute determination apparatus, program, recording medium, and attribute determination system
CN103093180B (en) A kind of method and system of pornographic image detecting
CN106599781A (en) Electric power business hall dressing normalization identification method based on color and Hu moment matching
CN107607540B (en) Machine vision-based T-shirt online detection and sorting method
CN106485222A (en) A kind of method for detecting human face being layered based on the colour of skin
CN106446862A (en) Face detection method and system
CN102592141A (en) Method for shielding face in dynamic image
CN104318266A (en) Image intelligent analysis processing early warning method
CN112699760B (en) Face target area detection method, device and equipment
CN108564037B (en) Salutation posture detection and correction method
CN113743199A (en) Tool wearing detection method and device, computer equipment and storage medium
CN110197490A (en) Portrait based on deep learning scratches drawing method automatically
US20160345887A1 (en) Moisture feeling evaluation device, moisture feeling evaluation method, and moisture feeling evaluation program
Dwina et al. Skin segmentation based on improved thresholding method
CN111027483A (en) A clothing detection method for kitchen staff based on HSV color space processing
CN102163277B (en) Area-based complexion dividing method
JP4076777B2 (en) Face area extraction device
CN105791815B (en) A kind of TV line automatic judging methods
CN102968636A (en) Human face contour extracting method
Manaf et al. Color recognition system with augmented reality concept and finger interaction: Case study for color blind aid system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200417
