CN110377201A - Terminal device control method and apparatus, computer device, and computer-readable storage medium - Google Patents
Terminal device control method and apparatus, computer device, and computer-readable storage medium
- Publication number: CN110377201A
- Application number: CN201910487841.2A
- Authority
- CN
- China
- Prior art keywords
- change information
- state change
- feature points
- information
- key feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a terminal device control method and apparatus, a computer device, and a computer-readable storage medium. The terminal device control method includes: acquiring an image to be recognized and performing face detection on the image to be recognized; judging whether a face image is detected; if a face image is detected, acquiring initial state information of preset key feature points of the face image; determining state change information of the preset key feature points of the face image based on the initial state information; and, when the state change information of the preset key feature points is a first preset state change information in a preset state change information library, triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation. The present invention relates to the technical field of face recognition; it makes interaction with a terminal device more vivid and interesting and improves user experience.
Description
Technical field
The present invention relates to the technical field of electronic communication, and in particular to a terminal device control method and apparatus, a computer device, and a computer-readable storage medium.
Background
With the development of communication technology, devices such as computers and mobile phones are used more and more widely. At present, computers and mobile phones are operated through buttons or touch input; however, both button operation and touch operation must be completed by hand. This manual mode of operation is too limited, may bring inconvenience to the user, and has a restricted scope of application, which degrades the user experience.
Summary of the invention
In view of the above, the present invention provides a terminal device control method and apparatus, a computer device, and a computer-readable storage medium, so that a terminal device can be controlled without manual operation, improving user experience.
An embodiment of the present application provides a terminal device control method, the method comprising:
acquiring an image to be recognized, and performing face detection on the image to be recognized;
judging whether a face image is detected;
if a face image is detected, acquiring initial state information of preset key feature points of the face image;
determining state change information of the preset key feature points of the face image based on the initial state information; and
when the state change information of the preset key feature points is a first preset state change information in a preset state change information library, triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation.
Preferably, before the step of acquiring the image to be recognized, the method further includes:
configuring upper, lower, left, and right detection boundary information of the preset key feature points, so as to establish a feature point detection frame; and
associating multiple preset state change information of the preset key feature points with multiple preset control instructions.
Preferably, the step of performing face detection on the image to be recognized includes:
training a convolutional neural network model for face detection on a plurality of preset face samples; and
performing face detection on the image to be recognized using the convolutional neural network model.
Preferably, the initial state information includes initial position information or initial expression information. When the initial state information is the initial position information of the preset key feature points, the corresponding control operation is executed according to the motion state information of the face image; when the initial state information is the initial expression information of the preset key feature points, the corresponding control operation is executed according to the facial expression change information of the face image.
Preferably, the step of triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation when the state change information of the preset key feature points is the first preset state change information in the preset state change information library includes:
when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, judging whether the state change information of the preset key feature points is valid state change information; and
when the state change information of the preset key feature points is valid state change information, triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation.
Preferably, the step of judging whether the state change information of the preset key feature points is valid state change information includes:
when the initial state information is the initial position information of the preset key feature points, acquiring the average deflection speed and/or deflection angle of the face image during the state change, and judging, according to the average deflection speed and/or the deflection angle, whether the state change information of the preset key feature points is valid state change information; and
when the initial state information is the initial expression information of the preset key feature points, acquiring the expression duration of the face image during the state change, and judging, according to the expression duration, whether the state change information of the preset key feature points is valid state change information.
Preferably, the step of judging whether the state change information of the preset key feature points is valid state change information includes:
acquiring the occurrence time of the state change information of the preset key feature points;
judging whether the difference between the occurrence time of the state change information of the preset key feature points and the occurrence time of the previous state change information is greater than or equal to a preset time; and
determining, according to the judgment result, whether the state change information of the preset key feature points is valid state change information.
An embodiment of the present application provides a terminal device control apparatus, the apparatus comprising:
a detection module, configured to acquire an image to be recognized and perform face detection on the image to be recognized;
a judgment module, configured to judge whether a face image is detected;
an acquisition module, configured to acquire initial state information of preset key feature points of the face image when a face image is detected;
a determining module, configured to determine state change information of the preset key feature points of the face image based on the initial state information; and
a control module, configured to trigger the control instruction corresponding to the first preset state change information and execute the corresponding control operation when the state change information of the preset key feature points is the first preset state change information in the preset state change information library.
An embodiment of the present application provides a computer device comprising a processor and a memory, the memory storing a number of computer programs, the processor being configured to implement the steps of the terminal device control method described above when executing the computer programs stored in the memory.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the terminal device control method described above when executed by a processor.
The above terminal device control method and apparatus, computer device, and computer-readable storage medium control a terminal device by recognizing the user's expression changes or head deflection state, freeing the user's hands. Compared with the traditional manual mode of operation, interaction with the terminal device is more vivid and interesting, and user experience is improved.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the steps of a terminal device control method in an embodiment of the present invention.
Fig. 2 is a flowchart of the steps of a terminal device control method in another embodiment of the present invention.
Fig. 3 is a functional block diagram of a terminal device control apparatus in an embodiment of the present invention.
Fig. 4 is a schematic diagram of a computer device in an embodiment of the present invention.
Detailed description of the embodiments
In order to better understand the above objects, features, and advantages of the present invention, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments can be combined with each other.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the present invention belongs. The terms used in the specification of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
Preferably, the terminal device control method of the present invention is applied in one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device may be a computing device such as a desktop computer, a laptop, a tablet computer, a server, or a mobile phone. The computer device can perform human-computer interaction with a user by means of a keyboard, a mouse, a remote control, a touch panel, a voice-control device, or the like.
Embodiment one:
Fig. 1 is a flowchart of the steps of a preferred embodiment of the terminal device control method of the present invention. According to different requirements, the order of the steps in the flowchart may be changed, and certain steps may be omitted.
As shown in Fig. 1, the terminal device control method specifically includes the following steps.
Step S11: acquire an image to be recognized, and perform face detection on the image to be recognized.
In one embodiment, the image to be recognized can be obtained by communicating with a camera (for example, the camera of the computer device). The image to be recognized may contain non-face content, so face detection needs to be performed on the image to be recognized in order to identify a face image containing a face within it.
In one embodiment, face detection on the image to be recognized can be realized by establishing and training a convolutional neural network model. Specifically, this can be done as follows: first, construct a face sample database and establish a convolutional neural network model for face detection, where the face sample database contains the face information of multiple people, the face information of each person may cover multiple angles, and the face information of each angle may contain multiple pictures; input the face images of the face sample database into the convolutional neural network model and train it using the model's default parameters; according to the intermediate training results, continuously adjust the initial weights, learning rate, number of iterations, and so on, until the optimal network parameters of the convolutional neural network model are obtained; and finally take the convolutional neural network model with the optimal network parameters as the final recognition model. After training is completed, face detection is performed using the finally obtained convolutional neural network model.
It should be understood that the image to be recognized can be input into the finally obtained convolutional neural network model, and the output of the model is the face detection result.
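For illustration, the training procedure described above can be compressed into the following PyTorch-style skeleton. The architecture, input size, and hyperparameters are assumptions made for the sketch, since the patent specifies none:

```python
# Sketch of a small CNN face/non-face classifier trained on a face sample
# database (multi-person, multi-angle crops). Illustrative assumptions only.
import torch
import torch.nn as nn

class FaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # face / non-face

    def forward(self, x):
        x = self.features(x)  # x: (N, 1, 64, 64) grayscale crops (assumed size)
        return self.classifier(x.flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    # The patent's "continuously adjust weights, learning rate, iterations
    # against intermediate results" is compressed here into a plain SGD loop.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:  # face sample database batches
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
```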
Step S12: judge whether a face image is detected.
In one embodiment, whether a face image is detected can be determined according to the output of the convolutional neural network model. If a face image is detected, go to step S13; if no face image is detected, return to step S11.
Step S13: if a face image is detected, acquire the initial state information of the preset key feature points of the face image.
In one embodiment, the preset key feature points of the face image can be composed of parts such as the eyes, nose, and mouth. The initial state information may include initial position information or initial expression information. When the initial state information is the initial position information of the preset key feature points, the corresponding control operation can be executed according to the motion state information of the face image; when the initial state information is the initial expression information of the preset key feature points, the corresponding control operation can be executed according to the facial expression change information of the face image.
In one embodiment, when the initial state information is initial position information, the initial state information of the preset key feature points of the face image is the position information of the initial state of those key feature points. The position information of the preset key feature points can be determined from the face image by integral projection or by a face alignment algorithm (for example, the ASM, AAM, or STASM algorithm). Since the eyes are comparatively prominent facial features, the eyes can be located accurately first, and the other facial organs, such as the eyebrows, mouth, and nose, can then be located more accurately from their potential distribution relations.
For example, the positions of the preset key feature points are located by means of the peaks or troughs generated under different integral projection modes. Integral projection is divided into vertical projection and horizontal projection. Let $f(x, y)$ denote the gray value of the image at $(x, y)$; the horizontal integral projection $M_h(y)$ and the vertical integral projection $M_v(x)$ over the image region $[y_1, y_2] \times [x_1, x_2]$ are respectively expressed as:

$$M_h(y) = \sum_{x=x_1}^{x_2} f(x, y), \qquad M_v(x) = \sum_{y=y_1}^{y_2} f(x, y)$$

That is, the horizontal integral projection accumulates the gray values of all pixels in a row before display, and the vertical integral projection accumulates the gray values of all pixels in a column before display. By locating two trough points $x_1$ and $x_2$ and cutting the image in the horizontal-axis region $[x_1, x_2]$ out of the face image, the left and right boundaries of the face image can be located. After the left and right boundaries are located, horizontal integral projection and vertical integral projection are performed on the binarized face image to be recognized.
Further, from prior knowledge of face images it is known that the eyebrows and eyes are the darker regions of a face image and correspond to the first two minimum points of the horizontal integral projection curve. The first minimum point corresponds to the position of the eyebrows on the vertical axis, denoted y_brow; the second corresponds to the position of the eyes, denoted y_eye; the third corresponds to the position of the nose, denoted y_nose; and the fourth corresponds to the position of the mouth, denoted y_mouth. Similarly, there are two minimum points on either side of the central symmetry axis of the face image, corresponding to the positions of the left and right eyes on the horizontal axis, denoted x_left-eye and x_right-eye; the eyebrows share the same horizontal-axis positions as the eyes; and the position of the mouth and nose on the horizontal axis is (x_left-eye + x_right-eye)/2.
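The projections and trough search above translate directly into a short NumPy sketch; the array layout `img[y, x]` is an assumption:

```python
# Integral projections of a binarized grayscale face image, plus a naive
# local-minimum finder; the first four troughs of the horizontal curve
# correspond to y_brow, y_eye, y_nose, y_mouth per the prior knowledge above.
import numpy as np

def horizontal_projection(img: np.ndarray, x1: int, x2: int) -> np.ndarray:
    """M_h(y): sum of gray values over columns x1..x2 for each row y."""
    return img[:, x1:x2 + 1].sum(axis=1)

def vertical_projection(img: np.ndarray, y1: int, y2: int) -> np.ndarray:
    """M_v(x): sum of gray values over rows y1..y2 for each column x."""
    return img[y1:y2 + 1, :].sum(axis=0)

def troughs(curve: np.ndarray) -> np.ndarray:
    """Indices of strict local minima of a projection curve."""
    interior = (curve[1:-1] < curve[:-2]) & (curve[1:-1] < curve[2:])
    return np.where(interior)[0] + 1
```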
In one embodiment, when the initial state information is initial expression information, the initial state information of the preset key feature points of the face image is the expression information of the initial state of those key feature points. The facial expression information can, for example, take the following forms. Happiness: the corners of the mouth tilt upward, the cheeks lift and wrinkle, the eyelids contract, and "crow's feet" form at the outer corners of the eyes. Sadness: the eyes narrow, the eyebrows tighten, the corners of the mouth pull down, and the chin lifts or tightens. Fear: the mouth and eyes open, the eyebrows rise, and the nostrils flare. Anger: the eyebrows droop, the forehead is knitted, and the eyelids and lips are tense. Disgust: the nose wrinkles in a sneer, the upper lip lifts, the eyebrows droop, and the eyes narrow. Surprise: the jaw drops, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows lift slightly. Contempt: one side of the mouth lifts in a sneer or a proud smile, and so on.
By extracting the feature vector to be identified of the preset key feature points and comparing it, according to the preset feature vector of each preset expression in a preset expression library, the likelihood probability that the face image matches each preset expression can be determined, and the facial expression information is then obtained from the computed likelihood probabilities. The feature vector to be identified may include a shape feature vector and/or a texture feature vector.
In one embodiment, when the feature vector to be identified is a shape feature vector, the shape feature vector of the preset key feature points is extracted; when the feature vector to be identified is a texture feature vector, the texture feature vector of the preset key feature points is extracted; and when the feature vector to be identified comprises both a shape feature vector and a texture feature vector, both are extracted from the preset key feature points.
In one embodiment, the likelihood probability that the face image matches each preset expression can be determined as follows: obtain the distance value between the feature vector to be identified and the preset feature vector of each preset expression, and determine, according to the distance value, the likelihood probability that the face image and the preset expression corresponding to that distance value belong to the same expression. The distance value can be a generalized Mahalanobis distance. The distance between the feature vector to be identified and the preset feature vector of a preset expression can be determined by the following formula:

$$d_M(y, x_j) = (y - x_j)^T M (y - x_j)$$

where $y$ is the feature vector to be identified; $x_j$ is the preset feature vector of the $j$-th preset expression in the preset expression library; $M$ is the target metric matrix; $j$ is an integer greater than or equal to 1; $d_M(y, x_j)$ is the distance between the feature vector to be identified and the preset feature vector of the $j$-th preset expression; $(y - x_j)$ is the difference between the feature vector to be identified and the preset feature vector of the $j$-th preset expression; and $(y - x_j)^T$ is the transpose of that difference.
In one embodiment, the likelihood probability that the face image and the preset expression corresponding to a distance value belong to the same expression can be determined by the following formula:

$$p = \{1 + \exp[D - b]\}^{-1}$$

where $p$ is the likelihood probability that the face image and the preset expression corresponding to the distance value belong to the same expression, $D$ is the distance value, and $b$ is a bias.
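The two formulas above can be transcribed directly; the metric matrix $M$ and bias $b$ are taken as inputs here, since the patent treats them as learned quantities:

```python
# Generalized Mahalanobis distance to each preset expression, followed by
# the sigmoid-style likelihood p = 1 / (1 + exp(D - b)).
import numpy as np

def mahalanobis(y: np.ndarray, x_j: np.ndarray, M: np.ndarray) -> float:
    """d_M(y, x_j) = (y - x_j)^T M (y - x_j)."""
    d = y - x_j
    return float(d @ M @ d)

def likelihood(D: float, b: float) -> float:
    """Likelihood that the face image and the preset expression match."""
    return 1.0 / (1.0 + np.exp(D - b))

def best_expression(y, expression_library, M, b):
    """Return the preset expression with the highest likelihood probability."""
    probs = {name: likelihood(mahalanobis(y, x_j, M), b)
             for name, x_j in expression_library.items()}
    return max(probs, key=probs.get), probs
```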
Step S14: determine the state change information of the preset key feature points of the face image based on the initial state information.
In one embodiment, after the initial state information of the preset key feature points of the face image is acquired, the state change information of the preset key feature points can be determined based on the initial state information. The state change information is, for example, the state change within a preset time, measured against the initial state information, with timing starting from the moment the initial state information is captured.
Step S15: when the state change information of the preset key feature points is a first preset state change information in the preset state change information library, trigger the control instruction corresponding to the first preset state change information and execute the corresponding control operation.
In one embodiment, when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, the control instruction corresponding to the first preset state change information is triggered, and the terminal device then executes the corresponding control operation according to the control instruction. For example, when the acquired state change information of the preset key feature points is a head deflection to the left, the terminal device executes a "previous page" control instruction; when it is a head deflection to the right, the terminal device executes a "next page" control instruction; and when it is a nod, the terminal device executes a play or pause instruction.
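One possible realization of this dispatch, using the pyautogui library as an assumed key-injection backend (the patent names no mechanism), is:

```python
# Dispatch from a recognized state change to a terminal control operation.
# The key bindings mirror the examples above; pyautogui is an assumption.
import pyautogui

CONTROL_INSTRUCTIONS = {
    "head_left": lambda: pyautogui.press("pageup"),     # previous page
    "head_right": lambda: pyautogui.press("pagedown"),  # next page
    "nod": lambda: pyautogui.press("space"),            # play / pause
}

def trigger(state_change: str) -> None:
    action = CONTROL_INSTRUCTIONS.get(state_change)
    if action is not None:  # only preset changes in the library trigger anything
        action()
```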
In one embodiment, in order to improve operating accuracy, step S15 can further comprise: when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, judging whether the state change information of the preset key feature points is valid state change information; and, when the state change information of the preset key feature points is valid state change information, triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation.
In one embodiment, when the initial state information is the initial position information of the preset key feature points, the average deflection speed and/or deflection angle of the face image during the state change is acquired, and whether the state change information of the preset key feature points is valid state change information is judged according to the average deflection speed and/or the deflection angle. For example, if the state change is a head movement, whether this state change of the preset key feature points is valid can be judged by acquiring the average head deflection speed and/or deflection angle during the state change.
For example, under normal circumstances, when a user tilts their head while talking with someone, tilts their head to look at something, or nods in confirmation during a conversation, the head generally moves relatively fast. To avoid accidental triggering, a preset speed value can be set to prevent the terminal device from being controlled by mistake. For instance, it can be judged whether the average head movement speed during this state change is less than a first preset speed value: if it is, this state change of the preset key feature points is determined to be valid state change information and the corresponding control instruction is generated based on it; if it is not, this state change is determined to be invalid state change information and no control instruction is generated. The preset speed value may have a tolerance of plus or minus 30%.
In one embodiment, whether this state change of the preset key feature points is valid can also be judged by checking whether the head deflection angle is greater than or equal to a first angle threshold: if the head deflection angle is greater than or equal to the first angle threshold, this state change is determined to be valid state change information and the corresponding control instruction is generated based on it; if the deflection angle is less than the first angle threshold, this state change is determined to be invalid state change information. The first angle threshold can be set to an angle larger than the deflection angles that typically occur during the user's ordinary communication.
It should be understood that whether this state change of the preset key feature points is valid state change information can also be judged by checking simultaneously whether the average head movement speed is less than the first preset speed value and whether the head deflection angle is greater than or equal to the first angle threshold.
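A sketch of the combined speed-and-angle validity test follows; the threshold values and units are assumptions, the patent noting only that the preset speed value may carry a plus-or-minus 30% tolerance:

```python
# A deliberate head command is slow enough and large enough; either criterion
# can be used alone or both together, per the embodiments above.
FIRST_PRESET_SPEED = 60.0     # degrees per second (assumed unit and value)
FIRST_ANGLE_THRESHOLD = 25.0  # degrees, larger than casual conversational motion

def is_valid_head_change(avg_speed: float, deflection_angle: float,
                         check_speed: bool = True,
                         check_angle: bool = True) -> bool:
    if check_speed and not (avg_speed < FIRST_PRESET_SPEED):
        return False  # fast motion is treated as incidental, not a command
    if check_angle and not (deflection_angle >= FIRST_ANGLE_THRESHOLD):
        return False  # small deflections are treated as ordinary movement
    return True
```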
In one embodiment, when the initial state information is the initial expression information of the preset key feature points, the expression duration of the face image during the state change is acquired, and whether the state change information of the preset key feature points is valid is judged according to the expression duration. For example, it can be judged whether the duration of the facial expression during the state change is greater than or equal to a preset time: if it is, this state change is judged to be valid state change information and the corresponding control instruction is generated based on it; if the duration is less than the preset time, this state change is judged to be invalid state change information and no control instruction is generated.
In one embodiment, whether this state change of the preset key feature points is valid can also be judged based on the difference between the time node of this state change and the time node of the last control instruction generated from the preset key feature points. For example, the occurrence time of the state change information of the preset key feature points is acquired, and it is judged whether the difference between the occurrence time of this state change and the occurrence time of the previous state change is greater than or equal to a preset time. If the difference is greater than or equal to the preset time, this state change is determined to be valid state change information and the corresponding control instruction is generated based on it; if the difference is less than the preset time, this state change is determined to be invalid state change information and no control instruction is generated.
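The duration and time-interval tests described in the last two paragraphs can be sketched together; the time values are assumptions:

```python
# Expression must be held at least PRESET_HOLD_TIME, and successive state
# changes must be spaced by at least PRESET_INTERVAL (a debounce). Seconds.
import time
from typing import Optional

PRESET_HOLD_TIME = 0.8  # minimum expression duration (assumed)
PRESET_INTERVAL = 1.5   # minimum gap between successive state changes (assumed)

class ValidityChecker:
    def __init__(self) -> None:
        self.last_event_time = float("-inf")

    def is_valid_expression(self, duration: float) -> bool:
        return duration >= PRESET_HOLD_TIME

    def is_valid_event(self, now: Optional[float] = None) -> bool:
        """Valid only if enough time has passed since the previous state change."""
        now = time.monotonic() if now is None else now
        if now - self.last_event_time < PRESET_INTERVAL:
            return False
        self.last_event_time = now
        return True
```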
Referring to Fig. 2, compared with the terminal device control method shown in Fig. 1, the terminal device control method shown in Fig. 2 further includes steps S16 and S17.
Step S16: configure the upper, lower, left, and right detection boundary information of the preset key feature points, so as to establish a feature point detection frame.
In one embodiment, the feature point detection frame is used to detect the state change information of the preset key feature points. By configuring the upper, lower, left, and right detection boundary information of the key feature points, a feature point detection frame can be established. When face behavior information (head deflection, facial expression) is detected, it must be ensured that the key feature points of the face still fall within the feature point detection frame, so as not to degrade detection accuracy.
Step S17: associate multiple preset state change information of the preset key feature points with multiple preset control instructions.
In one embodiment, a mapping between multiple preset state change information and multiple preset control instructions of the terminal device can be established in advance. For example, a first preset state change information is associated with a first preset control instruction of the terminal device, a second preset state change information is associated with a second preset control instruction, and a third preset state change information is associated with a third preset control instruction. The preset control instructions can be common instructions on the terminal device, such as next page, previous page, play, pause, left mouse button, and right mouse button. For example, the first preset state change information can be a head deflection to the right corresponding to a "next page" control instruction, the second a head deflection to the left corresponding to a "previous page" control instruction, and the third a nod corresponding to a play or pause control instruction. As another example, the first preset state change information can be a change from a neutral face to a happy expression corresponding to a "next page" control instruction, and the second a change from a neutral face to a sad expression corresponding to a "previous page" control instruction.
The above terminal device control method controls a terminal device by recognizing the user's expression changes or head deflection state, freeing the user's hands. Compared with the traditional manual mode of operation, interaction with the terminal device is more vivid and interesting, and user experience is improved.
Embodiment two:
Fig. 3 is a functional block diagram of a preferred embodiment of the terminal device control apparatus of the present invention.
As shown in Fig. 3, the terminal device control apparatus 10 may include a configuration module 101, an association module 102, a detection module 103, a judgment module 104, an acquisition module 105, a determining module 106, and a control module 107.
The configuration module 101 is configured to configure the upper, lower, left, and right detection boundary information of the preset key feature points, so as to establish a feature point detection frame.
In one embodiment, the feature point detection frame is used to detect the state change information of the preset key feature points. The configuration module 101 can establish a feature point detection frame by configuring the upper, lower, left, and right detection boundary information of the key feature points. When face behavior information (head deflection, facial expression) is detected, it must be ensured that the key feature points of the face always fall within the feature point detection frame, so as not to degrade detection accuracy.
The association module 102 is configured to associate multiple preset state change information of the preset key feature points with multiple preset control instructions.
In one embodiment, the association module 102 can pre-establish a mapping between multiple preset state change information and multiple preset control instructions of the terminal device. For example, the association module 102 associates a first preset state change information with a first preset control instruction of the terminal device, a second preset state change information with a second preset control instruction, and a third preset state change information with a third preset control instruction. The preset control instructions can be common instructions on the terminal device, such as next page, previous page, play, pause, left mouse button, and right mouse button. For example, the first preset state change information can be a head deflection to the right corresponding to a "next page" control instruction, the second a head deflection to the left corresponding to a "previous page" control instruction, and the third a nod corresponding to a play or pause control instruction. As another example, the first preset state change information can be a change from a neutral face to a happy expression corresponding to a "next page" control instruction, and the second a change from a neutral face to a sad expression corresponding to a "previous page" control instruction.
The detection module 103 is configured to acquire an image to be recognized and perform face detection on the image to be recognized.
In one embodiment, the detection module 103 can obtain the image to be recognized by communicating with a camera (for example, the camera of the computer device). The image to be recognized may contain non-face content, so face detection needs to be performed on the image to be recognized in order to identify a face image containing a face within it.
In one embodiment, the detection module 103 can perform face detection on the image to be recognized by establishing and training a convolutional neural network model. Specifically, this can be done as follows: first, construct a face sample database and establish a convolutional neural network model for face detection, where the face sample database contains the face information of multiple people, the face information of each person may cover multiple angles, and the face information of each angle may contain multiple pictures; input the face images of the face sample database into the convolutional neural network model and train it using the model's default parameters; according to the intermediate training results, continuously adjust the initial weights, learning rate, number of iterations, and so on, until the optimal network parameters of the convolutional neural network model are obtained; and finally take the convolutional neural network model with the optimal network parameters as the final recognition model. After training is completed, face detection is performed using the finally obtained convolutional neural network model.
It should be understood that the detection module 103 can input the image to be recognized into the finally obtained convolutional neural network model, and the output of the model is the face detection result.
The judgment module 104 is configured to judge whether a face image is detected.
In one embodiment, the judgment module 104 can judge whether a face image is detected according to the output of the convolutional neural network model. If a face image is detected, subsequent key feature point recognition is performed; if no face image is detected, face detection is performed on the image to be recognized again.
The acquisition module 105 is configured to acquire the initial state information of the preset key feature points of the face image when a face image is detected.
In one embodiment, the preset key feature points of the face image can be composed of parts such as the eyes, nose, and mouth. The initial state information may include initial position information or initial expression information. When the initial state information is the initial position information of the preset key feature points, the corresponding control operation can be executed according to the motion state information of the face image; when the initial state information is the initial expression information of the preset key feature points, the corresponding control operation can be executed according to the facial expression change information of the face image.
In one embodiment, when the initial state information is initial position information, the initial state information of the preset key feature points of the face image is the position information of the initial state of those key feature points. The position information of the preset key feature points can be determined from the face image by integral projection or by a face alignment algorithm (for example, the ASM, AAM, or STASM algorithm). Since the eyes are comparatively prominent facial features, the eyes can be located accurately first, and the other facial organs, such as the eyebrows, mouth, and nose, can then be located more accurately from their potential distribution relations.
For example, the positions of the preset key feature points are located by means of the peaks or troughs generated under different integral projection modes. Integral projection is divided into vertical projection and horizontal projection. Let $f(x, y)$ denote the gray value of the image at $(x, y)$; the horizontal integral projection $M_h(y)$ and the vertical integral projection $M_v(x)$ over the image region $[y_1, y_2] \times [x_1, x_2]$ are respectively expressed as:

$$M_h(y) = \sum_{x=x_1}^{x_2} f(x, y), \qquad M_v(x) = \sum_{y=y_1}^{y_2} f(x, y)$$

That is, the horizontal integral projection accumulates the gray values of all pixels in a row before display, and the vertical integral projection accumulates the gray values of all pixels in a column before display. By locating two trough points $x_1$ and $x_2$ and cutting the image in the horizontal-axis region $[x_1, x_2]$ out of the face image, the left and right boundaries of the face image can be located. After the left and right boundaries are located, horizontal integral projection and vertical integral projection are performed on the binarized face image to be recognized.
Further, from prior knowledge of face images it is known that the eyebrows and eyes are the darker regions of a face image and correspond to the first two minimum points of the horizontal integral projection curve. The first minimum point corresponds to the position of the eyebrows on the vertical axis, denoted y_brow; the second corresponds to the position of the eyes, denoted y_eye; the third corresponds to the position of the nose, denoted y_nose; and the fourth corresponds to the position of the mouth, denoted y_mouth. Similarly, there are two minimum points on either side of the central symmetry axis of the face image, corresponding to the positions of the left and right eyes on the horizontal axis, denoted x_left-eye and x_right-eye; the eyebrows share the same horizontal-axis positions as the eyes; and the position of the mouth and nose on the horizontal axis is (x_left-eye + x_right-eye)/2.
In one embodiment, when the initial state information is initial expression information, the initial state information of the preset key feature points of the face image is the expression information of the initial state of those key feature points. The facial expression information can, for example, take the following forms. Happiness: the corners of the mouth tilt upward, the cheeks lift and wrinkle, the eyelids contract, and "crow's feet" form at the outer corners of the eyes. Sadness: the eyes narrow, the eyebrows tighten, the corners of the mouth pull down, and the chin lifts or tightens. Fear: the mouth and eyes open, the eyebrows rise, and the nostrils flare. Anger: the eyebrows droop, the forehead is knitted, and the eyelids and lips are tense. Disgust: the nose wrinkles in a sneer, the upper lip lifts, the eyebrows droop, and the eyes narrow. Surprise: the jaw drops, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows lift slightly. Contempt: one side of the mouth lifts in a sneer or a proud smile, and so on.
The acquisition module 105 can extract the feature vector to be identified of the preset key feature points and, according to the feature vector to be identified and the preset feature vector of each preset expression in a preset expression library, determine the likelihood probability that the face image matches each preset expression, and then obtain the facial expression information from the computed likelihood probabilities. The feature vector to be identified may include a shape feature vector and/or a texture feature vector.
In one embodiment, when the feature vector to be identified is a shape feature vector, the shape feature vector of the preset key feature points is extracted; when the feature vector to be identified is a texture feature vector, the texture feature vector of the preset key feature points is extracted; and when the feature vector to be identified comprises both a shape feature vector and a texture feature vector, both are extracted from the preset key feature points.
In one embodiment, the acquisition module 105 can determine the likelihood probability that the face image matches each preset expression as follows: obtain the distance value between the feature vector to be identified and the preset feature vector of each preset expression, and determine, according to the distance value, the likelihood probability that the face image and the preset expression corresponding to that distance value belong to the same expression. The distance value can be a generalized Mahalanobis distance. The distance between the feature vector to be identified and the preset feature vector of a preset expression can be determined by the following formula:

$$d_M(y, x_j) = (y - x_j)^T M (y - x_j)$$

where $y$ is the feature vector to be identified; $x_j$ is the preset feature vector of the $j$-th preset expression in the preset expression library; $M$ is the target metric matrix; $j$ is an integer greater than or equal to 1; $d_M(y, x_j)$ is the distance between the feature vector to be identified and the preset feature vector of the $j$-th preset expression; $(y - x_j)$ is the difference between the feature vector to be identified and the preset feature vector of the $j$-th preset expression; and $(y - x_j)^T$ is the transpose of that difference.
In one embodiment, the acquisition module 105 can determine the likelihood probability that the face image and the preset expression corresponding to a distance value belong to the same expression by the following formula:

$$p = \{1 + \exp[D - b]\}^{-1}$$

where $p$ is the likelihood probability that the face image and the preset expression corresponding to the distance value belong to the same expression, $D$ is the distance value, and $b$ is a bias.
The determining module 106 is configured to determine the state change information of the preset key feature points of the face image based on the initial state information.
In one embodiment, after the initial state information of the preset key feature points of the face image is acquired, the determining module 106 can determine the state change information of the preset key feature points based on the initial state information. The state change information is, for example, the state change within a preset time, measured against the initial state information, with timing starting from the moment the initial state information is captured.
When the state change information of the preset key feature points is a first preset state change information in the preset state change information library, the control module 107 triggers the control instruction corresponding to the first preset state change information and executes the corresponding control operation.
In one embodiment, when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, the control module 107 triggers the control instruction corresponding to the first preset state change information, and the terminal device then executes the corresponding control operation according to the control instruction. For example, when the acquired state change information of the preset key feature points is a head deflection to the left, the terminal device executes a "previous page" control instruction; when it is a head deflection to the right, the terminal device executes a "next page" control instruction; and when it is a nod, the terminal device executes a play or pause instruction.
In one embodiment, in order to improve operating accuracy, the determining module 106 needs to determine whether the state change information of the preset key feature points is valid state change information, which can be realized as follows: when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, judging whether the state change information of the preset key feature points is valid state change information; and, when the state change information of the preset key feature points is valid state change information, triggering the control instruction corresponding to the first preset state change information and executing the corresponding control operation.
In one embodiment, when the initial state information is the initial position information of the preset key feature points, the average deflection speed and/or deflection angle of the face image during the state change is acquired, and whether the state change information of the preset key feature points is valid state change information is judged according to the average deflection speed and/or the deflection angle. For example, if the state change is a head movement, whether this state change of the preset key feature points is valid can be judged by acquiring the average head deflection speed and/or deflection angle during the state change.
For example, under normal circumstances, when a user tilts their head while talking with someone, tilts their head to look at something, or nods in confirmation during a conversation, the head generally moves relatively fast. To avoid accidental triggering, a preset speed value can be set to prevent the terminal device from being controlled by mistake. For instance, it can be judged whether the average head movement speed during this state change is less than a first preset speed value: if it is, this state change of the preset key feature points is determined to be valid state change information and the corresponding control instruction is generated based on it; if it is not, this state change is determined to be invalid state change information and no control instruction is generated. The preset speed value may have a tolerance of plus or minus 30%.
In one embodiment, the determining module 106 can also judge whether this state change of the preset key feature points is valid by checking whether the head deflection angle is greater than or equal to a first angle threshold: if the head deflection angle is greater than or equal to the first angle threshold, this state change is determined to be valid state change information and the corresponding control instruction is generated based on it; if the deflection angle is less than the first angle threshold, this state change is determined to be invalid state change information. The first angle threshold can be set to an angle larger than the deflection angles that typically occur during the user's ordinary communication.
It should be understood that whether this state change of the preset key feature points is valid state change information can also be judged by checking simultaneously whether the average head movement speed is less than the first preset speed value and whether the head deflection angle is greater than or equal to the first angle threshold.
In one embodiment, when the initial state information is the initial expression information of the preset key feature points, an expression duration of the facial image during the state change process can be obtained, and whether the state change information of the preset key feature points is valid state change information is judged according to the expression duration. For example, it can be judged whether the duration of the facial expression during this state change process is greater than or equal to a preset time. If the duration of the facial expression during this state change process is greater than or equal to the preset time, this state change information is judged to be valid state change information, and a corresponding control instruction is generated based on the valid state change information; if the duration of the facial expression during this state change process is less than the preset time, this state change information is judged to be invalid state change information, and no corresponding control instruction is generated.
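As a brief illustrative sketch of the duration check, with an assumed preset time:

```python
PRESET_EXPRESSION_TIME = 1.0  # seconds (illustrative assumption)

def is_valid_by_duration(expression_start, expression_end):
    """Valid only when the facial expression is held for at least the
    preset time, filtering out fleeting, unintended expressions."""
    return (expression_end - expression_start) >= PRESET_EXPRESSION_TIME
```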
In one embodiment, the determining module 106 may also judge whether this state change information of the preset key feature points is valid state change information based on the difference between the time node of this state change information and the time node of the last control instruction generated by the preset key feature points. For example, the occurrence time of the state change information of the preset key feature points is obtained, and it is judged whether the difference between the occurrence time of this state change information and the occurrence time of the previous state change information is greater than or equal to a preset time. If the difference between the occurrence time of this state change information and the occurrence time of the previous state change information is greater than or equal to the preset time, this state change information is determined to be valid state change information, and a corresponding control instruction is generated based on the valid state change information; if the difference is less than the preset time, this state change information is determined to be invalid state change information, and no corresponding control instruction is generated.
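This time-difference check behaves like a debouncer; a minimal sketch under the same illustrative assumptions:

```python
import time

PRESET_INTERVAL = 2.0  # seconds between accepted gestures (illustrative)

class StateChangeDebouncer:
    """Rejects a state change that occurs too soon after the previous
    accepted one, mirroring the time-difference check above."""

    def __init__(self, min_interval=PRESET_INTERVAL):
        self.min_interval = min_interval
        self.last_accepted = None

    def accept(self, occurred_at=None):
        """Return True (and record the time) if this state change should
        generate a control instruction; False if it came too soon."""
        if occurred_at is None:
            occurred_at = time.monotonic()
        if (self.last_accepted is not None
                and occurred_at - self.last_accepted < self.min_interval):
            return False
        self.last_accepted = occurred_at
        return True
```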
The above terminal device control apparatus controls the terminal device by recognizing the user's expression changes or head deflection state, freeing the user's hands. Compared with the traditional manual operation mode, the interaction with the terminal device is more vivid and interesting, which improves the user experience.
Fig. 4 is a schematic diagram of a preferred embodiment of the computer device of the present invention.
The computer device 1 includes a memory 20, a processor 30, and a computer program 40, such as a terminal device control program, which is stored in the memory 20 and executable on the processor 30. When executing the computer program 40, the processor 30 implements the steps in the above terminal device control method embodiments, for example steps S11 to S15 shown in Fig. 1 or steps S11 to S17 shown in Fig. 2. Alternatively, when executing the computer program 40, the processor 30 implements the functions of the modules in the above terminal device control apparatus embodiment, such as the modules 101 to 107 in Fig. 3.
Exemplarily, the computer program 40 may be divided into one or more modules/units, which are stored in the memory 20 and executed by the processor 30 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 40 in the computer device 1. For example, the computer program 40 may be divided into the configuration module 101, the association module 102, the detection module 103, the judgment module 104, the acquisition module 105, the determining module 106 and the control module 107 in Fig. 3. For the specific functions of each module, refer to the second embodiment.
The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, a mobile phone, a tablet computer or a cloud server. Those skilled in the art will understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; it may include more or fewer components than shown, combine certain components, or have different components. For example, the computer device 1 may also include input/output devices, network access devices, buses, and the like.
The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the computer device 1 and connects the various parts of the entire computer device 1 through various interfaces and lines.
The memory 20 may be used to store the computer program 40 and/or the modules/units. The processor 30 implements the various functions of the computer device 1 by running or executing the computer program and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (for example, a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the computer device 1 (for example, audio data, a phone book, etc.). In addition, the memory 20 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the integrated modules/units of the computer device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above embodiment methods of the present invention may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, executable file form, certain intermediate forms, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed computer device and method may be implemented in other ways. For example, the computer device embodiment described above is only schematic; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated in the same processing unit, or each unit may exist physically alone, or two or more units may be integrated in the same unit. The above integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be implemented in other specific forms without departing from the spirit or essential attributes of the present invention. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of the equivalent elements of the claims be included in the present invention. Any reference signs in the claims should not be construed as limiting the claims involved. In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in the computer device claims may also be implemented by the same unit or device through software or hardware. Words such as "first" and "second" are used to indicate names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
1. A terminal device control method, characterized in that the method comprises:
acquiring an image to be recognized, and performing face detection on the image to be recognized;
judging whether a facial image is detected;
if a facial image is detected, acquiring initial state information of preset key feature points of the facial image;
determining state change information of the preset key feature points of the facial image based on the initial state information; and
when the state change information of the preset key feature points is first preset state change information in a preset state change information library, triggering a control instruction corresponding to the first preset state change information to execute a corresponding control operation.
2. The terminal device control method according to claim 1, characterized in that, before the step of acquiring the image to be recognized, the method further comprises:
configuring the upper and lower detection boundary information of the preset key feature points to establish a feature point detection frame; and
associating a plurality of preset state change information of the preset key feature points with a plurality of preset control instructions.
3. The terminal device control method according to claim 1 or 2, characterized in that the step of performing face detection on the image to be recognized comprises:
obtaining a convolutional neural network model for face detection by training on a plurality of preset face samples; and
performing face detection on the image to be recognized using the convolutional neural network model.
4. The terminal device control method according to claim 1 or 2, characterized in that the initial state information comprises initial position information or initial expression information; when the initial state information is the initial position information of the preset key feature points, a corresponding control operation is executed according to movement state information of the facial image; when the initial state information is the initial expression information of the preset key feature points, a corresponding control operation is executed according to facial expression change information of the facial image.
5. The terminal device control method according to claim 4, characterized in that the step of, when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, triggering the control instruction corresponding to the first preset state change information to execute the corresponding control operation comprises:
when the state change information of the preset key feature points is the first preset state change information in the preset state change information library, judging whether the state change information of the preset key feature points is valid state change information; and
when the state change information of the preset key feature points is valid state change information, triggering the control instruction corresponding to the first preset state change information to execute the corresponding control operation.
6. The terminal device control method according to claim 5, characterized in that the step of judging whether the state change information of the preset key feature points is valid state change information comprises:
when the initial state information is the initial position information of the preset key feature points, obtaining an average deflection speed and/or a deflection angle of the facial image during the state change process, and judging whether the state change information of the preset key feature points is valid state change information according to the average deflection speed and/or the deflection angle; and
when the initial state information is the initial expression information of the preset key feature points, obtaining an expression duration of the facial image during the state change process, and judging whether the state change information of the preset key feature points is valid state change information according to the expression duration.
7. The terminal device control method according to claim 5, characterized in that the step of judging whether the state change information of the preset key feature points is valid state change information comprises:
obtaining the occurrence time of the state change information of the preset key feature points;
judging whether the difference between the occurrence time of the state change information of the preset key feature points and the occurrence time of the previous state change information is greater than or equal to a preset time; and
determining, according to the judgment result, whether the state change information of the preset key feature points is valid state change information.
8. A terminal device control apparatus, characterized in that the apparatus comprises:
a detection module, configured to acquire an image to be recognized and perform face detection on the image to be recognized;
a judgment module, configured to judge whether a facial image is detected;
an acquisition module, configured to acquire initial state information of preset key feature points of the facial image when a facial image is detected;
a determining module, configured to determine state change information of the preset key feature points of the facial image based on the initial state information; and
a control module, configured to, when the state change information of the preset key feature points is first preset state change information in a preset state change information library, trigger a control instruction corresponding to the first preset state change information to execute a corresponding control operation.
9. A computer device, comprising a processor and a memory on which several computer programs are stored, characterized in that the processor is configured to implement the steps of the terminal device control method according to any one of claims 1 to 7 when executing the computer programs stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the steps of the terminal device control method according to any one of claims 1 to 7.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910487841.2A CN110377201A (en) | 2019-06-05 | 2019-06-05 | Terminal equipment control method, device, computer installation and readable storage medium storing program for executing |
| PCT/CN2019/118974 WO2020244160A1 (en) | 2019-06-05 | 2019-11-15 | Terminal device control method and apparatus, computer device, and readable storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN110377201A true CN110377201A (en) | 2019-10-25 |
Family
ID=68249821
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910487841.2A Pending CN110377201A (en) | 2019-06-05 | 2019-06-05 | Terminal equipment control method, device, computer installation and readable storage medium storing program for executing |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110377201A (en) |
| WO (1) | WO2020244160A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114217693B (en) * | 2021-12-17 | 2025-05-09 | 广州轻游信息科技有限公司 | A software interaction method, system and storage medium for face recognition |
| CN115497131B (en) * | 2022-08-09 | 2025-07-18 | 平安科技(深圳)有限公司 | Head motion detection method, device, equipment and storage medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2010142455A2 (en) * | 2009-06-12 | 2010-12-16 | Star Nav | Method for determining the position of an object in an image, for determining an attitude of a persons face and method for controlling an input device based on the detection of attitude or eye gaze |
| US20160156838A1 (en) * | 2013-11-29 | 2016-06-02 | Intel Corporation | Controlling a camera with face detection |
| CN106371551A (en) * | 2015-07-20 | 2017-02-01 | 深圳富泰宏精密工业有限公司 | Operation system and operation method for facial expression, and electronic device |
| CN106681509A (en) * | 2016-12-29 | 2017-05-17 | 北京七鑫易维信息技术有限公司 | Interface operating method and system |
| US20180121711A1 (en) * | 2015-07-09 | 2018-05-03 | Tencent Technology (Shenzhen) Company Limited | Display control method and apparatus |
| CN109819100A (en) * | 2018-12-13 | 2019-05-28 | 平安科技(深圳)有限公司 | Mobile phone control method, device, computer installation and computer readable storage medium |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI356355B (en) * | 2007-12-03 | 2012-01-11 | Inst Information Industry | Motion transition method and system for dynamic im |
| CN107562203A (en) * | 2017-09-14 | 2018-01-09 | 北京奇艺世纪科技有限公司 | A kind of input method and device |
| CN110377201A (en) * | 2019-06-05 | 2019-10-25 | 平安科技(深圳)有限公司 | Terminal equipment control method, device, computer installation and readable storage medium storing program for executing |
- 2019-06-05: CN application CN201910487841.2A filed; published as CN110377201A (status: Pending)
- 2019-11-15: PCT application PCT/CN2019/118974 filed; published as WO2020244160A1 (status: Ceased)
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020244160A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Terminal device control method and apparatus, computer device, and readable storage medium |
| CN113504831A (en) * | 2021-07-23 | 2021-10-15 | 电光火石(北京)科技有限公司 | IOT (input/output) equipment control method based on facial image feature recognition, IOT and terminal equipment |
| CN114549843A (en) * | 2022-04-22 | 2022-05-27 | 珠海视熙科技有限公司 | Stroboscopic stripe detection and elimination method and device, camera equipment and storage medium |
| CN115082989A (en) * | 2022-06-25 | 2022-09-20 | 平安银行股份有限公司 | Internet of Things-based air conditioning method, device and computer-readable storage medium |
| CN115082989B (en) * | 2022-06-25 | 2025-03-11 | 平安银行股份有限公司 | Air conditioning method, device and computer-readable storage medium based on the Internet of Things |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2020244160A1 (en) | 2020-12-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191025 |