CN109917921A - An air gesture recognition method for VR field - Google Patents
An air gesture recognition method for the VR field
- Publication number
- CN109917921A (application CN201910240627.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- gesture
- gesture recognition
- air gesture
- recognition method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an air gesture recognition method for the VR field, comprising gesture image collection, gesture image preprocessing, image tracking, storage, discriminant analysis, classification, matching, and presentation. The invention is scientific, reasonable, safe, and convenient to use. Through gesture image preprocessing, the main gesture features are captured using edge detection and normalization techniques and input into a gesture recognition model, which significantly reduces the data volume, eliminates information considered irrelevant, and improves the processing speed of the data. Through discriminant analysis, the stored data are judged and analyzed: the digit positions of a gesture motion are determined and the digit positions of the next gesture are deduced, predicting how the gesture is likely to change. By then locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, so the user need not wear cumbersome, heavy gloves to perform gestures, making air gesture recognition more convenient and precise.
Description
Technical field
The present invention relates to the technical field of virtual reality (VR), and specifically to an air gesture recognition method for the VR field.
Background technique
VR (virtual reality) technology is a computer simulation technique that can create an experienceable virtual world: a computer generates a simulated environment that fuses multiple information sources into an interactive, three-dimensional dynamic view with simulated entity behavior, immersing the user in that environment. Virtual reality is an important direction of simulation technology, a synthesis of simulation with computer graphics, human-machine interface technology, multimedia, sensing, and network technology, and a challenging interdisciplinary frontier and research field. Virtual reality technology mainly involves the simulated environment, perception, natural skills, and sensing devices; the simulated environment consists of real-time, dynamic, three-dimensional photorealistic images generated by computer. Several modes of human-computer interaction have now been established and, with the rapid development of computer technology, interactive applications between machines and humans have grown exponentially from technology initially limited to speech recognition and control.
These interactions have now developed to the tracking of movement, position, and gesture. The common air gesture recognition method is to make movements while wearing bulky gloves; sensors in the gloves capture the user's movements, which are then input to a computer for complex computation before corresponding feedback is produced. The computation is complex, the operation time is long, and the operation is cumbersome, so an air gesture recognition method for the VR field is urgently needed to solve the above problems.
Summary of the invention
The present invention provides an air gesture recognition method for the VR field, which can effectively solve the problems raised in the background above: the common air gesture recognition method requires movements to be made while wearing bulky gloves, with sensors in the gloves capturing the user's movements, which are then input to a computer for complex computation before corresponding feedback is produced; the computation is complex, the operation time is long, and the operation is cumbersome.
To achieve the above object, the invention provides the following technical scheme: an air gesture recognition method for the VR field, comprising the following steps:
S1, gesture image collection: the user's hand movements are acquired by camera;
S2, gesture image preprocessing: the main gesture features are captured using edge detection and normalization techniques and input into a model for gesture recognition;
S3, image tracking: the preprocessed image is tracked in depth; the direction of the specific movement is captured by sensors and the spatial position of the moving object is determined;
S4, storage: the tracked and processed data are collected and stored;
S5, discriminant analysis: the stored data are judged and analyzed;
S6, classification: a classifier assigns the different data to gesture classes according to the extracted rule features;
S7, matching: the classified data are rapidly matched against a big-data network;
S8, presentation: the correctly matched gesture motion is presented in the VR environment.
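Purely as an illustration (the patent specifies no implementation), the eight steps above can be sketched as a chain of functions; every name and function body below is a hypothetical stand-in:

```python
# Illustrative sketch of the S1-S8 air-gesture pipeline.
# All function bodies are stand-ins; the patent does not specify implementations.

def collect_image():          # S1: acquire a frame from the camera(s)
    return [[0.1, 0.9], [0.8, 0.2]]

def preprocess(image):        # S2: stand-in for edge detection + normalization
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    return [[(v - lo) / (hi - lo) for v in row] for row in image]

def track(image):             # S3: estimate motion direction / spatial position
    return {"position": (0, 0), "features": image}

def store(record, db):        # S4: persist the tracked data
    db.append(record)

def analyze(record):          # S5: judge stored data, predict the next gesture
    return "swipe"

def classify(label):          # S6: map the analysis output to a gesture class
    return {"swipe": 0, "pinch": 1}.get(label, -1)

def match(cls):               # S7: match the class against a gesture database
    return cls in (0, 1)

def present(ok):              # S8: render the matched gesture in the VR scene
    return "rendered" if ok else "retry"

def run_pipeline(db):
    img = preprocess(collect_image())
    rec = track(img)
    store(rec, db)
    return present(match(classify(analyze(rec))))
```

The point of the sketch is only the data flow S1 → S8; each stage is expanded in the paragraphs that follow.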
According to the above technical scheme, in step S1, acquiring the user's hand movements by camera means that multiple cameras capture the movements of the user's hand, while a fill light illuminates the environment.
According to the above technical scheme, in step S2, the main gesture features are captured using edge detection and normalization techniques. Edge detection refers to marking the points in a digital image where brightness changes sharply; significant changes in image attributes usually reflect important events and changes in a property, including discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene illumination. Edge detection methods divide into search-based and zero-crossing-based. A search-based method first computes an edge strength, usually expressed as a first derivative, then computes an estimate of the local orientation of the edge, usually the direction of the gradient, and uses this direction to search for the maximum of the local gradient magnitude. A zero-crossing-based method locates edges by finding the zero crossings of a second derivative computed from the image.
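The two edge-detection families described above can be sketched as follows; the Sobel and Laplacian kernels and the threshold are standard textbook choices assumed for illustration, not taken from the patent:

```python
import math

# Minimal pure-Python sketch of search-based (first-derivative) and
# zero-crossing (second-derivative) edge detection.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def convolve(img, k):
    """'Valid' 2-D correlation: no padding, output shrinks by kernel size - 1."""
    h, w, n = len(img), len(img[0]), len(k)
    return [[sum(k[a][b] * img[i + a][j + b] for a in range(n) for b in range(n))
             for j in range(w - n + 1)] for i in range(h - n + 1)]

def search_based_edges(img, thresh=1.0):
    """First-derivative method: edge strength = gradient magnitude; the
    orientation used for the local search is the gradient direction."""
    gx, gy = convolve(img, SOBEL_X), convolve(img, SOBEL_Y)
    mag = [[math.hypot(x, y) for x, y in zip(rx, ry)] for rx, ry in zip(gx, gy)]
    direction = [[math.atan2(y, x) for x, y in zip(rx, ry)] for rx, ry in zip(gx, gy)]
    return [[m > thresh for m in row] for row in mag], direction

def zero_crossing_edges(img):
    """Second-derivative method: an edge lies where the Laplacian changes
    sign between horizontally adjacent pixels."""
    lap = convolve(img, LAPLACIAN)
    return [[lap[i][j] * lap[i][j + 1] < 0 for j in range(len(lap[0]) - 1)]
            for i in range(len(lap))]

# A vertical step edge: dark left half, bright right half.
step = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
edges, _ = search_based_edges(step)
```

Both detectors fire only along the dark-to-bright boundary of the synthetic step image, which is exactly the "point where brightness changes sharply" described in the text.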
According to the above technical scheme, in step S2, the main gesture features are captured using edge detection and normalization techniques. Normalization refers to limiting the data to be processed to a certain range after processing and making the dimensions uniform, i.e., abstracting away units: sets that are unimportant or not comparable are identified, the non-essential attributes of their elements are removed, and the specific, important attributes are retained, which guarantees faster convergence when the program runs.
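A minimal sketch of normalization in the sense above — rescaling data into a fixed range so that attributes become dimensionless and comparable — might look like this (the function name and the target range are illustrative assumptions):

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Rescale values into [lo, hi]: all features are brought to one
    comparable, dimensionless range, as the preprocessing step requires."""
    v_min, v_max = min(values), max(values)
    if v_max == v_min:              # degenerate case: constant input
        return [lo for _ in values]
    scale = (hi - lo) / (v_max - v_min)
    return [lo + (v - v_min) * scale for v in values]
```

For example, `min_max_normalize([2, 4, 6])` rescales the list to `[0.0, 0.5, 1.0]`, so pixel intensities and coordinate features end up on the same scale before entering the recognition model.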
According to the above technical scheme, in step S3, the preprocessed image is tracked in depth: sensors capture the direction of the specific movement, and the spatial position of the moving object is determined. The depth information of the object is acquired and tracked using time-of-flight technology: a light-emitting element is installed, and the photons it emits are reflected when they strike the surface of the object; a dedicated CMOS sensor then captures these photons, emitted by the light-emitting element and reflected back from the object's surface, to obtain their flight time. From the flight time, the distance flown by the photons can be deduced, which yields the depth information of the object.
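The flight-time-to-distance deduction is a one-line calculation: the photon covers the camera-to-object path twice, so the object distance is half the round-trip time multiplied by the speed of light. A sketch (the function name is an assumption):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Time-of-flight ranging: the photon travels to the surface and back,
    so the object distance is half the round-trip path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A 2-nanosecond round trip, for instance, corresponds to an object roughly 0.3 m from the sensor, which illustrates why ToF depth sensing needs sub-nanosecond timing precision at hand-interaction distances.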
According to the above technical scheme, in step S4, collecting and storing the tracked and processed data means that the data obtained from tracking are collected and organized, stored in a network system, and backed up, which facilitates later comparison.
According to the above technical scheme, in step S5, the stored data are judged and analyzed. The stored data are judged by the DeepHand system: each region of the fingers is assigned a number, and by judging the digit positions of a gesture motion, the digit positions of the next gesture are deduced, predicting how the gesture is likely to change. By then locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, ensuring that the virtual hand's movements are displayed quickly and accurately.
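The patent names the DeepHand system but does not describe its algorithm. Purely as a hypothetical sketch of "deducing the next digit position", one could tabulate observed region-to-region transitions and predict the most frequent successor; the class name, the region numbering, and the whole scheme are illustrative assumptions, not the patent's method:

```python
from collections import Counter, defaultdict

class TransitionPredictor:
    """Toy next-region predictor: count how often each numbered finger
    region is followed by each other region, then predict the most
    frequent successor of the current region."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        # Record every consecutive (current region, next region) pair.
        for cur, nxt in zip(sequence, sequence[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, current_region):
        counts = self.transitions.get(current_region)
        if not counts:
            return None                 # no evidence for this region yet
        return counts.most_common(1)[0][0]
```

After observing a repeated 1 → 2 → 3 pattern, the predictor deduces that region 3 follows region 2, which is the flavor of "extrapolating the digit position of the next gesture" described above.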
According to the above technical scheme, in step S6, the classifier assigns the different data to gesture classes according to the extracted rule features. The classifiers include decision tree classifiers, selection tree classifiers, and evidence classifiers. A decision tree classifier is given a set of attributes and makes a series of decisions on the basis of the attribute set, each decision being represented by a node of the tree; this graphical representation helps the user understand the classification algorithm and provides a valuable viewpoint on the data. A selection tree classifier contains special selection nodes with multiple branches, so that a variety of situations are weighed together when classifying. An evidence classifier classifies the data by examining the probability that a specific outcome occurs, given an attribute.
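A decision tree classifier in the sense described — one attribute test per node, one gesture class per leaf — can be sketched by hand; the attributes (`extended_fingers`, `moving`) and the gesture classes are invented for illustration and are not from the patent:

```python
def classify_gesture(features):
    """Tiny hand-rolled decision tree.
    features: dict with 'extended_fingers' (int) and 'moving' (bool).
    Each 'if' is one decision node; each return is a leaf (gesture class)."""
    if features["extended_fingers"] == 0:
        return "fist"
    if features["extended_fingers"] >= 4:
        return "wave" if features["moving"] else "open_palm"
    if features["extended_fingers"] == 1:
        return "point"
    return "unknown"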
According to the above technical scheme, in step S7, rapidly matching the classified data against a big-data network means that the classified data are extracted and matched against the big-data network. If the match succeeds, the matched data are output and the match record is stored; if the match fails, the original data are returned and the failure information is fed back to the user.
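The matching step's success/failure handling might be sketched as follows; exact label matching stands in for whatever similarity measure the "big-data network" actually uses, which the patent does not specify, and all names are assumptions:

```python
def match_gesture(candidate, database):
    """S7 sketch: compare a classified gesture against stored templates.
    On success, return the match and store a match record; on failure,
    return None so the caller can fall back to the original data and
    report the failure to the user."""
    log = []
    if candidate in database:
        log.append(("matched", candidate))   # store the match record
        return candidate, log
    log.append(("failed", candidate))        # failure fed back to the user
    return None, log
```

The returned log plays the role of the stored "matching record" and the failure feedback described in the text.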
According to the above technical scheme, in step S8, presenting the correctly matched gesture motion in the VR environment means that the successfully matched data are presented in the VR environment through the display screen, and the user's feedback is recorded and stored, which facilitates the next adjustment.
Compared with the prior art, the beneficial effects of the present invention are: the invention is scientific, reasonable, safe, and convenient to use. The fill light illuminates the environment, preventing dim ambient light from blurring the captured motion images and affecting later work. Through gesture image preprocessing, the main gesture features are captured using edge detection and normalization techniques and input into the gesture recognition model, which significantly reduces the data volume, eliminates information considered irrelevant, and improves the processing speed of the data. Through discriminant analysis, the stored data are judged and analyzed: the digit positions of a gesture motion are determined and the digit positions of the next gesture are deduced, predicting how the gesture is likely to change. By then locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, so the user need not wear cumbersome, heavy gloves to perform gestures, making air gesture recognition more convenient and precise.
Description of the drawings
The accompanying drawing provides a further understanding of the invention and constitutes part of the specification; together with the embodiments of the invention, it serves to explain the invention and is not to be construed as limiting it.
In the accompanying drawings:
Fig. 1 is a flow diagram of the air gesture recognition method of the invention.
Specific embodiment
Hereinafter, preferred embodiments of the present invention are described with reference to the accompanying drawing. It should be understood that the preferred embodiments described herein serve only to illustrate and explain the invention and are not intended to limit it.
Embodiment: as shown in Fig. 1, the present invention provides a technical scheme, an air gesture recognition method for the VR field, comprising the following steps:
S1, gesture image collection: the user's hand movements are acquired by camera;
S2, gesture image preprocessing: the main gesture features are captured using edge detection and normalization techniques and input into a model for gesture recognition;
S3, image tracking: the preprocessed image is tracked in depth; the direction of the specific movement is captured by sensors and the spatial position of the moving object is determined;
S4, storage: the tracked and processed data are collected and stored;
S5, discriminant analysis: the stored data are judged and analyzed;
S6, classification: a classifier assigns the different data to gesture classes according to the extracted rule features;
S7, matching: the classified data are rapidly matched against a big-data network;
S8, presentation: the correctly matched gesture motion is presented in the VR environment.
According to the above technical scheme, in step S1, acquiring the user's hand movements by camera means that multiple cameras capture the movements of the user's hand, while a fill light illuminates the environment.
According to the above technical scheme, in step S2, the main gesture features are captured using edge detection and normalization techniques. Edge detection refers to marking the points in a digital image where brightness changes sharply; significant changes in image attributes usually reflect important events and changes in a property, including discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene illumination. Edge detection methods divide into search-based and zero-crossing-based. A search-based method first computes an edge strength, usually expressed as a first derivative, then computes an estimate of the local orientation of the edge, usually the direction of the gradient, and uses this direction to search for the maximum of the local gradient magnitude. A zero-crossing-based method locates edges by finding the zero crossings of a second derivative computed from the image.
According to the above technical scheme, in step S2, the main gesture features are captured using edge detection and normalization techniques. Normalization refers to limiting the data to be processed to a certain range after processing and making the dimensions uniform, i.e., abstracting away units: sets that are unimportant or not comparable are identified, the non-essential attributes of their elements are removed, and the specific, important attributes are retained, which guarantees faster convergence when the program runs.
According to the above technical scheme, in step S3, the preprocessed image is tracked in depth: sensors capture the direction of the specific movement, and the spatial position of the moving object is determined. The depth information of the object is acquired and tracked using time-of-flight technology: a light-emitting element is installed, and the photons it emits are reflected when they strike the surface of the object; a dedicated CMOS sensor then captures these photons, emitted by the light-emitting element and reflected back from the object's surface, to obtain their flight time. From the flight time, the distance flown by the photons can be deduced, which yields the depth information of the object.
According to the above technical scheme, in step S4, collecting and storing the tracked and processed data means that the data obtained from tracking are collected and organized, stored in a network system, and backed up, which facilitates later comparison.
According to the above technical scheme, in step S5, the stored data are judged and analyzed. The stored data are judged by the DeepHand system: each region of the fingers is assigned a number, and by judging the digit positions of a gesture motion, the digit positions of the next gesture are deduced, predicting how the gesture is likely to change. By then locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, ensuring that the virtual hand's movements are displayed quickly and accurately.
According to the above technical scheme, in step S6, the classifier assigns the different data to gesture classes according to the extracted rule features. The classifiers include decision tree classifiers, selection tree classifiers, and evidence classifiers. A decision tree classifier is given a set of attributes and makes a series of decisions on the basis of the attribute set, each decision being represented by a node of the tree; this graphical representation helps the user understand the classification algorithm and provides a valuable viewpoint on the data. A selection tree classifier contains special selection nodes with multiple branches, so that a variety of situations are weighed together when classifying. An evidence classifier classifies the data by examining the probability that a specific outcome occurs, given an attribute.
According to the above technical scheme, in step S7, rapidly matching the classified data against a big-data network means that the classified data are extracted and matched against the big-data network. If the match succeeds, the matched data are output and the match record is stored; if the match fails, the original data are returned and the failure information is fed back to the user.
According to the above technical scheme, in step S8, presenting the correctly matched gesture motion in the VR environment means that the successfully matched data are presented in the VR environment through the display screen, and the user's feedback is recorded and stored, which facilitates the next adjustment.
Based on the above, the present invention has the following advantages. First, the fill light illuminates the environment, preventing dim ambient light from blurring the captured motion images and affecting later work. Then, through gesture image preprocessing, the main gesture features are captured using edge detection and normalization techniques and input into the gesture recognition model, which significantly reduces the data volume, eliminates information considered irrelevant, and improves the processing speed of the data. Then, through discriminant analysis, the stored data are judged and analyzed: the digit positions of a gesture motion are determined and the digit positions of the next gesture are deduced, predicting how the gesture is likely to change. Finally, by locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, so the user need not wear cumbersome, heavy gloves to perform gestures, making air gesture recognition more convenient and precise.
Finally, it should be noted that the foregoing are merely preferred embodiments of the invention and are not intended to restrict it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of their technical features with equivalents. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.
Claims (10)
1. An air gesture recognition method for the VR field, characterized by comprising:
S1, gesture image collection: the user's hand movements are acquired by camera;
S2, gesture image preprocessing: the main gesture features are captured using edge detection and normalization techniques and input into a model for gesture recognition;
S3, image tracking: the preprocessed image is tracked in depth; the direction of the specific movement is captured by sensors and the spatial position of the moving object is determined;
S4, storage: the tracked and processed data are collected and stored;
S5, discriminant analysis: the stored data are judged and analyzed;
S6, classification: a classifier assigns the different data to gesture classes according to the extracted rule features;
S7, matching: the classified data are rapidly matched against a big-data network;
S8, presentation: the correctly matched gesture motion is presented in the VR environment.
2. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S1, acquiring the user's hand movements by camera means that multiple cameras capture the movements of the user's hand, while a fill light illuminates the environment.
3. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S2, the main gesture features are captured using edge detection and normalization techniques, wherein edge detection refers to marking the points in a digital image where brightness changes sharply; significant changes in image attributes usually reflect important events and changes in a property, including discontinuities in depth, discontinuities in surface orientation, changes in material properties, and changes in scene illumination; edge detection methods divide into search-based and zero-crossing-based; a search-based method first computes an edge strength, usually expressed as a first derivative, then computes an estimate of the local orientation of the edge, usually the direction of the gradient, and uses this direction to search for the maximum of the local gradient magnitude; a zero-crossing-based method locates edges by finding the zero crossings of a second derivative computed from the image.
4. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S2, the main gesture features are captured using edge detection and normalization techniques, wherein normalization refers to limiting the data to be processed to a certain range after processing and making the dimensions uniform, i.e., abstracting away units: sets that are unimportant or not comparable are identified, the non-essential attributes of their elements are removed, and the specific, important attributes are retained, which guarantees faster convergence when the program runs.
5. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S3, the preprocessed image is tracked in depth, the direction of the specific movement is captured by sensors, and the spatial position of the moving object is determined, wherein the depth information of the object is acquired and tracked using time-of-flight technology: a light-emitting element is installed, and the photons it emits are reflected when they strike the surface of the object; a dedicated CMOS sensor then captures these photons, emitted by the light-emitting element and reflected back from the object's surface, to obtain their flight time; from the flight time, the distance flown by the photons can be deduced, which yields the depth information of the object.
6. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S4, collecting and storing the tracked and processed data means that the data obtained from tracking are collected and organized, stored in a network system, and backed up, which facilitates later comparison.
7. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S5, the stored data are judged and analyzed, wherein the stored data are judged by the DeepHand system: each region of the fingers is assigned a number, and by judging the digit positions of a gesture motion, the digit positions of the next gesture are deduced, predicting how the gesture is likely to change; by then locating the adjacent region, gesture motions that the camera cannot capture directly can still be displayed, ensuring that the virtual hand's movements are displayed quickly and accurately.
8. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S6, a classifier assigns the different data to gesture classes according to the extracted rule features, wherein the classifiers include decision tree classifiers, selection tree classifiers, and evidence classifiers; a decision tree classifier is given a set of attributes and makes a series of decisions on the basis of the attribute set, each decision being represented by a node of the tree, and this graphical representation helps the user understand the classification algorithm and provides a valuable viewpoint on the data; a selection tree classifier contains special selection nodes with multiple branches, so that a variety of situations are weighed together when classifying; an evidence classifier classifies the data by examining the probability that a specific outcome occurs, given an attribute.
9. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S7, rapidly matching the classified data against a big-data network means that the classified data are extracted and matched against the big-data network; if the match succeeds, the matched data are output and the match record is stored; if the match fails, the original data are returned and the failure information is fed back to the user.
10. The air gesture recognition method for the VR field according to claim 1, characterized in that: in step S8, presenting the correctly matched gesture motion in the VR environment means that the successfully matched data are presented in the VR environment through the display screen, and the user's feedback is recorded and stored, which facilitates the next adjustment.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910240627.7A CN109917921A (en) | 2019-03-28 | 2019-03-28 | An air gesture recognition method for VR field |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910240627.7A CN109917921A (en) | 2019-03-28 | 2019-03-28 | An air gesture recognition method for VR field |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109917921A (en) | 2019-06-21 |
Family
ID=66967232
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910240627.7A Pending CN109917921A (en) | 2019-03-28 | 2019-03-28 | An air gesture recognition method for VR field |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109917921A (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101344816A (en) * | 2008-08-15 | 2009-01-14 | South China University of Technology | Human-computer interaction method and device based on gaze tracking and gesture recognition |
| US9164596B1 (en) * | 2012-10-22 | 2015-10-20 | Google Inc. | Method and apparatus for gesture interaction with a photo-active painted surface |
| CN105045398A (en) * | 2015-09-07 | 2015-11-11 | Harbin Yishe Technology Co., Ltd. | Virtual reality interaction device based on gesture recognition |
| CN205080499U (en) * | 2015-09-07 | 2016-03-09 | Harbin Yishe Technology Co., Ltd. | Virtual reality interaction equipment based on gesture recognition |
| CN105536205A (en) * | 2015-12-08 | 2016-05-04 | Tianjin University | Upper limb training system based on monocular video human body action sensing |
| CN106648103A (en) * | 2016-12-28 | 2017-05-10 | Goertek Technology Co., Ltd. | Gesture tracking method for VR headset device and VR headset device |
| CN106815578A (en) * | 2017-01-23 | 2017-06-09 | Chongqing University of Posts and Telecommunications | A gesture recognition method based on scale-invariant feature transform of depth motion maps |
| CN106970701A (en) * | 2016-01-14 | 2017-07-21 | Yutou Technology (Hangzhou) Co., Ltd. | A gesture change recognition method |
| CN107479715A (en) * | 2017-09-29 | 2017-12-15 | Guangzhou Yunyou Network Technology Co., Ltd. | Method and device for realizing virtual reality interaction by using gesture control |
| CN207198800U (en) * | 2017-10-17 | 2018-04-06 | Shijiazhuang University | A gesture-recognition-based VR three-dimensional experience system |
| CN108629272A (en) * | 2018-03-16 | 2018-10-09 | Shanghai Lingzhi Technology Co., Ltd. | An embedded gesture control method and system based on a monocular camera |
- 2019-03-28: CN application CN201910240627.7A filed, published as CN109917921A (en), status Pending
Non-Patent Citations (2)
| Title |
|---|
| WEIXIN_3375760: "No VR peripherals needed: Princeton students use DeepHand to free your hands", 《HTTPS://BLOG.CSDN.NET/WEIXIN_33757609/ARTICLE/DETAILS/89744998》 * |
| SUN, MOLI: "Image classification algorithm based on cloud computing", Modern Electronics Technique (《现代电子技术》) * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7397786B2 (en) | Cross-modal processing methods, devices, electronic devices and computer storage media | |
| US10614310B2 (en) | Behavior recognition | |
| KR102462934B1 (en) | Video analysis system for digital twin technology | |
| Ojha et al. | Vehicle detection through instance segmentation using mask R-CNN for intelligent vehicle system | |
| CN114758362B (en) | Clothes-changing pedestrian re-identification method based on semantic-aware attention and visual masking | |
| CN111444968A (en) | Image description generation method based on attention fusion | |
| CN109325538A (en) | Object detection method, device and computer readable storage medium | |
| CN115797736B (en) | Object detection model training and object detection method, device, equipment and medium | |
| CN105051755A (en) | Part and state detection for gesture recognition | |
| CN110796018A (en) | A Hand Motion Recognition Method Based on Depth Image and Color Image | |
| CN108734194A | A human joint point recognition method based on a single depth map for virtual reality | |
| CN118710883A (en) | A lightweight network target detection method based on structure optimization and feature fusion | |
| Tan et al. | A survey of zero shot detection: Methods and applications | |
| CN114241379B (en) | Passenger abnormal behavior identification method, device, equipment and passenger monitoring system | |
| Liu et al. | Towards interpretable and robust hand detection via pixel-wise prediction | |
| CN113419623A (en) | Non-calibration eye movement interaction method and device | |
| Mishra et al. | Sensing accident-prone features in urban scenes for proactive driving and accident prevention | |
| CN119479064B (en) | A mine worker violation target detection method, system, device and storage medium | |
| CN118840646A (en) | Image processing analysis system based on deep learning | |
| CN118212688A (en) | Personnel activity analysis method and system based on image recognition | |
| Sadiq et al. | Enhance the ai virtual system accuracy with novel hand gesture recognition algorithm comparing to convolutional neural network | |
| Musunuri et al. | Object detection using ESRGAN with a sequential transfer learning on remote sensing embedded systems | |
| Mohamed et al. | Sign Language Recognition System for Service-Oriented Environment | |
| CN112749701B (en) | License plate offset classification model generation method and license plate offset classification method | |
| CN109917921A (en) | An air gesture recognition method for VR field |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20190621 |