CN108733287A - Detection method, device, equipment and the storage medium of physical examination operation - Google Patents
- Publication number
- Publication number: CN108733287A (application number CN201810463925.8A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- physical examination
- model
- examination operation
- virtual human
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Public Health (AREA)
- Physics & Mathematics (AREA)
- Primary Health Care (AREA)
- General Health & Medical Sciences (AREA)
- Epidemiology (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Business, Economics & Management (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present application provide a detection method, apparatus, device, and storage medium for physical examination operations. A virtual human model is displayed on a screen; the physical examination operation performed by the user is detected and mapped onto the displayed virtual human model; and when the operation is mapped onto a target area of the model, preset feedback information is output. A real physical examination scenario can thus be simulated, helping users identify shortcomings in the specific operations they perform and rapidly and efficiently improve their physical examination skills.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a detection method, apparatus, device, and storage medium for physical examination operations.
Background technology
Physical examination is one of the essential skills of a clinician and an important step in diagnosis and treatment.
In the prior art, a physician's physical examination skills are usually trained with an online standardized patient (SP) system. Such a system typically displays a planar or three-dimensional human model on the screen for the user to examine. The user clicks a body part of the model with a mouse or on a touch screen, and the SP system presents a list of operations available for that part, such as palpation, percussion, inspection, and auscultation; the user then selects an option from the list to obtain the examination result.
Although such a system can train a physician's examination skills to some extent, presenting selectable examination operations to the user also gives the user an indirect hint, and the user is never required to perform the actual examination maneuver during training. This differs greatly from the operations a physician must perform in a real examination scenario, so the training effect is poor.
Invention content
Embodiments of the present application provide a detection method, apparatus, device, and storage medium for physical examination operations, so as to simulate a real physical examination scenario and help users rapidly and efficiently improve their examination skills.
A first aspect of the embodiments of the present application provides a detection method for physical examination operations, including:
displaying a virtual human model on a screen;
detecting a physical examination operation performed by a user, and mapping the physical examination operation onto the virtual human model displayed on the screen;
and outputting preset feedback information when the physical examination operation is mapped onto a target area of the virtual human model.
In one possible design, detecting the physical examination operation performed by the user and mapping it onto the virtual human model displayed on the screen includes:
obtaining a first depth map of the user performing the physical examination operation;
identifying, based on a preset first gesture classification model, the gesture contained in the first depth map;
and mapping a virtual model of the gesture onto the virtual human model based on the depth information of the gesture in the first depth map.
In one possible design, detecting the physical examination operation performed by the user and mapping it onto the virtual human model displayed on the screen includes:
obtaining an RGB image and a second depth image of the user performing the physical examination operation, the RGB image and the second depth image coinciding with each other;
identifying, based on a preset second gesture classification model, the gesture contained in the RGB image;
and obtaining the depth information of the gesture from the second depth image, and mapping a virtual model of the gesture onto the virtual human model based on that depth information.
In one possible design, identifying the gesture contained in the RGB image based on the preset second gesture classification model includes:
cropping the region containing the gesture from the RGB image;
performing grayscale processing and/or compression on the region to obtain a target image;
and identifying the gesture from the target image based on the preset second gesture classification model.
In one possible design, after mapping the virtual model of the gesture onto the virtual human model based on the depth information, the method further includes:
tracking the user's gesture, and controlling the virtual model of the gesture to perform the corresponding action on the virtual human model.
In one possible design, detecting the physical examination operation performed by the user and mapping it onto the virtual human model displayed on the screen includes:
obtaining a voice instruction from the user;
and controlling the virtual human model to perform the corresponding action based on the voice instruction.
In one possible design, the method further includes:
collecting the user's inquiry information;
and feeding back a corresponding voice answer based on the inquiry information and the mapping position of the physical examination operation on the virtual human model.
In one possible design, after detecting the physical examination operation performed by the user and mapping it onto the virtual human model displayed on the screen, the method further includes:
scoring the user based on the physical examination operation.
A second aspect of the embodiments of the present application provides a detection apparatus for physical examination operations, including:
a display module, configured to display a virtual human model on a screen;
a detection module, configured to detect a physical examination operation performed by a user and map the physical examination operation onto the virtual human model displayed on the screen;
and a first output module, configured to output preset feedback information when the physical examination operation is mapped onto a target area of the virtual human model.
In one possible design, the detection module includes:
a first acquisition submodule, configured to obtain a first depth map of the user performing the physical examination operation;
a first identification submodule, configured to identify, based on a preset first gesture classification model, the gesture contained in the first depth map;
and a first mapping submodule, configured to map a virtual model of the gesture onto the virtual human model based on the depth information of the gesture in the first depth map.
In one possible design, the detection module includes:
a second acquisition submodule, configured to obtain an RGB image and a second depth image of the user performing the physical examination operation, the RGB image and the second depth image coinciding with each other;
a second identification submodule, configured to identify, based on a preset second gesture classification model, the gesture contained in the RGB image;
and a second mapping submodule, configured to obtain the depth information of the gesture from the second depth image and map a virtual model of the gesture onto the virtual human model based on that depth information.
In one possible design, the second identification submodule is specifically configured to:
crop the region containing the gesture from the RGB image;
perform grayscale processing and/or compression on the region to obtain a target image;
and identify the gesture from the target image based on the preset second gesture classification model.
In one possible design, the apparatus further includes a tracking module, configured to:
track the user's gesture, and control the virtual model of the gesture to perform the corresponding action on the virtual human model.
In one possible design, the detection module further includes:
a third acquisition submodule, configured to obtain a voice instruction from the user;
and a first control submodule, configured to control the virtual human model to perform the corresponding action based on the voice instruction.
In one possible design, the apparatus further includes:
a voice acquisition module, configured to collect the user's inquiry information;
and a second output module, configured to feed back a corresponding voice answer based on the inquiry information and the mapping position of the physical examination operation on the virtual human model.
In one possible design, the apparatus further includes:
a scoring module, configured to score the user based on the user's physical examination operation.
A third aspect of the embodiments of the present application provides a detection device, including:
one or more processors;
an image sensor connected to the processors, configured to capture images of the user performing the physical examination operation;
a voice sensor connected to the processors, configured to collect the user's voice instructions and/or inquiry information;
a voice player connected to the processors, configured to output the voice feedback corresponding to the voice instructions and/or inquiry information;
a display connected to the processors, configured to display the virtual human model and the mapping of the physical examination operation on the virtual human model;
and a storage apparatus, configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method described in the first aspect above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described in the first aspect above is implemented.
Based on the above aspects, the embodiments of the present application display a virtual human model on a screen, detect the physical examination operation performed by the user, map the operation onto the displayed virtual human model, and output preset feedback information when the operation is mapped onto a target area of the model. Because the physical examination operation the user actually performs can be mapped onto the virtual human model, and preset feedback is output when the operation lands on a target area, such as the correct examination site, practical operation is combined with the virtual model: the examination scenario is simulated realistically, helping the user accumulate practical experience quickly and improve examination skills.
It should be understood that the content described in this Summary is not intended to limit key or essential features of the embodiments of the present application, nor to limit the scope of the present application. Other features of the present application will become easy to understand from the description below.
Description of the drawings
Fig. 1 is a flowchart of a detection method for physical examination operations provided by an embodiment of the present application;
Fig. 2 is a flowchart of one way of performing step S12 provided by an embodiment of the present application;
Fig. 3 is a flowchart of another way of performing step S12 provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a detection apparatus for physical examination operations provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a detection module 42 provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another detection module 42 provided by an embodiment of the present application.
Specific implementation mode
Embodiments of the present application are described more fully below with reference to the accompanying drawings. Although certain embodiments of the present application are shown in the drawings, it should be understood that the present application can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present application are for illustration only and are not intended to limit its scope of protection.
The terms "first", "second", "third", "fourth", and the like (if present) in the specification, claims, and drawings of the embodiments of the present application are used to distinguish similar objects and are not used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. In addition, the terms "comprising" and "having" and any variations of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
An embodiment of the present application provides a detection method for physical examination operations, which can be performed by a detection device. Referring to Fig. 1, a flowchart of the method, the method includes steps S11-S13:
S11: Display a virtual human model on a screen.
The virtual human model in this embodiment can be a planar human model, a three-dimensional model, or a model of another form; this embodiment places no specific limitation on it.
S12: Detect the physical examination operation performed by the user, and map the operation onto the virtual human model displayed on the screen.
In this embodiment, "physical examination" refers to the general examination a physician performs to find the cause of a disease and/or the afflicted body part.
A physical examination operation in this embodiment includes at least one of the following: a gesture operation; a voice instruction (for example, lie down, lie on the back, or open the mouth, though not limited to these examples); an inquiry operation; an operation performed with an auxiliary examination instrument such as a stethoscope; and other operations required during a physical examination.
The physical examination operation performed by the user can be detected by corresponding sensors. For example, an image sensor (such as a depth camera or an RGB camera) can detect the user's gesture operations and/or operations performed with auxiliary examination instruments, and a voice sensor (such as a microphone) can collect the user's voice instructions and/or inquiry operations.
Further, when mapping a physical examination operation onto the virtual human model, different mapping methods can be adopted for different operations. For example, after the user's gesture is recognized from the image captured by the image sensor using a preset image recognition algorithm, the gesture can be virtualized to obtain a virtual model of the user's gesture, and this virtual model can be mapped onto a preset position of the virtual human model or the screen; by tracking the movement of the user's gesture, the virtual model can then be controlled to move to any position on the virtual human model or the screen. Alternatively, when the captured image includes a depth map, the virtual model of the gesture can be mapped onto the corresponding position in the screen coordinate system or the virtual human model coordinate system according to the depth information of the gesture in the depth map, and the corresponding gesture operation can be performed (for example, pressing, though not limited to pressing); this position may or may not be on the virtual human model. Afterwards, a method similar to the above can be used to control the virtual model of the gesture to move on the screen. The mapping method for operations the user performs with auxiliary examination instruments is similar to that of gesture operations and is not repeated here.
As another example, when the voice sensor collects a voice instruction from the user, semantic analysis can first be performed on the instruction, and the virtual human model is then controlled to perform the corresponding action based on the output of the analysis; for example, when the user issues the instruction "lie down", the virtual human model is controlled to lie down. When the voice sensor collects inquiry information from the user, a corresponding voice answer can be fed back based on the inquiry information and the mapping position of the user's gesture on the virtual human model in the current examination operation; for example, when the user's gesture is mapped onto the abdomen of the virtual human model and the user asks "Does it hurt here?", a preset voice answer is output through the loudspeaker. These are, of course, examples for clarity only, not exclusive limitations on the present application.
S13: Output preset feedback information when the physical examination operation is mapped onto a target area of the virtual human model.
In this embodiment, a target area can be a preset region on the virtual human model; there can be one or more target areas. A target area may include the afflicted body part or a position from which symptoms can be judged, and may also include other positions on the virtual human model; the "other positions" here are of no or little help in judging the patient's condition. Alternatively, in another possible scenario, the target area can be a region selected by the user performing the examination, on which the user then performs a further examination operation (such as pressing). The specific way the user selects the target area is not limited in this embodiment.
Different target areas may correspond to different feedback information in this embodiment. For example, when the target area onto which the examination operation is mapped is the afflicted part, feedback indicating that this is the afflicted part may be output (for example "It hurts here", though not limited to this example); the feedback can be voice information, text information, or other information. For instance, when the target area onto which the operation is mapped is the afflicted part, an interactive device worn or held by the user can output vibration or resistance feedback, so that the user recognizes that this region is the afflicted part, simulating the real tactile feedback of the afflicted part when touched or pressed. This is, of course, an illustration only, not an exclusive limitation on the present invention.
Further, after the user finishes the physical examination operation, the detection device can score the user's operation based on a preset scoring strategy, and can even output the shortcomings that appeared in this examination operation. The scoring strategy can be set as needed and is not limited in this embodiment.
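Since the scoring strategy is left open by the embodiment, one toy strategy consistent with the description (a score plus a list of shortcomings) is a checklist comparison. The step names and the 100-point/penalty scheme are assumptions for illustration only.

```python
def score_examination(performed_steps, required_steps, penalty=10):
    """Toy scoring strategy: start at 100 and deduct `penalty` points for
    each required examination step the user did not perform. Returns the
    score and the missed steps (the 'shortcomings' to report back)."""
    missed = [s for s in required_steps if s not in performed_steps]
    return max(0, 100 - penalty * len(missed)), missed
```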
In this embodiment, a virtual human model is displayed on the screen, the physical examination operation performed by the user is detected and mapped onto the displayed virtual human model, and preset feedback information is output when the operation is mapped onto a target area of the model. Because the operation the user actually performs is mapped onto the virtual human model, and preset feedback is output when the operation lands on a target area such as the correct examination site, practical operation is combined with the virtual model: the examination scenario is simulated realistically, helping the user accumulate practical experience quickly and improve examination skills.
The above embodiment is further optimized and extended below with reference to the accompanying drawings.
Fig. 2 is a flowchart of one way of performing step S12. As shown in Fig. 2, on the basis of the embodiment of Fig. 1, step S12 may include:
S21: Obtain a first depth map of the user performing the physical examination operation.
In this embodiment, the image sensor carried by the detection device includes a depth camera, which can be used to photograph the gesture operations the user performs during the examination and obtain a first depth map containing the gesture.
S22: Identify, based on a preset first gesture classification model, the gesture contained in the first depth map.
The first gesture classification model can be any model capable of recognizing gestures from depth maps, trained with any existing model training method. A convolutional neural network model is taken as the example in this embodiment.
Illustratively, to train the above convolutional neural network model, this embodiment can first obtain a preset number of depth maps containing various gestures, for example: one palm with five fingers spread, one palm with five fingers together, one palm with three fingers together, one hand holding a stethoscope with thumb and index finger, and both hands with fingers spread. Further, before model training, in order to reduce the amount of computation, a preset image processing method can be used to preprocess those depth maps. For example, in one possible design, for each depth map, the edge positions of the gesture can be detected by an edge detection method; then, based on the topmost, bottommost, leftmost, and rightmost edge positions of the gesture in the depth map, the region containing the gesture is cropped from the depth map, and compression and/or grayscale processing is applied to this region to obtain the preprocessed image. The convolutional neural network model for gesture recognition can then be trained on the images obtained by preprocessing. This is, of course, an illustration only, not an exclusive limitation on the present invention.
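The crop-and-compress preprocessing described above can be sketched with a tight bounding box over the hand pixels. This is an assumption-laden stand-in: instead of the edge-detection method the text mentions, it simply treats every non-background depth value as part of the hand, and "compression" is naive pixel decimation.

```python
import numpy as np

def crop_gesture_region(depth, background=0):
    """Crop the tightest bounding box around non-background (hand) pixels,
    i.e. the top/bottom/left/right extremes described in the text."""
    ys, xs = np.nonzero(depth != background)
    if ys.size == 0:
        return None  # no hand in this frame
    return depth[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def compress(img, factor=2):
    """Naive compression: keep every `factor`-th pixel along each axis."""
    return img[::factor, ::factor]
```

The cropped, compressed patches would then form the training inputs for the gesture classification model.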
S23: Map a virtual model of the gesture onto the virtual human model based on the depth information of the gesture in the first depth map.
Illustratively, suppose the spatial coordinate of any point of the gesture in the depth map is (x, y, z); that point can then be mapped to the coordinate point (x1, y1, z1) in the space where the virtual human model is located. Alternatively, based on an existing SDK gesture algorithm library, 22 connection points of the gesture in the space of the virtual human model can be obtained, and the outermost of these 22 points can be connected in a Full-hand pattern to obtain the contour of the gesture in the space of the virtual human model; the virtual model of the gesture in that space can then be obtained with the canvas drawing technology of the browser built into the detection device. This is, of course, an illustration only, not an exclusive limitation on the present application.
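The point-by-point mapping of (x, y, z) into the model's space can be sketched as a linear transform. The calibration values (origin and scale) below are pure assumptions; a real system would calibrate them against the camera and the displayed model.

```python
def map_point_to_model(point, screen_size, model_origin, model_scale):
    """Map one gesture point (pixel x, y plus depth z) into the virtual
    human model's coordinate space with a simple linear transform."""
    x, y, z = point
    sw, sh = screen_size          # screen resolution in pixels
    ox, oy, oz = model_origin     # assumed model-space origin (calibrated)
    return (ox + (x / sw) * model_scale,
            oy + (y / sh) * model_scale,
            oz + z * model_scale)
```

Applying this to every gesture point (or to the 22 connection points mentioned above) yields the coordinates at which the virtual hand is drawn.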
In this embodiment, a depth map of the user performing the physical examination operation is captured, the user's gesture is recognized from the depth map, and the gesture is mapped onto the virtual human model based on its depth information in the depth map. The captured user gesture can thus be accurately mapped onto the virtual human model, tightly combining the virtual scene with the real scene and improving the authenticity of the experience.
Fig. 3 is a flowchart of another way of performing step S12. As shown in Fig. 3, on the basis of the embodiment of Fig. 1, step S12 may include:
S31: Obtain an RGB image and a second depth image of the user performing the physical examination operation, the RGB image and the second depth image coinciding with each other.
In this embodiment, the image sensor carried by the detection device includes a depth camera and an RGB camera. The depth camera can be used to photograph the gesture operations the user performs during the examination and obtain a second depth map containing the gesture. The name "second depth map" is used only to distinguish it from the first depth map of the previous embodiment and has no other meaning; the first depth map captured in the previous embodiment and the second depth map captured in this embodiment can be the same depth map or different depth maps. The RGB camera in this embodiment can be used to capture an RGB image of the user performing the examination operation; an RGB image and a second depth map captured at the same moment coincide with each other.
S32: Identify, based on a preset second gesture classification model, the gesture contained in the RGB image.
The training method of the second gesture classification model is similar to that of the previous embodiment and is not repeated here.
In fact, any one in following operation when identifying gesture from RGB image, may be used in the present embodiment:
In one possible operation, the acquired RGB image can be input directly into the preset second gesture classification model, which identifies the gesture from the RGB image.
In another possible operation, to reduce the computation of gesture recognition and improve recognition efficiency, a preset preprocessing operation can first be performed on the acquired RGB image, and the gesture is then recognized from the processed image. For example, this embodiment may adopt a preprocessing method similar to that of the above embodiment: first crop the region containing the gesture from the RGB image, then apply grayscale processing and/or compression to that region to obtain a target image, and identify the gesture from the target image using the second gesture classification model. If the resulting target image is not square, the narrower side of the target image can first be padded to a square with a preset color, and gesture recognition is then performed on the padded image.
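The crop / grayscale / pad-to-square pipeline described above can be sketched in pure NumPy. Function and parameter names are illustrative, and `pad_value` is a hypothetical stand-in for the "preset color":

```python
import numpy as np

def preprocess(rgb, box, pad_value=0):
    """Crop the gesture region from an RGB frame, convert it to grayscale,
    and pad the narrower side with a preset value to make it square.
    box = (x0, y0, x1, y1) in pixel coordinates."""
    x0, y0, x1, y1 = box
    patch = rgb[y0:y1, x0:x1]
    # Luminance-weighted grayscale conversion.
    gray = (patch @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
    h, w = gray.shape
    side = max(h, w)
    square = np.full((side, side), pad_value, dtype=np.uint8)
    square[:h, :w] = gray          # original pixels; the rest stays padded
    return square
```

Working on the small cropped square rather than the full frame is what yields the reduced computation the embodiment aims for.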
Identifying a gesture from an RGB image requires less computation than identifying a gesture from a depth map; this embodiment therefore preferably identifies the gesture from the RGB image.
S33. Obtain the depth information of the gesture from the second depth image, and map the virtual model of the gesture onto the virtual human model based on the depth information.
In an actual scenario, after the virtual model of the user's gesture has been mapped onto the virtual human model, the depth camera captures the user's physical examination operation in real time. The user's gesture is tracked in real time based on the depth maps obtained from this capture, and the tracked gesture state is mapped onto the virtual model of the gesture, so that the virtual model of the gesture executes the corresponding action on the virtual human model.
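The real-time tracking just described amounts to a per-frame loop: estimate the gesture state from each newly captured depth map and re-apply it to the gesture's virtual model. A schematic sketch with hypothetical `classify` and `update_model` callbacks standing in for the classifier and the rendering step:

```python
def track_and_update(depth_frames, classify, update_model):
    """For each real-time depth frame, re-estimate the gesture state and
    apply it so the gesture's virtual model acts on the virtual human
    model. `classify` and `update_model` are illustrative callbacks."""
    states = []
    for frame in depth_frames:
        state = classify(frame)   # gesture state from this frame
        update_model(state)       # drive the mapped virtual gesture model
        states.append(state)
    return states
```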
By capturing an RGB image and a depth map simultaneously, this embodiment identifies the user's gesture from the RGB image and maps the gesture onto the virtual human model using the gesture's depth information in the depth map. This combines the virtual scene with the actual operation while reducing the computation of gesture recognition and improving efficiency.
Fig. 4 is a schematic structural diagram of a detection apparatus for physical examination operations provided by an embodiment of the present application. As shown in Fig. 4, the apparatus includes:
a display module 41, configured to display a virtual human model on a screen;
a detection module 42, configured to detect the physical examination operation executed by a user and map the physical examination operation onto the virtual human model displayed on the screen;
a first output module 43, configured to output preset feedback information when the physical examination operation is mapped onto a target area of the virtual human model.
Optionally, the detection module 42 further includes:
a third acquisition submodule, configured to obtain a voice instruction from the user;
a first control submodule, configured to control the virtual human model to execute a corresponding action based on the voice instruction.
Optionally, the apparatus may further include:
a voice acquisition module, configured to collect the user's inquiry information;
a second output module, configured to feed back a corresponding voice answer based on the inquiry information and the mapping position of the physical examination operation on the virtual human model.
Optionally, the apparatus may further include:
a scoring module, configured to score the user based on the user's physical examination operation.
Optionally, the apparatus further includes a tracking module, configured to:
track the user's gesture, and control the virtual model of the gesture to execute a corresponding action on the virtual human model.
The detection apparatus provided in this embodiment can be used to execute the method of the embodiment of Fig. 1; its execution manner and beneficial effects are similar and are not repeated here.
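The module layout listed above can be pictured as a small composition of callables. A hypothetical sketch — class, method, and region names are illustrative, not from the patent:

```python
class DetectionApparatus:
    """Illustrative composition of display / detection / output modules."""

    def __init__(self, target_area, feedback="preset feedback"):
        self.target_area = set(target_area)  # target regions of the model
        self.feedback = feedback

    def display(self):
        # Display module 41: show the virtual human model on screen.
        return "virtual human model shown"

    def on_operation(self, mapped_region):
        # Detection module 42 maps the operation onto the model; the
        # first output module 43 emits preset feedback when the mapped
        # operation lands on a target area of the virtual human model.
        if mapped_region in self.target_area:
            return self.feedback
        return None
```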
Fig. 5 is a schematic structural diagram of the detection module 42 provided by an embodiment of the present application. As shown in Fig. 5, on the basis of the embodiment of Fig. 4, the detection module 42 includes:
a first acquisition submodule 421, configured to obtain a first depth map of the user executing the physical examination operation;
a first identification submodule 422, configured to identify, based on a preset first gesture classification model, the gesture contained in the first depth map;
a first mapping submodule 423, configured to map the virtual model of the gesture onto the virtual human model based on the depth information of the gesture in the first depth map.
The detection apparatus provided in this embodiment can be used to execute the method of the embodiment of Fig. 2; its execution manner and beneficial effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of the detection module 42 provided by an embodiment of the present application. As shown in Fig. 6, on the basis of the embodiment of Fig. 4, the detection module 42 includes:
a second acquisition submodule 424, configured to obtain an RGB image and a second depth image of the user executing the physical examination operation, where the RGB image and the second depth image overlap;
a second identification submodule 425, configured to identify, based on a preset second gesture classification model, the gesture contained in the RGB image;
a second mapping submodule 426, configured to obtain the depth information of the gesture from the second depth image and map the virtual model of the gesture onto the virtual human model based on the depth information.
Optionally, the second identification submodule 425 is specifically configured to:
crop the region containing the gesture from the RGB image;
apply grayscale processing and/or compression to the region to obtain a target image;
identify the gesture from the target image based on the preset second gesture classification model.
The detection apparatus provided in this embodiment can be used to execute the method of the embodiment of Fig. 3; its execution manner and beneficial effects are similar and are not repeated here.
An embodiment of the present application further provides a detection device, including:
one or more processors;
an image sensor, connected to the processor(s) and configured to collect images while the user executes a physical examination operation;
a speech sensor, connected to the processor(s) and configured to collect the user's voice instructions and/or inquiry information;
a speech player, connected to the processor(s) and configured to output voice feedback corresponding to the voice instructions and/or inquiry information;
a display, connected to the processor(s) and configured to display the virtual human model and the mapping of the physical examination operation on the virtual human model;
and a storage device, configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method described in any of the above embodiments.
The functions described herein may be executed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
Program code for implementing the disclosed methods may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be carried out. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains several specific implementation details, these should not be construed as limitations on the scope of the disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation can also be implemented in multiple implementations, separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely exemplary forms of implementing the claims.
Claims (10)
1. A detection method for a physical examination operation, characterized by comprising:
displaying a virtual human model on a screen;
detecting a physical examination operation executed by a user, and mapping the physical examination operation onto the virtual human model displayed on the screen;
when the physical examination operation is mapped onto a target area of the virtual human model, outputting preset feedback information.
2. The method according to claim 1, characterized in that the detecting a physical examination operation executed by a user and mapping the physical examination operation onto the virtual human model displayed on the screen comprises:
obtaining a first depth map of the user executing the physical examination operation;
identifying, based on a preset first gesture classification model, a gesture contained in the first depth map;
mapping a virtual model of the gesture onto the virtual human model based on depth information of the gesture in the first depth map.
3. The method according to claim 1, characterized in that the detecting a physical examination operation executed by a user and mapping the physical examination operation onto the virtual human model displayed on the screen comprises:
obtaining an RGB image and a second depth image of the user executing the physical examination operation, wherein the RGB image and the second depth image overlap;
identifying, based on a preset second gesture classification model, a gesture contained in the RGB image;
obtaining depth information of the gesture from the second depth image, and mapping a virtual model of the gesture onto the virtual human model based on the depth information.
4. The method according to claim 3, characterized in that the identifying, based on a preset second gesture classification model, a gesture contained in the RGB image comprises:
cropping a region containing the gesture from the RGB image;
applying grayscale processing and/or compression to the region to obtain a target image;
identifying the gesture from the target image based on the preset second gesture classification model.
5. A detection apparatus for a physical examination operation, characterized by comprising:
a display module, configured to display a virtual human model on a screen;
a detection module, configured to detect a physical examination operation executed by a user and map the physical examination operation onto the virtual human model displayed on the screen;
a first output module, configured to output preset feedback information when the physical examination operation is mapped onto a target area of the virtual human model.
6. The apparatus according to claim 5, characterized in that the detection module comprises:
a first acquisition submodule, configured to obtain a first depth map of the user executing the physical examination operation;
a first identification submodule, configured to identify, based on a preset first gesture classification model, a gesture contained in the first depth map;
a first mapping submodule, configured to map a virtual model of the gesture onto the virtual human model based on depth information of the gesture in the first depth map.
7. The apparatus according to claim 5, characterized in that the detection module comprises:
a second acquisition submodule, configured to obtain an RGB image and a second depth image of the user executing the physical examination operation, wherein the RGB image and the second depth image overlap;
a second identification submodule, configured to identify, based on a preset second gesture classification model, a gesture contained in the RGB image;
a second mapping submodule, configured to obtain depth information of the gesture from the second depth image and map a virtual model of the gesture onto the virtual human model based on the depth information.
8. The apparatus according to claim 7, characterized in that the second identification submodule is specifically configured to:
crop a region containing the gesture from the RGB image;
apply grayscale processing and/or compression to the region to obtain a target image;
identify the gesture from the target image based on the preset second gesture classification model.
9. A detection device, characterized by comprising:
one or more processors;
an image sensor, connected to the processor(s) and configured to collect images while the user executes a physical examination operation;
a speech sensor, connected to the processor(s) and configured to collect the user's voice instructions and/or inquiry information;
a speech player, connected to the processor(s) and configured to output voice feedback corresponding to the voice instructions and/or inquiry information;
a display, connected to the processor(s) and configured to display the virtual human model and the mapping of the physical examination operation on the virtual human model;
a storage device, configured to store one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-4.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810463925.8A CN108733287A (en) | 2018-05-15 | 2018-05-15 | Detection method, device, equipment and the storage medium of physical examination operation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108733287A true CN108733287A (en) | 2018-11-02 |
Family
ID=63938271
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810463925.8A Pending CN108733287A (en) | 2018-05-15 | 2018-05-15 | Detection method, device, equipment and the storage medium of physical examination operation |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108733287A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020188467A1 (en) * | 2001-05-02 | 2002-12-12 | Louis Eke | Medical virtual resource network |
| CN105975780A (en) * | 2016-05-10 | 2016-09-28 | 华南理工大学 | Machine inquiry system based on virtual reality interaction technology |
| CN106547356A (en) * | 2016-11-17 | 2017-03-29 | 科大讯飞股份有限公司 | Intelligent interactive method and device |
| CN107527542A (en) * | 2017-09-18 | 2017-12-29 | 南京梦宇三维技术有限公司 | Percussion training system based on motion capture |
- 2018-05-15 CN CN201810463925.8A patent/CN108733287A/en active Pending
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111508079A (en) * | 2020-04-22 | 2020-08-07 | 深圳追一科技有限公司 | Virtual clothing fitting method and device, terminal equipment and storage medium |
| CN111508079B (en) * | 2020-04-22 | 2024-01-23 | 深圳追一科技有限公司 | Virtual clothes try-on method and device, terminal equipment and storage medium |
| CN112799507A (en) * | 2021-01-15 | 2021-05-14 | 北京航空航天大学 | Human body virtual model display method, device, electronic device and storage medium |
| CN113325954A (en) * | 2021-05-27 | 2021-08-31 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device, medium and product for processing virtual objects |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20181102 |