
CN102799344B - Virtual touch screen system and method - Google Patents


Info

Publication number
CN102799344B
CN102799344B (application CN201110140079.4A; also published as CN102799344A)
Authority
CN
China
Prior art keywords
depth
connected component
blob
touch operation
operation region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110140079.4A
Other languages
Chinese (zh)
Other versions
CN102799344A (en)
Inventor
张文波
李磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN201110140079.4A priority Critical patent/CN102799344B/en
Publication of CN102799344A publication Critical patent/CN102799344A/en
Application granted granted Critical
Publication of CN102799344B publication Critical patent/CN102799344B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention provides a virtual touch screen method for a touch screen system and a virtual touch screen system using the method. The virtual touch screen method comprises the following steps: initially obtaining depth information of an environment containing a touch operation region, building an initial depth map from the initially obtained depth information, and determining the position of the touch operation region from the initial depth map; continuously capturing images of the environment around the determined touch operation region; detecting, in each captured frame, candidate blobs of at least one object within a predetermined distance in front of the touch operation region; and assigning each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames.

Description

Virtual touch screen system and method
Technical field
The present invention relates to the field of human-computer interaction, and in particular to the field of digital image processing. More particularly, the present invention relates to a virtual touch method for a touch screen system and to a virtual touch screen system adopting this method.
Background technology
Touch screen technology is now used more and more widely for portable HMI devices (for example smart phones) and PCs (for example desktop PCs). Through a touch screen, a user can operate such devices more comfortably and conveniently, which brings a good user experience. Although touch screen technology has been extremely successful on handheld devices, touch screens for large-sized displays still present a number of problems and challenges.
US Patent No. 7,151,530 B2, assigned to Canesta, Inc. and entitled "System and Method for Determining an Input Selected By a User through a Virtual Interface", proposes a method for selecting which key value in a group of key values is designated as the current key value for an object that intersects a region provided by the virtual interface. The virtual interface can select a single key value from the group and determine position using a depth sensor, which measures the depth of a position relative to the position of the depth sensor. In addition, at least one of a placement property of the object or a style characteristic of the object can be determined. The position information may approximate the depth of the object relative to the position sensor or another reference point. An object is considered detected when a sufficient number of pixels in the camera's pixel array indicate its presence. The shape of the object intersecting the surface of the virtual input region is determined and compared with a number of known shapes (for example a finger or a stylus).
US Patent No. 6,710,770 B2, also assigned to Canesta, Inc. and entitled "Quasi-Three-Dimensional Method And Apparatus To Detect And Localize Interaction Of User-Object And Virtual Transfer Device", discloses a system that uses a virtual device to input or transfer information to a companion device, comprising two optical systems OS1 and OS2. In a structured-light embodiment, OS1 is above the virtual device and emits a fan-beam plane of optical energy parallel to it. When a user object penetrates the beam plane of interest, OS2 registers the event. Triangulation can then locate the virtual contact and transfer the user-intended information to the subsystem. In a non-structured active-light embodiment, OS1 is preferably a digital camera whose field of view defines the plane of interest, which is illuminated by an active light source.
US Patent No. 7,728,821 B2, assigned to TouchTable, Inc. and entitled "Touch Detecting Interactive Display", discloses an interactive display controlled by user gestures identified by detecting touches on the display surface. An image is projected from a projector located above the projection surface onto a horizontal projection surface. A set of infrared emitters and receivers arranged around the periphery of the projection surface detects the positions at which the user contacts the surface. For each contact position, application software stores a history of the position information and, from this history, determines the velocity of each contact position. Gestures are recognized based on this position history and velocity information. The recognized gestures are associated with display commands, which are then executed to update the displayed image. This patent thus lets the user manipulate the display directly through physical interaction with the image.
US Patent No. 7,599,561 B2, assigned to Microsoft and entitled "Compact Interactive Tabletop with Projection-Vision", discloses a vision-based system and method that facilitates projecting any image (still or moving) onto any surface. In particular, it provides a vision-based front-projected computer interaction surface in a novel compact self-contained form factor using commercially available projection technology. The configuration of this system solves the installation, calibration, and portability problems that are the main concerns of most vision-based tabletop systems. The input image is binarized to produce a shadow image, and the shapes of the shadows in this image are then analyzed to determine whether the source of a shadow is "touching" the surface or merely "hovering" above it.
Judging from the prior art mentioned above, most large-sized touch screens are based on magnetic boards (such as electronic whiteboards), IR frames (such as interactive large-sized displays), and the like. Current technical solutions for large-sized touch screens still have many problems. In general, devices of these types are large and heavy because of the volume of their hardware, and are therefore hard to carry and lack portability. Moreover, the screen size of such devices is limited by the hardware: the size is fixed and cannot be freely adjusted to the needs of the environment, and a special electronic pen or IR pen is also needed for operation.
For some virtual whiteboard projectors, the user must control the on/off switch of a laser pen, which is very cumbersome, so the laser pen is hard to manage. Moreover, once the laser pen is switched off, it is difficult to accurately position the laser spot at the next location, so locating the laser spot is also a problem. Some virtual whiteboard projectors replace the laser pen with a finger mouse, but such projectors cannot detect touch-down (touch on) or touch-up (touch up) events.
Summary of the invention
In order to solve the above problems in the prior art, the present invention proposes a virtual touch method for a touch screen system and a virtual touch screen system using this method.
Specifically, the virtual touch method for a touch screen system comprises: initially obtaining depth information of the environment containing a touch operation region, creating an initial depth map from the initially obtained depth information, and determining the position of the touch operation region from the initial depth map; continuously capturing images of the environment around the determined touch operation region; detecting, in each captured frame, candidate blobs of at least one object located within a predetermined distance in front of the touch operation region; and assigning each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames. The step of determining the position of the touch operation region comprises: detecting and labeling connected components in the initial depth map; determining whether a detected and labeled connected component contains the intersection point of the two diagonals of the initial depth map; if a detected and labeled connected component contains the intersection point of the two diagonals of the initial depth map, computing the intersection points of the diagonals of the initial depth map with that connected component; and connecting the computed intersection points in order and taking the convex polygon obtained by the connection as the touch operation region.
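The region-determination steps above — labeling connected components in the initial depth map and keeping one that contains the intersection of the two image diagonals (the image center) — can be sketched as follows. This is a minimal illustration in plain Python, not the patent's implementation; the grid representation and the choice of 4-connected flood fill are assumptions.

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary grid (list of lists of 0/1).
    Returns a grid of labels (0 = background) and the number of components."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1           # start a new component at this seed
                labels[y][x] = current
                q = deque([(y, x)])
                while q:               # breadth-first flood fill
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

def component_at_center(labels):
    """Label of the component covering the intersection of the two image
    diagonals (the image center), or 0 if no component covers it."""
    h, w = len(labels), len(labels[0])
    return labels[h // 2][w // 2]
```

A production system would more likely use a library routine such as OpenCV's connected-components labeling rather than a hand-rolled flood fill.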
According to the virtual touch method for a touch screen system of the present invention, the touch operation region may overlap the projection region of the touch screen system.
Alternatively, according to the virtual touch method for a touch screen system of the present invention, the touch operation region does not overlap the projection region of the touch screen system.
According to the virtual touch method for a touch screen system of the present invention, the step of detecting and labeling the connected components in the initial depth map comprises: calculating the area of each connected component; determining whether the calculated area is greater than a predetermined area threshold; and discarding any connected component whose area is less than the predetermined area threshold.
According to the virtual touch method for a touch screen system of the present invention, the predetermined area threshold is one quarter of the area of the initial depth map.
According to the virtual touch method for a touch screen system of the present invention, the step of connecting the computed intersection points in order and taking the resulting convex polygon as the touch operation region comprises: determining whether the shape formed by connecting the computed intersection points in order is a convex polygon, and, if it is not a convex polygon, discarding the corresponding connected component.
According to the virtual touch method for a touch screen system of the present invention, the step of assigning each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames comprises: receiving the centroid points of a number of new blobs obtained from a new frame; traversing the existing point sequences and, for each existing point sequence, finding the centroid point closest to that sequence, the new blob corresponding to the closest centroid point being taken as the nearest new blob of that sequence; and assigning the new blob nearest to the existing point sequence to that sequence.
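The blob-to-sequence assignment described above can be sketched as a greedy nearest-neighbour step. The function below is a hypothetical illustration only: the names, the `max_dist` cutoff, and the first-come claiming of centroids are assumptions, not the patent's exact procedure.

```python
import math

def assign_blobs(tracks, new_centroids, max_dist):
    """For each existing track (a list of (x, y) points), find the nearest new
    centroid; extend the track if that centroid is within max_dist and not yet
    claimed, otherwise mark the track as ended. Unclaimed centroids start new
    tracks. Returns (extended, ended, started)."""
    extended, ended = [], []
    claimed = set()
    for track in tracks:
        last = track[-1]
        best, best_d = None, None
        for i, c in enumerate(new_centroids):
            d = math.dist(last, c)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= max_dist and best not in claimed:
            claimed.add(best)
            extended.append(track + [new_centroids[best]])
        else:
            ended.append(track)          # no close new blob: sequence has ended
    started = [[c] for i, c in enumerate(new_centroids) if i not in claimed]
    return extended, ended, started
```

Run once per incoming frame; "ended" tracks correspond to touch-up events and "started" tracks to touch-down events in the patent's terms.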
According to the virtual touch method for a touch screen system of the present invention, the step of assigning each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames further comprises: for any existing point sequence, if no new blob is close to it, issuing a notice that the existing point sequence has ended and deleting that sequence.
According to the virtual touch method for a touch screen system of the present invention, the step of assigning each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames further comprises: for any new blob among the new blobs in an input frame, if no existing point sequence is close to it, issuing a notice that this new blob is the starting point of a new point sequence and creating a new point sequence.
According to the virtual touch method for a touch screen system of the present invention, the step of finding, for each existing point sequence, the centroid point nearest to that sequence comprises: inputting an existing point sequence and searching, among the new blobs of the input frame, for new blobs close to that sequence; if no new blob close to the input existing point sequence is found among the new blobs of the input frame, issuing a notice that the input existing point sequence will be deleted; if a new blob close to the input existing point sequence is found among the new blobs of the input frame and the found blob is close only to the input existing point sequence, determining that this new blob belongs to the input existing point sequence; if a new blob close to the input existing point sequence is found but the found blob is also close to other existing point sequences, then, if the distance between the found new blob and the input existing point sequence is smaller than its distance to the other existing point sequences, determining that this new blob is the nearest new blob of the input existing point sequence, and otherwise issuing a notice that the input existing point sequence will be deleted; and checking whether the above steps have been performed for all existing point sequences.
According to the virtual touch method for a touch screen system of the present invention, the step of searching among the new blobs of an input frame for new blobs close to an existing point sequence comprises: inputting a new blob and computing the distance between the input new blob and the input existing point sequence; if the computed distance between the input new blob and the input existing point sequence is less than a predetermined distance threshold Td, inserting the input new blob into a candidate close-blob list belonging to the input existing point sequence, and otherwise checking whether the above steps have been performed for all new blobs; if the size of the candidate close-blob list belonging to the input existing point sequence is less than a predetermined size threshold Tsize, checking whether the above steps have been performed for all new blobs, and otherwise first deleting from the candidate close-blob list the new blob farthest from the input existing point sequence and then performing this check; and, once the above steps have been performed for all new blobs, if the candidate close-blob list of the input existing point sequence is not empty, selecting from the candidate close blobs in the list the new blob nearest to the input existing point sequence as the nearest new blob of that sequence.
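The candidate-list construction with the thresholds Td and Tsize can be condensed into a short helper. This is a sketch under stated assumptions: the concrete Td and Tsize values and the function name are illustrative, and "distance to the point sequence" is simplified to distance from the sequence's last point.

```python
import math

def nearest_blob(track_end, blobs, td=40.0, tsize=4):
    """Collect blobs within distance td of the track end into a candidate
    list, cap the list at tsize entries by discarding the farthest candidate,
    then return the nearest candidate (or None if the list is empty)."""
    candidates = []
    for b in blobs:
        if math.dist(track_end, b) < td:
            candidates.append(b)
            if len(candidates) > tsize:
                # list full: drop the candidate farthest from the track end
                candidates.remove(max(candidates, key=lambda c: math.dist(track_end, c)))
    if not candidates:
        return None   # the sequence has no close blob and will be deleted
    return min(candidates, key=lambda c: math.dist(track_end, c))
```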
According to the virtual touch method for a touch screen system of the present invention, the method further comprises performing coordinate optimization on the finally obtained point sequence to smooth it, using the following formula:

p_n^k = (1/m) · Σ_{j=n}^{n+m-1} p_j^{k-1},

where p_j^{k-1} is a point in the point sequence, k is the iteration index, n is the point index, and m is the number of points averaged in each iteration.
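The smoothing formula above is an iterated moving average over windows of m points. A minimal sketch (function names and the default iteration count are illustrative assumptions):

```python
def smooth_once(points, m):
    """One iteration of p_n^k = (1/m) * sum_{j=n}^{n+m-1} p_j^{k-1},
    applied per coordinate. The output is m-1 points shorter than the input."""
    out = []
    for n in range(len(points) - m + 1):
        window = points[n:n + m]
        # average x's and y's of the window separately
        out.append(tuple(sum(c) / m for c in zip(*window)))
    return out

def smooth(points, m=2, iterations=3):
    """Apply the moving average k times (k = iterations)."""
    for _ in range(iterations):
        if len(points) < m:
            break
        points = smooth_once(points, m)
    return points
```

Each pass trades a slightly shorter sequence for reduced jitter in the drawn stroke.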
According to another aspect of the present invention, a virtual touch screen system is provided, comprising: a projector that projects an image onto a projection surface; a depth camera that obtains depth information of the environment containing a touch operation region; a depth map processing unit that creates an initial depth map from the depth information obtained by the depth camera in the initial state and determines the position of the touch operation region from the initial depth map; an object detecting unit that detects, in each frame continuously captured by the depth camera after the initial state, candidate blobs of at least one object located within a predetermined distance in front of the determined touch operation region; and a tracking unit that assigns each blob to a corresponding point sequence according to the temporal and spatial relationship between the centroid points of blobs obtained in two adjacent frames. The depth map processing unit determines the position of the touch operation region by the following process: detecting and labeling connected components in the initial depth map; determining whether a detected and labeled connected component contains the intersection point of the two diagonals of the initial depth map; if so, computing the intersection points of the diagonals of the initial depth map with the detected and labeled connected component; and connecting the computed intersection points in order and taking the convex polygon thus obtained as the touch operation region.
Brief description of the drawings
Fig. 1 is a schematic diagram of the architecture of the virtual touch screen system according to the present invention.
Fig. 2 is an overview flowchart of the object detection and object tracking processing performed by the control unit according to the present invention.
Fig. 3 (a)-(b) are schematic diagrams of determining the touch operation region in a virtual touch screen system containing a hand touch operation region according to the present invention.
Fig. 4 is a flowchart of determining the touch operation region in the virtual touch screen system according to the present invention.
Fig. 5 is a flowchart of determining the vertices of the touch operation region in the virtual touch screen system according to the present invention.
Fig. 6 is a schematic diagram of determining the vertices of the touch operation region in the virtual touch screen system according to the present invention.
Fig. 7 (a)-(c) are schematic diagrams of removing the background depth map from the current depth map.
Fig. 8 is a schematic diagram of binarizing the depth map of the input current scene to obtain candidate object blobs.
Fig. 9 (a) is a schematic diagram of numbering the connected domains of the blobs.
Fig. 9 (b) is a schematic diagram of the binary image of blobs with connected-domain numbers generated from the depth map.
Fig. 10 (a)-(d) are schematic diagrams of the enhancement process of the binary image of blobs.
Fig. 11 is a schematic diagram of the process of detecting the coordinates of the centroid points of the blobs in the binary image of blobs shown in Fig. 10 (d).
Fig. 12 is a schematic diagram of the trajectory of a user's finger or a stylus moving on the screen of the virtual touch screen.
Fig. 13 is a flowchart of tracking detected objects.
Fig. 14 is a flowchart of finding, for every existing track among all existing tracks, its nearest new blob according to the present invention.
Fig. 15 is a flowchart of finding, for an input existing track, the new blob nearest to it.
Fig. 16 shows a method of smoothing the point sequence of the motion track of a detected object obtained on the virtual touch screen according to the present invention.
Fig. 17 (a) is a schematic diagram of the motion track of a detected object obtained on the virtual touch screen according to the present invention.
Fig. 17 (b) is a schematic diagram of the object motion track after smoothing.
Fig. 18 is a schematic diagram of the concrete configuration of the control unit.
Embodiments
Hereinafter, specific embodiments of the present invention are described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of the architecture of the virtual touch screen system according to the present invention. As shown in Fig. 1, the virtual touch screen system according to the present invention comprises a projection device 1, an optical device 2, a control unit 3, and a virtual operation region 4 (when this region overlaps the projection screen, it may also be called the projection surface, projection screen, or virtual screen below). In a specific embodiment of the present invention, the projection device may be a projector, which projects the image to be displayed onto the projection surface 4 as a kind of virtual screen, so that the user can operate on this virtual screen. The optical device 2 may be any device capable of capturing images, for example a depth camera, which obtains depth information of the environment of the virtual operation region 4 and generates a depth map from this depth information. The control unit 3 detects at least one object within a predetermined distance from the surface, along the direction away from the surface, and tracks the detected object to generate a smooth point sequence. The point sequence is used for further interactive tasks, for example painting on the virtual screen or composing interactive commands.
The projection device 1 projects the image onto the projection surface 4 as a virtual screen, so that the user can operate on this virtual screen, for example to draw or compose interactive commands. The optical device 2 captures the environment that includes the projected virtual screen and any object in front of the projection surface 4 (such as a user's finger or a stylus touching the projection surface 4). The optical device obtains the depth information of the environment of the projection surface 4 and generates a depth map from it. A depth map is produced as follows: the depth camera photographs the environment in front of its lens, computes the distance from the depth camera to the object represented by each pixel in the captured scene, and records that distance for each pixel as, for example, a 16-bit value; the 16-bit distance values attached to all pixels then form a map representing the distance between each pixel and the camera. The depth map is subsequently sent to the control unit 3, which detects at least one object within a predetermined distance from the projection surface 4 along the direction away from it. When such an object is detected, its touch actions on the projection surface are tracked to form touch point sequences. The control unit 3 then smooths the formed touch point sequences, thereby implementing the painting function on the virtual interactive screen. Furthermore, these touch point sequences can be combined to generate interactive commands, thereby implementing the interactive function of the virtual touch screen, and the virtual touch screen finally changes according to the generated interactive commands. The present invention can also be carried out with other ordinary cameras and other common foreground object detection systems. To make the tracking scheme of the present invention easy to understand, the detection process of some foreground objects is described first; this detection process is not an essential means of implementing multi-object tracking, but merely a prerequisite for tracking multiple objects. That is, the detection of objects does not belong to object tracking.
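Detecting objects hovering just in front of the touch surface amounts to thresholding the difference between the background depth map and the current 16-bit depth frame. A minimal numpy sketch, assuming raw camera depth units and an illustrative `touch_range` band:

```python
import numpy as np

def candidate_mask(depth, background, touch_range=30):
    """Binarize a depth frame against the background: a pixel is a candidate
    touch pixel when it is closer to the camera than the stored background by
    at most touch_range depth units, i.e. the object lies within a thin band
    in front of the touch operation surface."""
    # widen to signed ints so the subtraction of uint16 maps cannot wrap
    diff = background.astype(np.int32) - depth.astype(np.int32)
    return (diff > 0) & (diff <= touch_range)
```

Pixels much closer than `touch_range` (for example a forearm far in front of the wall) are excluded, which is what distinguishes a touch from a mere occlusion.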
Fig. 18 is a schematic diagram of the concrete configuration of the control unit. The control unit 3 generally comprises a depth map processing unit 31, an object detecting unit 32, an image enhancing unit 33, a coordinate calculating and transforming unit 34, a tracking unit 35, and a smoothing unit 36. The depth map processing unit 31 first creates an initial depth map from the depth information obtained by the depth camera in the initial state and determines the position of the touch operation region from this initial depth map; it then takes each depth map captured and sent by the depth camera as input, processes it to remove the background, and subsequently numbers the connected domains on it. The object detecting unit 32, using the depth information of the depth map from the depth map processing unit 31, binarizes the depth map based on a predetermined depth threshold to form a number of blobs as candidate objects, and then determines which blobs are objects based on the relationship between each blob and the connected domains and on the blob areas. The coordinate calculating and transforming unit 34 computes the centroid point coordinates of the blobs determined to be objects and transforms these centroid coordinates into the target coordinate system, i.e. the coordinate system of the virtual interactive screen. The tracking unit 35 tracks the detected blobs across the successively captured frames to generate sequences of the transformed centroid coordinate points, and the smoothing unit 36 then smooths the generated coordinate point sequences.
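The centroid computation and coordinate transformation performed by unit 34 can be sketched as below. For simplicity the sketch assumes an axis-aligned rectangular touch region; the patent's general four-vertex region would instead use a full homography computed from the detected vertices.

```python
import numpy as np

def blob_centroid(mask):
    """Centroid (x, y) of the True pixels of a binary blob mask."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def to_screen(point, region, screen_size):
    """Map a camera-space point into virtual-screen coordinates, where region
    is the touch region as (x0, y0, x1, y1) in camera pixels and screen_size
    is (width, height) of the virtual screen."""
    x0, y0, x1, y1 = region
    sw, sh = screen_size
    x, y = point
    return ((x - x0) / (x1 - x0) * sw, (y - y0) / (y1 - y0) * sh)
```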
Fig. 2 is an overview flowchart of the object detection and object tracking processing performed by the control unit 3. As shown in Fig. 2, at step S21 the depth map processing unit 31 receives the depth map obtained by the depth camera 2. This depth map is obtained as follows: the depth camera 2 photographs the current environment and, while photographing, measures the distance from each pixel to the depth camera, recording it as a 16-bit value (8 or 32 bits may also be used, depending on actual needs); the 16-bit depth values of all pixels constitute the depth map. For the subsequent processing steps, a background depth map of the virtual operation region without any detected object can be obtained in advance, before the depth map of the current scene is obtained. Thus, an initial depth map can be created from the depth information obtained by the depth camera in the initial state, and the position of the touch operation region can be determined from this initial depth map.
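Capturing the background depth map in advance is typically done by averaging several empty-scene frames so that sensor noise cancels out. A hedged sketch — the frame count and the handling of zero pixels (a common depth-camera dropout convention, assumed here rather than stated in the patent) are illustrative choices:

```python
import numpy as np

def capture_background(frames):
    """Average several empty-scene depth frames into a stable background
    depth map, ignoring zero-valued pixels (assumed to be depth dropouts)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    valid = stack > 0
    counts = valid.sum(axis=0)
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    # per-pixel mean over valid samples; 0 where a pixel was never measured
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```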
Fig. 3 (a)-(b) are schematic diagrams of determining the touch operation region in a virtual touch screen system containing a hand touch operation region according to the present invention. In Fig. 3 (a) the touch operation region overlaps the projection screen. Specifically, a part of the projection screen is the touch operation region, which means that they lie on the same physical plane. Thus, the image frame is projected onto the wall and the user can at the same time perform touch operations on the wall. The color image of the projection region is used to detect the touch operation region; many known methods exist for this case, so it is not described in detail here. In Fig. 3 (b) the touch operation region does not overlap the projection screen. In actual use, such a non-overlapping operation area can be the desktop at the user's hand. The system must define a region for touch operation in order to detect touch operations, and the depth map of the current scene is used to detect this touch operation region. In the virtual touch screen system shown in Fig. 3 (a)-(b), a projection device, for example a projector, projects a frame onto a physical surface to display the output interface. An optical device, for example a depth camera, captures the depth information of the scene around the touch operation region and can generate a depth map from this depth information. The depth map processing unit 31 contains a touch operation region detecting unit, which takes the color image captured by the optical device and the depth map as input, detects a suitable operation region, and computes its coordinates; this region is used for displaying the projected image and performing touch operations. The touch operation region detecting unit lets the system automatically detect and define a region in which the user performs touch operations, instead of requiring the user to define it manually. It takes as input the color image of the projection screen captured by the depth camera and the depth map of the current scene, and computes the vertex coordinates of the touch operation region of the virtual touch screen system. The output of this unit is the coordinates of the four vertices of the touch operation region, which are used for system calibration and for the coordinate transformation of the control unit.
Fig. 4 is a flowchart of determining the touch operation region in the virtual touch screen system according to the present invention. When the virtual touch screen system starts, the optical device 2, for example a depth camera, initially captures depth information of the environment containing the touch operation region; an initial depth map is created based on this initially acquired depth information and is input to the touch operation region detecting unit of the depth map processing unit 31. After receiving the initial depth map, the touch operation region detecting unit detects and labels the connected components in the initial depth map at step S41. Subsequently, at step S42, it is determined whether any connected component exists in the initial depth map. If no connected component exists, the detection process for the touch operation region ends. If connected components exist, they are labeled. The process then enters step S43, where all labeled connected components are examined to find which of them contains the intersection point of the two diagonals of the initial depth map. At step S44 it is judged whether a labeled connected component containing the intersection point of the two diagonals of the initial depth map exists. If no such connected component exists, the detection process ends; otherwise the process enters step S45, where the intersection points of that connected component with the two diagonals of the initial depth map are calculated, yielding the coordinates of these intersection points. Then, at step S46, the calculated intersection points are connected in order, giving the polygon formed by connecting them. At step S47 it is judged whether the obtained polygon is convex. If it is not convex, the detection process ends; otherwise the process enters step S48, and the obtained convex polygon is output as the touch operation region.
Fig. 5 is a detailed flowchart of step S45 of the flowchart in Fig. 4, i.e., of calculating the intersection points of the diagonals of the initial depth map with the connected component that contains the intersection point of the two diagonals. As shown in Fig. 5, first, at step S51, the contour and the area of that connected component are calculated; this step can be implemented by means known to those of ordinary skill in the art. Subsequently, at step S52, the calculated area of the connected component is compared with a predetermined area threshold. This area threshold is generally 1/4 of the area of the initial depth map, but may also be 1/3, 1/5, 2/7, etc.; it is set according to the user's specific needs. Setting this threshold filters out the influence of smaller connected components, since the desired target region must not be too small. If the area of the connected component is less than the threshold, further processing of this component ends; otherwise the process enters step S53, where the intersection points of the connected component (whose area is greater than or equal to the threshold) with the diagonals of the initial depth map are calculated and counted. Afterwards, at step S54, the different cases of the intersection count are handled separately. If the counted number of intersection points is judged at step S54 to be less than 4, no polygon can be formed; the process enters step S541 and further processing of this component ends. If the count equals 4, the process enters step S542 and then step S56. If the count is greater than 4, the process enters step S543 and then step S55. At step S55, for each vertex of the initial depth map, the intersection point nearest to that vertex is found among the obtained intersection points, yielding the coordinates of four intersection points; the remaining intersection points are discarded. The process then enters step S56, where the four obtained intersection points of the connected component with the diagonals of the initial depth map are output. Fig. 6 schematically shows the process of determining the vertices of the touch operation region; a, b, c and d in the figure are the intersection points of the connected component with the diagonals of the initial depth map.
Once the intersection points of the connected component with the diagonals of the initial depth map have been obtained, the position of the touch operation region is known, and simulated touch operation can then be performed on this basis. During touch operation, the depth camera continuously acquires depth images of the touch operation region. At step S22, the depth map processing unit 31 processes each received depth map to remove the background from it, retaining only the depth information of foreground objects, and then numbers the connected domains in the retained depth map. Figs. 7(a)-(c) are schematic diagrams of removing the background from the current depth map. The 16-bit depth values shown in these figures are for convenience of illustration only and need not be displayed when implementing the present invention. Fig. 7(a) shows an example of a background depth map: it contains only the depth map of the projection surface and does not include the depth image of any foreground object. One way to obtain the background depth map is, in the initial stage in which the virtual touch screen system of the present invention starts the virtual touch screen function, to acquire the depth map of the current scene with the optical device 2 and save a snapshot of it, thereby obtaining the background depth map. When this background depth map is acquired, there must be no dynamic object for touching the projection surface anywhere in front of the projection surface 4 (between the optical device 2 and the projection surface 4). Another way to obtain the background depth map is to generate an averaged background depth map from a series of consecutive snapshots rather than from a single one.
Fig. 7(b) shows an example of one captured frame of the depth map of the current scene, in which an object (for example the user's hand or a stylus) is touching the projection surface.
Fig. 7(c) shows an example of one frame of the depth map after the background has been removed. One possible way to remove the background depth is to subtract the background depth map from the depth map of the current scene. Another way is to scan the depth map of the current scene and compare the depth value of every point with that of the corresponding point in the background depth map: if the absolute value of the depth difference of a pair of pixels is within a predetermined threshold, the corresponding point is removed from the depth map of the current scene; otherwise the point is retained without any change. The connected domains in the background-removed depth map are then numbered. A connected domain in the present invention is defined as follows. Suppose the depth camera captures two 3D points; if their projections onto the XY plane (the captured picture) are adjacent to each other and the difference of their depth values is not greater than a given threshold D, they are said to be D-connected to each other. If a D-connected path exists between any two points of a group of 3D points, the group is said to be D-connected. If, for such a group, no adjacent point P in the XY plane can be added to the group without breaking this connectivity condition, the group of D-connected 3D points is said to be maximally D-connected. A connected domain of the present invention is a group of D-connected points in the depth map that is maximally D-connected. The connected domains of the depth map correspond to the continuous mass regions captured by the depth camera. Numbering the connected domains therefore means annotating the 3D points that are D-connected in this sense with the same number; that is, the pixels belonging to the same connected domain are given the same number, generating the numbering matrix of the connected domains.
The numbering matrix of the connected domains is a data structure that marks which points of the depth map belong to which connected domain. Each element of the numbering matrix corresponds to a point in the depth map, and the value of that element is the number of the connected domain to which the point belongs (one number per connected domain).
Then, at step S23, each point in the depth map is binarized based on a depth condition, thereby generating a number of patches as the candidate objects, and the pixels of the patches belonging to the same connected domain are given that connected-domain number. The concrete binarization process is described in detail below. Fig. 8 is a schematic diagram of binarizing the input depth map of the current scene to obtain the patches of the candidate objects. As shown in Fig. 8, the present invention performs the binarization based on the relative depth between each pixel of the depth map of the current scene and the corresponding pixel of the background depth map. In an embodiment of the present invention, the depth value of a pixel, i.e. the distance between the depth camera and the object point that the pixel represents, is retrieved from the depth map of the current scene. In Fig. 8, all pixels are traversed: the depth d of a pixel is retrieved from the input depth map of the current scene, the depth value b of the corresponding pixel is then retrieved from the background depth map, and the difference s = b - d between the background depth b and the object depth d is calculated. If the obtained difference is greater than zero and less than a predetermined distance threshold t, i.e. 0 < s < t, the gray value of the retrieved pixel is set to 255; otherwise it is set to 0. Of course, this binarization may also distinguish the two cases directly with the bits 0 and 1; any binarization scheme that distinguishes the two cases may be adopted. Through the above binarization, the patches of a plurality of candidate objects as shown in Fig. 9(b) can be obtained. The size of the threshold t controls the precision of object detection; t is also related to the hardware specification of the depth camera. The value of t is generally the thickness of a finger or the diameter of a common stylus, for example 0.2-1.5 cm, preferably 0.3 cm, 0.4 cm, 0.7 cm or 1.0 cm. The threshold t can be adjusted according to the environment in which the virtual touch screen system is used: its concrete value can be set according to the thickness of the fingers of the person in front of the screen or the diameter of the stylus used in the application. A typical value of t is 1 cm.
Fig. 9(a) is a schematic diagram of the numbered connected domains used to number the patches. After the binary image of the patches has been obtained, the pixels carrying connected-domain numbers are scanned, and each connected-domain number is added to the corresponding pixels of the binarized patch image, so that a number of patches carry connected-domain numbers, as shown in Fig. 9(b). The patches (white regions or points) in the binary image are the candidates for the target objects touching the projection surface. As described above, a binarized patch with a connected-domain number in Fig. 9(b) satisfies the following two conditions: 1. the patch belongs to a connected domain; 2. for each pixel of the patch, the difference s between the background depth b and the pixel's depth d must be less than the threshold t, i.e. s = b - d < t.
Then, at step S24, the binarized patch image of the obtained depth map is enhanced, to reduce unnecessary noise in the binarized patch image and to make the shapes of the patches clearer and more stable. This step is carried out by the image enhancing unit 33. Specifically, the enhancement is performed as follows.
First, the patches that do not belong to any connected domain are removed: the gray value of every patch that was not given a connected-domain number at step S23 is set directly from the highest value to zero, for example from 255 to 0 (or, in the other representation, from 1 to 0). The binary patch image shown in Fig. 10(a) is thereby obtained.
Secondly, the patches belonging to connected domains whose area S is less than an area threshold Ts are removed. In an embodiment of the present invention, a patch belongs to a connected domain if at least one of its points belongs to that domain. If the area S of the connected domain of a patch is less than the area threshold Ts, the patch is regarded as noise and removed from the binary patch image; otherwise the patch remains a candidate for the target object. The area threshold Ts can be adjusted according to the environment in which the virtual touch screen system is used and is generally 200 pixels. The binary patch image shown in Fig. 10(b) is thereby obtained.
Then, some morphology operations are performed on the patches in the binary patch image of Fig. 10(b). In the present embodiment, a dilation operation and a close operation are adopted: one dilation is performed first, followed by iterated close operations. The number of close iterations is a predetermined value, adjustable according to the environment in which the virtual touch screen system is used; it can be set, for example, to 6. The binary patch image shown in Fig. 10(c) is thereby obtained.
Finally, if a plurality of patches belong to the same connected domain, i.e. carry the same connected-domain number, only the patch with the largest area among them is retained and the other patches are removed. In an embodiment of the present invention a connected domain may contain a plurality of patches; among them, only the patch with the largest area is regarded as the target object, and the others are noise that must be removed. The binary patch image shown in Fig. 10(d) is finally obtained.
At step S25, the contour of each obtained patch is detected, the coordinates of the centroid point of the patch are calculated, and the centroid coordinates are transformed into target coordinates. The detection, calculation and transformation are carried out by the coordinate calculating and transforming unit 34. Fig. 11 is a schematic diagram of the process of detecting the coordinates of the centroid point of a patch in the binary patch image of Fig. 10(d). Referring to Fig. 11, the coordinates of the centroid of a patch are calculated from the geometric information of the patch. The computation comprises: detecting the contour of the patch, calculating the moments of the contour, and calculating the coordinates of the centroid point from these moments. In an embodiment of the present invention the contour of the patch can be detected in a variety of known ways, and the moments can likewise be calculated with known algorithms. After the moments of the contour have been obtained, the coordinates of the centroid point can be calculated by the following formula:
(x0, y0) = (m10/m00, m01/m00)
where (x0, y0) is the coordinate of the centroid point, and m10, m01 and m00 are the moments.
The coordinate transformation transforms the coordinates of the centroid point from the coordinate system of the binary patch image into the coordinate system of the user interface. Known methods can be adopted for this transformation.
In order to obtain the continuous movement track of a touch point, the touch points in the successive depth map frames captured by the virtual touch screen system of the present invention can be detected continuously, so that the detected patches are tracked to produce sequences of points, from which the movement tracks of the touch points are obtained.
Specifically, at step S26, the centroid coordinates in the user interface of the patches of every frame, obtained by performing steps S21-S25 on every continuously captured depth map frame, are tracked, centroid point sequences (i.e. tracks) are generated, and the obtained centroid point sequences are smoothed. The tracking and smoothing operations are carried out by the tracking unit 35.
Fig. 12 is a schematic diagram of the tracks of the user's fingers or styluses moving on the screen of the virtual touch screen, showing the movement tracks of two objects (fingers). This is only an example; in other cases a plurality of objects, for example 3, 4 or 5, may be tracked, as decided by the actual requirements.
Fig. 13 is a flowchart of tracking the detected objects. By repeatedly carrying out the tracking flow of Fig. 13, the movement track of any object in front of the screen is finally obtained. Specifically, the tracking operation consists in assigning the centroid coordinates in the user interface of the patches newly detected in the depth map to the tracks to which they belong.
According to the centroid coordinates of the plurality of detected patches in the user interface, the plurality of newly detected patches are tracked, so that a plurality of tracks are produced and the touch events related to these tracks are triggered. To track the patches, each patch must be classified and its centroid coordinates placed into the point sequence whose points are related to it in time and space. Only points in the same sequence can be merged into one track. As shown in Fig. 12, if the virtual touch screen system supports a painting function, the points of a sequence in Fig. 12 represent paint commands on the virtual operation region, and the points of the same sequence can then be connected to form a curve as shown in Fig. 12.
In the present invention three kinds of touch events can be tracked: touch start, touch move and touch end. Touch start means that the object to be detected touches the virtual operation region and a track is started. Touch move means that the detected object is touching the virtual operation region and its track is continuing on the projection surface. Touch end means that the detected object leaves the surface of the virtual operation region and its movement track ends.
As shown in Fig. 13, at step S91 the centroid coordinates in the user interface of the patches of the objects newly detected according to steps S21-S25 from one depth map frame are received; these are the output of the coordinate calculating and transforming unit 34.
Subsequently, at step S92, for each point sequence among all the point sequences obtained by the tracking process on the patches of the previous frames (i.e. all existing tracks, hereinafter called existing tracks), the new patch nearest to this existing track is calculated. The tracks of all objects touching the touch screen (i.e. the virtual operation region) are retained in the virtual touch screen system. Each track keeps one tracked patch, which is the last patch assigned to that track. In the present invention, the distance between a new patch and an existing track means the distance between the new patch and the last patch of that existing track.
Then, at step S93, each new patch is assigned to the existing track nearest to it, and a touch move event is triggered.
Then, at step S94, if no new patch is close to an existing track, in other words, if all new patches have been assigned to other existing tracks, this existing track is deleted, and a touch end event is triggered for it.
Finally, at step S95, if no existing track is close to a new patch, in other words, if all previous existing tracks have been deleted by triggering touch end events, or the distances between the new patch and all existing tracks are not within the distance threshold, this new patch is determined to be the starting point of a new track, and a touch start event is triggered.
By repeatedly carrying out the above steps S91-S95, the centroid coordinates in the user interface of the patches in the successive depth map frames are tracked, so that the points belonging to the same point sequence form one track.
When there are many existing tracks, step S92 is carried out repeatedly, once for every existing track. Fig. 14 is the detailed flowchart of step S92 as carried out by the tracking unit 35 of the present invention.
First, at step S101, it is checked whether all existing tracks have been traversed; this can be realized with a simple counter. If step S92 has been carried out for all existing tracks, step S92 ends. If not, the process advances to step S102.
At step S102 the next existing track is input. Subsequently, at step S103, the new patch close to the input existing track is sought. The process then enters step S104.
At step S104 it is determined whether a new patch close to the input existing track has been found. If one has been found, the process advances to step S105; otherwise it enters step S108.
At step S108, because no new patch close to the input existing track exists, the input existing track is marked as "an existing track to be deleted". The process then returns to step S101. Thus, at step S94, a touch end event will be triggered for this "existing track to be deleted".
At step S105 it is determined whether the new patch close to the input existing track is also close to other existing tracks, in other words, whether this new patch is simultaneously a close new patch of two or more existing tracks. If so, the process enters step S106; otherwise it enters step S109.
At step S109, because this new patch is close only to the input existing track, it is assigned to the input existing track as its nearest new patch and becomes one of the points of the point sequence of that track. The process then returns to step S102.
At step S106, because this new patch is simultaneously close to two or more existing tracks, the distance between the new patch and each of those existing tracks is calculated. Then, at step S107, the distances calculated at step S106 are compared, and it is determined whether the distance between this new patch and the input existing track is the minimum of the calculated distances, i.e. smaller than its distance to every other existing track. If it is the minimum, the process enters step S109; otherwise it enters step S108.
By repeatedly carrying out the above steps S101-S109, the processing of step S92 is performed, so that all existing tracks and all input newly detected patches are traversed.
Fig. 15 is a flowchart of finding, for an input existing track, the new patch close to it. As shown in Fig. 15, at step S111 it is checked whether the distances between all input new patches and the input existing track have been calculated. If they all have, the process enters step S118; otherwise it enters step S112.
At step S118 it is determined whether the list of new patches close to the input existing track is empty. If it is empty, the processing ends; otherwise the process enters step S119. At step S119, the new patch nearest to the input existing track is found in the list of close new patches and is assigned to the point sequence of the input existing track. Step S103 then ends.
At step S112 the next new patch is input. Subsequently, at step S113, the distance between this next new patch and the input existing track is calculated. Then, at step S114, it is determined whether the calculated distance is less than a predetermined threshold. If the distance between the next new patch and the input existing track is less than a predetermined distance threshold Td, the process enters step S115; otherwise it returns to step S111. The distance threshold Td is usually set to a distance of 10-20 pixels, preferably 15 pixels, and is adjusted according to the environment in which the virtual touch screen system is used. In the present invention, if the distance between a new patch and an existing track is less than the distance threshold Td, the new patch and the existing track are said to be close.
At step S115, the next new patch is inserted into the list of candidate new patches belonging to the input existing track. Subsequently, at step S116, it is determined whether the size of this candidate list is less than a predetermined size threshold Tsize. If it is, the process returns to step S111; otherwise it enters step S117. At step S117, the candidate new patch in the list whose distance to the input existing track is the largest is deleted from the list, and the process then returns to step S111.
By repeatedly carrying out the steps shown in Fig. 15, step S103 is completed.
The above description with reference to Figs. 13-15 covers the flow of tracking the patch coordinates in the user interface over successive image frames. Through the above tracking operations, the touch start, touch move or touch end events of the detected objects are triggered, and the movement tracks of the detected objects on the virtual touch screen are finally obtained. Fig. 17(a) is a schematic diagram of the movement track of a detected object on the virtual touch screen obtained by the present invention.
Obviously, the preliminarily obtained movement track of the detected object shown in Fig. 17(a) appears rather jumbled. The track still needs to be smoothed to obtain a smooth object movement track. Fig. 17(b) is a schematic diagram of the object movement track after smoothing. Fig. 16 shows a method of smoothing the point sequence of the movement track of a detected object on the virtual touch screen obtained by the present invention.
Smoothing a point sequence means optimizing the coordinates of the points in the sequence so that the sequence becomes smooth. As shown in Fig. 16, the original point sequence forming a track, i.e. the output of the patch tracking, is input as the first round of the iteration; in Fig. 16 this is the top row. The sequence of the next iteration round is then calculated from the result of the previous round using the formula
p_n^k = (1/m) * sum_{j=n}^{n+m-1} p_j^{k-1},
where p_n^k is a point of the point sequence, k is the iteration index, n is the index within the point sequence, and m is the number of points averaged per iteration.
This iterative computation is repeated until a predetermined iteration threshold is reached. In an embodiment of the present invention the parameter m may be 3-7 and is set to 3, which means that each point of the next level is obtained by iterating over 3 points of the previous level; the iteration threshold is 3.
Through the above iterative computation, the smoothed object movement track shown in Fig. 17(b) is finally obtained.
Here, in this specification, the processing carried out by a computer according to a program need not be carried out in time series in the order described in the flowcharts. That is, the processing carried out by a computer according to a program also includes processing carried out in parallel or separately (for example parallel processing and object-based processing).
Similarly, the program may be executed on one computer (processor), or executed distributedly by many computers. In addition, the program may be transferred to a remote computer and executed there.
It will be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors, insofar as they fall within the scope of the appended claims or their equivalents.

Claims (7)

1. A virtual touch method for a touch screen system, comprising:
initially acquiring depth information of an environment containing a touch operation region, creating an initial depth map based on the initially acquired depth information, and determining the position of the touch operation region based on the initial depth map;
continuously acquiring images of the environment of the determined touch operation region;
detecting, from every acquired image frame, the candidate patches of at least one object located within a predetermined distance in front of the touch operation region;
assigning each patch to the corresponding point sequence according to the temporal and spatial relation of the centroid points of the patches obtained in two adjacent image frames,
wherein the step of determining the position of the touch operation region comprises:
detecting and labeling the connected components in the initial depth map;
determining whether a detected and labeled connected component contains the intersection point of the two diagonals of the initial depth map;
in the case that a detected and labeled connected component contains the intersection point of the two diagonals of the initial depth map, calculating the intersection points of the diagonals of the initial depth map with that detected and labeled connected component; and
connecting the calculated intersection points in order, and determining the convex polygon obtained by the connection as the touch operation region.
2. The virtual touch method for a touch screen system according to claim 1, wherein the touch operation region overlaps the projection region of the touch screen system.
3. The virtual touch method for a touch screen system according to claim 1, wherein the touch operation region does not overlap the projection region of the touch screen system.
4. The virtual touch method for a touch screen system according to any one of claims 1-3, wherein the step of detecting and marking connected components in the initial depth map comprises:
calculating the area of each connected component; and
determining whether the calculated area is greater than a predetermined area threshold, and discarding connected components whose areas are less than the predetermined area threshold.
5. The virtual touch method for a touch screen system according to claim 4, wherein the predetermined area threshold is one quarter of the area of the initial depth map.
6. The virtual touch method for a touch screen system according to claim 4, wherein the step of connecting the calculated intersection points in sequence and determining the convex polygon obtained by the connection as the touch operation region comprises:
determining whether the shape formed by connecting the calculated intersection points in sequence is a convex polygon, and if it is not a convex polygon, discarding the formed connected component.
7. A virtual touch screen system, comprising:
a projector that projects an image onto a projection surface;
a depth camera that obtains depth information of an environment containing a touch operation region;
a depth map processing unit that creates an initial depth map based on the depth information obtained by the depth camera in an initial state, and determines the position of the touch operation region based on the initial depth map;
an object detecting unit that detects, from each frame of the images continuously obtained by the depth camera after the initial state, candidate patches of at least one object located within a predetermined distance in front of the determined touch operation region; and
a tracking unit that classifies each patch into a corresponding point sequence according to the temporal and spatial relationship of the centroid points of the patches obtained from two temporally adjacent frames,
wherein the touch operation region determining unit determines the position of the touch operation region by the following process: detecting and marking connected components in the initial depth map; determining whether a detected and marked connected component contains the intersection point of the two diagonals of the initial depth map; in the case that a detected and marked connected component contains the intersection point of the two diagonals of the initial depth map, calculating the intersection points of the diagonals of the initial depth map with the detected and marked connected component; and connecting the calculated intersection points in sequence, and determining the convex polygon obtained by the connection as the touch operation region.
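The connected-component screening recited in claims 1 and 4-5 (label components in the initial depth map, discard those smaller than one quarter of the map area, keep a component only if it contains the intersection of the map's two diagonals) can be sketched as follows. This is an illustrative implementation, not the patented one: the BFS labeling, 4-connectivity, binary-mask input, and treating the diagonals' intersection as the center pixel are all assumptions; the diagonal/boundary intersection calculation and convexity check are omitted.

```python
from collections import deque

def find_touch_region_component(mask):
    """Return a connected component that passes the claims' screening, or None.

    mask: 2D list of 0/1 values (1 = candidate screen pixel in the initial
    depth map). Components are labeled by 4-connected BFS; components with
    area below 1/4 of the map area are discarded, and a surviving component
    is accepted only if it contains the diagonals' intersection (the center).
    """
    h, w = len(mask), len(mask[0])
    area_threshold = (h * w) / 4.0  # one quarter of the depth map area
    center = (h // 2, w // 2)       # intersection of the two diagonals
    seen = [[False] * w for _ in range(h)]
    for sr in range(h):
        for sc in range(w):
            if mask[sr][sc] and not seen[sr][sc]:
                # BFS over one connected component
                comp, queue = [], deque([(sr, sc)])
                seen[sr][sc] = True
                while queue:
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                if len(comp) >= area_threshold and center in comp:
                    return comp  # candidate touch operation region
    return None
```

In a full implementation the returned component would then be intersected with the depth map's diagonals and the resulting intersection points connected into a convex polygon, as recited in claims 1 and 6.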
CN201110140079.4A 2011-05-27 2011-05-27 Virtual touch screen system and method Active CN102799344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110140079.4A CN102799344B (en) 2011-05-27 2011-05-27 Virtual touch screen system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110140079.4A CN102799344B (en) 2011-05-27 2011-05-27 Virtual touch screen system and method

Publications (2)

Publication Number Publication Date
CN102799344A CN102799344A (en) 2012-11-28
CN102799344B true CN102799344B (en) 2014-11-19

Family

ID=47198462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110140079.4A Active CN102799344B (en) 2011-05-27 2011-05-27 Virtual touch screen system and method

Country Status (1)

Country Link
CN (1) CN102799344B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101500098B1 (en) * 2013-07-05 2015-03-06 현대자동차주식회사 Apparatus and Method for Controlling of User Interface equipped Touch Screen
CN104951200B (en) * 2014-03-26 2018-02-27 联想(北京)有限公司 A kind of method and apparatus for performing interface operation
CN104571513A (en) * 2014-12-31 2015-04-29 东莞市南星电子有限公司 A method and system for simulating touch commands by shielding camera areas
CN112306331B (en) * 2020-10-26 2021-10-22 广州朗国电子科技股份有限公司 Touch penetration processing method and device, storage medium and all-in-one machine

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393497A (en) * 2008-10-30 2009-03-25 上海交通大学 Multi-touch method based on binocular stereo vision
CN101952818A (en) * 2007-09-14 2011-01-19 智慧投资控股67有限责任公司 Processing based on the user interactions of attitude
CN101963840A (en) * 2009-07-22 2011-02-02 罗技欧洲公司 Systems and methods for remote, virtual screen input

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4258841B2 (en) * 2004-03-01 2009-04-30 株式会社セガ Image display program and information processing apparatus
US8487871B2 (en) * 2009-06-01 2013-07-16 Microsoft Corporation Virtual desktop coordinate transformation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101952818A (en) * 2007-09-14 2011-01-19 智慧投资控股67有限责任公司 Processing based on the user interactions of attitude
CN101393497A (en) * 2008-10-30 2009-03-25 上海交通大学 Multi-touch method based on binocular stereo vision
CN101963840A (en) * 2009-07-22 2011-02-02 罗技欧洲公司 Systems and methods for remote, virtual screen input

Also Published As

Publication number Publication date
CN102799344A (en) 2012-11-28

Similar Documents

Publication Publication Date Title
CN102841733B (en) Virtual touch screen system and method for automatically switching interaction modes
US10895868B2 (en) Augmented interface authoring
CN102566827A (en) Method and system for detecting object in virtual touch screen system
US20170024017A1 (en) Gesture processing
EP2790089A1 (en) Portable device and method for providing non-contact interface
US20120274550A1 (en) Gesture mapping for display device
EP2745237A2 (en) Dynamic selection of surfaces in real world for projection of information thereon
US20120319945A1 (en) System and method for reporting data in a computer vision system
WO2015026569A1 (en) System and method for creating an interacting with a surface display
CN111354018B (en) Object identification method, device and system based on image
CN103677240A (en) Virtual touch interaction method and equipment
CN102799344B (en) Virtual touch screen system and method
CN102541417B (en) Multi-object tracking method and system in virtual touch screen system
Geer Will gesture recognition technology point the way?
CN116301551A (en) Touch identification method, touch identification device, electronic equipment and medium
KR101575063B1 (en) multi-user recognition multi-touch interface apparatus and method using depth-camera
Hung et al. Free-hand pointer by use of an active stereo vision system
KR101461145B1 (en) System for Controlling of Event by Using Depth Information
CN102004584A (en) Method and device of positioning and displaying active pen
WO2022123929A1 (en) Information processing device and information processing method
CN112328164A (en) Control method and electronic equipment
CN109871178A (en) A kind of virtual touch screen system based on image recognition
US20130187893A1 (en) Entering a command
JP2018185563A (en) Information processing apparatus, information processing method, computer program, and storage medium
CN114611594B (en) Human-computer interaction method, device, storage medium, program product and robot

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant