
HK1189687B - Encoding data in depth patterns - Google Patents


Info

Publication number
HK1189687B
Authority
HK
Hong Kong
Prior art keywords
depth
features
tag
imaging system
visible light
Prior art date
Application number
HK14102751.9A
Other languages
Chinese (zh)
Other versions
HK1189687A (en)
Inventor
P.扎特洛卡
S.M.德塞拉诺
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1189687A (en)
Publication of HK1189687B (en)

Description

Encoding data in depth patterns
Technical Field
The present application relates to a depth imaging system and to encoding data in a depth pattern.
Background
A barcode provides an optically machine-readable representation of data. Typical barcode technologies may encode data in only one or two dimensions, and may require that the barcode be in a particular orientation and/or position in order to be identified and decoded.
Disclosure of Invention
Embodiments are disclosed that relate to 3D tags and to depth imaging systems for decoding 3D tags. One disclosed embodiment provides a depth imaging system comprising a depth camera input to receive a depth map representing an observed scene imaged by a depth camera, the depth map comprising a plurality of pixels and a depth value for each pixel of the plurality of pixels. The depth imaging system further comprises a tag identification module that identifies a 3D tag imaged by the depth camera and represented in the depth map, the 3D tag including one or more depth features, each of the one or more depth features including one or more characteristics recognizable by the depth camera. The depth imaging system also includes a tag decoding module that converts the one or more depth features into machine-readable data.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Drawings
FIG. 1 illustrates an embodiment of an example use environment for decoding 3D tags.
FIGS. 2-4 illustrate example embodiments of 3D tags.
FIG. 5 illustrates an example embodiment of a surface comprising a 3D tag.
Detailed Description
Bar coding is a ubiquitous technology, particularly in retail environments. Information is currently encoded using one-dimensional (e.g., Universal Product Code, or "UPC") and two-dimensional (e.g., Quick Response, or "QR") barcodes. However, such techniques provide low encoding density due to resolution limitations in generating and/or imaging 1D/2D marks, and may thus rely on external information (e.g., a database) to extract meaningful data from the barcode. In other words, current barcode technology is suited mainly to acting as a pointer to desired information.
The present disclosure is directed to 3D tags that include one or more depth features that collectively encode information in three dimensions. While these 3D tags may alternatively or additionally be used as pointers, the increased data density allows such tags to store "self-contained" data (e.g., audio tracks, images, etc.) rather than serving merely as "pointers." As used herein, the term "depth feature" refers to any feature of a 3D tag (e.g., a reflective surface, a raised or depressed surface, a surface gradient, etc.) that encodes non-human-readable information via one or more characteristics recognizable by a depth camera. The 3D tag may include any number and type of depth features and characteristics, and combinations thereof. Also, as used herein, the term "tag" refers to one or more depth features that collectively encode information, and does not imply any particular configuration (e.g., an adhesive-backed structure) or use case (e.g., a pricing or identification "tag"). Specific examples of depth features and characteristics are discussed in detail below.
FIG. 1 illustrates an embodiment of an example use environment 100 for decoding 3D tags. The use environment 100 includes a depth camera 102 configured to image an observed scene 104. The observed scene 104 includes 3D tags 106, 108, and 110, each of which includes one or more depth features 112 (e.g., depth feature 112a, depth feature 112b, and depth feature 112c). Although the discussion is directed to three 3D tags, the depth camera 102 may be configured to image more or fewer 3D tags. The ability to image multiple 3D tags may be determined by, for example, the resolution of the depth camera and/or the performance of the computing system.
Although a polygon including depressions and protrusions is shown, it should be understood that depth features 112 may include one or more features recognizable by a depth camera in any configuration and combination as described above. Specific examples of depth features are discussed in more detail below with reference to fig. 2-4.
It should be understood that the observed scene 104 may further include non-3D tagged objects (furniture, people, etc.). It should also be appreciated that while the 3D tags 106 and 108 are shown as self-contained objects of the observed scene 104, the 3D tags 106 and 108 may be attached to or incorporated into any suitable surface. For example, the observed scene 104 also includes a surface 114 (e.g., retail packaging, product surface, advertising, etc.) on which the 3D label 110 is embossed (represented by dashed lines). The surface 114 may also include human-readable information such as visible light images 116 and text 118. One example surface similar to surface 114 will be discussed in more detail below with reference to FIG. 5.
Moreover, while shown as being substantially square, it should be understood that a 3D tag may have any suitable shape. For example, in embodiments that incorporate a 3D tag into a surface, the 3D tag may not include a visually defined "shape," such as a printed border. Although surface 114 is shown as being substantially planar, it should be understood that non-planar surfaces may also include 3D tags.
Moreover, although shown as having substantially the same orientation in the observed scene 104, the 3D tags (3D tags 106, 108, and 110) may have any position and orientation in the observed scene 104. Thus, a 3D tag may include one or more registration and/or boundary features to distinguish the 3D tag from surrounding surfaces. Such features will be discussed in more detail below with reference to fig. 2.
The use environment 100 also includes a computing system 120 that includes a depth camera input 122 that receives information from the depth camera 102. For example, the information may include a depth map representing the observed scene 104. The depth map may comprise a plurality of pixels and a depth value for each of the pixels. The depth map may take the form of substantially any suitable data structure, including, but not limited to, a matrix that includes a depth value for each pixel of the observed scene. The depth camera 102 may be configured to measure depth using any suitable technique (e.g., time-of-flight, structured light, stereo imaging, etc.) or combination of techniques. In some embodiments, the depth camera 102 may also include a visible light sensor to decode 1D/2D tags (e.g., UPC, QR, etc.).
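As a concrete illustration of the depth-map structure just described, the following sketch models a depth map as a NumPy matrix holding one depth value per pixel. The resolution, millimeter units, and helper names (make_flat_scene, stamp_feature) are assumptions made for illustration, not details from this disclosure.

```python
import numpy as np

# Minimal sketch: a depth map as a matrix of per-pixel depth values (mm).
# Resolution, units, and helper names are illustrative assumptions.
WIDTH, HEIGHT = 320, 240

def make_flat_scene(depth_mm: float = 1000.0) -> np.ndarray:
    """Return a depth map for a flat surface at a uniform distance."""
    return np.full((HEIGHT, WIDTH), depth_mm, dtype=np.float32)

def stamp_feature(depth_map: np.ndarray, row: int, col: int,
                  size: int, relief_mm: float) -> None:
    """Offset a square region of the map, mimicking one depth feature:
    a negative offset brings the surface closer (a protrusion), a
    positive offset pushes it farther away (a depression)."""
    depth_map[row:row + size, col:col + size] += relief_mm

depth_map = make_flat_scene()
stamp_feature(depth_map, 60, 80, 20, -5.0)   # 5 mm protrusion
stamp_feature(depth_map, 60, 120, 20, +3.0)  # 3 mm depression
print(depth_map.shape)  # (240, 320): one depth value per pixel
```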
In some embodiments, the depth camera input 122 may include a physical connector (e.g., a universal serial bus connector). In other embodiments that incorporate depth camera 102 into computing system 120 (e.g., as in the case of a mobile device), depth camera input 122 may include one or more connections internal to computing system 120.
As described above, the observed scene 104 may include objects other than 3D tags (e.g., surface 114). Accordingly, the computing system 120 also includes a tag identification module 124 configured to identify one or more 3D tags (e.g., 3D tags 106, 108, and 110) imaged by the depth camera 102 and represented in the depth map. The computing system 120 also includes a tag decoding module 126 configured to convert the one or more depth features 112 into machine-readable data (e.g., binary data and/or other forms of data that may be stored by and/or communicated between computers). The tag identification module 124 and tag decoding module 126 may provide the above-described functionality through any suitable mechanism or combination of mechanisms.
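The division of labor between the tag identification module 124 and the tag decoding module 126 can be pictured with the minimal sketch below. It assumes, purely for illustration, that tag pixels are found by their deviation from a known background plane and that each feature's relief maps to one byte; the disclosure does not prescribe either choice.

```python
import numpy as np

def identify_tag_pixels(depth_map: np.ndarray, background_mm: float,
                        tolerance_mm: float = 1.0) -> np.ndarray:
    """Identification sketch: flag pixels whose depth deviates from an
    assumed-known background plane by more than a tolerance."""
    return np.abs(depth_map - background_mm) > tolerance_mm

def decode_features(depth_map: np.ndarray, feature_boxes,
                    background_mm: float) -> bytes:
    """Decoding sketch: map each feature's mean relief (mm, rounded to
    whole millimeters) to one byte of machine-readable data."""
    symbols = []
    for row, col, size in feature_boxes:
        patch = depth_map[row:row + size, col:col + size]
        relief_mm = background_mm - float(patch.mean())  # positive for protrusions
        symbols.append(int(round(relief_mm)) & 0xFF)
    return bytes(symbols)

# Usage with the depth_map built in the previous sketch:
# mask = identify_tag_pixels(depth_map, background_mm=1000.0)
# data = decode_features(depth_map, [(60, 80, 20), (60, 120, 20)], 1000.0)
# -> b'\x05\xfd' (5 mm protrusion; 3 mm depression encoded as -3 & 0xFF = 253)
```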
The machine-readable data may be processed according to a particular use case. For example, the machine-readable data may be presented to a user via a display subsystem 128 and a display device coupled thereto. As another example, the machine-readable data may be transmitted to a remote device (e.g., a pricing database server) via the communication subsystem 130. It should be understood that these scenarios are presented for purposes of example, and that the machine-readable data may be used by any mechanism or combination of mechanisms without departing from the scope of this disclosure.
The 3D tag may include any combination, type, and number of depth features and characteristics. FIGS. 2-4 illustrate various example embodiments of a 3D tag. Beginning with FIG. 2, an embodiment of a 3D tag 200 including multiple depth features is shown. For example, 3D tag 200 includes depth features 202 and 204 in the form of raised and depressed surfaces, respectively. In contrast to a 2D barcode, in which each feature encodes information only by width and/or length, depth features 202 and 204 also encode information in a third dimension (i.e., depth). For example, as shown, depth feature 202 is characterized by a width 206 and a length 208 comparable to those of depth feature 210. Thus, in the 1D or 2D case, depth features 202 and 210 might be interpreted as substantially equivalent features. However, as shown, depth feature 202 is characterized by a depth 212 that differs from that of depth feature 210, such that depth features 202 and 210 are distinguishable to a 3D imaging sensor (e.g., the depth camera 102 of FIG. 1).
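To make the extra encoding dimension concrete, the following sketch (an assumed scheme, not one taken from this disclosure) shows two features with identical width and length that carry different symbols solely because their relief depths differ.

```python
# Two hypothetical features with the same footprint (width x length) but
# different relief depths. A 1D/2D reader would see them as equivalent;
# the depth axis is what separates them.
feature_202 = {"width_mm": 10, "length_mm": 10, "depth_mm": 4}
feature_210 = {"width_mm": 10, "length_mm": 10, "depth_mm": 2}

def depth_symbol(feature: dict, depth_step_mm: int = 2) -> int:
    """Assumed mapping: quantize relief depth into a symbol value."""
    return feature["depth_mm"] // depth_step_mm

print(depth_symbol(feature_202))  # 2
print(depth_symbol(feature_210))  # 1
```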
As can be appreciated from the above discussion, one or more physical characteristics (e.g., width 206, length 208, and depth 212 of depth feature 202) may be configured to encode data independently and/or in conjunction with one or more other characteristics. For example, in some embodiments the shape of a depth feature may be configured to encode data.
Although the depth features 202, 204, and 210 are shown as having a substantially fixed depth across the entire feature, the 3D tag 200 also includes a depth feature 214 characterized by one or more depth gradients (i.e., continuously varying depths). The depth gradient may encode information by a change in depth across successive portions of the 3D tag.
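One way a depth gradient could carry data is sketched below: the slope of a roughly linear depth profile across the feature is quantized to the nearest of a few candidate slopes. The candidate slopes and the line-fitting step are illustrative assumptions.

```python
import numpy as np

def gradient_symbol(profile_mm: np.ndarray, pitch_mm: float,
                    candidate_slopes=(-0.5, 0.0, 0.5)) -> int:
    """Fit a line to a 1-D depth profile sampled every `pitch_mm` and
    return the index of the nearest candidate slope as the symbol."""
    x = np.arange(len(profile_mm)) * pitch_mm
    slope = np.polyfit(x, profile_mm, 1)[0]  # mm of relief per mm of travel
    return int(np.argmin([abs(slope - s) for s in candidate_slopes]))

ramp = np.linspace(0.0, 5.0, 11)            # 5 mm of relief over 10 mm
print(gradient_symbol(ramp, pitch_mm=1.0))  # 2 (closest to the +0.5 slope)
```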
The 3D tag 200 also includes a depth feature 216 that includes one or more optical characteristics. For example, depth feature 216 may include one or more materials having very high reflectivity and/or very low reflectivity (i.e., high light absorption). In some embodiments, depth feature 216 may include optical characteristics that vary according to the wavelength of light. For example, depth feature 216 may be highly reflective or highly absorptive at infrared or near-infrared wavelengths (e.g., wavelengths used by infrared emitters in some depth camera technologies) and moderately reflective at visible wavelengths. By using materials with such properties, some depth camera technologies (e.g., depth cameras that rely on an active infrared emitter) may interpret depth feature 216 as having a depth relative to the surface of the 3D tag 200, even though depth feature 216 is substantially flush with that surface. As with the other characteristics, the optical characteristics may vary with position along the depth feature. Thus, such characteristics may be configured to provide a "depth gradient" through position-dependent optical characteristics, as opposed to position-dependent physical characteristics.
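How such a coating could register as apparent depth on a flush surface might be modeled as in the toy function below; the thresholds, offset magnitudes, and sign conventions are invented for illustration and do not describe any particular camera.

```python
def apparent_offset_mm(ir_return: float) -> float:
    """Hypothetical model: map a normalized infrared return (0 = fully
    absorbed, 1 = fully reflected) to the depth offset a depth pipeline
    might report for a flush but specially coated patch. All thresholds
    and magnitudes are illustrative assumptions."""
    if ir_return < 0.1:   # strongly absorptive coating
        return +4.0       # reads as if recessed
    if ir_return > 0.9:   # strongly reflective coating
        return -4.0       # reads as if raised
    return 0.0            # ordinary material: no apparent offset

print(apparent_offset_mm(0.05), apparent_offset_mm(0.95))  # 4.0 -4.0
```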
As described above, a depth feature (e.g., depth feature 202, 204, 210, 214, or 216) may encode information through a number of characteristics. For example, depth features 214 may encode information by width and length in addition to by surface gradients as shown. It should also be understood that the 3D tag may include a single depth feature that includes one or more characteristics. For example, the 3D tag may include a single depth feature that includes a surface gradient.
Instead of (or in addition to) encoding information by characteristics of the individual depth features (e.g., depth, height, width, reflectivity, gradient, etc.), information may also be encoded by relationships (e.g., relative positions) between one or more depth features of the 3D tag 200. For example, information may be encoded by the distance 218 between two depth features. As another example, information may be encoded by the absolute position of one or more depth features on the 3D tag 200 (e.g., relative to one or more registration features, as described below). For example, depth feature 210 may encode different information when positioned in the lower right corner of the 3D tag 200, even if its other characteristics remain unchanged. Such scenarios are presented for purposes of example only and are not intended to be limiting in any manner.
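Position-based encoding of this kind might be realized as in the following sketch, where the center-to-center distance between two features is quantized in units of an assumed grid pitch; the pitch value is an illustrative assumption.

```python
import math

def distance_symbol(center_a, center_b, pitch_mm: float = 5.0) -> int:
    """Quantize the center-to-center distance between two depth features
    into a symbol, in units of an assumed grid pitch."""
    return round(math.dist(center_a, center_b) / pitch_mm)

print(distance_symbol((0.0, 0.0), (15.0, 0.0)))  # 3
print(distance_symbol((0.0, 0.0), (20.0, 0.0)))  # 4
```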
As described above with reference to FIG. 1, the 3D tag 200 may be incorporated into (e.g., embossed onto) a surface such that the 3D tag 200 has no visually defined boundary. Moreover, the 3D tag 200 may be positioned in any orientation in front of the depth camera (e.g., anywhere within the observed scene 104 imaged by the depth camera 102 of FIG. 1). Accordingly, one or more features of the 3D tag may be configured to provide registration and/or orientation information for the 3D tag 200. For example, in some embodiments, one or more depth features may have a characteristic shape and/or other aspect that can be identified regardless of orientation, thus providing a reference by which the orientation of the 3D tag may be determined. In such embodiments, the registration features may not encode additional information. It is to be understood that any depth feature or combination of features including any one or more characteristics may provide registration and/or orientation information without departing from the scope of this disclosure.
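A registration feature could be exploited as sketched below, where an asymmetric bump (assumed here to be the tallest feature on the tag) fixes the rotation to undo before decoding; the 90-degree-step approach and the quadrant convention are assumptions made for illustration.

```python
import numpy as np

def normalize_orientation(tag_relief: np.ndarray) -> np.ndarray:
    """Rotate a square tag's relief map in 90-degree steps until its single
    registration bump (assumed to be the tallest feature) sits in the
    top-left quadrant, giving downstream decoding a fixed reference frame."""
    for _ in range(4):
        r, c = np.unravel_index(np.argmax(tag_relief), tag_relief.shape)
        if r < tag_relief.shape[0] // 2 and c < tag_relief.shape[1] // 2:
            return tag_relief
        tag_relief = np.rot90(tag_relief)
    raise ValueError("no unambiguous registration feature found")

tag = np.zeros((8, 8))
tag[6, 6] = 5.0                               # bump starts in the wrong corner
print(np.argmax(normalize_orientation(tag)))  # flat index now in the top-left quadrant
```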
FIG. 3 shows another embodiment of a 3D tag 300. The 3D tag 300 includes a plurality of depth features 302 that encode information via one or more characteristics (e.g., length, width, depth, gradient, etc.) as described above. The 3D tag 300 further includes a plurality of distinguishable 2D features 304 (shown as printed overlays of the depth features) corresponding to one or more of the depth features. Although the 3D tag 300 is shown as including a corresponding 2D feature 304 for each depth feature 302, it should be understood that other embodiments may include corresponding 2D features 304 for only a subset of the depth features 302. Moreover, while the 2D features 304 are shown as oriented to cover the entirety of each depth feature 302, it should be understood that the 2D features 304 may be oriented to cover only a portion of one or more corresponding depth features 302. Also, 2D features may be positioned in the spaces between the depth features.
The 2D features 304 collectively encode non-human-readable information and are configured to be read by a visible light camera (e.g., one or more visible light imaging sensors of the depth camera 102). In some embodiments, the information encoded by the plurality of distinguishable 2D features may comprise at least a subset of the information encoded by the plurality of distinguishable depth features. For example, the 2D features 304 may provide a redundancy mechanism in which the depth features 302 are decoded by a depth camera, the 2D features 304 are decoded by a visible light camera, and the two results are compared to verify the integrity of the 3D tag 300. As another example, the 2D features 304 may be decoded when a 3D imaging system is unavailable, and thus may provide at least a portion of the same information regardless of the capability of the imaging system. Alternatively, the 2D features 304 may encode different information than the depth features 302. Such a configuration may be used to increase the data density of the 3D tag 300 and/or to provide information that varies according to the capability of the imaging system.
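The redundancy mechanism described above, in which the depth features and the overlaid 2D features are decoded independently and compared, might be combined as in the sketch below; the function name, the fallback behavior, and the error handling are assumptions for illustration.

```python
from typing import Optional

def cross_check(depth_payload: Optional[bytes],
                visible_payload: Optional[bytes]) -> bytes:
    """Combine independent decodes of the depth features and the overlaid
    2D features: agreement verifies the tag, a single successful decode is
    used as a fallback, and disagreement is treated as damage."""
    if depth_payload is not None and visible_payload is not None:
        if depth_payload != visible_payload:
            raise ValueError("3D and 2D decodes disagree; tag may be damaged")
        return depth_payload
    payload = depth_payload if depth_payload is not None else visible_payload
    if payload is None:
        raise ValueError("tag could not be decoded by either camera")
    return payload

print(cross_check(b"\x05\xfd", b"\x05\xfd"))  # b'\x05\xfd' (verified)
print(cross_check(None, b"\x05\xfd"))         # b'\x05\xfd' (2D fallback)
```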
The 3D tag 300 further includes a human-readable representation 306 of at least a subset of the information encoded by the depth features 302 and/or the 2D features 304. Representation 306 may provide an additional layer of redundancy. For example, if the 3D tag 300 is damaged, the representation 306 may allow a user to directly extract (i.e., without using an imaging system) at least a subset of the encoded information.
In other embodiments, the 2D features may not correspond to the depth features, and/or may encode different information. An example of such a configuration is shown in the 3D tag 400 of FIG. 4. The 3D tag 400 includes a plurality of 2D features 402 that collectively encode non-human-readable information. For example, as shown, the 2D features 402 may comprise the "bars" of a 1D barcode. It should be understood that while the 2D features 402 are shown in dashed lines for clarity, they are visible features (e.g., printed bars or squares) that overlay the depth features 404. The 3D tag 400 may also include a human-readable representation 406. The representation 406 may include a representation of information encoded by the 2D features 402 and/or the depth features 404, and/or other information.
Turning now to FIG. 5, an example embodiment of a surface 500 including a 3D label 502 is shown. Surface 500 may include a visible light image 504 and/or text 506. Although shown as a planar, printed page (e.g., a page from a magazine, newspaper, etc.), it should be understood that surface 500 may have any suitable configuration including the 3D label 502 and one or more of image 504 and text 506. For example, surface 500 may include a planar or non-planar surface of a product and/or its packaging.
As another example, surface 500 may include a poster or other advertisement. In such a case, the pedestrian may view the image 504, read the text 506, and decode the 3D tag 502 (e.g., by the depth camera 102 of fig. 1) to extract additional information (e.g., sound/video clips connected to the website, coupons/special offers, etc.). It is understood that such a case is presented for purposes of example, and that surface 500 and 3D label 502 may be used in a variety of use cases without departing from the scope of this disclosure.
It should be understood that although image 504 and text 506 are shown as at least partially overlaying 3D label 502, the information encoded by 3D label 502 and the human-readable information (e.g., image 504 and text 506) of surface 500 remain recognizable by a depth camera (e.g., depth camera 102 of FIG. 1) and a human user, respectively. In other words, 3D label 502 and its features may be configured so as not to limit the human readability of surface 500. It should be appreciated that 3D label 502 need not be invisible to the user, but rather may be substantially unobtrusive such that any overlaid images and/or text (e.g., image 504 and text 506) remain human-readable. For example, 3D label 502 may be embossed onto surface 500 such that 3D label 502 appears as a set of bumps and depressions without obscuring image 504 and/or text 506. As another example, the 3D label 502 may include features that vary in reflectivity such that the 3D label 502 appears as an area of varying "gloss".
The methods and processes described above may be tied to a computing system that includes one or more computing devices. In particular, such methods and processes may be implemented as a computer application or service, an application programming interface (API), a library, and/or other computer program product.
FIG. 1 schematically illustrates a computing system 120 that may perform one or more of the above-described methods and processes. Computing system 120 is shown in simplified form. It is understood that substantially any computer architecture may be used without departing from the scope of this disclosure. In different embodiments, the computing system 120 may take the form of a mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home entertainment computer, network computing device, gaming device, mobile computing device, mobile communication device (e.g., smartphone), and so forth.
The computing system 120 includes a logic subsystem 132, a storage subsystem 134, the tag identification module 124, and the tag decoding module 126. Computing system 120 may optionally include a display subsystem 128, an input subsystem 136, a communication subsystem 130, and/or other components not shown in FIG. 1.
Logic subsystem 132 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executing thereon may be configured for serial, parallel, or distributed processing. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Storage subsystem 134 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of the storage subsystem 134 may be transformed (e.g., to hold different data).
Storage subsystem 134 may include removable media and/or built-in devices. Storage subsystem 134 may include optical memory devices (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage subsystem 134 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that the storage subsystem 134 includes one or more physical, non-transitory devices. However, in some embodiments, aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
In some embodiments, aspects of the logic subsystem 132 and the storage subsystem 134 may be integrated together into one or more hardware-logic components by which the functionality described herein is performed. These hardware-logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs).
The terms "module," "program," and "engine" may be used to describe an aspect of computing system 120 that is implemented to perform a particular function. The identification module 124 and tag decode module 126 are two non-limiting examples of modules. In some cases, a module, program, or engine may be instantiated via logic subsystem 132 executing instructions held by storage subsystem 134. It should be appreciated that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Similarly, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module," "program," and "engine" are intended to encompass a single or group of executable files, data files, libraries, drivers, scripts, database records, and the like.
When included, the display subsystem 128 may be used to present a visual representation of data held by the storage subsystem 134. The visual representation may take the form of a Graphical User Interface (GUI). As the herein described methods and processes change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 128 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 128 may include one or more display devices using virtually any type of technology. Such display devices may be combined with logic subsystem 132 and/or storage subsystem 134 in a shared enclosure, or such display devices may be peripheral display devices.
When included, the input subsystem 136 may include or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may include or interface with a selected Natural User Interface (NUI) element. Such elements may be integrated or peripheral and the conversion and/or processing of input actions may be processed on-board or off-board. Example NUI elements may include a microphone for speech and/or voice recognition; infrared, color, stereo and/or depth cameras for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer and/or gyroscope for motion detection and/or intent recognition; and an electric field sensing element for assessing brain activity.
When included, the communication subsystem 130 may be configured to communicatively couple the computing system 120 with one or more other computing devices. The communication subsystem 130 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured to communicate via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communication subsystem may allow computing system 120 to send and/or receive messages to and/or from other devices via a network (such as the internet).
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.

Claims (10)

1. A depth imaging system comprising:
a depth camera input for receiving a depth map representing an observed scene imaged by a depth camera, the depth map comprising a depth value for each pixel in a matrix of pixels;
a tag identification module to identify a 3D tag imaged by the depth camera and represented in the depth map, the 3D tag including one or more depth features, each of the one or more depth features including one or more characteristics recognizable by the depth camera; and
a tag decoding module to convert the one or more depth features into machine-readable data;
wherein the 3D tag further comprises a plurality of distinguishable 2D features corresponding to the one or more depth features, and the depth camera further comprises a visible light sensor for reading the 2D features.
2. The depth imaging system of claim 1, wherein the one or more depth features comprise one or more depressed surfaces.
3. The depth imaging system of claim 1, wherein the one or more depth features comprise one or more convex surfaces.
4. The depth imaging system of claim 1, wherein the one or more characteristics include a depth gradient.
5. The depth imaging system of claim 1, wherein the one or more characteristics include reflectivity.
6. The depth imaging system of claim 1, wherein the one or more depth features collectively encode non-human readable information, the 3D tag further comprising a visible light image overlaying the one or more depth features, the visible light image providing human-readable information.
7. The depth imaging system of claim 6, wherein the visible light image comprises a human-readable representation of information encoded by the one or more depth features.
8. The depth imaging system of claim 1, wherein the one or more depth features comprise one or more registration features.
9. A 3D tag, comprising:
a plurality of distinguishable depth features comprising one or more characteristics readable by a depth camera, the plurality of distinguishable depth features collectively encoding non-human-readable information; and
a visible light image overlaying one or more of the distinguishable depth features, the visible light image providing human-readable information;
wherein the visible light image comprises a plurality of distinguishable 2D features configured to be read by a visible light camera.
10. The 3D tag of claim 9, wherein the plurality of distinguishable 2D features collectively encode non-human readable information, wherein the information comprises at least a subset of the information encoded by the plurality of distinguishable depth features.
HK14102751.9A 2012-06-22 2014-03-19 Encoding data in depth patterns HK1189687B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/531,268 2012-06-22

Publications (2)

Publication Number Publication Date
HK1189687A (en) 2014-06-13
HK1189687B (en) 2018-06-22


Similar Documents

Publication Publication Date Title
US9471864B2 (en) Encoding data in depth patterns
US10055888B2 (en) Producing and consuming metadata within multi-dimensional data
US8451266B2 (en) Interactive three-dimensional augmented realities from item markers for on-demand item visualization
Li et al. Aircode: Unobtrusive physical tags for digital fabrication
US20160321502A1 (en) Methods and systems for contextually processing imagery
CN105074623A (en) Presenting object models in augmented reality images
JP2016100013A (en) Article of manufacture and method for encoding information in multiple patterned layers
US20170091504A1 (en) Authenticity tag and methods of encoding and verification
US8939363B2 (en) Creating a virtual bar code from a physical bar code
JP5635689B2 (en) Reference markers for augmented reality
WO2009102616A3 (en) Systems and methods for forming a composite image of multiple portions of an object from multiple perspectives
US20200259813A1 (en) Authentication device, server computer, authentication method, mobile terminal with camera, and code label
US20150324679A1 (en) Optical-reading code preparation device
WO2022265049A1 (en) Method for displaying video that corresponds to still image, system, tangible medium, and method for manufacturing tangible medium
HK1189687B (en) Encoding data in depth patterns
HK1189687A (en) Encoding data in depth patterns
US9747519B2 (en) Classifying ambiguous image data
CN107657291B (en) Dual-input content processing method and device
Beglov Object information based on marker recognition
CN111427446B (en) Virtual keyboard display method and device for head-mounted display device and head-mounted display device
US20130099109A1 (en) Image reading device and image reading system
US12405771B2 (en) System and method for determining physical coding block orientation
EP2961501B1 (en) Active tag codes
KR101889533B1 (en) Mobile terminal using hologram structure and system providing content inculding the same
LE Pictorial Markers with Hidden Codes and Their Potential Applications