CN112287949B - AR information display method and AR display device based on multiple feature information
- Publication number
- CN112287949B CN202011203237.1A
- Authority
- CN
- China
- Prior art keywords
- information
- relationship
- targets
- state
- feature information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure relates to an AR information display method and device based on a plurality of characteristic information, wherein the method comprises the following steps: identifying a plurality of pieces of characteristic information corresponding to a plurality of targets in a user field of view, wherein the targets have a specific relationship, and the plurality of pieces of characteristic information reflect the specific relationship; acquiring a plurality of AR information corresponding to the plurality of feature information according to the plurality of feature information; based on the specific relationship, the plurality of AR information is displayed through an AR display device, and the displayed plurality of AR information presents the specific relationship to a user.
Description
Technical Field
The disclosure relates to the technical field of augmented reality, in particular to an AR information display method and device based on multiple feature information.
Background
A conventional AR information display method recognizes characteristic information of targets and displays matching AR information for each target. In some scenarios, multiple targets may be visible simultaneously in the user's field of view, and these targets have a certain relationship with one another (such as a size relationship). This relationship may change in some cases; for example, targets that are originally displayed in a uniform proportion may have one target enlarged for some reason. If the displayed AR information can neither show the original uniform proportion between the targets nor show the new relationship after a target is enlarged, the user may have trouble understanding the relationship between the AR information and the targets, which reduces the user experience.
Disclosure of Invention
An object of the present disclosure is to provide an AR information display method and an AR display device based on a plurality of feature information.
The purpose of the present disclosure is achieved by adopting the following technical solutions. The AR information display method based on the plurality of characteristic information, which is provided by the present disclosure, comprises the following steps: identifying a plurality of pieces of characteristic information corresponding to a plurality of targets in a user field of view, wherein the targets have a specific relationship, and the plurality of pieces of characteristic information reflect the specific relationship; acquiring a plurality of AR information corresponding to the plurality of feature information according to the plurality of feature information; based on the specific relationship, the plurality of AR information is displayed through an AR display device, and the displayed plurality of AR information presents the specific relationship to a user.
The object of the present disclosure can be further achieved by the following technical measures.
In the aforementioned AR information displaying method based on a plurality of feature information, the identifying a plurality of feature information corresponding to a plurality of targets in a user field of view is implemented using a CV algorithm.
The aforementioned AR information displaying method based on the plurality of feature information, wherein the specific relationship includes a positional relationship, a size relationship, and a state relationship among the plurality of targets.
The AR information display method based on the plurality of feature information, wherein the state relation comprises a color state relation and an attachment state relation.
In the foregoing AR information display method based on a plurality of feature information, the displaying, based on the specific relationship, of the plurality of AR information through an AR display device so that the displayed AR information presents the specific relationship to the user includes: correspondingly adjusting the specific relationship presented by the AR information based on a change in the specific relationship among the plurality of targets.
In the aforementioned AR information display method based on a plurality of feature information, when the size relationship among the plurality of targets changes, the change in the size relationship is also reflected in the plurality of feature information, so that the size relationship presented by the displayed AR information also changes correspondingly.
In the aforementioned AR information display method based on a plurality of feature information, when the positional relationship among the plurality of targets changes, the change in the positional relationship is also reflected in the plurality of feature information, so that the positional relationship presented by the displayed AR information correspondingly changes.
In the aforementioned AR information display method based on a plurality of feature information, when the state relationship among the plurality of targets changes, the change in the state relationship is also reflected in the plurality of feature information, so that the state relationship presented by the displayed AR information correspondingly changes.
The object of the present disclosure is also achieved by the following technical solutions. An AR display device according to the present disclosure includes a processor and a memory storing a computer program that, when executed by the processor, performs the AR information display method based on a plurality of feature information.
The beneficial effects of the present disclosure at least include: by identifying a plurality of pieces of feature information corresponding to a plurality of targets in the field of view, where the targets have specific relationships such as a size relationship, a positional relationship, and a state relationship, acquiring the AR information corresponding to the plurality of pieces of feature information, and displaying the plurality of AR information through an AR display device based on the specific relationship, the displayed plurality of AR information presents the specific relationship to the user, helping the user intuitively understand the relationships among the targets through the AR information and improving the user experience.
The foregoing description is only an overview of the technical solution of the present disclosure. In order that the above-mentioned and other objects, features, and advantages of the present disclosure can be more clearly understood and carried out in accordance with the contents of the specification, a detailed description of preferred embodiments is given below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart illustrating an AR information display method based on a plurality of feature information according to an embodiment of the present disclosure.
Detailed Description
To further describe the technical means adopted by the present disclosure to achieve the intended purpose of the invention and their effects, the specific implementation, structure, features, and effects of the AR information display method and apparatus based on multiple feature information according to the present disclosure are described in detail below with reference to the accompanying drawings and preferred embodiments.
Fig. 1 is a flowchart illustrating an AR information display method based on a plurality of feature information according to an embodiment of the present disclosure. Referring to fig. 1, an AR information display method based on multiple feature information according to an example of the present disclosure mainly includes the following steps:
Step S11, identifying a plurality of pieces of characteristic information corresponding to a plurality of targets in the user field of view, wherein the targets have a specific relationship, and the plurality of pieces of characteristic information reflect the specific relationship.
Specifically, a CV (computer vision) algorithm is used to identify a plurality of pieces of feature information corresponding to a plurality of targets in the user's field of view. The feature information may be information for distinguishing the various targets in the field of view (for example, people, houses, rivers, flowers and plants, trees, stone bridges, etc.), may be features of the targets extracted by a machine learning method, or may be image information such as the size, position, color, brightness, and saturation of the targets in the image. For ease of understanding, a house is primarily used as an example of a target, but the targets described in this disclosure are not limited to houses and include all objects that can be identified by a CV algorithm. After the user wears the AR display device and a target in the field of view is identified, AR information corresponding to the target is displayed on the AR display device. For example, when a house model is in the user's field of view and the features of the preset house model are identified, one or more pieces of AR information such as a floor plan, a price, real photos, and user reviews of the house can be presented. In the present application, AR information is augmented reality information, i.e., additional information beyond what the user can already see is presented to the user. When the AR information is displayed, it may preferably be displayed around the corresponding real target, or a connecting line may be used to indicate to the user the relationship between the AR information and the real target.
The CV algorithm may use a trained convolutional neural network to extract features from the real-time image, input the extracted features into a classifier to obtain a classification result, and determine the targets in the real-time image according to the classification result. The CV algorithm may also extract image features through algorithms such as SIFT (Scale-Invariant Feature Transform), HOG (Histogram of Oriented Gradients), SURF (Speeded Up Robust Features), or LBP (Local Binary Patterns), and perform feature matching with a clustering algorithm to determine the image recognition result. In addition, the target recognition processing of the real-time image may be performed locally or on a cloud server. The choice of CV algorithm is not limited, as long as it can extract feature points from the image and recognize the image.
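As a purely illustrative sketch (not part of the patented method), the following Python snippet shows what the classical feature pipeline mentioned above can look like in practice, using OpenCV's SIFT implementation to match a live frame against a stored reference image of a target; the file names, the Lowe ratio of 0.75, and the 20-match threshold are assumptions chosen for the example.

```python
# Minimal sketch of SIFT feature extraction and matching, assuming OpenCV is available.
# File paths, the 0.75 ratio test, and the 20-match threshold are illustrative assumptions.
import cv2

def matches_reference(frame_path: str, reference_path: str, min_matches: int = 20) -> bool:
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    if frame is None or reference is None:
        raise FileNotFoundError("could not read one of the input images")

    sift = cv2.SIFT_create()
    _, frame_desc = sift.detectAndCompute(frame, None)
    _, ref_desc = sift.detectAndCompute(reference, None)
    if frame_desc is None or ref_desc is None:
        return False  # no keypoints found in one of the images

    # Lowe's ratio test over 2-nearest-neighbour matches keeps only distinctive matches.
    matcher = cv2.BFMatcher()
    knn = matcher.knnMatch(ref_desc, frame_desc, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_matches

# Example use: decide whether the preset house-model target appears in the current frame.
# visible = matches_reference("frame.png", "house_reference.png")
```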
In an embodiment of the present disclosure, one target corresponds to one piece of feature information, and one piece of feature information may include one or more features of the target. For example, for a house, the feature information may be a vector extracted by the CV algorithm for that house, where the vector includes one or more features such as the color, shape, size, orientation, and area of the house.
In addition to identifying the feature information of a target with a CV algorithm, the feature information of a target may also be determined by two-dimensional code recognition, electronic tags, or other means. For example, two-dimensional codes may be placed beside the targets in the user's field of view, and each identified two-dimensional code corresponds to one piece of AR information, so that the AR information corresponding to a target can be determined simply by scanning the two-dimensional code.
Specifically, the specific relationship among the plurality of targets includes a positional relationship, a size relationship, and a state relationship. Whether the plurality of targets are associated with one another can be determined through system presets or big data analysis, as a precondition for a specific relationship to exist among them. Taking two houses in the field of view as an example, the two houses may be located in the same residential complex and have the same floor plan; the system may preset that the two houses are associated, so the two houses have a specific relationship. The positional relationship may refer to the relative position between houses, i.e., they may be adjacent, spaced apart by a specific distance, or in the same row and column; it should be understood that these are only examples, and the positional relationship in fact includes all positional relationships that can be identified by the CV algorithm. The size relationship refers to the actual size relationship between targets; taking houses as an example, the size ratio between three houses may be 1:1:1, 2:1:1, 3:1:1, etc., and it should be understood that the size relationship in fact includes all size relationships that can be identified by the CV algorithm. The state relationship refers to the color state of a target, such as the color state of a house with its lights on versus off, and the attachment state; the attachment state means that an additional element is added so that the feature information of the target changes correspondingly, for example, a house chimney in a smoking state. It should be understood that the state relationship in fact includes all state relationships that can be identified by the CV algorithm. It can be appreciated that each target may have multiple specific relationships with other targets, for example, two houses adjacent in positional relationship with a size ratio of 2:1, one house with its lights on and the other with its lights off. Thereafter, the process advances to step S12.
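The patent does not prescribe any particular data layout for the identified targets or their relationships, but a small sketch may help make the three relationship types concrete. In the hypothetical structure below, every field and function name is an assumption: each detected target carries a bounding box and simple state flags, from which a size ratio and an adjacency test can be derived.

```python
# Hypothetical representation of detected targets; field and function names are
# illustrative assumptions, not terminology from the patent.
from dataclasses import dataclass

@dataclass
class DetectedTarget:
    target_id: str
    x: float              # bounding-box centre, image coordinates
    y: float
    width: float
    height: float
    lights_on: bool = False
    chimney_smoking: bool = False

def size_ratio(a: DetectedTarget, b: DetectedTarget) -> float:
    """Approximate the size relationship as the ratio of bounding-box areas."""
    return (a.width * a.height) / (b.width * b.height)

def are_adjacent(a: DetectedTarget, b: DetectedTarget, gap: float = 10.0) -> bool:
    """Approximate the positional relationship: adjacent if the horizontal gap is small."""
    return abs(a.x - b.x) - (a.width + b.width) / 2 <= gap

# Example: two houses with a 2:1 size relationship, one lit and one unlit.
house1 = DetectedTarget("house1", x=100, y=200, width=80, height=60, lights_on=True)
house2 = DetectedTarget("house2", x=165, y=200, width=40, height=60, lights_on=False)
assert round(size_ratio(house1, house2)) == 2 and are_adjacent(house1, house2)
```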
Step S12, according to the plurality of feature information, a plurality of AR information corresponding to the plurality of feature information is obtained.
Specifically, a plurality of pieces of AR information corresponding to the plurality of pieces of feature information is acquired according to the plurality of pieces of feature information. For example, if three different houses exist in the field of view, each house corresponds to one piece of AR information; after the respective feature information of the three houses is detected in the field of view, the respective AR information is displayed for each of the three houses. The AR information may be in multimedia forms such as text, graphics, and video. Thereafter, the process advances to step S13.
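One plausible way to realise step S12, shown here only as a sketch, is a registry that maps each recognised target to its AR content record; the ARContent fields, registry keys, and sample house data below are assumptions made for illustration, since the patent only requires that each piece of feature information has corresponding AR information.

```python
# Illustrative lookup from recognised targets to their AR content (a sketch of step S12).
# ARContent fields, registry keys, and the sample data are assumptions for the example.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ARContent:
    title: str
    text: str
    media_urls: List[str] = field(default_factory=list)

AR_REGISTRY: Dict[str, ARContent] = {
    "house1": ARContent("House 1", "3-bedroom, 120 m^2", ["floorplan1.png"]),
    "house2": ARContent("House 2", "2-bedroom, 80 m^2", ["floorplan2.png"]),
    "house3": ARContent("House 3", "2-bedroom, 80 m^2", ["floorplan3.png"]),
}

def acquire_ar_information(recognised_ids: List[str]) -> Dict[str, ARContent]:
    """Return the AR content for every recognised target, skipping unknown identifiers."""
    return {tid: AR_REGISTRY[tid] for tid in recognised_ids if tid in AR_REGISTRY}

# Example: three houses were recognised in the field of view during step S11.
ar_infos = acquire_ar_information(["house1", "house2", "house3"])
```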
Step S13, based on the specific relationship, the plurality of AR information is displayed through an AR display device, and the displayed plurality of AR information presents the specific relationship to the user.
Specifically, the AR display device includes any form of device that can realize AR display, such as AR glasses, AR headbands, AR helmets, and the like.
For example, under the positional relationship, if three houses in the field of view are adjacent, the positions of the correspondingly presented AR information are also adjacent; if the three houses in the field of view are spaced apart by equal distances, the correspondingly presented AR information is also spaced apart by equal distances. Under the size relationship, if the size ratio of the three houses is 2:1:1, then the size ratio of the correspondingly presented AR information is also 2:1:1. Under the state relationship, if the chimney of a house is detected by the CV algorithm to be in a smoking state, a special effect (such as smoke) can be added to the correspondingly presented AR information to correspond to the identified smoking state; if the house is in a lights-on state, a special effect (such as a halo or highlighting) can be added to the correspondingly presented AR information to correspond to the recognized lights-on state; if the house is in a lights-off state, the correspondingly presented AR information may remove the lights-on special effect and/or add a special effect (e.g., darkening) to correspond to the lights-off state of the house. It can be understood that if each target has multiple specific relationships with other targets, the multiple specific relationships are correspondingly presented in the AR information. For example, if two houses in the user's field of view are adjacent with a size ratio of 2:1, one house with its lights on and the other with its lights off, the AR information is presented adjacent in positional relationship with a size ratio of 2:1, one displaying a special effect corresponding to the lights-on state and the other displaying a special effect corresponding to the lights-off state.
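The following sketch, under the assumption of a simple 2D overlay layout (the patent does not specify any rendering backend), illustrates how step S13 could copy each target's position, scale, and state into its overlay so that the displayed AR information mirrors the positional, size, and state relationships described above; all dictionary keys and the vertical placement offset are assumptions.

```python
# Illustrative overlay layout for step S13: each overlay copies its target's position,
# scales with its size, and carries a state-dependent special effect. Keys and the
# vertical offset are assumptions for the example.
def layout_overlays(targets):
    """targets: list of dicts with 'id', 'x', 'y', 'width', 'height', and 'lights_on'."""
    overlays = []
    for t in targets:
        overlays.append({
            "target_id": t["id"],
            # Keep the overlay near its target so adjacency and spacing carry over.
            "x": t["x"],
            "y": t["y"] - 0.6 * t["height"],
            # Scale the overlay with the target so size ratios (e.g. 2:1:1) carry over.
            "scale": t["width"] * t["height"],
            # State relationship: choose a special effect matching the detected state.
            "effect": "halo" if t.get("lights_on") else "darken",
        })
    return overlays

houses = [
    {"id": "house1", "x": 100, "y": 200, "width": 80, "height": 60, "lights_on": True},
    {"id": "house2", "x": 165, "y": 200, "width": 40, "height": 60, "lights_on": False},
]
print(layout_overlays(houses))
```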
Preferably, when the size relationship in the specific relationship changes, the size relationship in the AR information also changes correspondingly. For example, if the size ratio of house 1, house 2, and house 3 in the user's field of view is 2:1:1, the size ratio of the AR information of the three houses is 2:1:1; when house 1 is enlarged so that the ratio of the three houses becomes 10:2:1, the size ratio of the AR information of the three houses is correspondingly adjusted to 10:2:1.
Preferably, when the positional relationship in the specific relationship changes, the positional relationship in the AR information correspondingly changes. For example, if the positional relationship of three houses in the user's field of view is adjacent, the positional relationship of the AR information of the three houses is also adjacent; if in some cases one of the three houses starts to move away from the other two, the AR information of that house correspondingly moves away from the AR information of the other two. For another example, if three targets (A, B, C) in the user's field of view are arranged in the adjacent order A-B-C, when the positional relationship changes to the adjacent order A-C-B, the AR information of the three targets also changes from the adjacent order A-B-C to the adjacent order A-C-B.
Preferably, when the state relationship in the specific relationship changes, the state relationship in the AR information correspondingly changes. For example, if the chimney of a house in the user's field of view is in a smoking state, a special effect (such as smoke) may be added to the corresponding AR information to match the identified smoking state; if at some moment the chimney stops smoking, the special effect is removed from the corresponding AR information. If a house in the user's field of view is in a lights-on state, a special effect (such as a halo or highlighting) may be added to its AR information to match the recognized lights-on state; if at some moment the lights are turned off, the lights-on special effect is removed from the AR information and/or a special effect (such as darkening) is added to indicate that the house is no longer lit. If a house in the user's field of view is in a lights-off state, a special effect matching the lights-off state is added to its AR information; if at some moment the lights are turned on, a special effect matching the lights-on state is added to the corresponding AR information.
It can be understood that when the positional relationship, the size relationship, and the state relationship in the specific relationship change simultaneously, the positional relationship, the size relationship, and the state relationship in the AR information also change correspondingly at the same time. This will not be described in detail here.
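As a final, assumption-laden sketch, the change handling above can be reduced to a per-frame check that compares the previous and current snapshots of position, size, and state, and triggers a refresh of the overlays when any of them differs; the snapshot format, the tolerances, and the detect/render helpers mentioned in the usage comment are hypothetical.

```python
# Illustrative change detection: refresh the overlays whenever the positional, size,
# or state relationship of any target changes. Snapshot format and tolerances are
# assumptions; detect() and render() in the usage comment are hypothetical helpers.
def relationships_changed(previous, current, pos_tol=2.0, area_tol=0.05):
    """Compare two {target_id: (x, y, area, state)} snapshots of the detected targets."""
    if previous.keys() != current.keys():
        return True
    for tid, (px, py, parea, pstate) in previous.items():
        cx, cy, carea, cstate = current[tid]
        if pstate != cstate:                                  # state relationship changed
            return True
        if abs(cx - px) > pos_tol or abs(cy - py) > pos_tol:  # positional relationship changed
            return True
        if abs(carea - parea) > area_tol * parea:             # size relationship changed
            return True
    return False

# Hypothetical usage inside a render loop:
# previous = {}
# while True:
#     current = detect()                    # step S11: targets and their feature information
#     if relationships_changed(previous, current):
#         render(current)                   # steps S12/S13: refresh the displayed AR information
#     previous = current
```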
In another aspect of the present invention, one or more embodiments of the present invention also provide an AR display device including a processor and a memory storing a computer program which, when executed by the processor, performs the steps of:
Identifying a plurality of pieces of characteristic information corresponding to a plurality of targets in a user field of view, wherein the targets have a specific relationship, and the plurality of pieces of characteristic information reflect the specific relationship;
Acquiring a plurality of AR information corresponding to the plurality of feature information according to the plurality of feature information;
Based on the specific relationship, the plurality of AR information is displayed through an AR display device, and the displayed plurality of AR information presents the specific relationship to a user.
It will be appreciated that the above AR display device may also implement one or more of the steps described above, which are not described herein.
In summary, according to the AR information display method based on multiple feature information in the embodiments of the present disclosure, by identifying multiple pieces of feature information of multiple targets and the specific relationships (such as size, position, and state relationships) among the multiple targets, presenting these specific relationships in the AR information, and correspondingly adjusting the AR information when these specific relationships change, a differentiated display is formed, which helps the user understand the relationship between the AR information and the targets and improves the user experience.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to" and may be used interchangeably therewith. The terms "or" and "and" as used herein refer to, and may be used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and may be used interchangeably with, the phrase "such as, but not limited to."
In addition, as used herein, an "or" used in an enumeration of items prefaced by "at least one of" indicates a disjunctive recitation, so that, for example, a recitation of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be noted that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations are possible to the techniques described herein without departing from the teachings of the techniques defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. The processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
Claims (7)
1. An AR information display method based on a plurality of feature information, the method comprising:
identifying a plurality of feature information corresponding to a plurality of targets in a user field of view, and determining whether the plurality of targets are associated with one another through system presets or big data analysis; wherein the plurality of targets have a specific relationship, and the plurality of feature information reflects the specific relationship;
acquiring a plurality of AR information corresponding to the plurality of feature information according to the plurality of feature information, wherein the AR information is augmented reality information, i.e., additional information beyond what the user can already see is presented to the user;
displaying, based on the specific relationship, the plurality of AR information through an AR display device, wherein the displayed plurality of AR information presents the specific relationship to a user;
wherein the specific relationship includes a positional relationship, a magnitude relationship, and a state relationship among the plurality of targets;
wherein, based on a change in the specific relationship among the plurality of targets, the specific relationship presented by the plurality of AR information is correspondingly adjusted.
2. The AR information display method based on a plurality of feature information according to claim 1, wherein the identifying a plurality of feature information corresponding to a plurality of targets in a user's field of view is implemented using a CV algorithm.
3. The AR information display method based on a plurality of feature information according to claim 1, wherein the state relationship includes a color state relationship and an attachment state relationship.
4. The AR information display method based on a plurality of feature information according to claim 1, wherein when the size relationship among the plurality of targets changes, the change in the size relationship is also reflected in the plurality of feature information, so that the size relationship presented by the displayed AR information also changes correspondingly.
5. The AR information display method based on a plurality of feature information according to claim 1, wherein when the positional relationship among the plurality of targets changes, the change in the positional relationship is also reflected in the plurality of feature information, so that the positional relationship presented by the displayed AR information also changes correspondingly.
6. The AR information display method based on a plurality of feature information according to claim 1, wherein when the state relationship among the plurality of targets changes, the change in the state relationship is also reflected in the plurality of feature information, so that the state relationship presented by the displayed AR information also changes correspondingly.
7. An AR display device comprising a processor and a memory, said memory storing a computer program which, when executed by said processor, performs the AR information display method based on a plurality of feature information as claimed in any one of claims 1 to 6.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011203237.1A CN112287949B (en) | 2020-11-02 | 2020-11-02 | AR information display method and AR display device based on multiple feature information |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112287949A CN112287949A (en) | 2021-01-29 |
| CN112287949B true CN112287949B (en) | 2024-06-07 |
Family
ID=74353444
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011203237.1A Active CN112287949B (en) | 2020-11-02 | 2020-11-02 | AR information display method and AR display device based on multiple feature information |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112287949B (en) |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101329882B1 (en) * | 2010-08-12 | 2013-11-15 | 주식회사 팬택 | Apparatus and Method for Displaying Augmented Reality Window |
| US10600249B2 (en) * | 2015-10-16 | 2020-03-24 | Youar Inc. | Augmented reality platform |
| EP3716014B1 (en) * | 2019-03-26 | 2023-09-13 | Siemens Healthcare GmbH | Transfer of a condition between vr environments |
2020-11-02: Application CN202011203237.1A filed in China (CN); patent CN112287949B granted, status active.
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102377873A (en) * | 2010-08-16 | 2012-03-14 | Lg电子株式会社 | Method and displaying information and mobile terminal using the same |
| CN103207728A (en) * | 2012-01-12 | 2013-07-17 | 三星电子株式会社 | Method Of Providing Augmented Reality And Terminal Supporting The Same |
| CN106254848A (en) * | 2016-07-29 | 2016-12-21 | 宇龙计算机通信科技(深圳)有限公司 | A kind of learning method based on augmented reality and terminal |
| CN107219926A (en) * | 2017-06-01 | 2017-09-29 | 福州市极化律网络科技有限公司 | Virtual reality method of interaction experience and device |
| CN107229393A (en) * | 2017-06-02 | 2017-10-03 | 三星电子(中国)研发中心 | Real-time edition method, device, system and the client of virtual reality scenario |
| CN108648276A (en) * | 2018-05-17 | 2018-10-12 | 上海宝冶集团有限公司 | A kind of construction and decoration design method, device, equipment and mixed reality equipment |
| CN111258423A (en) * | 2020-01-15 | 2020-06-09 | 惠州Tcl移动通信有限公司 | Component display method and device, storage medium and augmented reality display equipment |
| CN111580679A (en) * | 2020-06-07 | 2020-08-25 | 浙江商汤科技开发有限公司 | Space capsule display method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
| Title |
|---|
| Augmented reality technologies, systems and applications; Julie Carmigniani et al.; DOI 10.1007/s11042-010-0660-6; full text * |
| Target tracker adapted to mobile intelligent devices; Xiong Jingying et al.; Optics and Precision Engineering; full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112287949A (en) | 2021-01-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107742311B (en) | Visual positioning method and device | |
| CN110232311B (en) | Method and device for segmenting hand image and computer equipment | |
| US10740963B2 (en) | 3D virtual environment generating method and device | |
| Xu et al. | Human re-identification by matching compositional template with cluster sampling | |
| US8814048B2 (en) | Content identification and distribution | |
| CN107016387B (en) | Method and device for identifying label | |
| CN111680632A (en) | Smoke and fire detection method and system based on deep learning convolutional neural network | |
| Tsai et al. | Learning and recognition of on-premise signs from weakly labeled street view images | |
| CN110019912B (en) | Shape-based graph search | |
| CN101551732A (en) | Method for strengthening reality having interactive function and a system thereof | |
| CN108876858A (en) | Method and apparatus for handling image | |
| CN111667005B (en) | Human interactive system adopting RGBD visual sensing | |
| CN109446929A (en) | A stick figure recognition system based on augmented reality technology | |
| CN112036362A (en) | Image processing method, image processing device, computer equipment and readable storage medium | |
| Lee et al. | Automatic recognition of flower species in the natural environment | |
| CN112598714A (en) | Static target tracking method based on video frame homography transformation | |
| CN112230765A (en) | AR display method, AR display device, and computer-readable storage medium | |
| US8724890B2 (en) | Vision-based object detection by part-based feature synthesis | |
| CN112287949B (en) | AR information display method and AR display device based on multiple feature information | |
| CN115830607B (en) | Text recognition method and device based on artificial intelligence, computer equipment and medium | |
| CN110119202B (en) | Method and system for realizing scene interaction | |
| KR102316042B1 (en) | Augmented reality content delivery methods and systems that multiple markers in 3D models are recognized | |
| CN112270275A (en) | Commodity searching method and device based on picture recognition and computer equipment | |
| CN114445823B (en) | A method, device, computer equipment and storage medium for processing passport images | |
| Prabaharan et al. | Text extraction from natural scene images and conversion to audio in smart phone applications |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant |