CN106980690A - Data processing method and electronic device - Google Patents
Data processing method and electronic device
- Publication number
- CN106980690A (application CN201710209558.4A / CN201710209558A)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- image
- information
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
This application discloses a data processing method and an electronic device. The method includes: capturing an image of a target; recognizing the target based on the captured image; obtaining first data of the recognized target, the first data including dynamic data of the target and/or object data of at least one object of the target; generating, based on the dynamic data and/or the object data, a first image and/or first information of the target, where the first image or the first information cannot be directly perceived by the naked eye; and displaying the first image and/or the first information. Thus, without entering the target, a user can learn the state of the target's interior and other information through the displayed first image and/or first information, which markedly improves the user experience.
Description
Technical field
The present application relates to the technical field of data processing, and in particular to a data processing method and an electronic device.
Background art
At present, when a user views information about a target through a device such as an augmented reality (AR) device or an ordinary mobile phone, that information is typically preset by the owner of the target; the information the user sees is fixed and relatively simple.
For example, when a user goes to a restaurant or a shopping mall, the user can only see some fixed information preset by the owner, so the user experience is poor.
Summary of the invention
In view of this, the purpose of the present application is to provide a data processing method and an electronic device, to solve the technical problem in the prior art that a user must enter a restaurant and leaf through a menu to learn information such as the restaurant's state and menu, resulting in a poor user experience.
The present application provides a data processing method, including:
capturing an image of a target;
recognizing the target based on the captured image;
obtaining first data of the recognized target, the first data including: dynamic data of the target and/or object data of at least one object of the target;
generating, based on the dynamic data and/or the object data, a first image and/or first information of the target, where the first image or the first information cannot be directly perceived by the naked eye;
displaying the first image and/or the first information.
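The claimed steps form a simple capture–recognize–obtain–generate–display pipeline. A minimal sketch of that pipeline follows; every function name and the sample data store are hypothetical stand-ins invented for illustration, not part of the disclosure:

```python
# Minimal sketch of the claimed pipeline; all helpers are hypothetical stand-ins.

def recognize_target(image: bytes) -> str:
    """Stand-in recognizer: the target ID is assumed to be encoded in the image."""
    return image.decode("utf-8")

def obtain_first_data(target_id: str) -> dict:
    """Stand-in lookup of the target's first data (dynamic data + object data)."""
    store = {
        "restaurant-1": {
            "dynamic": {"guests": 12, "free_tables": 3},   # dynamic data of the target
            "objects": [{"dish": "dish-1", "price": 28}],  # object data of its objects
        }
    }
    return store[target_id]

def generate_first_information(first_data: dict) -> str:
    """Generate textual 'first information' from the obtained data."""
    d = first_data["dynamic"]
    return f"{d['guests']} guests, {d['free_tables']} tables free"

def process(image: bytes) -> str:
    target = recognize_target(image)               # recognize the target from its image
    first_data = obtain_first_data(target)         # obtain the first data
    return generate_first_information(first_data)  # generate the first information

print(process(b"restaurant-1"))
```

In a real system each stand-in would be replaced by an image-recognition model, a networked data store, and a rendering step respectively; only the step ordering comes from the claims.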
In the above method, preferably, the first data is composed of data uploaded by one or more data sources;
the generating, based on the dynamic data and/or the object data, the first image and/or first information of the target includes:
performing integration processing on the data uploaded by the data sources;
generating, based on the integrated data, the first image of the target, the first image being a three-dimensional stereoscopic image;
and/or
classifying and merging the integrated data based on preset data types, to generate the first information of the target.
In the above method, preferably, the dynamic data is data of the target within a time range, and the object data is data of the object within the time range, the time range being the interval from a moment a certain duration before the current moment up to the current moment;
the first image and/or the first information correspond to the dynamic data and/or the object data, and the first image and/or the first information characterize the state of the target and/or the state of the object within the time range.
In the above method, preferably, the dynamic data is data of the target at the current moment, and the object data is data of the object at the current moment.
In the above method, preferably, the first image is an image of the interior of the target.
The present application further provides an electronic device, including:
an image capture module, configured to capture an image of a target;
a display module;
a processor, configured to: recognize the target based on the captured image; obtain first data of the recognized target, the first data including dynamic data of the target and/or object data of at least one object of the target; generate, based on the dynamic data and/or the object data, a first image and/or first information of the target, where the first image or the first information is data that cannot be directly perceived by the naked eye; and transfer the first image and/or the first information to the display module, which displays them.
In the above electronic device, preferably, the display module is an AR display module.
As can be seen from the above solutions, with the data processing method and electronic device provided by the present application, after an image of a target is captured, the target is recognized based on the captured image, the dynamic data of the recognized target and/or the object data of at least one object of the target are obtained, the first image and/or first information of the target are generated based on these data, and they are then displayed to the user. Thus, in the present application the first image and/or first information of the target are generated from the recognized target's dynamic data and/or the data of the target's objects and displayed to the user. Unlike the preset fixed information of the prior art, the present application can provide the user with richer, real-time dynamic information, and thereby markedly improves the user experience.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present application; persons of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a data processing method provided by a first embodiment of the present application;
Fig. 2 is a partial flowchart of the first embodiment;
Fig. 3a and Fig. 3b are application example diagrams of the embodiments of the present application;
Fig. 4 is another partial flowchart of the first embodiment;
Fig. 5 and Fig. 6 are further application example diagrams of the embodiments of the present application;
Fig. 7 is a schematic structural diagram of an electronic device provided by a second embodiment of the present application;
Fig. 8 and Fig. 9 are further application example diagrams of the embodiments of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Referring to Fig. 1, which is a flowchart of a data processing method provided by a first embodiment of the present application, the method is suitable for presenting information about a target directly to a user.
The method in this embodiment can be applied in an electronic device, such as a mobile phone or tablet terminal, and may include the following steps:
Step 101: capture an image of the target.
Here, the image of the target may be a QR-code image, a sign/name image, a trademark image, or the like that uniquely identifies the target.
In this embodiment, an image capture module on the electronic device, such as a camera, may be used to capture the image of the target. For example, the user opens the camera on a mobile phone to capture the QR-code image or sign image of the target.
Step 102: recognize the target based on the captured image.
In this embodiment, the captured image may be identified using an image recognition algorithm or the like in order to recognize the target, for example when recognizing a sign image.
Alternatively, in this embodiment, feature extraction may first be performed on the captured image and the extracted features then identified in order to recognize the target, for example when recognizing a QR-code image.
Step 103: obtain first data of the recognized target.
Here, the first data may include: dynamic data of the target and/or object data of at least one object of the target.
The dynamic data of the target refers to data characterizing the current real-time state of the target.
For example, when the target is a shopping mall, the dynamic data may be data such as the customer flow in the mall and which shops the customers are in;
as another example, when the target is a restaurant, the dynamic data may be data such as the customer flow in the restaurant and the customers' seating positions.
An object of the target refers to an object or person located inside, or belonging to, the target.
Object data refers to data characterizing the features or characteristics of an object.
For example, when the object is a shop in a mall, the object data may be the types of goods the shop sells and/or merchandise information;
as another example, when the object is a dish in a restaurant, the object data may be the dish's ingredients, seasonings, preparation process, origin, related stories (the origin of its name), customer comments, and so on.
Step 104: generate, based on the dynamic data and/or the object data, a first image and/or first information of the target.
Here, the first image or the first information cannot be directly perceived by the naked eye.
The first image may be a virtual image, and the first information may be text information or the like.
Step 105: display the first image and/or the first information.
For example, the first image and/or the first information are shown on a display screen for the user's reference.
As can be seen from the above solution, with the data processing method provided by the first embodiment of the present application, after an image of a target is captured, the target is recognized based on the captured image, the dynamic data of the recognized target and/or the object data of at least one object of the target are obtained, the first image and/or first information of the target are generated based on these data, and they are then displayed to the user. Thus, in this embodiment the first image and/or first information of the target are generated from the recognized target's dynamic data and/or the data of the target's objects and displayed to the user. Unlike the preset fixed information of the prior art, this embodiment can provide the user with richer, real-time dynamic information, and thereby markedly improves the user experience.
In one implementation, the first data is composed of data uploaded by one or more data sources.
A data source here may be a user or the target itself, such as a customer or the restaurant. These data sources upload over the network the data they each collect, such as the dynamic data of the target and/or the object data of objects of the target. Thus, in this embodiment the dynamic data and/or object data uploaded by multiple data sources are obtained to generate the first image and/or the first information. Unlike the prior art, in which a single data source such as the target's owner uploads information, this embodiment can provide the user with more comprehensive shared information, further improving the user experience.
Accordingly, step 104 in Fig. 1 can be implemented by the following steps, as shown in Fig. 2:
Step 201: perform integration processing on the data uploaded by the data sources.
For example, diners in a restaurant can upload their own dining positions, such as table 1 or table 2. In this embodiment these dining positions are integrated to obtain the dining state of each table in the restaurant, for example: table 1 has two diners, table 2 has four diners, and table 3 has zero diners; along with ingredient data for dish 1 and dish 2, preparation-process data for dish 1 and dish 2, and so on.
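The table-occupancy integration described above can be sketched as a simple aggregation of uploads. The table names and counts below mirror this paragraph's example, not a prescribed format:

```python
from collections import Counter

# Integration of uploaded dining positions into a per-table occupancy view.
# The uploads mirror the example above: two diners at table 1, four at
# table 2, none at table 3.

uploads = ["table-1", "table-1", "table-2", "table-2", "table-2", "table-2"]

occupancy = Counter(uploads)        # diners per table, aggregated from uploads
occupancy["table-3"] += 0           # a known table with no uploads stays at zero

print(dict(occupancy))
```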
As another example, users can upload images they have taken themselves, such as images of the restaurant captured by multiple users from different angles. In this embodiment the images are stitched together according to each image's angle, based on the characteristics of three-dimensional or two-dimensional images, to obtain a three-dimensional or two-dimensional image of the restaurant. For example, as in Fig. 3a, the overlapping portion of two images containing the same table is sheared off, and the remaining image content is stitched together pixel by pixel to obtain the merged image.
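A toy version of this overlap-shear-and-splice merge, treating each image as a list of pixel columns, might look as follows. Real stitching would rely on feature matching across views; this sketch only finds an exact overlapping run:

```python
# Toy overlap-shear-and-splice: each 'image' is a list of pixel columns, and the
# largest exactly-matching overlap is kept only once, as in the Fig. 3a merge.

img_a = [[1], [2], [3], [4]]   # columns of the first photo
img_b = [[3], [4], [5], [6]]   # columns of the second photo (overlaps on 3-4)

def splice(a, b):
    """Concatenate b onto a, shearing off their largest overlapping run."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return a + b               # no overlap found: plain concatenation

merged = splice(img_a, img_b)
print(merged)
```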
Step 202: generate, based on the integrated data, the first image of the target.
Here, the first image is a three-dimensional stereoscopic image.
For example, in this embodiment a first image characterizing the target's current state is generated based on the integrated data. This first image may be a three-dimensional stereoscopic image, which is more intuitive when shown to the user, such as the two-dimensional rendering of the first image in Fig. 3b.
Alternatively, as shown in Fig. 4, step 201 may be followed by the following step:
Step 203: classify and merge the integrated data based on preset data types, to generate the first information of the target.
For example, in this embodiment the integrated data are classified and merged by data type, such as ingredients or preparation process: the ingredient data of the same dish are merged together, the preparation-process data of the same dish are merged together, and the merged information forms the first information of the target.
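The classify-and-merge step can be sketched as grouping integrated records by dish and by preset data type. The record layout and type names here are assumptions made for the example:

```python
from collections import defaultdict

# Classify-and-merge sketch: integrated records are grouped by dish, and the
# entries of each preset data type are merged into one 'first information'
# structure. Record layout and type names are assumptions for the example.

records = [
    {"dish": "dish-1", "type": "ingredients", "value": "tofu, chili"},
    {"dish": "dish-1", "type": "process",     "value": "stir-fried"},
    {"dish": "dish-2", "type": "ingredients", "value": "noodles, beef"},
]

first_information = defaultdict(dict)
for record in records:
    # Merge all data of the same dish, keyed by its preset data type.
    first_information[record["dish"]][record["type"]] = record["value"]

print(dict(first_information))
```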
In one implementation, the dynamic data may be data of the target within a time range, and the object data data of the object within that time range, where the time range is the interval from a moment a certain duration before the current moment up to the current moment. As shown in Fig. 5, moment A is the current moment and moment B is a certain duration, such as 2 hours or 1 day, before moment A; in this embodiment, the dynamic data of the target and/or the object data of at least one object of the target between moment B and moment A are obtained.
It should be noted that the certain duration here can be configured according to the user's needs.
Accordingly, the first image and/or the first information correspond to the dynamic data and/or the object data. That is, in this embodiment the first image and/or the first information are generated based on the dynamic data of the target and/or the object data of at least one object of the target obtained within the time range; thus, the first image and/or the first information characterize the state of the target and/or the state of at least one object of the target within that time range.
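Restricting the obtained data to the time range from moment B to the current moment A (cf. Fig. 5) can be sketched as a simple window filter; the timestamps, the two-hour duration, and the record shape are illustrative:

```python
from datetime import datetime, timedelta

# Window filter over uploaded records: keep only data between moment B
# (a configurable duration before now) and moment A (now), cf. Fig. 5.
# Timestamps, duration, and record shape are illustrative.

records = [
    (datetime(2017, 3, 31, 12, 0), 5),    # (upload time, guest count)
    (datetime(2017, 3, 31, 13, 30), 8),
    (datetime(2017, 3, 31, 14, 45), 12),
]

now = datetime(2017, 3, 31, 15, 0)        # moment A: the current moment
duration = timedelta(hours=2)             # user-configurable 'certain duration'
start = now - duration                    # moment B

recent = [(t, n) for t, n in records if start <= t <= now]
print(recent)
```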
In one implementation, the dynamic data is data of the target at the current moment, and the object data is data of the object at the current moment. As shown in Fig. 6, in this embodiment the dynamic data of the target and/or the object data of at least one object of the target at moment A are obtained. Accordingly, the first image and/or the first information correspond to the dynamic data and/or the object data; that is, in this embodiment the first image and/or the first information are generated based on the dynamic data of the target and/or the object data of at least one object of the target obtained at the current moment, and thus characterize the current state of the target and/or of at least one object of the target.
In one implementation, the first image is an image of the interior of the target, which can be understood as something the user cannot directly perceive visually while capturing the image of the target. In this embodiment, after the image of the target is captured, the image of the target's interior can be generated by recognizing the target in the image, intuitively presenting the state of the target's interior to the user; the user can thus learn the state of the target's interior without entering it.
In one implementation, when displaying the first image and/or the first information, this embodiment may use an ordinary display module, such as the touch display on a mobile phone or tablet, or an augmented reality (AR) display module, such as the display module on AR glasses, which presents the user with a virtual image closer to reality, to show the image or information of the target's interior or related content.
Referring to Fig. 7, which is a schematic structural diagram of an electronic device provided by a second embodiment of the present application, the device is suitable for presenting information about a target directly to a user.
In this embodiment, the electronic device may be a terminal device such as a mobile phone or tablet, and may specifically include the following structure:
an image capture module 701, configured to capture an image of the target;
a display module 702, which may be a touch-screen display module, an AR display module, or the like;
a processor 703, configured to: recognize the target based on the captured image; obtain first data of the recognized target, the first data including dynamic data of the target and/or object data of at least one object of the target; generate, based on the dynamic data and/or the object data, a first image and/or first information of the target, where the first image or the first information is data that cannot be directly perceived by the naked eye; and transfer the first image and/or the first information to the display module 702, which displays them.
As can be seen from the above solution, with the electronic device provided by the second embodiment of the present application, after an image of a target is captured, the target is recognized based on the captured image, the dynamic data of the recognized target and/or the object data of at least one object of the target are obtained, the first image and/or first information of the target are generated based on these data, and they are then displayed to the user. Thus, in this embodiment the first image and/or first information of the target are generated from the recognized target's dynamic data and/or the data of the target's objects and displayed to the user. Unlike the preset fixed information of the prior art, this embodiment can provide the user with richer, real-time dynamic information, and thereby markedly improves the user experience.
The embodiments of the present application are illustrated below, taking a mobile phone as the electronic device:
The scheme of this embodiment can be realized by an application (APP) installed on the phone, or directly by the phone's camera APP, using AR augmented reality technology. The user can point the phone at the sign, building, or QR code at a restaurant's doorway, and the phone renders a virtual view of the real environment, including the restaurant's menu and so on, with the latest specialties and user comments; it can even render the star ratings and comments from a review application, including advertising information. Thus, without entering, the user can tell whether this is a place they want. If they wish to dine there, they can check whether free seats are available and view the interior environment, select a seat on the augmented-reality virtual interface, choose suitable dishes, and place an order. The user can even point the phone at a particular dish inside the restaurant and see, through the phone, the dish's origin, seasonings, and preparation, including its story and comments from those who have tasted it, providing the user with more enjoyment while dining.
The specific steps, as shown in Fig. 8, are as follows:
Open the phone's camera preview interface, or application software implementing AR, and point the phone at the restaurant;
The phone interface uses augmented reality to render the restaurant's interior real environment, including dishes, the menu, and the restaurant's history, and may also include comments, advertising information, and so on;
The user checks whether the restaurant is suitable for dining, clicks to view the free seats, and then selects a dining seat;
If the user does not want to wait, the user can place an order from the virtual interface, selecting the dishes for this meal;
If the user has entered the restaurant, pointing the phone at a dish or food item brings up the dish's origin, seasonings, preparation, and related stories, as well as diners' evaluations of or word of mouth about the dish; in addition, the user can input their own views and suggestions about the dish.
Alternatively, an APP based on the embodiments of the present application is implemented on the phone, or the camera APP is used directly, with AR augmented reality technology: holding the phone toward a course on the dining table immediately renders a virtual view of the dish's ingredients, seasonings, and preparation process, with scene animations telling the dish's story and origin, which celebrities have dined here, comments from those who have tasted the dish, and which cook it comes from. The current diner can also give the cook a thumbs-up in the interface, add comments, contribute what they themselves know about the dish's origin, and share the comments to their friend circle or various social platforms.
The specific steps, as shown in Fig. 9, are as follows:
Open the camera preview interface, or application software implementing AR, and point the phone at a dish on the dining table;
The phone interface uses augmented reality to render the dish's ingredients, seasonings, and preparation process;
The user can select the dish's origin and story, and the interface renders a presentation of the story of the dish, or the origin of its name;
The user can also view everyone's comments and add their own views and comments on the dish;
The user can add stories, origins, or preparation methods of the dish that they know;
The user can give the cook a thumbs-up;
The user can also share this information to various social platforms.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may refer to one another.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The data processing method and electronic device provided by the present invention have been described in detail above to enable those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (7)
1. A data processing method, comprising:
capturing an image of a target;
recognizing the target based on the captured image;
obtaining first data of the recognized target, the first data comprising: dynamic data of the target and/or object data of at least one object of the target;
generating, based on the dynamic data and/or the object data, a first image and/or first information of the target, wherein the first image or the first information cannot be directly perceived by the naked eye;
displaying the first image and/or the first information.
2. The method according to claim 1, wherein the first data is composed of data uploaded by one or more data sources;
the generating, based on the dynamic data and/or the object data, the first image and/or first information of the target comprises:
performing integration processing on the data uploaded by the data sources;
generating, based on the integrated data, the first image of the target, the first image being a three-dimensional stereoscopic image;
and/or
classifying and merging the integrated data based on preset data types, to generate the first information of the target.
3. The method according to claim 1, wherein the dynamic data is data of the target within a time range, the object data is data of the object within the time range, and the time range is the interval from a moment a certain duration before the current moment up to the current moment;
the first image and/or the first information correspond to the dynamic data and/or the object data, and the first image and/or the first information characterize the state of the target and/or the state of the object within the time range.
4. The method according to claim 1, wherein the dynamic data is data of the target at the current moment, and the object data is data of the object at the current moment.
5. The method according to claim 1, wherein the first image is an image of the interior of the target.
6. An electronic device, comprising:
an image capture module, configured to capture an image of a target;
a display module;
a processor, configured to: recognize the target based on the captured image; obtain first data of the recognized target, the first data comprising dynamic data of the target and/or object data of at least one object of the target; generate, based on the dynamic data and/or the object data, a first image and/or first information of the target, wherein the first image or the first information is data that cannot be directly perceived by the naked eye; and transfer the first image and/or the first information to the display module, the display module displaying the first image and/or the first information.
7. The electronic device according to claim 6, wherein the display module is an AR display module.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710209558.4A CN106980690A (en) | 2017-03-31 | 2017-03-31 | Data processing method and electronic device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710209558.4A CN106980690A (en) | 2017-03-31 | 2017-03-31 | Data processing method and electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106980690A true CN106980690A (en) | 2017-07-25 |
Family
ID=59339308
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710209558.4A Pending CN106980690A (en) | 2017-03-31 | 2017-03-31 | A kind of data processing method and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106980690A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108319996A (en) * | 2018-01-19 | 2018-07-24 | Koubei (Shanghai) Information Technology Co., Ltd. | Dish identification processing system and method, and smart dining-table system |
| CN109191252A (en) * | 2018-08-31 | 2019-01-11 | Zhejiang Koubei Network Technology Co., Ltd. | Dish recommendation method and device based on augmented reality |
| CN111159460A (en) * | 2019-12-31 | 2020-05-15 | Vivo Mobile Communication Co., Ltd. | Information processing method and electronic device |
| WO2020259694A1 (en) * | 2019-06-27 | 2020-12-30 | Beike Zhaofang (Beijing) Technology Co., Ltd. | Method and apparatus for displaying item information in current space, and medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103002410A (en) * | 2012-11-21 | 2013-03-27 | 北京百度网讯科技有限公司 | Augmented reality method, system and mobile terminal for mobile terminal |
| CN103197980A (en) * | 2012-01-10 | 2013-07-10 | 华为终端有限公司 | Method, device and system for presenting augmented reality content |
| WO2013116699A1 (en) * | 2012-02-03 | 2013-08-08 | Apx Labs, Llc | Accessing applications in a mobile augmented reality environment |
| CN105468142A (en) * | 2015-11-16 | 2016-04-06 | 上海璟世数字科技有限公司 | Interaction method and system based on augmented reality technique, and terminal |
| US9432421B1 (en) * | 2014-03-28 | 2016-08-30 | A9.Com, Inc. | Sharing links in an augmented reality environment |
Events
- 2017-03-31: Application CN201710209558.4A filed in China (CN), published as CN106980690A; legal status: Pending
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108319996A (en) * | 2018-01-19 | 2018-07-24 | 口碑(上海)信息技术有限公司 | Vegetable identification processing system and method, intelligent dining-table system |
| CN109191252A (en) * | 2018-08-31 | 2019-01-11 | 浙江口碑网络技术有限公司 | Vegetable recommended method and device based on augmented reality |
| WO2020259694A1 (en) * | 2019-06-27 | 2020-12-30 | 贝壳找房(北京)科技有限公司 | Method and apparatus for displaying item information in current space, and medium |
| US11120618B2 (en) | 2019-06-27 | 2021-09-14 | Ke.Com (Beijing) Technology Co., Ltd. | Display of item information in current space |
| CN111159460A (en) * | 2019-12-31 | 2020-05-15 | 维沃移动通信有限公司 | An information processing method and electronic device |
Similar Documents
| Publication | Title |
|---|---|
| US9418293B2 (en) | Information processing apparatus, content providing method, and computer program |
| EP3783500B1 (en) | Schemes for retrieving and associating content items with real-world objects using augmented reality and object recognition |
| US20200257121A1 (en) | Information processing method, information processing terminal, and computer-readable non-transitory storage medium storing program |
| CN106980690A (en) | A kind of data processing method and electronic equipment |
| JP6530794B2 (en) | Spatial object search sorting method and cloud system |
| CN104850370B (en) | Method and device for displaying order information in a background display region |
| CN106127552B (en) | A virtual scene display method, device and system |
| CN105468142A (en) | Interaction method and system based on augmented reality technique, and terminal |
| JP2017531950A (en) | Method and apparatus for constructing a shooting template database and providing shooting recommendation information |
| US9497249B2 (en) | Information processing apparatus, information processing method, program, and information processing system |
| JP2014085796A (en) | Information processing device and program |
| CN108369633A (en) | Visual representation of a photo album |
| CN104598037B (en) | Information processing method and device |
| US10474919B2 (en) | Method for determining and displaying products on an electronic display device |
| CN111815782A (en) | Display method, device and equipment of AR scene content and computer storage medium |
| CN119537714A (en) | Geolocation-based background generation for object images |
| CN105138763A (en) | Method for real scene and reality information superposition in augmented reality |
| CN107316011B (en) | Data processing method, device and storage medium |
| US20110043520A1 (en) | Garment fitting system and operating method thereof |
| CN107330018A (en) | Photo display method and display system |
| CN108551420A (en) | Augmented reality equipment and its information processing method |
| CN105359188B (en) | Attributes estimation system |
| CN108092950B (en) | Position-based AR or MR social method |
| Drüeke et al. | Positioning the Veiled Woman |
| CN105378626A (en) | Situation-aware presentation of information |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |