CN109032483A - Content search method and device - Google Patents
- Publication number
- CN109032483A (Application CN201810734943.5A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- user
- content
- screen
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a content search method and device, relating to the field of communications, to solve the problem of low search efficiency. The method comprises: when a content search needs to be performed, obtaining first gesture information input by a user through the screen of a mobile terminal; looking up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information; and displaying the target content to the user. The present invention can be applied to terminal devices such as mobile phones.
Description
Technical field
The present invention relates to the field of communication technology, and in particular to a content search method and device.
Background art
Before watching video content from a video content library through an app pre-installed on a mobile phone, a user must first search for the video content he or she wants to watch. The prior art mainly provides the following search methods:
1. searching among the content recommended on the homepage;
2. searching by type from a channel page;
3. searching from the user's favorites;
4. searching from the playback history;
5. entering a keyword in a search box.
However, when there is a large amount of content, these search methods are inefficient, and the user's search experience is therefore poor.
Summary of the invention
The embodiments of the present invention provide a content search method and device to solve the problem of low search efficiency.
In order to solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a content search method, comprising: when a content search needs to be performed, obtaining first gesture information input by a user through the screen of a mobile terminal; looking up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information; and displaying the target content to the user.
Further, obtaining the first gesture information input by the user through the screen of the mobile terminal comprises: obtaining a start-full-screen-gesture-entry instruction input by the user; setting the screen to a static state according to the start-full-screen-gesture-entry instruction; and obtaining, while the screen is static, the first gesture information input by the user through the screen.
Further, obtaining the first gesture information input by the user through the screen of the mobile terminal comprises: obtaining a start-gesture-entry instruction input by the user; popping up a first gesture entry window on the screen according to the gesture entry instruction; and obtaining the first gesture information input by the user through the first gesture entry window.
Further, the content search method further comprises: receiving a set-gesture-search instruction input by the user; popping up a second gesture entry window on the screen according to the set-gesture-search instruction; obtaining second gesture information input by the user through the second gesture entry window; and establishing a correspondence between the second gesture information and the content in the webpage the user is currently browsing, and storing the correspondence into the mapping table of gestures and content.
Further, the content search method further comprises: displaying the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
In a second aspect, an embodiment of the present invention further provides a content search device, comprising:
a first obtaining module, configured to obtain, when a content search needs to be performed, first gesture information input by a user through the screen of a mobile terminal;
a lookup module, configured to look up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information obtained by the first obtaining module; and
a first display module, configured to display the target content to the user.
Further, the first obtaining module comprises:
a first obtaining submodule, configured to obtain a start-full-screen-gesture-entry instruction input by the user;
a setting submodule, configured to set the screen to a static state according to the start-full-screen-gesture-entry instruction obtained by the first obtaining submodule; and
a second obtaining submodule, configured to obtain, while the screen is static, the first gesture information input by the user through the screen.
Further, the first obtaining module comprises:
a third obtaining submodule, configured to obtain a start-gesture-entry instruction input by the user;
a display submodule, configured to pop up a first gesture entry window on the screen according to the gesture entry instruction obtained by the third obtaining submodule; and
a fourth obtaining submodule, configured to obtain the first gesture information input by the user through the first gesture entry window popped up by the display submodule.
Further, the content search device further comprises:
a receiving module, configured to receive a set-gesture-search instruction input by the user;
a second display module, configured to pop up a second gesture entry window on the screen according to the set-gesture-search instruction received by the receiving module;
a second obtaining module, configured to obtain second gesture information input by the user through the second gesture entry window popped up by the second display module; and
a storage module, configured to establish a correspondence between the second gesture information obtained by the second obtaining module and the content in the webpage the user is currently browsing, and to store the correspondence into the mapping table of gestures and content.
Further, the content search device further comprises:
a third display module, configured to display the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
In the technical solution provided by the embodiments of the present invention, first gesture information input by the user is received when a content search needs to be performed, and the target content is obtained and displayed to the user according to the result of comparing the first gesture information with the mapping table of gestures and content. Because the target content can be found quickly through the first gesture information, the technical solution provided by the present invention can solve the problem of low search efficiency in the prior art. Furthermore, since only a gesture needs to be input to find the target content, the user does not have to look at the screen to complete the search operation; the technical solution therefore demands little of the user's attention and is better suited to special scenarios such as driving.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a first flowchart of the content search method provided by an embodiment of the present invention;
Fig. 2 is a first flowchart of step 101 of the content search method shown in Fig. 1;
Fig. 3 is a first schematic diagram of the first gesture entry window in step 202 of the content search method shown in Fig. 2;
Fig. 4 is a second schematic diagram of the first gesture entry window in step 202 of the content search method shown in Fig. 2;
Fig. 5 is a second flowchart of step 101 of the content search method shown in Fig. 1;
Fig. 6 is a flowchart of a content search method provided by another embodiment of the present invention;
Fig. 7 is a flowchart of a content search method provided by a further embodiment of the present invention;
Fig. 8 is a first structural schematic diagram of the content search device provided by an embodiment of the present invention;
Fig. 9 is a first structural schematic diagram of the first obtaining module 801 in the content search device shown in Fig. 8;
Fig. 10 is a second structural schematic diagram of the first obtaining module 801 in the content search device shown in Fig. 8;
Fig. 11 is a second structural schematic diagram of the content search device provided by an embodiment of the present invention;
Fig. 12 is a third structural schematic diagram of the content search device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the content search method provided by an embodiment of the present invention comprises:
Step 101: when a content search needs to be performed, obtaining first gesture information input by a user through the screen of a mobile terminal.
Specifically, in one case, as shown in Fig. 2, step 101 may comprise:
Step 201: obtaining a start-gesture-entry instruction input by the user.
In this embodiment, the user may input the start-gesture-entry instruction in several ways.
For example, a virtual button for starting gesture entry may be preset in the content display interface; when the user clicks this virtual button, step 201 obtains the start-gesture-entry instruction input by the user.
It should be noted that this embodiment does not specifically limit the content display interface. In actual use, the content display interface may be any interface, for example the homepage of video playback software or a content playback page. Likewise, this embodiment does not specifically limit the virtual button: in actual use, the virtual button may be placed at any desired position, for example next to the search button, or it may be a hideable control at the side of the page.
As another example, a physical key may be set as the start button; when the user presses this physical key, step 201 obtains the start-gesture-entry instruction input by the user. The physical key may be a fingerprint key, a volume key, or the like.
It should be noted that, when a physical key is used as the start button, it needs to be configured in advance in order to avoid key conflicts; for example, the physical key may act as the start button only in preset scenarios such as the homepage of the video playback software or a content playback page.
As yet another example, step 201 may receive a start-gesture-entry instruction input by the user by voice.
Of course, the above three forms are only specific examples; in actual use, step 201 may also obtain the start-gesture-entry instruction input by the user in other ways, which are not enumerated here one by one. A sketch of how such triggers might be dispatched is given below.
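As a concrete illustration of how these alternative triggers might all feed the same start-gesture-entry instruction, the following Kotlin sketch models the three example triggers (virtual button, preconfigured physical key, voice command) as one event type. It is only a minimal sketch under assumed names; `StartTrigger`, `GestureEntryController`, and the scene check are hypothetical and are not prescribed by the patent.

```kotlin
// Minimal sketch: three possible triggers all resolve to the same
// "start gesture entry" instruction. All names are illustrative only.
sealed class StartTrigger {
    object VirtualButtonClick : StartTrigger()
    data class PhysicalKeyPress(val key: String) : StartTrigger()   // e.g. a volume or fingerprint key
    data class VoiceCommand(val utterance: String) : StartTrigger()
}

class GestureEntryController(
    // Physical keys only act as the start button in preset scenes, to avoid key conflicts.
    private val allowedScenes: Set<String> = setOf("video_home", "playback_page")
) {
    fun onTrigger(trigger: StartTrigger, currentScene: String): Boolean = when (trigger) {
        is StartTrigger.VirtualButtonClick -> startGestureEntry()
        is StartTrigger.PhysicalKeyPress ->
            if (currentScene in allowedScenes) startGestureEntry() else false
        is StartTrigger.VoiceCommand ->
            if (trigger.utterance.contains("gesture search")) startGestureEntry() else false
    }

    private fun startGestureEntry(): Boolean {
        println("Start-gesture-entry instruction received; popping up the gesture entry window")
        return true
    }
}

fun main() {
    val controller = GestureEntryController()
    controller.onTrigger(StartTrigger.PhysicalKeyPress("VOLUME_UP"), currentScene = "video_home")
}
```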
Step 202: popping up a first gesture entry window on the screen according to the gesture entry instruction.
It should be noted that this embodiment does not limit the specific form of the first gesture entry window. In actual use, step 202 may pop up a first gesture entry window whose form matches the gestures stored in the preset mapping table of gestures and content. For example, when the gestures stored in the mapping table are free-form gestures, the first gesture entry window may be a blank window as shown in Fig. 3; when the gestures stored in the mapping table are drawn on a nine-square grid, the window may take the nine-square-grid form shown in Fig. 4.
It should be noted that Fig. 3 and Fig. 4 are only illustrative and do not limit the actual interface of the first gesture entry window.
Step 203: obtaining the first gesture information input by the user through the first gesture entry window.
In this embodiment, the first gesture information is obtained through the steps shown in Fig. 2. Because the user inputs the first gesture information through the first gesture entry window, the obtained first gesture information is more accurate and less likely to be misrecognized. A sketch of how a stroke drawn in such a window might be reduced to grid cells follows.
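To make the window-based entry of Fig. 2 concrete, the sketch below shows one way a stroke drawn in a nine-square-grid entry window (Fig. 4) could be reduced to a sequence of grid cells that can then be compared against stored gestures. The grid size, the cell numbering (1–9, row by row), and the point type are assumptions made for illustration, not details taken from the patent.

```kotlin
// Sketch: reduce a stroke drawn in a nine-square-grid entry window
// to an ordered list of visited cells (1..9, numbered row by row).
data class TouchPoint(val x: Float, val y: Float)

fun strokeToGridCells(points: List<TouchPoint>, windowSize: Float): List<Int> {
    val cellSize = windowSize / 3f
    val visited = mutableListOf<Int>()
    for (p in points) {
        val col = (p.x / cellSize).toInt().coerceIn(0, 2)
        val row = (p.y / cellSize).toInt().coerceIn(0, 2)
        val cell = row * 3 + col + 1
        if (visited.lastOrNull() != cell) visited.add(cell)  // record each cell once per entry
    }
    return visited
}

fun main() {
    // A diagonal stroke from the top-left corner to the bottom-right corner.
    val stroke = listOf(TouchPoint(10f, 10f), TouchPoint(150f, 150f), TouchPoint(290f, 290f))
    println(strokeToGridCells(stroke, windowSize = 300f))  // prints [1, 5, 9]
}
```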
In another case, as shown in Fig. 5, step 101 may comprise:
Step 501: obtaining a start-full-screen-gesture-entry instruction input by the user.
The specific implementation of step 501 in this embodiment may refer to the description of step 201 shown in Fig. 2 and is not repeated here.
Step 502: setting the screen to a static state according to the start-full-screen-gesture-entry instruction.
In this embodiment, there are many ways for step 502 to set the screen static. For example, a transparent window of the same size as the screen may be preset; when the start-full-screen-gesture-entry instruction is received, step 502 displays this transparent window so that the screen becomes static. Of course, in actual use, step 502 may also set the screen static in other ways, which are not repeated here.
Step 503: obtaining, while the screen is static, the first gesture information input by the user through the screen.
In this embodiment, the first gesture information is obtained through the steps shown in Fig. 5. Because the screen can be set to a static state, the user can input the first gesture information anywhere on the full screen instead of within a particular region as in Fig. 2. Inputting the first gesture information is therefore easier and requires little attention, which makes this variant particularly suitable for special scenarios such as driving. A sketch of such a full-screen overlay is given below.
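The following is a minimal sketch of the Fig. 5 variant, assuming a hypothetical overlay abstraction: a transparent, screen-sized window is shown so that the underlying interface stops responding (the screen becomes "static"), the full-screen stroke is collected, and the overlay is dismissed. None of the type names below come from the patent.

```kotlin
// Sketch of the full-screen entry of Fig. 5: a transparent window the size of
// the screen intercepts all touches while it is shown, so the page underneath
// stays static and the whole screen is available for drawing the gesture.
data class Point(val x: Float, val y: Float)

class TransparentGestureOverlay {
    private val points = mutableListOf<Point>()
    var isShown = false
        private set

    fun show() { isShown = true; points.clear() }   // screen becomes "static"
    fun onTouch(x: Float, y: Float) {
        if (isShown) points.add(Point(x, y))        // touches go to the overlay, not the page
    }
    fun dismiss(): List<Point> { isShown = false; return points.toList() }
}

fun main() {
    val overlay = TransparentGestureOverlay()
    overlay.show()                                  // step 502: set the screen static
    overlay.onTouch(100f, 100f)                     // step 503: collect the full-screen stroke
    overlay.onTouch(220f, 240f)
    val firstGestureInfo = overlay.dismiss()
    println("Captured ${firstGestureInfo.size} points")
}
```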
Of course, the steps shown in Fig. 2 and Fig. 5 above are only specific examples; in actual use, step 101 may also obtain the first gesture information in other ways, which are not repeated here.
It should be noted that this embodiment does not specifically limit what the content is. In actual use, the content may be video content or audio content, and it may of course also be content of other forms, such as e-books; this is not repeated here.
Step 102: looking up, in the preset mapping table of gestures and content, the target content corresponding to the first gesture information.
In this embodiment, step 102 first parses the first gesture information obtained in step 101 to obtain a parsing result; the corresponding gesture is then looked up in the mapping table of gestures and content according to the parsing result; finally, the target content corresponding to that gesture is obtained.
It should be noted that this embodiment does not limit the parsing method; in actual use, those skilled in the art may parse the gesture by any means, which is not repeated here.
In addition, it should be noted that this embodiment does not specifically limit which entity performs the lookup of the target content corresponding to the first gesture information in step 102. In actual use, in order to parse quickly, step 102 may parse locally and then send the parsing result to a back-end server, which performs the lookup and returns the target content; alternatively, in order to search accurately, step 102 may send the first gesture information to the back-end server, which performs both the parsing and the lookup and returns the target content. Of course, the above two cases are only examples, and other arrangements are possible in actual use; details are not described here. A sketch of this local/back-end split is given below.
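The split between local parsing and a back-end lookup described for step 102 might be organized as in the following sketch. The `GestureParser` and `BackendClient` interfaces and the map-based local table are hypothetical placeholders; the patent does not prescribe a particular recognition algorithm or server API.

```kotlin
// Sketch of step 102: parse the first gesture information, then look up the
// target content either locally or via a back-end server. All interfaces here
// are placeholders; the patent leaves the parsing method and the executing
// side (terminal or server) open.
interface GestureParser {
    fun parse(rawPoints: List<Pair<Float, Float>>): String          // e.g. returns "L" or "81"
}

interface BackendClient {
    fun searchByParsedGesture(gestureKey: String): String?          // terminal parses, server looks up
    fun parseAndSearch(rawPoints: List<Pair<Float, Float>>): String? // server does both
}

class ContentLookup(
    private val parser: GestureParser,
    private val backend: BackendClient,
    private val localMappingTable: Map<String, String>              // gesture key -> play link
) {
    fun findTargetContent(rawPoints: List<Pair<Float, Float>>, parseLocally: Boolean): String? =
        if (parseLocally) {
            val key = parser.parse(rawPoints)                        // fast local parsing
            localMappingTable[key] ?: backend.searchByParsedGesture(key)
        } else {
            backend.parseAndSearch(rawPoints)                        // accurate server-side parse + lookup
        }
}
```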
Finally, it should be noted that, in this embodiment, a gesture in the mapping table of gestures and content may be a single gesture, such as L, ∞, or △, or a combination of several gestures, such as 81 or x√. A gesture may be stored in graphic form or in text form (for example, the letter L mentioned above); these are only examples and every case is not enumerated here. The content in the mapping table may be the play link of the content or its storage location; again, these are only examples. In addition, this embodiment does not limit the correspondence between gestures and content: in actual use, gestures and content may be in a one-to-one relationship, or equally in a one-to-many or many-to-many relationship, which is not repeated here. The hypothetical data model below illustrates this flexibility.
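One way to picture the flexibility just described (single or combined gestures stored as graphics or text; content stored as a play link or a storage location; one-to-one or one-to-many correspondences) is the data model below. The names, the placeholder URLs, and the representation are illustrative assumptions only.

```kotlin
// Illustrative data model for the gesture-content mapping table.
// A gesture may be stored as text ("L", "81", "x√") or as a graphic (point sequence);
// a content entry may be a play link or a storage location; one gesture may map
// to one or to several content entries.
sealed class StoredGesture {
    data class Text(val value: String) : StoredGesture()                       // e.g. "L", "∞", "81"
    data class Graphic(val points: List<Pair<Float, Float>>) : StoredGesture()
}

sealed class ContentRef {
    data class PlayLink(val url: String) : ContentRef()
    data class StorageLocation(val path: String) : ContentRef()
}

// One-to-many mapping: each gesture points at a list of content references.
val mappingTable: MutableMap<StoredGesture, MutableList<ContentRef>> = mutableMapOf(
    StoredGesture.Text("L") to mutableListOf(ContentRef.PlayLink("https://example.com/play/123")),
    StoredGesture.Text("81") to mutableListOf(ContentRef.StorageLocation("/videos/favorites/81"))
)
```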
Step 103: displaying the target content to the user.
Another embodiment of the present invention also provides a content search method. This method is essentially the same as the method shown in Fig. 1, except that it further comprises a step of setting the mapping table of gestures and content. Specifically, as shown in Fig. 6, the method comprises:
Step 104: receiving a set-gesture-search instruction input by the user.
In this embodiment, the specific implementation of step 104 may refer to the description of step 201 shown in Fig. 2 and is not repeated here.
Step 105: popping up a second gesture entry window on the screen according to the set-gesture-search instruction.
It should be noted that this embodiment does not limit the specific form of the second gesture entry window. In actual use, in order to let the user set a gesture according to his or her own habits, the second gesture entry window may be a blank window as shown in Fig. 3; alternatively, in order to help the user remember the gesture that has been set, the second gesture entry window may also take the nine-square-grid form shown in Fig. 4.
Step 106: obtaining the second gesture information input by the user through the second gesture entry window.
Step 107: establishing a correspondence between the second gesture information and the content to be searched, and storing the correspondence into the mapping table of gestures and content.
In this embodiment, the content to be searched may be the title of a specific piece of content, such as The Legend of Zhen Huan, or the title of a series, such as The Avengers; or the name of a performer, such as Liu Dehua. Of course, in actual use it may also be other content, which is not repeated here. It should be noted that the content to be searched in this embodiment is content specified by the user. For example, when the content to be searched is The Legend of Zhen Huan, it is the drama series specified by the user, or perhaps a song of that title specified by the user, rather than all content related to The Legend of Zhen Huan.
In this embodiment, the content to be searched may be specified directly by the user, for example by entering the title of the content; it may also be derived from the user's operation, for example, when the user is browsing the page of some content, the content to be searched may be the content corresponding to the current page. Every case is not enumerated here.
In this embodiment, the correspondence between the second gesture information and the content to be searched may specifically be a correspondence between the second gesture information and the play address or link of the content to be searched.
It should be noted that this embodiment does not limit the capacity of the mapping table of gestures and content; in actual use, the capacity of the mapping table may be set as required. A sketch of how such a registration step might work is given below.
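Steps 104 to 107 could be realized roughly as in the following sketch: after the set-gesture-search instruction, the second gesture is captured and bound to the content to be searched, honoring a configurable table capacity. The capacity value, the URL-style content references, and all names are assumptions made for this sketch, not details taken from the patent.

```kotlin
// Sketch of steps 104-107: capture the second gesture information and bind it
// to the content to be searched, subject to a configurable table capacity.
class GestureRegistry(private val capacity: Int = 64) {
    private val table = mutableMapOf<String, String>()   // gesture key -> play address / link

    fun register(gestureKey: String, contentToSearch: String): Boolean {
        if (table.size >= capacity && gestureKey !in table) {
            println("Mapping table is full (capacity $capacity)")
            return false
        }
        table[gestureKey] = contentToSearch
        return true
    }

    fun dump(): Map<String, String> = table.toMap()
}

fun main() {
    val registry = GestureRegistry(capacity = 2)
    // The content to be searched may be a title the user typed, or the page being browsed.
    registry.register("L", "https://example.com/play/legend-of-zhen-huan")
    registry.register("81", "https://example.com/play/the-avengers")
    println(registry.dump())
}
```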
Steps 104 to 107 above may be performed before step 101 or after step 103; this embodiment is described only for the case in which steps 104 to 107 are performed before step 101.
A further embodiment of the present invention also provides a content search method. This method is essentially the same as the method shown in Fig. 1, except that, as shown in Fig. 7, it further comprises:
Step 108: displaying the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
In this embodiment, a preview option may be preset; when the user clicks the preview option, the mapping table of gestures and content is displayed, and the user can modify, delete, or add the gestures and/or content in the table. A sketch of such an editor is given below.
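The preview option of step 108 implies simple view, modify, delete, and add operations on the mapping table. A hypothetical sketch of such an editor, with an assumed text-keyed table and assumed names, might look like this:

```kotlin
// Sketch of step 108: show the gesture-content mapping table and let the user
// modify, delete, or add entries. The editor interface is an assumption.
class MappingTableEditor(private val table: MutableMap<String, String>) {
    fun preview(): Unit = table.forEach { (gesture, content) -> println("$gesture -> $content") }
    fun modify(gesture: String, newContent: String) { if (gesture in table) table[gesture] = newContent }
    fun delete(gesture: String) { table.remove(gesture) }
    fun add(gesture: String, content: String) { table[gesture] = content }
}

fun main() {
    val table = mutableMapOf("L" to "play://drama/101", "81" to "play://music/202")
    val editor = MappingTableEditor(table)
    editor.preview()                        // user taps the preview option
    editor.modify("L", "play://drama/102")  // user re-binds the gesture to new content
    editor.delete("81")
    editor.add("x√", "play://ebook/303")
}
```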
In this embodiment, step 108 may occur after step 103; in actual use, step 108 may also occur before step 101, which is not repeated here.
Displaying the mapping table of gestures and content to the user serves as a reminder and thereby prevents the poor search experience caused by the user forgetting a gesture. Moreover, because gestures and/or content can be modified, the user can at any time set gestures according to current habits, or set up searches for the content currently preferred, which makes gesture-based content search more convenient.
In the technical solution provided by the embodiments of the present invention, first gesture information input by the user is received when a content search needs to be performed, and the target content is obtained and displayed to the user according to the result of comparing the first gesture information with the mapping table of gestures and content. Because the target content can be found quickly through the first gesture information, the technical solution provided by the present invention can solve the problem of low search efficiency in the prior art. Furthermore, since only a gesture needs to be input to find the target content, the user does not have to look at the screen to complete the search operation; the technical solution therefore demands little of the user's attention and is better suited to special scenarios such as driving.
As shown in Fig. 8, an embodiment of the present invention further provides a content search device, comprising:
a first obtaining module 801, configured to obtain, when a content search needs to be performed, first gesture information input by a user through the screen of a mobile terminal;
a lookup module 802, configured to look up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information obtained by the first obtaining module 801; and
a first display module 803, configured to display the target content to the user.
Further, as shown in Fig. 9, the first obtaining module 801 comprises:
a first obtaining submodule 901, configured to obtain a start-full-screen-gesture-entry instruction input by the user;
a setting submodule 902, configured to set the screen to a static state according to the start-full-screen-gesture-entry instruction obtained by the first obtaining submodule 901; and
a second obtaining submodule 903, configured to obtain, while the screen is static, the first gesture information input by the user through the screen.
Further, as shown in Fig. 10, the first obtaining module 801 comprises:
a third obtaining submodule 1001, configured to obtain a start-gesture-entry instruction input by the user;
a display submodule 1002, configured to pop up a first gesture entry window on the screen according to the gesture entry instruction obtained by the third obtaining submodule 1001; and
a fourth obtaining submodule 1003, configured to obtain the first gesture information input by the user through the first gesture entry window popped up by the display submodule 1002.
Further, as shown in Fig. 11, the content search device further comprises:
a receiving module 804, configured to receive a set-gesture-search instruction input by the user;
a second display module 805, configured to pop up a second gesture entry window on the screen according to the set-gesture-search instruction received by the receiving module 804;
a second obtaining module 806, configured to obtain second gesture information input by the user through the second gesture entry window popped up by the second display module 805; and
a storage module 807, configured to establish a correspondence between the second gesture information obtained by the second obtaining module 806 and the content in the webpage the user is currently browsing, and to store the correspondence into the mapping table of gestures and content.
Further, as shown in Fig. 12, the content search device further comprises:
a third display module 808, configured to display the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
For the specific implementation of the content search device provided by the embodiments of the present invention, reference may be made to the content search method provided by the embodiments of the present invention; details are not described here again.
In the technical solution provided by the embodiments of the present invention, first gesture information input by the user is received when a content search needs to be performed, and the target content is obtained and displayed to the user according to the result of comparing the first gesture information with the mapping table of gestures and content. Because the target content can be found quickly through the first gesture information, the technical solution provided by the present invention can solve the problem of low search efficiency in the prior art. Furthermore, since only a gesture needs to be input to find the target content, the user does not have to look at the screen to complete the search operation; the technical solution therefore demands little of the user's attention and is better suited to special scenarios such as driving.
It should be noted that, in this document, the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are only illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can make many other forms without departing from the spirit of the present invention and the scope protected by the claims, and all of these fall within the protection of the present invention.
Claims (10)
1. A content search method, characterized by comprising:
when a content search needs to be performed, obtaining first gesture information input by a user through the screen of a mobile terminal;
looking up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information; and
displaying the target content to the user.
2. The method according to claim 1, characterized in that obtaining the first gesture information input by the user through the screen of the mobile terminal comprises:
obtaining a start-full-screen-gesture-entry instruction input by the user;
setting the screen to a static state according to the start-full-screen-gesture-entry instruction; and
obtaining, while the screen is static, the first gesture information input by the user through the screen.
3. The method according to claim 1, characterized in that obtaining the first gesture information input by the user through the screen of the mobile terminal comprises:
obtaining a start-gesture-entry instruction input by the user;
popping up a first gesture entry window on the screen according to the gesture entry instruction; and
obtaining the first gesture information input by the user through the first gesture entry window.
4. The method according to claim 1, characterized by further comprising:
receiving a set-gesture-search instruction input by the user;
popping up a second gesture entry window on the screen according to the set-gesture-search instruction;
obtaining second gesture information input by the user through the second gesture entry window; and
establishing a correspondence between the second gesture information and the content to be searched, and storing the correspondence into the mapping table of gestures and content.
5. The method according to claim 1, characterized by further comprising:
displaying the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
6. A content search device, characterized by comprising:
a first obtaining module, configured to obtain, when a content search needs to be performed, first gesture information input by a user through the screen of a mobile terminal;
a lookup module, configured to look up, in a preset mapping table of gestures and content, the target content corresponding to the first gesture information obtained by the first obtaining module; and
a first display module, configured to display the target content to the user.
7. The device according to claim 6, characterized in that the first obtaining module comprises:
a first obtaining submodule, configured to obtain a start-full-screen-gesture-entry instruction input by the user;
a setting submodule, configured to set the screen to a static state according to the start-full-screen-gesture-entry instruction obtained by the first obtaining submodule; and
a second obtaining submodule, configured to obtain, while the screen is static, the first gesture information input by the user through the screen.
8. The device according to claim 6, characterized in that the first obtaining module comprises:
a third obtaining submodule, configured to obtain a start-gesture-entry instruction input by the user;
a display submodule, configured to pop up a first gesture entry window on the screen according to the gesture entry instruction obtained by the third obtaining submodule; and
a fourth obtaining submodule, configured to obtain the first gesture information input by the user through the first gesture entry window popped up by the display submodule.
9. The device according to claim 6, characterized by further comprising:
a receiving module, configured to receive a set-gesture-search instruction input by the user;
a second display module, configured to pop up a second gesture entry window on the screen according to the set-gesture-search instruction received by the receiving module;
a second obtaining module, configured to obtain second gesture information input by the user through the second gesture entry window popped up by the second display module; and
a storage module, configured to establish a correspondence between the second gesture information obtained by the second obtaining module and the content in the webpage the user is currently browsing, and to store the correspondence into the mapping table of gestures and content.
10. The device according to claim 6, characterized by further comprising:
a third display module, configured to display the mapping table of gestures and content to the user, so that the user can modify the correspondences between gestures and content in the table.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810734943.5A CN109032483A (en) | 2018-07-05 | 2018-07-05 | Content search method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810734943.5A CN109032483A (en) | 2018-07-05 | 2018-07-05 | Content search method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN109032483A true CN109032483A (en) | 2018-12-18 |
Family
ID=64640466
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810734943.5A Pending CN109032483A (en) | 2018-07-05 | 2018-07-05 | Content search method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109032483A (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100169841A1 (en) * | 2008-12-30 | 2010-07-01 | T-Mobile Usa, Inc. | Handwriting manipulation for conducting a search over multiple databases |
| CN102750106A (en) * | 2012-07-02 | 2012-10-24 | 安徽科大讯飞信息科技股份有限公司 | Full-screen handwriting identification input method and system |
| CN104199968A (en) * | 2014-09-23 | 2014-12-10 | 陈包容 | Method and device for searching information of contacts on basis of custom pattern recognition |
| CN105635400A (en) * | 2016-01-27 | 2016-06-01 | 宇龙计算机通信科技(深圳)有限公司 | Contact information query method and system |
| CN107547714A (en) * | 2016-06-28 | 2018-01-05 | 中兴通讯股份有限公司 | A kind of method and apparatus for searching contact person |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20181218 |