
WO2018138664A1 - Embedding interactive elements in the content and user interaction with the interactive elements - Google Patents


Info

Publication number
WO2018138664A1
WO2018138664A1 (PCT application PCT/IB2018/050449)
Authority
WO
WIPO (PCT)
Prior art keywords
client
content
input
contents
processor
Application number
PCT/IB2018/050449
Other languages
French (fr)
Inventor
Rajesh SHENOY
Original Assignee
Varghese, Vinu
V C, Prasanth
Application filed by Varghese, Vinu, V C, Prasanth filed Critical Varghese, Vinu
Publication of WO2018138664A1 publication Critical patent/WO2018138664A1/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 — Arrangements for program control, e.g. control units
    • G06F9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 — Arrangements for executing specific programs
    • G06F9/451 — Execution arrangements for user interfaces

Definitions

  • the integrated view 19 which has been generated is stored in a memory device 7, from which it can be retrieved by the server processor 6 based upon a request from a client device 3, which can be the same client device that provided the contents 2 along with the actionable items 17, or a different client device connected to the client-server system.
  • the integrated view 19 requested by the processor 5 of a client device 3 is rendered on the display unit 8 of that client device 3.
  • the client device 3 receives a selection input 24 so that one or more action items 17 are selected.
  • the selected action items 17 are stored in a data storage 9 for future retrieval.
  • when a retrieving input 10 is received by the client processor 5, the selected actionable items are retrieved and displayed on the display unit 8.
  • the content 2 or an information related to the content 2 that is associated with the selected action item 17 is also displayed along with the selected action item 17.
  • the action items 17 that can be selected are determined by rules defined with respect to the specific action item 17. For example, a user desirous of buying a particular product may be allowed to select the action item pertaining to the sale of the product only if a shipping facility for the product is available in the geographic region where the user resides.
  • whenever the integrated view 19 is displayed onto the display unit 8 and the part being presented contains an action item 17, its presence is indicated through an indication 21, which can be a special sound, a visual indicator, or any such indication which can be easily noticed by a user of the client device 3.
  • the action items 17 may be indicated as a numerical count in the indication 21, which is activated whenever the part of the integrated view 19 has an actionable item.
  • a user may select the indication 21 as the selection input 24 to perform further actions on the content 2 either after consuming the content 2 or while consuming the content 2.
  • the user may also opt to retrieve the actionable items present in the indication 21 later in the day or even on the following day.
  • the selected action items 17 are retrieved by providing a retrieving input 10 to the client processor 5.
  • the retrieving input 10 can be provided by shaking the client device 3, where one shake indicates retrieving one action item 17, while shaking multiple times indicates retrieving multiple action items 17.
  • the retrieving input 10 can also be provided by making a whistling sound, where one whistle indicates retrieving one action item 17, while more than one whistle indicates retrieving multiple action items 17.
  • the client processor 5 may also be able to receive a verbal command made in a natural language for retrieving the action item/s 17, wherein the processor 5 may be pre-configured to decipher the natural language.
  • the client processor 5 is also adapted to receive an execution input 18 for any of the action items 17 associated with the content 2. Based on the action item 17 that has been selected, a micro application 20 pertaining to the action item 17 is executed on the client processor 5.
  • the micro applications can be stored either at the client device 3 or the server device 4. Generally, if the micro application 20 is frequently used, it can be stored at the client device 3, and if the micro application 20 is rarely used, it can be stored at the server device 4.
  • a web page pertaining to the action item 17 can be loaded onto a web browser installed at the client device 3.
  • the micro application 20 may be executed either by dividing the screen of the display unit 8 into sections, such that the micro application 20 as well as the integrated view 19 can be rendered for the user, or alternatively, the micro application 20 may be displayed in a small window on the screen while the integrated view 19 continues to be rendered on the display unit 8.
  • the content 2 is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content.
  • the static content is defined as the content which has a single frame, for example images.
  • the self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention; examples are video, animation, graphics interchange format (GIF), etc.
  • the user-driven dynamic content is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input units, for example a document, a webpage, etc.
  • the server processor 6 receives a location input 22 from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or a location on a part to be displayed of the user-driven dynamic content, and processes the contents 2 and the action items 17 based on the location input 22 to generate the integrated view.
  • Fig. 2 shows a flowchart depicting the mechanism for embedding the content with action items which are selectable when the content is being viewed.
  • the content may be static content, dynamic content or user-driven content and may include audio, video, text, animations, etc.
  • a content creator can create content such as a video with actionable items which may be accessed by another user.
  • the content creator can add action items that pertain to selling merchandise, knowing more about the content being displayed, responding to an invite, etc., any of which can later be selected by a viewer.
  • in step 102, the content creator creates a list of action items as mentioned above that shall be added to the content, such as booking a show ticket, subscribing to a service/website, purchasing merchandise, donating to a charity, etc.
  • the action items are embedded into the respective frames. The activity of embedding the action items with the content can alternatively be performed by a service provider instead of the content creator.
  • in step 104, the details that pertain to the action items are entered into a database and the embedded action items are linked to this database. For example, if the action item pertains to purchasing a particular merchandise that is being displayed in the displayed video, the database would contain details such as price, dimensions of the merchandise, available quantity, delivery locations, sale locations, etc.
  • the relevant action items are connected to a payment gateway by the content creator, which may be activated once a user has completed the previous steps and has expressed an interest in buying merchandise or making a donation, etc.
  • the payment activity is completed by a user; in cases where shipping of merchandise needs to be performed, appropriate information is shared with authorized vendors to ensure that the merchandise is delivered to the user in step 106.
  • the content along with the embedded action items is shared on an online platform, making it accessible to users.
  • Fig. 3 shows a flowchart depicting the mechanism for accessing the embedded content by a user.
  • a user starts consuming content, which may be video, audio, a webpage, animations, etc.
  • the content being consumed is embedded with action items that pertain to different activities such as purchasing a product, donating to an organization, getting more information about the content being displayed, review of a book that the user is viewing etc.
  • a user can select any of the actionable items by accessing the indication/icon that is provided on a part of the screen to inform the user that further actions can be performed on the content being consumed, as shown in step 202.
  • the user can perform different actions such as shaking the device once or multiple times, snapping fingers once or more than once, whistling, or just clicking on the indication/icon that is provided, for the action frames to appear, as depicted in step 203.
  • all the action items that can be performed on the content become available at the indication/icon provided on the screen.
  • the numerical count indicates the number of action items that are available for the content.
  • a user can continue to consume the content or select any of the action items that is available for that content. Once a user completes consuming the content in step 206, the user may proceed to step 207, where the user accesses the indication/icon for further actions and selects the action to be performed.
  • step 208 is performed wherein the user checks for reviews, views the price of a product, the quantity available, etc., and removes any details that the user is not interested in pursuing. Further, in step 209, the user completes the selection procedure and then proceeds with any payment activities that might be associated with the selected action item. The user may also need to provide shipping details along with the payment in case the user is performing a purchasing activity. Once the user completes the payment activity, the user can continue watching the content from the point where it had been halted, or new content may be started.
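The selection-and-retrieval flow of Fig. 3 — collecting action items behind an indication/icon with a numerical count, then retrieving them one or more at a time — can be sketched as a minimal Python illustration. The class and method names here are invented for the sketch, not taken from the patent:

```python
class ActionTray:
    """Sketch of the indication/icon that collects selected action items (steps 202-207)."""

    def __init__(self) -> None:
        self._items: list[str] = []  # stands in for the data storage 9

    def select(self, action_item: str) -> None:
        # a selection input adds the action item for later retrieval
        self._items.append(action_item)

    @property
    def count(self) -> int:
        # the numerical count shown on the indication/icon
        return len(self._items)

    def retrieve(self, n: int = 1) -> list[str]:
        # e.g. one shake retrieves one item, multiple shakes retrieve several
        taken, self._items = self._items[:n], self._items[n:]
        return taken


tray = ActionTray()
tray.select("buy shoes")
tray.select("read review")
print(tray.count)        # 2
print(tray.retrieve(1))  # ['buy shoes']
print(tray.count)        # 1
```

The retrieval count being a parameter mirrors the text's mapping of one shake or whistle to one item and several to several.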

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The object of the invention is achieved by a client-server system for providing contents with embedded actions. The client-server system has a client device and a server device, where the client device is provided with a selection input unit, which is adapted to receive one or more pieces of content and one or more action items to be embedded into the contents, and a client processor. The action items are selectable using a selection input, and on selection, a micro application related to the action item is executed in the client processor. The server device has a server processor, wherein the client processor and the server processor are adapted to receive the contents and the action items from the input unit, process the contents along with the action items by integrating the contents with the actions in a layered fashion so as to generate an integrated view, and store the integrated view in a memory device.

Description

Title of Invention
Embedding interactive elements in the content and user interaction with the interactive elements
Field of Invention
The invention relates to creating interactive contents and posting them on an online platform or an application connectable to the internet, and further allowing interaction with the posted contents on the online platform or the application connectable to the internet.
Background
The content available today on most of the web and mobile platforms is basically one-sided. The viewer mostly only has the option of viewing the content, and at best he can give a comment or like the video. If the viewer needs to explore more of the content, buy something that he saw, or take an action based on the content, he will have to open another application or a web browser and continue.
For doing such interactions and consuming content at the same time, no seamless possibility is available as of now. Currently, the user would typically open multiple windows or applications at the same time:
• watch videos on one application (e.g. YouTube)
• read a webpage on a browser (e.g. Google)
• look at an image on a different application (e.g. Google)
• purchase a product (e.g. online stores, Amazon, etc.)
From the user's perspective, such interaction is not seamless; specifically, the user does not want to leave a particular page while consuming the content for any kind of interaction, irrespective of it being text, audio or video related to a particular topic or a product. Hence, many times, if the user is pushed to interact by leaving a webpage, the user tends either to leave the web portal without consuming the contents properly, or not to interact and to leave the web portal after consuming the content. Such a scenario leads to loss for the content owners, who are mostly trying to do business with respect to the consumption of these contents. One possible way is to sequentially place the interactive elements with respect to the content on the same web page. However, in this scenario also, the consumer may just use the content without further interacting with it, as the user would have to scroll the webpage or look at a separate area with respect to the placement of the content.
Object of Invention
It is an object of the invention to enable users to seamlessly post and view interactive contents on an online platform or an application connectable to internet, and accessible by multiple users.
Summary of Invention
The object of the invention is achieved by a client-server system for providing contents with embedded actions. The client-server system has a client device and a server device, where the client device is provided with a selection input unit, which is adapted to receive one or more pieces of content and one or more action items to be embedded into the contents, and a client processor. The action items are selectable using a selection input, and on selection, a micro application related to the action item is executed in the client processor. The server device has a server processor, wherein the client processor and the server processor are adapted to receive the contents and the action items from the input unit, process the contents along with the action items by integrating the contents with the actions in a layered fashion so as to generate an integrated view, and store the integrated view in a memory device.
According to one embodiment of the client-server system, the client processor is adapted to receive the integrated view from the memory device and render the integrated view onto a display unit of the client device. The client processor receives a selection input from one of the input units for selection of one or more of the action items, and stores the selected action items in a data storage.
According to another embodiment of the client-server system, the client processor is adapted to receive a retrieving input for retrieving the selected action items from the data storage and render the selected action items on the display unit.
According to yet another embodiment of the client-server system, the client and server processors are adapted to receive an execution input for at least one of the selected items displayed on the display unit. The memory device is accessed based on the execution input so as to fetch the micro application related to the selected action item, and the said micro application is executed.
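The execution-input embodiment — fetching the micro application associated with a selected action item and running it — might look like the following sketch, where a plain dictionary stands in for the memory device and small functions stand in for micro applications (all names here are illustrative assumptions, not from the patent):

```python
# Illustrative registry mapping action items to micro applications.
# In the patent's terms, looking up this dict corresponds to accessing
# the memory device to fetch the micro application.
MICRO_APPS = {
    "purchase": lambda item: f"opening payment flow for {item}",
    "details":  lambda item: f"showing details for {item}",
}


def execute_action(action_item: str, item: str) -> str:
    """Fetch the micro application for the selected action item and execute it."""
    app = MICRO_APPS.get(action_item)
    if app is None:
        raise KeyError(f"no micro application registered for {action_item!r}")
    return app(item)


print(execute_action("purchase", "sneakers"))  # opening payment flow for sneakers
```

A real system would dispatch to code stored on the client or server device, but the lookup-then-execute shape is the same.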
According to a further embodiment of the client-server system, the processor is adapted to execute the micro application such that the contents are displayed by dividing a screen of the display unit logically into sections to execute and render the micro application, as well as to render the integrated view. Alternatively, the contents can also be displayed by rendering the micro application in a window smaller with respect to the size of the screen, and overlapping the window onto the integrated view being displayed.
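The two rendering modes in this embodiment — a logically divided screen versus a smaller overlapping window — can be illustrated with a hypothetical layout planner. The specific geometry (a half-screen split, a one-third-size overlay in the corner) is an assumption made for the sketch, not something the patent specifies:

```python
def plan_layout(screen_w: int, screen_h: int, mode: str) -> dict:
    """Return rectangles (x, y, w, h) for the integrated view and the micro app."""
    if mode == "split":
        # divide the screen logically into two sections
        top_h = screen_h // 2
        return {"view": (0, 0, screen_w, top_h),
                "micro_app": (0, top_h, screen_w, screen_h - top_h)}
    if mode == "overlay":
        # render the micro app in a smaller window overlapping the full view
        w, h = screen_w // 3, screen_h // 3
        return {"view": (0, 0, screen_w, screen_h),
                "micro_app": (screen_w - w, screen_h - h, w, h)}
    raise ValueError(f"unknown mode: {mode!r}")
```

In the split mode both rectangles tile the screen; in the overlay mode the micro-app window sits on top of the unchanged integrated view.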
According to one embodiment of the client-server system, the processor is adapted to store the selected action item along with a part of the content associated with the selected action item. The selected action items are rendered along with the part of the content on the display unit based on the retrieving input.
According to a further embodiment of the client-server system, the client and server processors cooperate to generate an indication related to presence of the actionable item, and to render the indication onto the display unit whenever a part of the integrated view being displayed has the actionable item.
According to another embodiment of the client-server system, the client and server processors are adapted to integrate each of the action items in a separate layer.
According to yet another embodiment of the client-server system, the content is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content. The static content is defined as the content which has a single frame; the self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention; while the user-driven dynamic content is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input units. The processor is adapted to receive a location input from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or a location on a part to be displayed of the user-driven dynamic content, and process the contents as well as the action items based on the location input to generate the integrated view.
According to another further embodiment of the client-server system, the selection input belongs to at least one of the following groups: a gesture-based input made by making a gesture, a sound-based input made by making a sound, a touch-based input made by making a touch pattern on a touch screen, a gyrometer-based input made by shaking or flicking the device, a keyboard-based input made by pressing one or more keys of a keyboard, or a tap-based input made by tapping on a tapping device, or any combination thereof.
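The three content categories and the selection-input types enumerated above map naturally onto enumerations. The following sketch encodes them, with an illustrative `categorize` rule that is assumed for the example rather than given in the patent:

```python
from enum import Enum, auto


class ContentType(Enum):
    STATIC = auto()                 # single frame, e.g. an image
    SELF_PLAYING_DYNAMIC = auto()   # multiple frames played automatically, e.g. video or GIF
    USER_DRIVEN_DYNAMIC = auto()    # display changes on user input, e.g. document or webpage


class SelectionInput(Enum):
    GESTURE = auto()
    SOUND = auto()
    TOUCH = auto()
    GYROMETER = auto()   # shaking or flicking the device
    KEYBOARD = auto()
    TAP = auto()


def categorize(frames: int, user_driven: bool) -> ContentType:
    """Classify content per the three categories in the text (illustrative rule)."""
    if user_driven:
        return ContentType.USER_DRIVEN_DYNAMIC
    return ContentType.STATIC if frames == 1 else ContentType.SELF_PLAYING_DYNAMIC


print(categorize(1, False).name)    # STATIC
print(categorize(240, False).name)  # SELF_PLAYING_DYNAMIC
```

Combinations of inputs, which the text also allows, could be modeled as a set of `SelectionInput` members.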
Brief Description of Drawings
Fig. 1 illustrates a client-server system for embedding of action items in content to generate an integrated view, and further enabling consumption of the integrated view by various clients in the client-server environment.
Fig. 2 illustrates the mechanism for embedding content.
Fig. 3 illustrates the mechanism for accessing the embedded content by a client device.
Detailed Description
The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in Drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item. The invention focuses on a technique for embedding content with action items that allows a user to interact with the content by performing actions onto the action items embedded within the contents which the user is currently consuming. Previous to this invention, a user may view, read or hear the content and if any user desires to perform further actions with respect to the content, a new window has to be opened to perform the action onto an aspect described or related to the content. Some of the action which a user may like to perform on content includes reading/listening to reviews of books/movies/shows, retrieving more details on a specific subject or topic, purchase a product etc. Present invention provides a seamless way for a user to consume a content as well as interact with the content so that further actions can be performed on the content. Interaction with the content is enabled by embedding action items with the content which are combined to form an integrated view which an end user can view when the content is requested by the user.
In one implementation of the invention, a mechanism is provided for delivering contents with embedded actions in a client-server environment. Such an implementation is shown in Fig. 1.
Fig. 1 shows a client-server system 1 for providing an integrated view 19 of contents 2 having embedded action items 17, which allows a user to perform actions on the action items embedded into the contents 2 currently being consumed. The client-server system 1 has a client device 3 and a server device 4, where the client device 3 has a client processor 5 and the server device 4 has a server processor 6. The client processor 5 is capable of receiving one or more contents 2 as well as the action items 17 which are to be embedded with the contents 2. The client processor 5 is also adapted to receive the contents 2 and the action items 17 through a plurality of selection inputs, which include a gesture-based input 11 made by making a gesture, a sound-based input 12 made by making a sound, a touch-based input 13 made by making a touch pattern on a touch screen, a gyrometer-based input 14 made by shaking or flicking the device, a keyboard-based input 15 made by pressing one or more keys of a keyboard, a tap-based input 16 made by tapping on a tapping device, or a location input 22 which indicates the geographic location of the user. In an alternate embodiment, only one type of input device, or several types of input devices, may be provided to supply the contents 2 and the action items 17 as input. The selection inputs may also be a combination of any of the aforementioned inputs. The integrated view 19 may be created either by a content creator or a service provider.
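The plurality of selection inputs enumerated above could be modeled as a small input-normalization layer. The following is a minimal Python sketch; the enum members mirror the reference numerals of the disclosure, but the function name and record shape are illustrative assumptions, not part of the specification:

```python
from enum import Enum, auto

class InputKind(Enum):
    GESTURE = auto()     # 11: gesture-based input
    SOUND = auto()       # 12: sound-based input
    TOUCH = auto()       # 13: touch pattern on a touch screen
    GYROMETER = auto()   # 14: shake or flick of the device
    KEYBOARD = auto()    # 15: key press(es)
    TAP = auto()         # 16: tap on a tapping device
    LOCATION = auto()    # 22: geographic location of the user

def normalize_input(kind: InputKind, payload) -> dict:
    """Wrap a raw device event into a uniform selection-input record
    that the client processor 5 can pass on for further handling."""
    return {"kind": kind.name, "payload": payload}
```

Combinations of inputs, as mentioned in the paragraph above, could then be represented as a list of such records.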
The server processor 6 receives the contents 2 along with the action items 17 from the client processor 5 and processes them to generate the integrated view 19. The integrated view 19 is generated by combining the contents 2 and action items 17 in a layered manner, wherein the initial layer may include the contents 2, which may be video, audio, text, image, etc.; a second layer may include further information pertaining to the contents 2; and a further layer includes the action items 17 associated with the contents 2. For example, a user watching a video may be interested in purchasing an item shown in the video. The characteristics of the item that the user is interested in should also be available to the user. In such a scenario, the video forms the first layer, the second layer consists of information on the item such as price, quantity, dimensions, etc., and the third layer is the payment gateway for completion of the purchasing activity. In another example, a user watching a program or show might be interested in knowing the details of the participants; the participant details may be provided as an additional layer presented to the user when the relevant action item is selected. In one embodiment, the second layer of information pertaining to the contents is not required. The said layers may be activated after the user has started consuming the content 2 and may end after, or along with, the end of the content 2 being consumed. In one embodiment, each action item 17 associated with the content 2 is provided in a dedicated layer; therefore, as the number of action items 17 available for a content 2 grows, the number of layers forming the integrated view increases accordingly.
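The layered composition of the integrated view 19 described above can be sketched as a simple data structure. This is a hypothetical illustration under the assumption that each action item occupies its own layer, as in the last embodiment mentioned; the class and field names are not taken from the disclosure:

```python
class IntegratedView:
    """Layered combination generated by the server processor 6.

    Layer 1: the content itself (video, audio, text, image, ...).
    Layer 2 (optional): further information pertaining to the content.
    Layer 3+: one dedicated layer per embedded action item 17.
    """
    def __init__(self, content, info=None):
        self.layers = [("content", content)]
        if info is not None:               # the info layer is optional
            self.layers.append(("info", info))

    def add_action_item(self, action_item):
        # Each action item gets its own layer, so the layer count
        # grows with the number of action items for the content.
        self.layers.append(("action", action_item))

view = IntegratedView(content="product-demo.mp4",
                      info={"price": 49.0, "quantity": 10})
view.add_action_item("buy-product")
view.add_action_item("read-reviews")
```

In this sketch, the example of a purchasable item in a video yields four layers: the video, the item details, and one layer per action item.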
The integrated view 19 thus generated is stored in a memory device 7, from which it can be retrieved by the server processor 6 upon request from a client device 3, which can be the same client device that provided the contents 2 along with the action items 17, or a different client device connected to the client-server system. The integrated view 19 requested by the processor 5 of a client device 3 is rendered on the display unit 8 of that client device 3. The client device 3 receives a selection input 24 so that one or more action items 17 are selected. The selected action items 17 are stored in a data storage 9 for future retrieval. When a retrieving input 10 is received by the client processor 5, the selected action items are retrieved and displayed on the display unit 8. In one embodiment, the content 2, or information related to the content 2 that is associated with the selected action item 17, is also displayed along with the selected action item 17.
The action items 17 that can be selected are determined by rules defined with respect to the specific action item 17. For example, a user desirous of buying a particular product may be allowed to select the action item pertaining to sale of the product only if a shipping facility for the product is available in the geographic region where the user resides.
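The region rule in the example above might be expressed as a small predicate. The rule keys and the fallback behavior for items without a shipping restriction are assumptions made for illustration:

```python
def is_selectable(action_item: dict, user_region: str) -> bool:
    """Apply the rules defined for an action item 17.

    For a purchase item, the rule sketched here allows selection only
    when shipping is available in the user's geographic region."""
    rules = action_item.get("rules", {})
    regions = rules.get("shipping_regions")
    if regions is None:         # no shipping restriction defined
        return True
    return user_region in regions

buy = {"label": "buy-product",
       "rules": {"shipping_regions": {"IN", "US"}}}
```

Other rule types (age gates, time windows, etc.) could plug into the same predicate in the same way.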
Whenever the integrated view 19 is displayed on the display unit 8, and while it is presented, the presence of an action item 17 is indicated through an indication 21, which can be a special sound, a visual indicator, or any such indication that can be easily noticed by a user of the client device 3. In one embodiment, the action items 17 may be indicated as a numerical count in the indication 21, which is activated whenever the displayed part of the integrated view 19 has an actionable item. A user may select the indication 21 as the selection input 24 to perform further actions on the content 2, either after consuming the content 2 or while consuming it. The user may also opt to retrieve the action items present in the indication 21 at a later part of the day, or on a following day.
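The numerical-count embodiment of the indication 21 can be sketched in a few lines. The bracketed rendering is purely an illustrative assumption; the disclosure only requires that the indication be noticeable and show a count:

```python
def indication(count: int) -> str:
    """Render the indication 21 as a numerical count. It is shown only
    when the currently displayed part of the integrated view 19 carries
    at least one actionable item; otherwise nothing is rendered."""
    return f"[{count}]" if count > 0 else ""
```

A sound-based or other visual indication would replace the returned string with the appropriate output channel.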
The selected action items 17 are retrieved by providing a retrieving input 10 to the client processor 5. The retrieving input 10 can be provided by shaking the client device 3, where one shake indicates retrieving of one action item 17, while shaking multiple times indicates retrieving of multiple action items 17. Similarly, the retrieving input 10 can be provided by making a whistling sound, where one whistle indicates retrieving of one action item 17, while more than one whistle indicates retrieving of multiple action items 17. Further, the client processor 5 may also be able to receive a verbal command made in a natural language for retrieving the action item(s) 17, wherein the processor 5 may be pre-configured to decipher the natural language.
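The shake-count and whistle-count mapping described above can be sketched as follows. The capping behavior when the gesture count exceeds the number of stored items is an assumption, as is the retrieval order:

```python
def items_to_retrieve(event_kind: str, event_count: int,
                      stored: list) -> list:
    """Map a retrieving input 10 to action items held in data storage 9.

    One shake (or one whistle) retrieves one stored action item 17;
    repeating the gesture retrieves correspondingly more, capped at
    the number of items actually stored."""
    if event_kind not in ("shake", "whistle"):
        return []                          # not a retrieving gesture
    return stored[:min(event_count, len(stored))]
```

A natural-language verbal command would be resolved to an `event_count` (or to specific item names) by the pre-configured language handler before reaching this mapping.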
The client processor 5 is also adapted to receive an execution input 18 for any of the action items 17 associated with the content 2. Based on the action item 17 that has been selected, a micro application 20 pertaining to that action item 17 is executed on the client processor 5. The micro applications can be stored either at the client device 3 or at the server device 4. Generally, if the micro application 20 is frequently used, it can be stored at the client device 3, and if the micro application 20 is rarely used, it can be stored at the server device 4. In an alternate embodiment, for performing an action on the action items 17, a web page pertaining to the action item 17 can be loaded in a web browser installed at the client device 3. The micro application 20 may be executed either by dividing the screen of the display unit 8 into sections, such that the micro application 20 as well as the integrated view 19 can be rendered for the user, or by displaying the micro application 20 in a small window on the screen while the integrated view 19 continues to be rendered on the display unit 8. The content 2 is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content. The static content is defined as content which has a single frame, for example images. The self-playing dynamic content is defined as content which has multiple frames and which is displayed frame by frame automatically, without human intervention; examples are video, animation, graphic interchange format, etc. The user-driven dynamic content is defined as content which changes a currently displayed part of the content, completely or partially, based on a user input received from one of the input units, for example a document or webpage.
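The storage-location heuristic for micro applications 20 and the three content categories can be sketched together. The usage-count threshold is an assumed tuning parameter, and the content attributes (`frames`, `auto_advance`) are hypothetical names chosen for illustration:

```python
def micro_app_location(use_count: int, threshold: int = 10) -> str:
    """Decide where a micro application 20 is stored: frequently used
    micro applications live on the client device 3; rarely used ones
    stay on the server device 4 and are fetched on demand."""
    return "client" if use_count >= threshold else "server"

def categorize(content: dict) -> str:
    """Classify content 2 per the three categories of the disclosure."""
    if content.get("frames", 1) == 1:
        return "static"                    # single frame, e.g. an image
    if content.get("auto_advance", False):
        return "self-playing dynamic"      # e.g. video, animation, GIF
    return "user-driven dynamic"           # e.g. document, webpage
```

The category could in turn decide how the execution surface is presented, e.g. a split screen for self-playing content versus an overlay window for user-driven content.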
In one embodiment, the server processor 6 receives a location input 22 from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or at a location on a part to be displayed of the user-driven dynamic content, and processes the contents 2 and the action items 17 based on the location input 22 to generate the integrated view 19.
Fig. 2 shows a flowchart depicting the mechanism for embedding the content with action items which are selectable when the content is being viewed. The content may be static content, dynamic content or user-driven content, and may include audio, video, text, animations, etc. In step 101, a content creator creates content, such as a video, with action items which may be accessed by another user. The content creator can add action items that pertain to selling merchandise, knowing more about the content being displayed, responding to an invite, etc. In step 102, the content creator creates a list of action items, as mentioned above, to be added to the content, such as booking a show ticket, subscribing to a service or website, purchasing merchandise, donating to a charity, etc. In step 103, the action items are embedded into the respective frames. The activity of embedding the action items with the content can alternatively be performed by a service provider instead of the content creator. In step 104, the details pertaining to the action items are entered into a database and the embedded action items are linked to this database. For example, if the action item pertains to purchasing a particular piece of merchandise displayed in the video, the database would contain details such as price, dimensions of the merchandise, available quantity, delivery locations, sale locations, etc.
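Steps 103 and 104 of Fig. 2, embedding action items into their frames and linking them to a database of details, can be sketched as a small pipeline. The dictionary shapes and key names are assumptions made for illustration:

```python
def embed_action_items(content_frames, action_items, database):
    """Sketch of steps 103-104: attach each action item to its target
    frame and record its details (price, quantity, ...) in a database
    to which the embedded item is linked by its id."""
    embedded = {i: [] for i in range(len(content_frames))}
    for item in action_items:
        database[item["id"]] = item["details"]   # step 104: record details
        embedded[item["frame"]].append(item["id"])  # step 103: embed
    return embedded

db = {}
frames = ["intro", "product-shot", "outro"]
items = [{"id": "buy-1", "frame": 1, "details": {"price": 49.0}}]
layout = embed_action_items(frames, items, db)
```

Steps 105-107 (payment gateway, vendor hand-off, publication) would then operate on the `database` records produced here.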
In the next step 105, the relevant action items are connected to a payment gateway by the content creator, which may be activated once a user has completed the previous steps and has expressed an interest in buying merchandise, making a donation, etc. Once the payment activity is completed by a user, in cases where shipping of merchandise needs to be performed, the appropriate information is shared with authorized vendors to ensure that the merchandise is delivered to the user in step 106. In step 107, the content along with the embedded action items is shared on an online platform, making it accessible to users.
Fig. 3 shows a flowchart depicting the mechanism for accessing the embedded content by a user. In step 201, a user starts consuming content, which may be video, audio, a webpage, animations, etc. The content being consumed is embedded with action items that pertain to different activities such as purchasing a product, donating to an organization, getting more information about the content being displayed, reviews of a book the user is viewing, etc. As shown in step 202, a user can select any of the action items by accessing the indication/icon provided in part of the screen, which informs the user that further actions can be performed on the content being consumed. The user can perform different actions, such as shaking the device once or multiple times, snapping fingers once or more than once, whistling, or simply clicking on the indication/icon provided, for the action frames to appear, as depicted in step 203. In step 204, all the action items that can be performed on the content become available at the indication/icon provided on the screen. The numerical count indicates the number of action items that are available for the content. In step 205, the user can continue to consume the content or select any of the action items available for that content. Once the user completes consuming the content in step 206, the user may proceed to step 207, where the user accesses the indication/icon for further actions and selects the action to be performed. Based on the action item selected, step 208 is performed, wherein the user checks for reviews, views the price of a product, the quantity available, etc., and removes any details that the user is not interested in pursuing. Further, in step 209, the user completes the selection procedure and then proceeds to the payment activities that may be associated with the selected action item. The user may also need to provide shipping details along with payment in case the user is performing a purchasing activity.
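The accumulation and selection behavior of steps 204-207 can be sketched as follows: action items accumulate behind the indication as frames are consumed, and the user's selections are filtered out for execution. The function signature and data shapes are illustrative assumptions:

```python
def consume(content_frames, actions_by_frame, selections):
    """Sketch of steps 204-207 of Fig. 3: as each frame is consumed,
    its action items accumulate behind the indication/icon; the items
    the user selected are returned for the execution phase."""
    pending = []
    for i, _frame in enumerate(content_frames):
        pending.extend(actions_by_frame.get(i, []))   # step 204
    return [a for a in pending if a in selections]    # step 207

chosen = consume(["f0", "f1"],
                 {0: ["review"], 1: ["buy"]},
                 {"buy"})
```

Steps 208-209 (reviewing details, payment, shipping) would then run on each item in `chosen`.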
Once the user completes the payment activity, the user can continue watching the content from the point where it had been halted, or new content may be started.
While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to implement the inventive concept as taught herein.
The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.
List of references
1 client server system
2 contents
3 client device
4 server device
5 client processor
6 server processor
7 memory device
8 display unit
9 data storage
10 retrieving input
11 gesture based input
12 sound based input
13 touch based input
14 gyrometer based input
15 keyboard based input
16 tap based input
17 action item
18 execution input
19 integrated view
20 micro application
21 indication
22 location input
23 request sent to server from client device
24 selection input

Claims

We claim:
1. A client-server system (1) for providing contents (2) with embedded actions, the client-server system (1) comprising a client device (3) and a server device (4), the client device (3) comprising a selection of input unit adapted to receive one or more pieces of contents (2) and one or more action items (17) to be embedded into the contents (2), and a client processor (5), wherein the action items (17) are selectable using a selection input and, on selection, adapted to execute a micro application (20) related to the action item (17) on the client processor (5), the server device (4) comprising a server processor (6), wherein the client processor (5) and the server processor (6) cooperate:
- to receive the contents (2) and the action items (17) from the input unit;
- to process the contents (2) and the action items (17) by integrating the contents (2) and the actions in a layered fashion and to generate the integrated view (19); and
- to store the integrated view (19) in a memory device (7).
2. The client-server system (1) according to claim 1, wherein the client processor (5) is adapted to receive the integrated view (19) from the memory device (7), to render the integrated view (19) onto a display unit (8), to receive a selection input (24) from one of the input unit for selection of one or more of the action item (17), and to store the selected action item (17) in a data storage (9).
3. The client-server system (1) according to the claim 2, wherein the client processor (5) is adapted to receive a retrieving input (10) for retrieving the selected action items (17) from the data storage (9), and to render the selected action items (17) on the display unit (8).
4. The client-server system (1) according to the claim 3, wherein the processors (5, 6) are adapted to
- to receive an execution input (18) for at least one of the selected items displayed on the display unit (8);
- to access the memory device (7) based on the execution input (18) and to fetch the micro application (20) related to the selected action item (17); and
- to execute the micro application (20).
5. The client-server system (1) according to the claim 4, wherein the processor (5) is adapted to execute the micro application (20) while the contents (2) are being displayed,
- either by dividing a screen of the display unit (8) logically into sections to execute and render the micro application (20), and as well to render the integrated view,
- or by rendering the micro application (20) in a window smaller with respect to size of the screen, and overlapping the window onto the integrated view (19) being displayed.
6. The client-server system (1) according to any of the claims 2 to 5, wherein the processor (5) is adapted to store the selected action item (17) along with a part of the content (2) associated with the selected action item, and to render the selected action items (17) along with the part of the content (2) on the display unit (8) based on the retrieving input (10).
7. The client-server system (1) according to any of the claims 2 to 6, wherein the processors (5,6) cooperate to generate an indication (21) related to presence of the actionable item, and to render the indication (21) onto the display unit (8) whenever a part of the integrated view (19) being displayed has the actionable item.
8. The client-server system (1) according to any of the claims 1 to 7, wherein the processors (5, 6) are adapted to integrate each of the action item (17) in a separate layer.
9. The client-server system (1) according to any of the claims 1 to 8, wherein the content (2) is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content, wherein the static content is defined as the content which has a single frame, the self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention, and the user-driven dynamic content is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input unit, wherein the processor (5) is adapted to receive a location input (22) from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or at a location onto a part to be displayed of the user-driven dynamic content, and to process the contents (2) and the action items based on the location input (22) to generate the integrated view (19).
10. The client-server system (1) according to any of the claims 1 to 9, wherein the selection input is at least one from the group of:
- a gesture-based input (11) made by making a gesture,
- a sound-based input (12) made by making a sound,
- a touch-based input (13) made by making a touch pattern on a touch screen,
- a gyrometer-based input (14) made by shaking or flicking the device,
- a keyboard-based input (15) made by pressing one or more keys of a keyboard, or
- a tap-based input (16) made by tapping on a tapping device, or combination thereof.
11. A client device (3) for facilitating embedding action items (17) within contents (2), the client device (3) comprising:
- a selection of input unit adapted to receive one or more pieces of contents (2) and one or more action items (17);
- a client processor (5) adapted to receive the contents (2) and the action items (17), and to provide the contents (2) and the action items (17) to a server processor (6), wherein the server processor (6) processes the contents (2) and the action items (17) by integrating the contents (2) and the actions in a layered fashion, generates the integrated view (19), and stores the integrated view (19) in a memory device (7).
12. A client device (3) for rendering an integrated view (19) of contents (2) embedding action items (17), and further enabling actions onto the action items (17), the client device (3) comprising a client processor (5), a selection of input unit, a display unit (8), and a data storage (9), the client processor (5) which is adapted
- to receive the integrated view (19) from a memory device (7);
- to render the integrated view (19) onto a display unit (8);
- to receive a selection input (24) from one of the input unit for selection of one or more of the action item (17); and
- to store the selected action item (17) in a data storage (9).
13. The client device (3) according to the claim 12, wherein the client processor (5) is adapted to receive a retrieving input (10) for retrieving the selected action items (17) from the data storage (9), and to render the selected action items (17) on the display unit (8).
14. The client device (3) according to the claim 13, wherein the client processor (5) is adapted to
- to receive an execution input (18) for at least one of the selected items displayed on the display unit (8);
- to access the memory device (7) based on the execution input (18) and to fetch the micro application (20) related to the selected action item (17); and
- to execute the micro application (20).
15. The client device (3) according to the claim 14, wherein the client processor (5) is adapted to execute the micro application (20) while the contents (2) are being displayed,
- either by dividing a screen of the display unit (8) logically into sections to execute and render the micro application (20), and as well to render the integrated view (19),
- or by rendering the micro application (20) in a window smaller with respect to size of the screen, and overlapping the window onto the integrated view (19) being displayed.
16. The client device (3) according to any of the claims 12 to 15, wherein the client processor (5) is adapted to store the selected action item (17) along with a part of the content (2) associated with the selected action item (17), and to render the selected action items (17) along with the part of the content (2) on the display unit (8) based on the retrieving input (10).
17. The client device (3) according to any of the claims 12 to 16, wherein the client processor (5) is adapted to identify presence of the action item (17) in a part of the integrated view (19) currently being rendered, to generate an indication (21) related to presence of the action item (17), and to render the indication (21) onto the display unit (8).
18. A server device (4) placed in a client-server environment (1) having a server device (4) and one or more client devices (3), wherein the server device (4) facilitates embedding of action items (17) in the contents (2), the server device (4) comprising a server processor (6) and a memory device (7), the server processor (6) being adapted:
- to receive one or more pieces of contents (2) and one or more action items (17) from one or more of the client devices (3);
- to process the contents (2) and the action items (17) by integrating the contents (2) and the actions in a layered fashion and to generate the integrated view (19); and
- to store the integrated view (19) in the memory device (7).
19. The server device (4) according to the claim 18, wherein the server processor (6) is adapted to integrate an indication (21) related to presence of the action item (17) along with the action item (17), such that the indication (21) is adapted to be rendered onto a display unit (8) whenever a part of the integrated view (19) being displayed has the actionable item.
20. The server device (4) according to any of the claims 18 or 19, wherein the server processor (6) is adapted to integrate each of the action item (17) in a separate layer.
21. The server device (4) according to any of the claims 18 to 20, wherein the content (2) is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content, wherein the static content is defined as the content which has a single frame, the self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention, and the user-driven dynamic content is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input unit, wherein the server processor (6) is adapted to receive a location input (22) from the client device (3) regarding a location on one of the frames of the static content or the self-playing dynamic content, or at a location onto a part to be displayed of the user-driven dynamic content, and to process the contents and the action items based on the location input (22) to generate the integrated view (19).
22. A computer program product stored on a non-transitory device, the computer program product adapted to be executed on one or more processors placed in a client-server environment and on execution adapted to enable the processor/s:
- to receive one or more pieces of contents (2) and one or more action items (17) from a selection of input unit; and
- to process the contents (2) and the action items (17) by integrating the contents (2) and the actions in a layered fashion and to generate the integrated view (19), wherein the action items (17) are selectable using a selection input and, on selection, adapted to execute a micro application (20) related to the action item (17) on the processor.
PCT/IB2018/050449 2017-01-26 2018-01-25 Embedding interactive elements in the content and user interaction with the interactive elements WO2018138664A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201741002975 2017-01-26
IN201741002975 2017-01-26

Publications (1)

Publication Number Publication Date
WO2018138664A1 true WO2018138664A1 (en) 2018-08-02

Family

ID=62977974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/050449 WO2018138664A1 (en) 2017-01-26 2018-01-25 Embedding interactive elements in the content and user interaction with the interactive elements

Country Status (1)

Country Link
WO (1) WO2018138664A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5760773A (en) * 1995-01-06 1998-06-02 Microsoft Corporation Methods and apparatus for interacting with data objects using action handles
US20040054968A1 (en) * 2001-07-03 2004-03-18 Daniel Savage Web page with system for displaying miniature visual representations of search engine results
US8015259B2 (en) * 2002-09-10 2011-09-06 Alan Earl Swahn Multi-window internet search with webpage preload

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022161335A1 (en) * 2021-01-27 2022-08-04 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium
US12366950B2 (en) 2021-01-27 2025-07-22 Beijing Zitiao Network Technology Co., Ltd. Page-based interaction method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18745233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18745233

Country of ref document: EP

Kind code of ref document: A1