
WO2018138664A1 - Embedding interactive elements into content and user interaction with the interactive content - Google Patents

Embedding interactive elements into content and user interaction with the interactive content

Info

Publication number
WO2018138664A1
WO2018138664A1 (PCT/IB2018/050449)
Authority
WO
WIPO (PCT)
Prior art keywords
client
content
input
contents
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2018/050449
Other languages
English (en)
Inventor
Rajesh SHENOY
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
V C Prasanth
Varghese Vinu
Original Assignee
V C Prasanth
Varghese Vinu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by V C Prasanth, Varghese Vinu filed Critical V C Prasanth
Publication of WO2018138664A1
Anticipated expiration: Critical
Current legal status: Ceased

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Definitions

  • The invention relates to creating interactive contents and posting them on an online platform or an application connectable to the internet, and further to allowing interaction with the posted contents on that online platform or application.
  • The content available today on most web and mobile platforms is essentially one-sided. The viewer mostly only has the option of viewing the content; at best he can leave a comment or like the video. If the viewer wants to explore the content further, buy something he saw, or take an action based on the content, he has to open another application or a web browser and continue there.
  • The object of the invention is achieved by a client-server system for providing contents with embedded actions.
  • The client-server system has a client device and a server device, where the client device is provided with a selection of input units adapted to receive more than one piece of content and one or more action items to be embedded into the contents, and a client processor.
  • The action items are selectable using a selection input, and on selection a micro application related to the action item is executed in the client processor.
  • The server device has a server processor, wherein the client processor and the server processor are adapted to receive the contents and the action items from the input unit, process the contents along with the action items by integrating the contents with the actions in a layered fashion so as to generate an integrated view, and store the integrated view in a memory device.
  • The client processor is adapted to receive the integrated view from the memory device and render it onto a display unit of the client device.
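The layered integration described above can be sketched informally. All class names, fields, and the `micro_app_id` identifier below are illustrative assumptions, not terminology from the specification:

```python
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    label: str           # e.g. "Buy this jacket"
    micro_app_id: str    # identifier of the micro application to execute

@dataclass
class Layer:
    kind: str            # "content", "info", or "action"
    payload: object      # media reference, metadata dict, or an ActionItem

@dataclass
class IntegratedView:
    layers: list = field(default_factory=list)

    def add_content(self, media_ref):
        # The initial layer holds the content itself (video, audio, text, image).
        self.layers.append(Layer("content", media_ref))

    def embed_action(self, item: ActionItem):
        # Each action item gets its own dedicated layer.
        self.layers.append(Layer("action", item))

view = IntegratedView()
view.add_content("video://product-demo.mp4")
view.embed_action(ActionItem("Buy jacket", "checkout"))
print(len(view.layers))  # 2
```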
  • The client processor receives a selection input from one of the input units for selection of one or more of the action items, and stores the selected action items in a data storage.
  • The client processor is adapted to receive a retrieving input for retrieving the selected action items from the data storage and render the selected action items on the display unit.
  • The client and server processors are adapted to receive an execution input for at least one of the selected items displayed on the display unit. The memory device is accessed based on the execution input so as to fetch the micro application related to the selected action item, and the said micro application is executed.
  • The processor is adapted to execute the micro application such that the screen of the display unit is divided logically into sections, so that the micro application can be executed and rendered alongside the integrated view.
  • The contents can also be displayed by rendering the micro application in a window smaller than the screen, and overlapping the window onto the integrated view being displayed.
  • The processor is adapted to store the selected action item along with the part of the content associated with that action item.
  • The selected action items are rendered along with that part of the content on the display unit based on the retrieving input.
  • The client and server processors cooperate to generate an indication related to the presence of an actionable item, and to render the indication onto the display unit whenever the part of the integrated view being displayed has an actionable item.
  • The client and server processors are adapted to integrate each action item in a separate layer.
  • The content is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content.
  • The static content is defined as content which has a single frame.
  • The self-playing dynamic content is defined as content which has multiple frames and which is displayed frame by frame automatically, without human intervention.
  • The user-driven dynamic content is defined as content which changes a currently displayed part of the content, completely or partially, based on a user input received from one of the input units.
  • The processor is adapted to receive a location input from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or a location on a part to be displayed of the user-driven dynamic content, and to process the contents as well as the action items based on the location input to generate the integrated view.
  • The selection input belongs to at least one of the following groups: a gesture-based input made by making a gesture, a sound-based input made by making a sound, a touch-based input made by making a touch pattern on a touch screen, a gyrometer-based input made by shaking or flicking the device, a keyboard-based input made by pressing one or more keys of a keyboard, a tap-based input made by tapping on a tapping device, or any combination thereof.
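The input groups listed above lend themselves to a simple dispatch table. This hypothetical sketch assumes events arrive as dictionaries with a `type` field; the short type keys are invented for illustration:

```python
def classify_selection_input(event: dict) -> str:
    # Map a raw event type onto one of the selection-input groups above.
    groups = {
        "gesture": "gesture-based",
        "sound": "sound-based",
        "touch": "touch-based",
        "gyro": "gyrometer-based",
        "key": "keyboard-based",
        "tap": "tap-based",
    }
    kind = groups.get(event.get("type"))
    if kind is None:
        raise ValueError(f"unsupported selection input: {event}")
    return kind

print(classify_selection_input({"type": "gyro", "shakes": 2}))  # gyrometer-based
print(classify_selection_input({"type": "tap"}))                # tap-based
```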
  • Fig. 1 illustrates a client-server system for embedding action items in content to generate an integrated view, and further enabling consumption of the integrated view by various clients in the client-server environment.
  • Fig. 2 illustrates the mechanism for embedding content.
  • Fig. 3 illustrates the mechanism for accessing the embedded content by a client device.
  • the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
  • The invention focuses on a technique for embedding content with action items, which allows a user to interact with the content by performing actions onto the action items embedded within the contents which the user is currently consuming.
  • Conventionally, a user may view, read or hear content, and if the user desires to perform further actions with respect to the content, a new window has to be opened to perform the action on an aspect described in or related to the content.
  • Some of the actions which a user may like to perform on content include reading/listening to reviews of books/movies/shows, retrieving more details on a specific subject or topic, purchasing a product, etc.
  • The present invention provides a seamless way for a user to consume content as well as interact with the content so that further actions can be performed on it.
  • Interaction with the content is enabled by embedding action items with the content, which are combined to form an integrated view that an end user can see when the content is requested.
  • A mechanism is provided for delivering contents with embedded actions in a client-server environment. Such an implementation is shown in Fig. 1.
  • Fig. 1 shows a client-server system 1 for providing an integrated view 19 of contents 2 having embedded action items 17, which allows a user to perform actions onto the action items 17 embedded into the contents 2 that are currently being consumed.
  • The client-server system 1 has a client device 3 and a server device 4, where the client device 3 has a client processor 5 and the server device 4 has a server processor 6.
  • The client processor 5 is capable of receiving one or more contents 2 as well as the action items 17 which are to be embedded with the contents 2.
  • The client processor 5 is also adapted to receive the contents 2 and the actionable items 17 from a plurality of selection inputs, which include a gesture-based input 11 made by making a gesture, a sound-based input 12 made by making a sound, a touch-based input 13 made by making a touch pattern on a touch screen, a gyrometer-based input 14 made by shaking or flicking the device, a keyboard-based input 15 made by pressing one or more keys of a keyboard, a tap-based input 16 made by tapping on a tapping device, or a location input 22 which indicates the geographic location of the user.
  • The selection inputs may also be a combination of any of the aforementioned inputs.
  • The integrated view 19 may be created either by a content creator or a service provider.
  • The server processor 6 receives the contents 2 along with the action items 17 from the client processor 5 and processes them to generate the integrated view 19.
  • The integrated view 19 is generated by combining the contents 2 and the action items 17 in a layered manner: the initial layer may include the contents 2, which may be video, audio, text, image, etc.; a second layer may include further information pertaining to the contents 2; and a further layer includes the action items 17 associated with the contents 2.
  • For example, a user watching a video may be interested in purchasing an item which has been shown in the video.
  • The characteristics of the item that the user is interested in should also be available to the user.
  • In this case, the video would form the first layer, while the second layer would contain information on the item such as price, quantity, dimensions, etc.
  • The third layer would be the payment gateway for completion of the purchasing activity.
  • Similarly, a user watching a program/show might be interested in knowing the details of the participants; the participant details may be provided as an additional layer which is presented to the user if the user selects the relevant action item.
  • In some cases, the second layer of information pertaining to the contents is not required. The said layers may be activated after the user has started consuming the content 2 and may end after, or along with, the end of the content 2 being consumed.
  • Each action item 17 that is associated with the content 2 is provided in a dedicated layer; therefore, based on the action items 17 available for a content 2, the number of layers forming the integrated view increases accordingly.
  • The integrated view 19 which has been generated is stored in a memory device 7, from which it can be retrieved by the server processor 6 upon a request from a client device 3, which can be the same client device which provided the contents 2 along with the actionable items 17, or a different client device connected to the client-server system.
  • The integrated view 19 requested by the processor 5 of a client device 3 is rendered on the display unit 8 of the client device 3.
  • The client device 3 receives a selection input 24 so that one or more action items 17 are selected.
  • The selected action items 17 are stored in a data storage 9 for future retrieval.
  • When a retrieving input 10 is received by the client processor 5, the selected actionable items are retrieved and displayed on the display unit 8.
  • The content 2, or information related to the content 2, that is associated with the selected action item 17 is also displayed along with the selected action item 17.
  • The action items 17 that can be selected are determined by rules defined with respect to the specific action item 17. For example, a user desirous of buying a particular product may be allowed to select the action item pertaining to sale of the product if a shipping facility for the product is available in the geographic region where the user resides.
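The shipping-region rule in the example above could be checked roughly as follows; the dictionary keys are invented for illustration, not taken from the specification:

```python
def is_selectable(action_item: dict, user_region: str) -> bool:
    # A per-item rule: the "buy" action is offered only where shipping exists.
    ship_regions = action_item.get("shipping_regions")
    if ship_regions is None:       # no rule attached: always selectable
        return True
    return user_region in ship_regions

item = {"label": "Buy jacket", "shipping_regions": {"IN", "US"}}
print(is_selectable(item, "IN"))   # True
print(is_selectable(item, "FR"))   # False
```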
  • Whenever the integrated view 19 is displayed onto the display unit 8 and the part being presented contains an action item 17, its presence is indicated through an indication 21, which can be a special sound, a visual indicator, or any such indication which can be easily noticed by a user of the client device 3.
  • The action items 17 may be indicated as a numerical count in the indication 21, which is activated whenever the displayed part of the integrated view 19 has an actionable item.
  • A user may select the indication 21 as the selection input 24 to perform further actions on the content 2, either after consuming the content 2 or while consuming it.
  • The user may also opt to retrieve the actionable items present in the indication 21 later in the day, or on a following day as well.
  • the selected action items 17 are retrieved by providing a retrieving input 10 to the client processor 5.
  • The retrieving input 10 can be provided by shaking the client device 3, such that one shake indicates retrieving one action item 17, while shaking multiple times indicates retrieving multiple action items 17.
  • The retrieving input 10 can also be provided by making a whistling sound, where one whistle indicates retrieving one action item 17, while more than one whistle indicates retrieving multiple action items 17.
  • The client processor 5 may also be able to receive a verbal command made in a natural language for retrieving the action item(s) 17, wherein the processor 5 may be pre-configured to decipher the natural language.
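The count-based retrieval described above (one shake or whistle per item) reduces to a simple slice. Representing the stored action items as a list is an assumption of this sketch:

```python
def retrieve_action_items(stored: list, gesture_count: int) -> list:
    # One shake/whistle retrieves one item; n gestures retrieve up to n items.
    return stored[:max(0, gesture_count)]

stored = ["buy-jacket", "read-review", "donate"]
print(retrieve_action_items(stored, 1))  # ['buy-jacket']
print(retrieve_action_items(stored, 2))  # ['buy-jacket', 'read-review']
```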
  • The client processor 5 is also adapted to receive an execution input 18 for any of the action items 17 associated with the content 2. Based on the action item 17 that has been selected, a micro application 20 pertaining to the action item 17 is executed on the client processor 5.
  • The micro applications can be stored either at the client device 3 or at the server device 4. Generally, if the micro application 20 is frequently used, it can be stored at the client device 3, and if the micro application 20 is rarely used, it can be stored at the server device 4.
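The placement heuristic above (frequently used micro applications on the client, rarely used ones on the server) might look like the following; the usage-count threshold is an invented parameter, not something the specification fixes:

```python
def storage_location(use_count: int, threshold: int = 10) -> str:
    # Frequently used micro apps live on the client for fast launch;
    # rarely used ones stay on the server and are fetched on demand.
    return "client" if use_count >= threshold else "server"

print(storage_location(25))  # client
print(storage_location(2))   # server
```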
  • a web page pertaining to the action item 17 can be loaded onto a web browser installed at the client device 3.
  • The micro application 20 may be executed by dividing the screen of the display unit 8 into sections, such that the micro application 20 as well as the integrated view 19 can be rendered for the user. Alternatively, the micro application 20 may be displayed in a small window on the screen while the integrated view 19 continues to be rendered on the display unit 8.
  • the content 2 is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content.
  • The static content is defined as the content which has a single frame, for example images.
  • The self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention. Examples are video, animation, graphics interchange format (GIF), etc.
  • The user-driven dynamic content is defined as the content which changes a currently displayed part of the content, completely or partially, based on a user input received from one of the input units, for example documents, webpages, etc.
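The three content categories defined above can be distinguished by frame count and playback behaviour. The function signature below is an illustrative assumption:

```python
def categorize(frames: int, advances_automatically: bool) -> str:
    # Single frame: static (e.g. an image).
    if frames == 1:
        return "static"
    # Multiple frames advancing without user input: video, animation, GIF.
    if advances_automatically:
        return "self-playing dynamic"
    # Multiple views changed by user input: document, webpage.
    return "user-driven dynamic"

print(categorize(1, False))   # static
print(categorize(300, True))  # self-playing dynamic
print(categorize(40, False))  # user-driven dynamic
```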
  • The server processor 6 receives a location input 22 from the input unit regarding a location on one of the frames of the static content or the self-playing dynamic content, or a location on a part to be displayed of the user-driven dynamic content, and processes the contents 2 and the action items 17 based on the location input 22 to generate the integrated view 19.
  • Fig. 2 shows a flowchart depicting the mechanism for embedding the content with action items which are selectable when the content is being viewed.
  • The content may be static content, dynamic content or user-driven content, and may include audio, video, text, animations, etc.
  • A content creator can create content, such as a video, with actionable items which may be accessed by another user.
  • The content creator can add action items that pertain to selling merchandise, knowing more about the content being displayed, responding to an invite, etc.; the user then acts by selecting any of these action items.
  • In step 102, the content creator creates a list of action items, as mentioned above, that shall be added to the content, such as booking a show ticket, subscribing to a service/website, purchasing merchandise, donating to a charity, etc.
  • The action items are embedded into the respective frames. The activity of embedding the action items with the content can alternatively be performed by a service provider instead of the content creator.
  • In step 104, the details pertaining to the action items are entered into a database and the embedded action items are linked to this database. For example, if the action item pertains to purchasing a particular merchandise being displayed in the video, the database would contain details such as price, dimensions of the merchandise, available quantity, delivery locations, sale locations, etc.
  • The relevant action items are connected to a payment gateway by the content creator, which may be activated once a user has completed the previous steps and has expressed an interest in buying merchandise, making a donation, etc.
  • In step 106, the payment activity is completed by the user; in cases where merchandise needs to be shipped, appropriate information is shared with authorized vendors to ensure that the merchandise is delivered to the user.
  • The content along with the embedded action items is shared on an online platform, making it accessible to users.
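The embedding flow of Fig. 2 can be condensed into a rough sketch. The step mapping is paraphrased and every name here is an assumption rather than the specification's terminology:

```python
def embed_and_publish(content, action_items, database):
    # Embed each action item into its frame and record its details
    # in the linked database (roughly steps 102-104 above).
    embedded = []
    for item in action_items:
        database[item["id"]] = item["details"]   # details: price, quantity, ...
        embedded.append({"frame": item["frame"], "id": item["id"]})
    # The result is ready to be shared on an online platform.
    return {"content": content, "actions": embedded}

db = {}
published = embed_and_publish(
    "video://show.mp4",
    [{"id": "buy-1", "frame": 120, "details": {"price": 49.0}}],
    db,
)
print(db["buy-1"]["price"])        # 49.0
print(len(published["actions"]))   # 1
```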
  • Fig. 3 shows a flowchart depicting the mechanism for accessing the embedded content by a user.
  • A user starts consuming content, which may be video, audio, a webpage, animations, etc.
  • The content being consumed is embedded with action items that pertain to different activities, such as purchasing a product, donating to an organization, getting more information about the content being displayed, reviews of a book that the user is viewing, etc.
  • As shown in step 202, a user can select any of the actionable items by accessing the indication/icon provided on part of the screen, which informs the user that further actions can be performed on the content being consumed.
  • As depicted in step 203, the user can perform different actions, such as shaking the device once or multiple times, snapping fingers once or more than once, whistling, or simply clicking on the indication/icon provided, for the action frames to appear.
  • All the action items that can be performed on the content become available at the indication/icon provided on the screen.
  • The numerical count indicates the number of action items that are available for the content.
  • A user can continue to consume the content or select any of the action items available for that content. Once the user completes consuming the content in step 206, the user may proceed to step 207, where the user accesses the indication/icon for further actions and selects the action to be performed.
  • Step 208 is then performed, wherein the user checks for reviews, views the price of a product, the quantity available, etc., and removes any details that the user is not interested in pursuing. Further, in step 209, the user completes the selection procedure and then proceeds to any payment activities associated with the selected action item. The user may also need to provide shipping details along with payment in case the user is performing a purchasing activity. Once the user completes the payment activity, the user can continue watching the content from the point where it had been halted, or new content may be started.
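The consumption flow of Fig. 3, reduced to its save-for-later core (notice the indicator, collect items while watching, act on them afterwards), might be sketched as follows; the list-of-strings representation is an assumption:

```python
def consume(content_actions, picks):
    # While (or after) watching, the user taps the indicator and saves
    # selected action items for later (roughly steps 202-205 above).
    saved = []
    for idx in picks:
        saved.append(content_actions[idx])
    return saved

actions = ["buy-jacket", "read-review", "donate"]
saved = consume(actions, [0, 2])
print(saved)  # ['buy-jacket', 'donate']
```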

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention concerns a client-server system for providing contents with embedded actions. The client-server system comprises a client device and a server device, the client device being provided with a selection of input units adapted to receive more than one piece of content and one or more action items to be embedded into the content, and a client processor. The action items are selectable using a selection input, and upon selection a micro application associated with the action item is executed in the client processor. The server device comprises a server processor, the client processor and the server processor being adapted to receive the contents and the action items from the input unit, to process the contents together with the action items by integrating the contents with the actions in a layered manner so as to generate an integrated view, and to store the integrated view in a memory device.
PCT/IB2018/050449 2017-01-26 2018-01-25 Embedding interactive elements into content and user interaction with the interactive content Ceased WO2018138664A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201741002975 2017-01-26
IN201741002975 2017-01-26

Publications (1)

Publication Number Publication Date
WO2018138664A1 true WO2018138664A1 (fr) 2018-08-02

Family

ID=62977974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/050449 Ceased WO2018138664A1 (fr) 2017-01-26 2018-01-25 Embedding interactive elements into content and user interaction with the interactive content

Country Status (1)

Country Link
WO (1) WO2018138664A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022161335A1 (fr) * 2021-01-27 2022-08-04 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5760773A (en) * 1995-01-06 1998-06-02 Microsoft Corporation Methods and apparatus for interacting with data objects using action handles
US20040054968A1 (en) * 2001-07-03 2004-03-18 Daniel Savage Web page with system for displaying miniature visual representations of search engine results
US8015259B2 (en) * 2002-09-10 2011-09-06 Alan Earl Swahn Multi-window internet search with webpage preload


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022161335A1 (fr) * 2021-01-27 2022-08-04 北京字跳网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium
US12366950B2 (en) 2021-01-27 2025-07-22 Beijing Zitiao Network Technology Co., Ltd. Page-based interaction method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
US9285977B1 (en) Card based package for distributing electronic media and services
US9582485B2 (en) Authoring and delivering wrap packages of cards with custom content to target individuals
US10019730B2 (en) Reverse brand sorting tools for interest-graph driven personalization
US20140195890A1 (en) Browser interface for accessing supplemental content associated with content pages
US20140195337A1 (en) Browser interface for accessing supplemental content associated with content pages
CA2867833C (fr) Intelligent navigation and content
US20140289611A1 (en) System and method for end users to comment on webpage content for display on remote websites
US20160132894A1 (en) Digital companion wrap packages accompanying the sale or lease of a product and/or service
KR20130129213A (ko) Social overlays on advertisements
US20160350884A1 (en) Creating and delivering a digital travel companion of a wrapped package of cards
CN107003874B (zh) Proactive presentation of multitasking workflow components to improve user efficiency and interaction performance
US9875497B1 (en) Providing brand information via an offering service
US20160104116A1 (en) Creating and delivering an employee handbook in the form of an interactive wrapped package of cards
JP7134357B2 (ja) System and method for selecting and providing to a user actions available from one or more computer applications
US20160117068A1 (en) Wrapped packages of cards for conveying a story-book user experience with media content, providing application and/or web functionality and engaging users in e-commerce
US9235858B1 (en) Local search of network content
US20160104130A1 (en) Active receipt wrapped packages accompanying the sale of products and/or services
WO2018138664A1 (fr) Embedding interactive elements into content and user interaction with the interactive content
US20170115852A1 (en) Nested folder control
US20150332322A1 (en) Entity sponsorship within a modular search object framework
JP2020030484A (ja) Web page providing device, web page providing system, and web page providing program
US20250211819A1 (en) Presenting supplemental content with paused video
Garczarek-Bąk Trends in website development based on the chosen functionalities in qualitative aspect
WO2016057190A1 (fr) Digital companion wrap packages accompanying the sale or lease of a product and/or service
JP2024532669A (ja) Video stream interface based on third-party webpage information

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18745233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18745233

Country of ref document: EP

Kind code of ref document: A1