
WO2003017122A1 - Systems and methods for presenting customizable multimedia content - Google Patents

Systems and methods for presenting customizable multimedia content

Info

Publication number
WO2003017122A1
WO2003017122A1 (PCT/US2002/026250)
Authority
WO
WIPO (PCT)
Prior art keywords
content
viewer
user
context
information
Prior art date
Application number
PCT/US2002/026250
Other languages
English (en)
Inventor
David Tinsley
Frederick Joseph Patton II
Original Assignee
Interactive Sapience Corp.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/932,344 (published as US20040205648A1)
Priority claimed from US09/932,346 (published as US6744729B2)
Priority claimed from US09/932,345 (published as US20030041159A1)
Priority claimed from US09/932,217 (published as US20030043191A1)
Application filed by Interactive Sapience Corp. filed Critical Interactive Sapience Corp.
Publication of WO2003017122A1

Classifications

    • H04L 65/80: Network arrangements, protocols or services for supporting real-time applications in data packet communication; responding to QoS
    • H04L 65/756: Media network packet handling; adapting media to device capabilities
    • H04L 65/765: Media network packet handling, intermediate
    • H04L 67/61: Scheduling or organising the servicing of application requests taking into account QoS or priority requirements
    • H04L 69/329: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • G11B 2220/2562: DVDs [digital versatile discs]; digital video discs; MMCDs; HDCDs
    • G11B 27/034: Electronic editing of digitised analogue information signals, e.g. audio or video signals, on discs
    • G11B 27/105: Programmed access in sequence to addressed parts of tracks of operating discs
    • H04L 67/63: Routing a service request depending on the request content or context

Definitions

  • the present invention relates to systems and methods for supporting dynamically customizable contents.
  • the communications industry has traditionally included a number of media, including television, cable, radio, periodicals, compact discs (CDs) and digital versatile discs (DVDs). It has since grown to include Web-casters and cellular telephone service providers, among others.
  • One over-arching goal for the communications industry is to provide relevant information upon demand by a user.
  • television, cable and radio broadcasters and Web-casters transmit entertainment, news, educational programs, and presentations such as movies, sport events, or music events that appeal to as many people as possible.
  • An advertisement may be a paid announcement of goods or services for sale, a public notice, or any other such mechanism for informing the general public of a particular subject matter.
  • the advertisement should reach a large number of people and the advertisement should include information that is easy to recall.
  • an advertisement is a full-motion or still image video segment that is inserted into the video programming. The video segment is typically short, for example thirty to sixty seconds. Unfortunately, it is often difficult for an advertiser to provide detailed information regarding the product, service, or public notice during such a short time period. With the existing system, a viewer must be motivated to request information.
  • the viewer may be compelled to search for a paper and pen in order to write down the supplementary information.
  • the supplementary information may not be accurately committed to memory or recorded.
  • a conventional television advertisement may not effectively provide information to the viewer because the viewer cannot successfully remember or record the supplementary advertising information.
  • even if the telephone number, advertiser address, or Internet web site address is successfully remembered or recorded, it is necessary for the viewer to undertake later communication with the advertiser if the viewer is interested in learning more about the advertised product, service, or public notice. For example, following the advertisement, the viewer may call the advertiser over a conventional telephone line, send a letter to the advertiser using conventional mail delivery, or access the advertiser's web site through a computer system. Since making the request is inconvenient, the viewer may be less likely to request the information. Additionally, reliance on web pages can be problematic, since web pages can become outdated and web links can become invalid.
  • the system turns traditional content into a contextual database and continues to do so over time (well after initial content creation), so that viewers are always presented with the most current, focused information tree through the context of the content in question.
  • This experience is different from web browsing with the spaghetti of web links found on a typical web page, both in focus and reliability of the link.
  • because related content resides on a well-maintained central server in a network, the chances of broken links are reduced.
  • These contextual choices may result in the acquisition of additional content that may be of like or different content type, e.g., audiovisual data, text, charts, and interactive information models, such as a spreadsheet with input fields for a personal training record.
  • the system endows content with its contextual relationship between viewer and other content, creating intelligence in the presentation itself.
  • Such a system exceeds what third party ratings such as the Nielsen rating could do because of the continuous direct feedback provided to the system: the surveys are the actual users' interactions with the system. Even the most conscientious survey taker won't be able to capture such a level of detail.
  • Fig. 1 A shows an exemplary diagram showing the relationships among a user viewing content(s) in particular context(s).
  • Fig. 1B shows an exemplary presentation.
  • Fig. 2 shows one embodiment of a FABRIC for supporting customizable presentations.
  • Fig. 3 shows an exemplary operation for a local server.
  • Fig. 4 shows an exemplary authoring process.
  • Fig. 5 shows an exemplary process running on a viewing terminal.
  • Fig. 6 illustrates a process relating to content consumption within a browser/player.
  • Fig. 1 A shows an exemplary diagram showing the relationships among a user 1 viewing content 2 in particular context(s) 3.
  • the user 1 interacts with a viewing system through a user interface that can be a graphical user interface (GUI), a voice user interface (VUI), or a combination thereof.
  • the user 1 can simply request to see the content 2.
  • the content 2 is streamed and played to the user.
  • the user 1 can view the default stream, or can interact with the content 2 by selecting a different viewing angle, query for more information on a particular scene or actor/actress, for example.
  • the user interest exhibited implicitly in his or her selection and request is captured as the context 3.
  • the actions taken by the user 1 through the user interface are captured, and over time, the behavior of a particular user can be predicted based on the context 3.
  • the user 1 can be presented with additional information associated with a particular program. For example, as the user 1 is browsing through the programs, he or she may wish to obtain more information relating to specific areas of interest or concerns associated with the show, such as the actors, actresses, other movies released during the same time period, or travel packages or promotions that may be available through primary, secondary or third party vendors.
  • the captured context 3 is used to customize information to the viewer even with the multitude of programs broadcast every day.
  • the system can rapidly update and provide the available information to viewers in real time.
  • the combination of content 2 and context 3 is used to provide customized content, including advertising, to viewers.
  • the SDs attributed to it are located via the semantic map.
  • the score specified by the weight is added to the respective attribute subtotals located in a cumulative profile and session profile.
  • transitive aggregation is applied to related SDs via the Semantic Relationship table, applying the weight assigned to the relating attribute in the Semantic Map.
  • in Fig. 1B, the main presentation window is displayed along with a supplemental window running advertisements.
  • the advertisements might be image-only banners while the main presentation is playing, but whenever it is paused, including when the presentation is halted pending user selection, a video or audio-video advertisement might run. For full screen mode, the window might temporarily split for these purposes.
  • the system locates attributes linked directly via the Semantic Map and indirectly via the Semantic Relationships table, and updates the aggregate scores located in the session and cumulative user state attributes. This value is part of the current context. Should the user pause the presentation at this point, a commercial best fitting the current presentation context, the session context, or the user history could be selected via a comparison of attribute scores. In fact, whatever choice the user makes, the act is logged along with the current context. Activation of context menu options yields contextual content options valid for the present context.
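The weighted-attribute aggregation described above can be sketched as follows. This is a hypothetical reconstruction: the table contents, descriptor identifiers, and weights are illustrative assumptions, not taken from the patent.

```python
from collections import defaultdict

# Semantic map: presentation context descriptor (PCD) id -> [(SD, weight)]
semantic_map = {
    "scene_cafe_meeting": [("dramatic", 0.8), ("dialogue", 0.6)],
}
# Semantic relationships: SD -> [(related SD, relating weight)]
semantic_relationships = {
    "dramatic": [("suspenseful", 0.5)],
}

def apply_context(pcd_id, session_profile, cumulative_profile):
    """Add each SD's weighted score to the session and cumulative profiles,
    then transitively aggregate related SDs via the relationship table."""
    for sd, weight in semantic_map.get(pcd_id, []):
        for profile in (session_profile, cumulative_profile):
            profile[sd] += weight
        for related, rel_weight in semantic_relationships.get(sd, []):
            for profile in (session_profile, cumulative_profile):
                profile[related] += weight * rel_weight

session, cumulative = defaultdict(float), defaultdict(float)
apply_context("scene_cafe_meeting", session, cumulative)
```

Keeping separate session and cumulative subtotals, as the text describes, lets the system match against short-term context and long-term history independently.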
  • PCD 2 becomes valid, while PCD 1 remains valid.
  • the context state change for PCD 2 is sent to the system.
  • the feedback process described at time 0 recurs.
  • PCD 4 becomes invalid, while PCD 1 and 3 remain valid.
  • the context state change for PCD 4 is communicated to the system.
  • the feedback process described at time 0 recurs.
  • PCD 3 becomes invalid, while PCD 1 remains valid.
  • the context state change for PCD 3 is passed to the system.
  • the feedback process described at time 0 recurs.
  • PCD 6 becomes invalid, while PCD 1 and 5 remain valid.
  • the context state change for PCD 6 is communicated to the system.
  • the feedback process described at time 0 recurs.
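The PCD validity timeline above can be modeled as a small state machine. A minimal sketch, with the PCD numbering and the logging shape as assumptions:

```python
# Track which PCDs are currently valid as the presentation timeline advances;
# each context state change is reported to the system (logged here).
active = set()
log = []

def context_state_change(pcd_id, valid):
    if valid:
        active.add(pcd_id)
    else:
        active.discard(pcd_id)
    log.append((pcd_id, valid, frozenset(active)))

context_state_change(1, True)    # time 0: PCD 1 becomes valid
context_state_change(2, True)    # PCD 2 becomes valid, PCD 1 remains valid
context_state_change(2, False)   # PCD 2 becomes invalid, PCD 1 remains valid
```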
  • Fig. 2 shows an exemplary system that captures the context 3.
  • the system also stores content 2, serves content 2 and streams the content 2, as modified in real-time by the context 3, to the user 1 on-demand.
  • the system includes a switching FABRIC 50 connecting a plurality of networks 60.
  • the switching FABRIC 50 provides an interconnection architecture which uses multiple stages of switches 56 to route transactions between a source address and a destination address of a data communications network.
  • the switching FABRIC 50 includes multiple switching devices and is scalable because each of the switching devices of the FABRIC 50 includes a plurality of network ports and the number of switching devices of the FABRIC 50 may be increased to increase the number of network 60 connections for the switch.
  • the FABRIC 50 includes all networks which subscribe and are connected to each other, including wireless networks, cable television networks, and WANs such as Exodus, Quest, and DBN.
  • Computers 62 are connected to a network hub 64 that is connected to a switch 56, which can be an Asynchronous Transfer Mode (ATM) switch, for example.
  • Network hub 64 functions to interface an ATM network to a non-ATM network, such as an Ethernet LAN, for example.
  • Computer 62 is also directly connected to ATM switch 56.
  • Multiple ATM switches are connected to WAN 68.
  • the WAN 68 can communicate with FABRIC, which is the sum of all associated networks.
  • FABRIC is the combination of hardware and software that moves data coming in to a network node out by the correct port (door) to the next node in the network.
  • Each server 55 includes a content database that can be customized and streamed on-demand to the user. Its central repository stores information about content assets, content pages, content structure, links, and user profiles, for example.
  • Each regional server 55 (RUE) also captures usage information for each user, and based on data gathered over a period, can predict user interests based on historical usage information. Based on the predicted user interests and the content stored in the server, the server can customize the content to the user interest.
  • the regional server 55 (RUE) can be a scalable compute farm to handle increases in processing load.
  • the program to be displayed may be transmitted as an analog signal, for example according to the NTSC standard utilized in the United States, or as a digital signal modulated onto an analog carrier, or as a digital stream sent over the Internet, or digital data stored on a DVD.
  • the signals may be received over the Internet, cable, or wireless transmission such as TV, satellite or cellular transmissions.
  • a viewing terminal 70 includes a processor that may be used solely to run a browser GUI and associated software, or the processor may be configured to run other applications, such as word processing, graphics, or the like.
  • the viewing terminal's display can be used as both a television screen and a computer monitor.
  • the terminal will include a number of input devices, such as a keyboard, a mouse and a remote control device, similar to the one described above. However, these input devices may be combined into a single device that inputs commands with keys, a trackball, pointing device, scrolling mechanism, voice activation or a combination thereof.
  • the terminal 70 can include a DVD player that is adapted to receive an enhanced DVD that, in combination with the regional server 55 (RUE), provides a custom rendering based on the content 2 and context 3. Desired content can be stored on a disc such as DVD and can be accessed, downloaded, and/or automatically upgraded, for example, via downloading from a satellite, transmission through the internet or other on-line service, or transmission through another land line such as coax cable, telephone line, optical fiber, or wireless technology.
  • An input device can be used to control the terminal and can be a remote control, keyboard, mouse, a voice activated interface or the like.
  • the terminal may include a video capture mechanism such as a capture card connected to either live video, baseband video, or cable.
  • the video capture card digitizes a video image and displays the video image in a window on the monitor.
  • the terminal is also connected to a regional server 55 (RUE) over the Internet using various mechanisms. This can be a 56K modem, a cable modem, a wireless connection, or a DSL modem.
  • the user connects to a suitable Internet service provider (ISP), which in turn is connected to the backbone of the network 68 such as the Internet, typically via a T1 or a T3 line.
  • the ISP communicates with the viewing terminals 70 using a protocol such as point-to-point protocol (PPP) or serial line Internet protocol (SLIP) over one or more media or telephone networks, including landline, wireless, or a combination thereof.
  • the user interface is a GUI that supports Moving Picture Experts Group-4 (MPEG-4), a standard used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format.
  • the major advantage of MPEG compared to other video and audio coding formats is that MPEG files are much smaller for the same quality using high quality compression techniques.
  • the GUI (VUI) can be on top of an operating system such as the Java operating system. More details on the GUI are disclosed in the copending application entitled "SYSTEMS AND METHODS FOR DISPLAYING A GRAPHICAL USER INTERFACE", the content of which is incorporated by reference.
  • the terminal 70 is an intelligent entertainment unit that plays enhanced DVDs.
  • the terminal 70 monitors usage pattern entered through the browser and updates the regional server 55 (RUE) with user context data.
  • the regional server 55 (RUE) can modify one or more objects stored on the DVD, and the updated or new objects can be downloaded from a satellite, transmitted through the internet or other on-line service, or transmitted through another land line such as coax cable, telephone line, optical fiber, or wireless technology back to the terminal.
  • the terminal 70 in turn renders the new or updated object along with the other objects on the DVD to provide on-the-fly customization of a desired user view.
  • Object Descriptors (ODs) define the relationship between the Elementary Streams pertinent to each object (e.g., the audio and the video stream of a participant in a videoconference). ODs also provide additional information such as the URL needed to access the Elementary Streams, the characteristics of the decoders needed to parse them, intellectual property rights, and others.
  • Media objects may need streaming data, which is conveyed in one or more elementary streams.
  • An object descriptor identifies all streams associated with one media object. This allows handling hierarchically encoded data as well as the association of meta-information about the content (called 'Object Content Information') and the intellectual property rights associated with it.
  • Each stream is itself characterized by a set of descriptors for configuration information, e.g., to determine the required decoder resources and the precision of encoded timing information. Furthermore, the descriptors may carry hints about the Quality of Service (QoS) the stream requests for transmission (e.g., maximum bit rate, bit error rate, priority, etc.).
  • Synchronization of elementary streams is achieved through time stamping of individual access units within elementary streams.
  • the synchronization layer manages the identification of such access units and the time stamping. Independent of the media type, this layer allows identification of the type of access unit (e.g., video or audio frames, scene description commands) in elementary streams, recovery of the media object's or scene description's time base, and it enables synchronization among them.
  • the syntax of this layer is configurable in a large number of ways, allowing use in a broad spectrum of systems.
  • the synchronized delivery of streaming information from source to destination, exploiting different QoS as available from the network, is specified in terms of the synchronization layer and a delivery layer containing a two-layer multiplexer.
  • the first multiplexing layer is managed according to the DMIF specification, part 6 of the MPEG-4 standard. (DMIF stands for Delivery Multimedia Integration Framework)
  • This multiplex may be embodied by the MPEG-defined FlexMux tool, which allows grouping of Elementary Streams (ESs) with a low multiplexing overhead. Multiplexing at this layer may be used, for example, to group ESs with similar QoS requirements, reduce the number of network connections, or reduce the end-to-end delay.
  • the "TransMux" (Transport Multiplexing) layer models the layer that offers transport services matching the requested QoS.
  • the BiFS can contain interaction rules that query a field in a database.
  • the field can contain scripts that execute a series of rules-driven "If/Then" statements, for example: if user "X" fits "Profile A", then access Channel 223 for AVO 4.
  • This rules driven system can customize a particular object, for instance, customizing a generic can to reflect a Coke can, in a given scene.
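The If/Then rule above can be sketched as follows; the profile predicate, channel number, and AVO identifier are illustrative, and the rule table shape is an assumption:

```python
# Each rule pairs a profile predicate with a (channel, AVO) substitution;
# the first matching rule customizes the generic object in the scene.
rules = [
    (lambda user: user.get("profile") == "A", 223, 4),
]

def resolve_object(user, default=("local", 0)):
    """Return (channel, AVO id) for the object, customized per user profile."""
    for predicate, channel, avo in rules:
        if predicate(user):
            return (channel, avo)
    return default
```

A user fitting Profile A would thus receive the branded object (e.g., the Coke can) fetched from Channel 223, while other users see the generic default.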
  • Each POP sends its current load status and QoS configuration to the gateway hub, where predictive analysis is performed to handle load balancing of data streams and processor assignment, delivering consistent QoS for the entire network on the fly. The result is that content defines the configuration of the network once its BiFS layer is parsed and checked against the available DMIF configuration and network status.
  • a person using a connected wireless PDA, on a 3-G WAN can request access to a given channel, for instance channel 345. The request transmits from the PDA over the wireless network and channel 345 is accessed.
  • a timeline indicates the progression of the scene.
  • the content streams render the presentation proper, while presentation context descriptors reside in companion streams. Each descriptor indicates start and end time code. Pieces of context may freely overlap.
  • the presentation context is attributed to a particular ES, and each ES may or may not have contextual description. Presentation context of different ESs may reside in the same stream or different streams.
  • Each presentation descriptor has a start and end flag, with a zero for both indicating a point in between. Whether or not descriptor information is repeated in each access unit corresponds to the random access characteristics of the associated content stream. For instance, predictive and bi-directional frames of MPEG video are not randomly accessible, as they depend upon frames outside themselves; in such cases, PCD information need not be repeated.
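A minimal sketch of what such a descriptor might carry, assuming hypothetical field names (the patent does not specify a concrete layout):

```python
from dataclasses import dataclass

@dataclass
class PCD:
    pcd_id: int
    start_tc: float        # start time code (seconds)
    end_tc: float          # end time code
    conditional: bool      # False = absolute: active whenever temporally valid
    jump_target: str = ""  # content to jump to, for contextual navigation

    def flags(self, t):
        """(start_flag, end_flag) for an access unit at time t; (0, 0) between."""
        return (int(t == self.start_tc), int(t == self.end_tc))

    def active(self, t, selected=False):
        """Absolute PCDs are active throughout their temporal definition;
        conditional ones additionally require user selection."""
        in_range = self.start_tc <= t <= self.end_tc
        return in_range and (selected or not self.conditional)
```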
  • a PCD is either absolute, meaning its context is always active when its temporal definition is valid, or conditional, in which case it is only active upon user selection.
  • the PCD refers to presentation content (not context) to jump to, enabling contextual navigation.
  • the conditional context may also be regarded as interactive context.
  • the presentation involves the details of the scene, namely, who and what is in the scene, as well as what is happening. All of these elements contribute to the context of the scene.
  • items and characters in the scene may have contextual relevance throughout their scene presence.
  • the relevant context tends to mirror the timeline of the activity in question.
  • Absolute context simply indicates to the system that a particular scene or segment has been reached. This information can be used to funnel additional information outside of the main presentation, such as advertisements.
  • Interactive context is triggered by the user, unlike traditional menus.
  • Interactive context provides a means for the user to access contextually related information via a context menu.
  • a PCD will indicate what text and text properties to present to a user, as well as the hierarchical location within the menu. For instance, a scene with Robert DeNiro and Al Pacino meeting in a cafe, could specify contextual nodes related to DeNiro shown below. The bracketing depicts the positioning within the menu.
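The bracketed menu example referenced above did not survive extraction; the following is a hypothetical sketch of how PCDs might position contextual nodes within a menu hierarchy (the paths and entry text are illustrative, not the patent's):

```python
# Build a nested-dict context menu; each PCD contributes an entry at a
# hierarchical path, mirroring the bracketed positioning the patent describes.
menu = {}

def add_node(path, text):
    node = menu
    for part in path:
        node = node.setdefault(part, {})
    node["_text"] = text  # the text (and text properties) the PCD specifies

add_node(["Cast", "Robert DeNiro"], "Filmography")
add_node(["Cast", "Robert DeNiro", "This Scene"], "Behind the scenes")
add_node(["Cast", "Al Pacino"], "Filmography")
```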
  • a transitional stream is a local placeholder used to increase perceived responsiveness, and provides feedback regarding stream acquisition. It is a great opportunity for advertisements. It is up to the author or information provider to decide how to structure context menus. Information regarding background music, location, set props, and objects corresponding to brand names, such as clothing, could provide contextual information.
  • the system can pass new context in one or more additional presentation context streams.
  • All a presentation context descriptor does is define a region of content with regard to an elementary stream and, optionally, define a context menu item positioned within an associated hierarchy. It functions like, and corresponds to, a database key.
  • a descriptor is just a placeholder; it is the use of semantic descriptors that generates meaning: that is, how the segment relates to other segments and to the user, and by extension, how a user relates to other users.
  • Semantic descriptors operate with context descriptors to create a collection of weighted attributes. Weighted attributes are applied to content segments, user histories, and advertisements, yielding a weight-based system for intelligent marketing.
  • the logic of rules-based data agents then comes down to structured query language.
  • a semantic descriptor is itself no more than an identifier, a label, and a definition, which is enough to introduce categorization. Its power comes from its inter-relationship with other semantic descriptors. Take the following descriptors: playful, silly, funny, flirtatious, sexy, predatorial, and mischievous.
  • a presentation context descriptor and a semantic descriptor are associated via a semantic presentation map tying the two descriptors and a relative weight. This adds a good degree of flexibility in scoring the prominence of attributes within content. It is up to a particular database agent to express the particular formula involved.
  • the system employs some degree of variance regardless of the profile in question, but all things being equal, the best match in advertising will generally stem from an attribute-based correlation of the profile history at the installation, the current content being viewed, the advertisements being considered, and some scoring criterion. Also, via contextual feedback, the system can anticipate in advance the need to perform the correlation. As a result, the system can anticipate and customize content when the user requests a particular action on the user interface.
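One possible form of the attribute-based correlation described above, sketched with an assumed weighted dot-product scoring criterion (the patent leaves the particular formula to the database agent; the attribute names and weights are illustrative):

```python
def score(ad_attrs, context_attrs, history_attrs, w_context=0.6, w_history=0.4):
    """Weighted dot product of ad attributes against context and history."""
    keys = set(ad_attrs) | set(context_attrs) | set(history_attrs)
    return sum(
        ad_attrs.get(k, 0.0) * (w_context * context_attrs.get(k, 0.0)
                                + w_history * history_attrs.get(k, 0.0))
        for k in keys
    )

def best_ad(ads, context_attrs, history_attrs):
    """Pick the advertisement whose attributes best correlate with the
    current presentation context and the user's profile history."""
    return max(ads, key=lambda name: score(ads[name], context_attrs, history_attrs))

ads = {
    "sports_drink": {"athletic": 1.0, "youthful": 0.5},
    "luxury_car": {"sophisticated": 1.0},
}
```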
  • Fig. 3 shows an exemplary operation for the local server 62.
  • the server 62 initializes a content database and a context database (step 300).
  • the server receives and parses requests being directed at it (step 302). If the request is from a compatible authoring system, the server adds or updates the received information to its content database (step 304).
  • the content database provides a fine-grained categorization of one or more scenes in a particular movie, corporate presentation, video program, or multimedia content. Based on the categorization, context information could be applied. For example, a movie can have a hundred scenes.
  • a content creator such as a movie editor, would use the authoring system to annotate each scene using a predetermined format, for example an XML compatible format.
  • the annotation tells the local server 62 the type of scene, the actor/actress involved, a list of objects that can be customized, and definitions so that the local server can retrieve and modify the objects. After all scenes have been annotated, the authoring system uploads the information to the local server 62.
  • the local server 62 determines whether it is from a user (step 306). If so, the system determines whether the user is a registered user or a new user and provides the requested content to registered users.
  • the local server 62 can send the default content, or can interactively generate alternate content by selecting a different viewing angle or generate more information on a particular scene or actor/actress, for example.
  • the local server 62 receives in real-time actions taken by the user, and over time, the behavior of a particular user can be predicted based on the context database.
  • the process loops back to step 302 to handle the next request. From step 302, the system also periodically updates the context database by correlating the user's usage patterns with additional external data to determine whether the user may be interested in unseen, but contextually similar, information (step 310). This is done by data-mining the context database.
  • a web master may decide to include advertisements directed to teenagers in the web pages that are accessed by users in this category.
  • the local server 62 may not want to include advertisements directed to teenagers on a certain presentation if users in a different category who are senior citizens also happen to access that presentation frequently.
  • Each view can be customized to a particular user, so there are no static view configurations to worry about. Users can see the same content but different advertisements.
  • a Naive-Bayes classifier can be used to perform the data mining. The Naive-Bayes classifier uses Bayes rule to compute the probability of each class given an instance, assuming the attributes are conditionally independent given the label.
  • the Naive-Bayes classifier requires estimation of the conditional probabilities for each attribute value given the label. For discrete data, because only a few parameters need to be estimated, the estimates tend to stabilize quickly and more data does not change the model much. With continuous attributes, discretization is likely to form more intervals as more data is available, thus increasing the representation power. However, even with continuous data, the discretization is usually global and cannot take into account attribute interactions. Generally, Naive-Bayes classifiers are preferred when there are many irrelevant features.
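A Naive-Bayes classifier of the kind described can be sketched in a few lines. This is a minimal illustration over discrete attributes, with made-up viewing-history attributes and ad-category labels; it is not the system's actual implementation.

```python
from collections import Counter, defaultdict

def train_naive_bayes(records, labels):
    """Estimate class counts and per-attribute value counts per label."""
    label_counts = Counter(labels)
    cond = defaultdict(Counter)  # (attribute_index, label) -> value counts
    for record, label in zip(records, labels):
        for i, value in enumerate(record):
            cond[(i, label)][value] += 1
    return label_counts, cond

def classify(record, label_counts, cond):
    """Apply Bayes rule: pick the label maximizing
    P(label) * product over attributes of P(value | label)."""
    total = sum(label_counts.values())
    best_label, best_p = None, -1.0
    for label, count in label_counts.items():
        p = count / total
        for i, value in enumerate(record):
            seen = cond[(i, label)]
            # Laplace-style smoothing so unseen values do not zero out p.
            p *= (seen[value] + 1) / (count + len(seen) + 1)
        if p > best_p:
            best_label, best_p = label, p
    return best_label

# Toy usage: attributes are (age band, preferred genre); labels are the
# ad category the server might select.
records = [("teen", "action"), ("teen", "action"),
           ("senior", "news"), ("senior", "news")]
labels = ["teen_ads", "teen_ads", "senior_ads", "senior_ads"]
model = train_naive_bayes(records, labels)
```

The conditional-independence assumption is what keeps estimation cheap: each attribute's counts are accumulated separately per label, so the number of parameters grows linearly in the number of attributes.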
  • a Decision-Tree classifier can be used. This classifier assigns each record to a class, and the Decision-Tree classifier is induced (generated) automatically from data.
  • the data, which is made up of records and a label associated with each record, is called the training set.
  • Decision-Trees are commonly built by recursive partitioning. A univariate (single attribute) split is chosen for the root of the tree using some criterion (e.g., mutual information, gain-ratio, gini index). The data is then divided according to the test, and the process repeats recursively for each child. After a full tree is built, a pruning step is executed which reduces the tree size.
  • Decision-Trees are preferred where serial tasks are involved, i.e., once the value of a key feature is known, dependencies and distributions change. Also, Decision-Trees are preferred where segmenting data into sub-populations gives easier subproblems. Also, Decision-Trees are preferred where there are key features, i.e., some features are more important than others.
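The recursive-partitioning procedure described above can be sketched as follows, using mutual information as the split criterion over categorical attributes. This is a simplified illustration (the pruning step is omitted), not the system's actual implementation.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(records, labels):
    """Choose the univariate split with the highest mutual information."""
    base, best_attr, best_gain = entropy(labels), None, 0.0
    for i in range(len(records[0])):
        parts = {}
        for record, label in zip(records, labels):
            parts.setdefault(record[i], []).append(label)
        remainder = sum(len(ys) / len(labels) * entropy(ys)
                        for ys in parts.values())
        if base - remainder > best_gain:
            best_attr, best_gain = i, base - remainder
    return best_attr

def build_tree(records, labels):
    """Recursive partitioning; the pruning step is omitted in this sketch."""
    if len(set(labels)) == 1 or (attr := best_split(records, labels)) is None:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    groups = {}
    for record, label in zip(records, labels):
        groups.setdefault(record[attr], []).append((record, label))
    children = {}
    for value, group in groups.items():
        rs, ys = zip(*group)
        children[value] = build_tree(list(rs), list(ys))
    return {"attr": attr, "children": children}

def predict(tree, record):
    while isinstance(tree, dict):
        tree = tree["children"][record[tree["attr"]]]
    return tree

tree = build_tree([("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")],
                  ["L", "L", "R", "R"])
```

In the toy data, attribute 0 fully determines the label, so the induced tree splits on it at the root and terminates in pure leaves after one level.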
  • a hybrid classifier called the NB-Tree hybrid classifier, is generated for classifying a set of records.
  • each record has a plurality of attributes.
  • the NB-Tree classifier includes a Decision-Tree structure having zero or more decision-nodes and one or more leaf-nodes. At each decision-node, a test is performed based on one or more attributes. At each leaf-node, a classifier based on Bayes Rule classifies the records.
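The NB-Tree structure described above can be sketched as two node types: decision-nodes that test an attribute and dispatch to a child, and leaf-nodes that classify by Bayes rule. The classes, probability values, and labels below are illustrative assumptions, not the system's actual implementation.

```python
class NBLeaf:
    """Leaf-node: classifies via Bayes rule from pre-estimated probabilities.
    priors maps label -> P(label); cond maps
    (attribute_index, value, label) -> P(value | label)."""
    def __init__(self, priors, cond):
        self.priors, self.cond = priors, cond

    def classify(self, record):
        def score(label):
            p = self.priors[label]
            for i, value in enumerate(record):
                p *= self.cond.get((i, value, label), 1e-6)  # unseen value
            return p
        return max(self.priors, key=score)

class DecisionNode:
    """Decision-node: tests a single attribute and dispatches to a child."""
    def __init__(self, attr, children):
        self.attr, self.children = attr, children

    def classify(self, record):
        return self.children[record[self.attr]].classify(record)

# Toy tree: the root splits on attribute 0; each leaf keeps only class
# priors here for brevity (a real leaf would carry conditional estimates).
root = DecisionNode(0, {
    "teen": NBLeaf({"teen_ads": 0.7, "generic_ads": 0.3}, {}),
    "senior": NBLeaf({"senior_ads": 0.8, "generic_ads": 0.2}, {}),
})
```

The hybrid exploits both strengths noted above: the tree segments the data into sub-populations along key features, and each leaf's Naive-Bayes model handles the remaining, less important attributes.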
  • a process 350 for authoring content and registering the new content with the local server 62 is shown.
  • the process 350 is executed by the Authoring System at Design Time.
  • a user imports content elements (step 352).
  • the user applies contextual descriptors to elementary streams: MPEG-7 layer information, for example (step 354).
  • the user can also define the compositional layout, such as multiple windows or event-specific popups; certain content meant to be displayed in a windowed presentation can make use of the popups, for example (step 356).
  • the user can also specify licensing requirements (copy protection, access control, and e-commerce), which may vary for specific content segments (step 366).
  • the user then registers as a content provider if he or she is not one already (step 368). Additionally, the user can generate the final, registered output image; registration entails updating the system databases with regard to content, context, and licensing requirements (step 370).
  • a process 400 running on the local terminal 70 is shown.
  • the user first logs-in to the server (step 401).
  • the server retrieves the user characteristics and presents a list of options that are customized to the user's tastes (step 402).
  • the options can include a custom list of movies, sport programs, financial presentations, among others, that the user has viewed in the past or is likely to watch.
  • the user can select one of the presented options, can designate an item not on the list, or can insert a new DVD (step 404).
  • the user selection is updated in the context database (step 406) and the local server 62 retrieves information from the content to be played (step 408).
  • the local server 62 identifies the DVD and searches its content database for customizable objects and information relating to the content. Based on the content database, the local server customizes the content and/or associated programs, such as advertisements or information associated with the content (step 410), and streams the content to the terminal 70 (step 412).
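The customization in step 410 can be sketched as a lookup against the content database: for each customizable object in a scene, pick the variant matching the user's strongest interest, otherwise keep the default. The field names, variant scheme, and "strongest interest" rule below are assumptions for illustration only.

```python
def customize_scene(scene, user_profile):
    """For each customizable object, pick a variant matching the user's
    strongest interest, falling back to the default source."""
    interest = max(user_profile, key=user_profile.get) if user_profile else None
    chosen = {}
    for name, default_source in scene["objects"].items():
        variants = scene.get("variants", {}).get(name, {})
        chosen[name] = variants.get(interest, default_source)
    return chosen

# Hypothetical content-database entry for one scene.
scene = {
    "objects": {"billboard": "assets/billboard_default.png"},
    "variants": {"billboard": {"sports": "assets/billboard_sports.png"}},
}
result = customize_scene(scene, {"sports": 2.0, "news": 0.5})
```

A user whose profile scores "sports" highest would be streamed the sports billboard variant, while a user with an empty profile would receive the defaults.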
  • the user can passively view the content, or can interact with it by selecting different viewing angles, querying information relating to the scene or the actors and actresses involved, or interacting with a commercial if desired (step 414).
  • Each user operation is captured, along with the context of the operation, and the resulting data is used to update the context database for that user (step 414).
  • the local server can adjust the content based on the new interaction (step 416) before looping back to step 410 to continue showing the requested content.
  • the process thus provides customized content to the user, and allows the user to link, search, select, retrieve, initiate a subscription to, and interact with information on the DVD as well as supplemental value-added information from a remote database, computer network or on-line server, e.g., a network server on the Internet or World Wide Web.
  • Fig. 6 illustrates a process 450 relating to content consumption within a browser/player.
  • a user initiates playback of content (step 452).
  • the browser/player then demultiplexes any multiplexed streams (step 454) and parses a BiFS elementary stream (step 456).
  • the user then fulfills any necessary licensing requirements to gain access if the content is protected; this can be ongoing in the event of new content acquisitions (step 458).
  • the browser/player invokes appropriate decoders (step 460) and begins playback of content (step 462).
  • the browser/player continues to send contextual feedback to system (step 464), and the system updates user preferences and feedback into the database (step 466).
  • the system captures transport operations such as fast-forward and rewind to generate context information, as they are an aspect of how users interact with the title; for instance, which segments users tend to skip, and which they tend to watch repeatedly, are of interest to the system.
  • the system logs the user interaction and stores the contextual feedback, applying any relative weights assigned in the Semantic Map and using the Semantic Relationships table for indirect assignments; an intermediate table may be employed for optimized resolution. The assignment of relative weights is reflected in the active user state information.
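The weighting step above can be sketched as folding each logged operation into the active user state. The table contents, the specific weights, and the half-weight rule for indirect assignments via the Semantic Relationships table are all illustrative assumptions, not values defined by this description.

```python
# Illustrative stand-ins for the Semantic Map (event -> relative weight)
# and the Semantic Relationships table (tag -> related tags).
SEMANTIC_MAP = {"watched_full": 1.0, "skipped": -0.5, "replayed": 1.5}
SEMANTIC_RELATIONSHIPS = {"action_scene": ["action", "stunts"]}

def apply_feedback(user_state, event, segment_tags):
    """Fold one logged operation into the active user state: each tag on the
    segment receives the event's relative weight, and indirectly related
    tags receive half of it (an assumed resolution rule)."""
    weight = SEMANTIC_MAP.get(event, 0.0)
    for tag in segment_tags:
        user_state[tag] = user_state.get(tag, 0.0) + weight
        for related in SEMANTIC_RELATIONSHIPS.get(tag, []):
            user_state[related] = user_state.get(related, 0.0) + weight / 2
    return user_state

state = apply_feedback({}, "replayed", ["action_scene"])
```

Replaying an action scene thus boosts both the directly tagged topic and its related topics in the user state, which is the kind of signal the context database can later data-mine.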
  • the system sends new context information as it becomes available, such as new context menu items (step 468).
  • the system may utilize rules-based logic, such as for sending customer-focused advertisements; unless there are multiple windows, this would tend to occur during the remote content acquisition process (step 470).
  • the system then handles requests for remote content (step 472). After viewing the content, the user responds to any interactive selections that halt playback, such as menu screens that lack a timeout and default action (step 474). If live streams are paused, the system performs time-shifting if possible (step 476). The user may activate the context menu at any time and make an available selection (step 478). The selection may be subject to parental controls specified in the configuration of the player or browser.
  • a process 500 to enhance user community participation is shown.
  • a user may opt to participate in a public viewing session, or opt out of such a session; this is useful for point-to-point presentations, for example (step 502).
  • other public users become visible and may join groups, resulting in synchronized sessions with one user designated as the pilot for navigation purposes (step 504).
  • a communication window is made available so users may discuss the content (step 506).
  • all content viewed is logged in passive mode, as the user is not responsible for interactive selections (step 508).
  • the pilot can enter a white board mode, and draw on the presentation content; these drawings are made visible to the other group members (step 510).
  • the user may opt to work in annotation mode, which is analogous to third-party value-add information in that users may leave commentary tied to particular sequences of the presentation. The visibility of such annotations may be public, or restricted to access-controlled groups; an annotation window is utilized for these purposes and is tied to the content the user is currently viewing (step 512).
  • the user may elect to receive email notifications (step 514).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention relates to a method for presenting customized content (Content 2) to a viewer/listener (User 1) that comprises the following steps: archiving the behavior of the user (User 1) on a server coupled to a wide area network and collecting the user's preferences over time; receiving a request for selected audio or video content (Content 2); dynamically generating customized audio or video content (Content 2) matching the user's preferences; merging the dynamically generated customized audio or video content (Content 2) with the selected audio or video content (Content 2); and presenting the customized audio or video content (Content 2) to the user.
PCT/US2002/026250 2001-08-17 2002-08-15 Systemes et procedes de presentation de contenu multimedia personnalisable WO2003017122A1 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US09/932,344 US20040205648A1 (en) 2001-08-17 2001-08-17 Systems and methods for authoring content
US09/932,344 2001-08-17
US09/932,346 2001-08-17
US09/932,346 US6744729B2 (en) 2001-08-17 2001-08-17 Intelligent fabric
US09/932,345 US20030041159A1 (en) 2001-08-17 2001-08-17 Systems and method for presenting customizable multimedia presentations
US09/932,217 2001-08-17
US09/932,345 2001-08-17
US09/932,217 US20030043191A1 (en) 2001-08-17 2001-08-17 Systems and methods for displaying a graphical user interface

Publications (1)

Publication Number Publication Date
WO2003017122A1 true WO2003017122A1 (fr) 2003-02-27

Family

ID=27506013

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/US2002/026251 WO2003017059A2 (fr) 2001-08-17 2002-08-15 Matrice intelligente
PCT/US2002/026250 WO2003017122A1 (fr) 2001-08-17 2002-08-15 Systemes et procedes de presentation de contenu multimedia personnalisable
PCT/US2002/026252 WO2003017082A1 (fr) 2001-08-17 2002-08-15 Systeme et procede de traitement de fichier media dans une interface graphique utilisateur
PCT/US2002/026318 WO2003017119A1 (fr) 2001-08-17 2002-08-15 Systemes et procedes de creation d'un contenu

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2002/026251 WO2003017059A2 (fr) 2001-08-17 2002-08-15 Matrice intelligente

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/US2002/026252 WO2003017082A1 (fr) 2001-08-17 2002-08-15 Systeme et procede de traitement de fichier media dans une interface graphique utilisateur
PCT/US2002/026318 WO2003017119A1 (fr) 2001-08-17 2002-08-15 Systemes et procedes de creation d'un contenu

Country Status (4)

Country Link
EP (1) EP1423769A2 (fr)
JP (1) JP2005500769A (fr)
AU (1) AU2002324732A1 (fr)
WO (4) WO2003017059A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009079774A1 (fr) * 2007-12-21 2009-07-02 Espial Group Inc. Appareil et procédé pour un moteur de personnalisation

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080079690A1 (en) * 2006-10-02 2008-04-03 Sony Ericsson Mobile Communications Ab Portable device and server with streamed user interface effects
CN1937623A (zh) * 2006-10-18 2007-03-28 华为技术有限公司 一种控制网络业务的方法及系统
US7870272B2 (en) * 2006-12-05 2011-01-11 Hewlett-Packard Development Company L.P. Preserving a user experience with content across multiple computing devices using location information
CN105700767B (zh) * 2014-11-28 2018-12-04 富泰华工业(深圳)有限公司 文件层叠式显示系统及方法
CN105487920A (zh) * 2015-10-12 2016-04-13 沈阳工业大学 基于蚁群算法的多核系统实时任务调度的优化方法
US11178457B2 (en) 2016-08-19 2021-11-16 Per Gisle JOHNSEN Interactive music creation and playback method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5991735A (en) * 1996-04-26 1999-11-23 Be Free, Inc. Computer program apparatus for determining behavioral profile of a computer user
US6067565A (en) * 1998-01-15 2000-05-23 Microsoft Corporation Technique for prefetching a web page of potential future interest in lieu of continuing a current information download
US6262730B1 (en) * 1996-07-19 2001-07-17 Microsoft Corp Intelligent user assistance facility
US6314451B1 (en) * 1998-05-15 2001-11-06 Unicast Communications Corporation Ad controller for use in implementing user-transparent network-distributed advertising and for interstitially displaying an advertisement so distributed
US6385619B1 (en) * 1999-01-08 2002-05-07 International Business Machines Corporation Automatic user interest profile generation from structured document access information
US6434530B1 (en) * 1996-05-30 2002-08-13 Retail Multimedia Corporation Interactive shopping system with mobile apparatus

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5682326A (en) * 1992-08-03 1997-10-28 Radius Inc. Desktop digital video processing system
US5760767A (en) * 1995-10-26 1998-06-02 Sony Corporation Method and apparatus for displaying in and out points during video editing
JPH1066008A (ja) * 1996-08-23 1998-03-06 Kokusai Denshin Denwa Co Ltd <Kdd> 動画像検索編集装置
US6006241A (en) * 1997-03-14 1999-12-21 Microsoft Corporation Production of a video stream with synchronized annotations over a computer network
US6301586B1 (en) * 1997-10-06 2001-10-09 Canon Kabushiki Kaisha System for managing multimedia objects
US6363411B1 (en) * 1998-08-05 2002-03-26 Mci Worldcom, Inc. Intelligent network
US6154771A (en) * 1998-06-01 2000-11-28 Mediastra, Inc. Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively intiated retrospectively
US6119147A (en) * 1998-07-28 2000-09-12 Fuji Xerox Co., Ltd. Method and system for computer-mediated, multi-modal, asynchronous meetings in a virtual space
US6411946B1 (en) * 1998-08-28 2002-06-25 General Instrument Corporation Route optimization and traffic management in an ATM network using neural computing
US6466980B1 (en) * 1999-06-17 2002-10-15 International Business Machines Corporation System and method for capacity shaping in an internet environment
US6542295B2 (en) * 2000-01-26 2003-04-01 Donald R. M. Boys Trinocular field glasses with digital photograph capability and integrated focus function

Also Published As

Publication number Publication date
WO2003017082A1 (fr) 2003-02-27
EP1423769A2 (fr) 2004-06-02
AU2002324732A1 (en) 2003-03-03
WO2003017119A1 (fr) 2003-02-27
WO2003017059A8 (fr) 2004-04-22
WO2003017059A3 (fr) 2003-10-30
WO2003017059A2 (fr) 2003-02-27
JP2005500769A (ja) 2005-01-06

Similar Documents

Publication Publication Date Title
US20030041159A1 (en) Systems and method for presenting customizable multimedia presentations
US6744729B2 (en) Intelligent fabric
US20050182852A1 (en) Intelligent fabric
US20240098221A1 (en) Method and apparatus for delivering video and video-related content at sub-asset level
US8230343B2 (en) Audio and video program recording, editing and playback systems using metadata
KR101551137B1 (ko) 다수의 디바이스를 갖는 인터랙티브 매체 안내 시스템
US20030043191A1 (en) Systems and methods for displaying a graphical user interface
US8640160B2 (en) Method and system for providing targeted advertisements
KR101341283B1 (ko) 비디오 분기
JP4107811B2 (ja) オーディオビジュアルシステムの使用方法
US9837126B2 (en) Providing enhanced content
US20050120391A1 (en) System and method for generation of interactive TV content
US20100031162A1 (en) Viewer interface for a content delivery system
US20120116883A1 (en) Methods and systems for use in incorporating targeted advertising into multimedia content streams
WO2002102079A1 (fr) Systemes d'enregistrement, d'edition et de reproduction de programmes audio et video au moyen de metadonnees
JP2000253377A (ja) オーディオビジュアルシステムの使用方法
WO2007143189A2 (fr) système et procédé pour une sélection et une catégorisation vidéo améliorées utilisant des métadonnées
JP2001346140A (ja) オーディオビジュアルシステムの使用方法
JP2005130087A (ja) マルチメディア情報機器
WO2003017122A1 (fr) Systemes et procedes de presentation de contenu multimedia personnalisable
Royer et al. Automatic generation of explicitly embedded advertisement for interactive TV: concept and system architecture
HK1116876A (en) System and method for enhanced video selection

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VC VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP