
GB2548091A - Content delivery - Google Patents

Content delivery

Info

Publication number
GB2548091A
Authority
GB
United Kingdom
Prior art keywords
user
content output
output devices
content
definition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1603775.6A
Other versions
GB201603775D0 (en)
Inventor
David Anthony Eves
Richard Stephen Cole
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ambx UK Ltd
Original Assignee
Ambx UK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ambx UK Ltd filed Critical Ambx UK Ltd
Priority to GB1603775.6A
Publication of GB201603775D0
Publication of GB2548091A
Legal status: Withdrawn (current)

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25Output arrangements for video game devices
    • A63F13/28Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25841Management of client data involving the geographical location of the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4524Management of client data or end-user data involving the geographical location of the client

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Delivering a content package by periodically repeating the steps of determining the location of a user 10, accessing a definition of a space 18 relative to the user, determining one or more content output devices 16 located within the defined space, adding the determined content output devices to a register of content output devices, receiving the content package, and outputting the content package via the content output devices defined in the register of content output devices. Preferably, a content output device defined in the register is identified as no longer being located within the defined space and is removed from the register. Preferably the definition of the space relative to the user comprises a predefined radius of x metres around the user, and specifically for the user's current orientation. Preferably the location of multiple users is determined. The invention relates to providing ambient content to a moving user, for example while they play video games.

Description

DESCRIPTION
CONTENT DELIVERY
This invention relates to a method, system and computer program product for delivering a content package.
The delivery of content is typically made to a user through one or more devices. For example, a user may be watching television, during which visual content is delivered via the television screen and audio content is delivered via one or more speakers. Systems that deliver ambient content alone, or in addition to the user’s main activity, are also known. For example, cooling and heating devices, vibration devices and additional lights can all be used to augment a content experience or provide an ambient experience to one or more users. For example, a user may be playing a computer game through a console connected to a television and to audio speakers. Video will be delivered via the television and audio will be delivered via the speakers; however, additional devices may also be present that provide augmentation to the main delivery of the content. Additional lights may be placed around the user, which respond to the content being delivered via the television, for example. The lights may be controlled from specific authored content that forms part of a larger content package, or the lights may be controlled from some interpretation of the video content being shown on the television. However, such augmentation systems are generally designed for fixed installations and do not work well if the user receiving the content package is mobile.
It is therefore an object of the invention to improve upon the known art.
According to a first aspect of the present invention, there is provided a method of delivering a content package, the method comprising periodically repeating the steps of determining the location of a user, accessing a definition of a space relative to the user, determining one or more content output devices located within the defined space, adding the determined content output devices to a register of content output devices, receiving the content package, and outputting the content package via the content output devices defined in the register of content output devices.
According to a second aspect of the present invention, there is provided a system for delivering a content package, the system comprising a processor periodically arranged to determine the location of a user, access a definition of a space relative to the user, determine one or more content output devices located within the defined space, add the determined content output devices to a register of content output devices, receive the content package, and output the content package via the content output devices defined in the register of content output devices.
According to a third aspect of the present invention, there is provided a computer program product on a computer readable medium for delivering a content package, the product comprising instructions for periodically determining the location of a user, accessing a definition of a space relative to the user, determining one or more content output devices located within the defined space, adding the determined content output devices to a register of content output devices, receiving the content package, and outputting the content package via the content output devices defined in the register of content output devices.
Owing to the invention, it is possible to provide a system and method that delivers a content package to the user while also taking into account the current location of the user and the content output devices that are present within the user’s current space. Content output devices are periodically added to a register of current content output devices if they are within the user’s current defined space. These devices are then used to output the content package, to the extent that these devices are able to render the components within the package. The content package may be defined in general terms that need to be mediated into specific instructions for different types of devices or the content package may contain precisely the content that is to be outputted by the different devices.
By combining location sensing technology and an abstracted experience creation language such as amBX (see www.ambx.com), it is possible to design a system that can adapt its locational representation continually, in real time and on the fly, to deliver an experience located to an individual or group so that they remain the focus of the experience despite being mobile. Location sensors allow the system to work out which devices within a space are close enough to contribute to a perceived experience. Currently, in a language such as amBX, these devices are given a position within a known location model, which is normally set by the fixture installer and remains fixed during playback. With the addition of location sensing, the devices can instead be located by relative position. Ideally, the location sensing will also be able to detect or deduce the orientation of the user(s).
By associating the location of each device with the position of a user or group of users, and updating this mapping as they move around the space or spaces in which the experience is played back, the selection of potential rendering devices used by the abstraction engine will change. An abstracted system such as amBX is able to adapt the experience playback to whatever devices are available and provide an equivalent experience on an ever-changing set of devices. The result will be an apparently consistent ambient experience rendered locally to an individual despite the user(s) moving.
This system provides a range of advantages in delivering a personalised yet highly adaptable experience centred around the viewer (the experiencer). It can give the user enhanced personal control over their space. External systems can use subtle and non-intrusive cues to provide contextual information. At the same time the approach can be very power efficient, as only nearby and relevant devices need to be active. Multiple participants can have their experiences combined within a space, and this could be based on hierarchies of control or on merging the effects. The same techniques can be used to create entirely ad hoc experiential systems: a range of enabled devices can be brought into an area and, by location sensing combined with dynamic configuration of the locational model, they can join together to provide an experience. Devices can be moved around, carried, added and removed on the fly, and the system adapts in real time.
Preferably, the method further comprises identifying that a content output device defined in the register of content output devices is no longer located within the defined space and removing the identified content output device from the register of content output devices. The system can also be controlled to remove those content output devices that are no longer located within the user’s defined space, so that, as the user changes location, the set of content output devices contained within the register is continually updated, with content output devices being added to and removed from the register.
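This add-and-remove behaviour can be pictured with a short sketch in Python. It is only an illustrative outline, not the implementation of the application; the Device record, the Register class and the in_space predicate are all assumed names.

from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    x: float
    y: float

@dataclass
class Register:
    devices: dict = field(default_factory=dict)  # device_id -> Device

    def update(self, all_devices, in_space):
        # Add any device that is now inside the defined space.
        for device in all_devices:
            if in_space(device):
                self.devices[device.device_id] = device
        # Remove any registered device that is no longer inside the space.
        for device_id in list(self.devices):
            if not in_space(self.devices[device_id]):
                del self.devices[device_id]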
Advantageously, the step of accessing a definition of a space relative to the user comprises accessing a definition that comprises a predefined radius of X metres around the user, which could be 5 metres for example. In this case all devices that can deliver content that are within 5 metres of the user’s current location are added to the register of content output devices and will remain part of the register until the user’s location changes. This provides a simple and effective method of defining the user’s space.
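As a sketch only, and assuming two-dimensional (x, y) coordinates in metres, the radius test might look like the following; the function name and the 5 metre default are illustrative, not taken from the application.

import math

RADIUS_M = 5.0  # the "x metres" of the definition; 5 metres is the example given above

def within_radius(user_pos, device_pos, radius=RADIUS_M):
    # True if the device lies inside the circle of x metres around the user.
    dx = device_pos[0] - user_pos[0]
    dy = device_pos[1] - user_pos[1]
    return math.hypot(dx, dy) <= radius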
Ideally, the method further comprises determining the current orientation of the user and wherein the step of accessing a definition of a space relative to the user comprises accessing a definition of a space specifically for the user’s current orientation. The user’s orientation (the direction in which they are facing) can also be detected and this can then be used in the definition of the space relative to the user. The definition may only include space that is directly in front of the user for the purposes of adding and removing content output devices from the register of content output devices. In many situations this will lead to the better delivery of the content package to the user, as the user will generally be much more aware of devices that are in front of them, particularly if the user is currently moving.
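One way such an orientation-aware definition could be expressed, again purely as a sketch, is to keep the radius test and additionally require the device to lie within an angular field in front of the user; the heading convention and the 180 degree field of view below are assumptions for illustration.

import math

def in_front_of_user(user_pos, heading_deg, device_pos, radius=5.0, fov_deg=180.0):
    # heading_deg is the direction the user is facing (0 degrees = east,
    # counter-clockwise positive); fov_deg is the angular width treated as "in front".
    dx = device_pos[0] - user_pos[0]
    dy = device_pos[1] - user_pos[1]
    if math.hypot(dx, dy) > radius:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    difference = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # signed angle to the device
    return abs(difference) <= fov_deg / 2.0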
Preferably, the method further comprises determining the location of one or more further users and wherein the step of accessing a definition of a space relative to the user comprises accessing a definition of space relative to all users. The methodology can also be extended to include multiple users. The locations of all of the users are determined and a space is defined that encompasses all of the users. This can be defined as a two-dimensional rectangle that is large enough to include all of the users for whom a location is known. All of the available content delivery devices within the space are then added to the register of devices and the content package is delivered via all of these content delivery devices. This provides an immersive ambient experience for all of the users that are present that can be located within an area covered by the system.
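A sketch of the multi-user case, assuming the space is the two-dimensional rectangle mentioned above; the margin parameter that pads the rectangle is an illustrative assumption.

def bounding_space(user_positions, margin=2.0):
    # Axis-aligned rectangle (x_min, y_min, x_max, y_max) large enough to contain
    # every located user, padded by 'margin' metres.
    xs = [position[0] for position in user_positions]
    ys = [position[1] for position in user_positions]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def in_rectangle(rect, device_pos):
    x_min, y_min, x_max, y_max = rect
    return x_min <= device_pos[0] <= x_max and y_min <= device_pos[1] <= y_max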
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:-
Figure 1 is a schematic diagram of a user in an environment that contains multiple content output devices.
Figure 2 is a schematic diagram of the user in the environment showing a space defined around the user.
Figure 3 is a further schematic diagram of the user in the environment showing a space defined around the user.
Figure 4 is a schematic diagram of two users in the environment showing a space defined around the users.
Figure 5 is a flowchart of a method of delivering a content package, and Figure 6 is a schematic diagram of a system.
Figure 1 shows a user 10 who is carrying a mobile phone 12 in an environment 14. Multiple content output devices 16 are also located within the user’s environment 14, which could be an open space in a building, for example. In a mobile environment it is possible to track accurately the location of an individual, a device or a group of individuals. This may be done by using cellular device IDs, visual tracking, near field detection, GPS or many other methods. Sensible assumptions may be made about the user’s trajectory, or contextual information may be used to determine further details about where the user 10 is facing or heading. The devices 16, within a building infrastructure for example, can have their absolute locations recorded. The locations of the content output devices 16 are therefore known.
As the user 10 and/or their device 12 moves around, their relative locations can be calculated, and this may be matched against a particular structural map such as a building plan, to ascertain the content output devices 16 that may be in direct line of sight to a viewer, for example in the same room. The location may also be defined by a group of individuals rather than a single user 10, either scoping a relevant space that encompasses them all, suggesting a certain average combined location, or focussing on a key individual or majority position.
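The line-of-sight test against a building plan could, as a rough sketch, be reduced to checking whether the straight line from the user to a device crosses any wall segment of the plan; the wall representation and the simple intersection test (which ignores collinear edge cases) are assumptions for illustration.

def _segments_intersect(p1, p2, q1, q2):
    # True if segment p1-p2 strictly crosses segment q1-q2 (orientation test).
    def cross(origin, a, b):
        return (a[0] - origin[0]) * (b[1] - origin[1]) - (a[1] - origin[1]) * (b[0] - origin[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def in_line_of_sight(user_pos, device_pos, walls):
    # 'walls' is a list of ((x1, y1), (x2, y2)) segments taken from the building plan.
    return not any(_segments_intersect(user_pos, device_pos, wall[0], wall[1]) for wall in walls)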
On the fly, a system can use the relative location of the individual(s) 10 and device(s) 12 to place the appropriate devices 16 into an abstract locational model of an experience system, which delivers an enhanced experience to one or more users by providing additional effects around the user(s). These location calculations may update at any appropriate speed, probably every few seconds (or quicker if the dynamics of the space are likely to be high). The experience system will automatically recalculate the best way to render the experience based on any changed information in the device mapping, in parallel with changes in the scripted experience. Additionally, all the individuals within a group experience may themselves be carrying experience-delivering devices, and these can be a part of the rendering set. As the individuals move around, their devices’ contributions will adjust appropriately to the total experience.
Figure 2 shows the same user 10 in the same location 14 with the same multiple content output devices 16 being present (which could be lights, audio speakers and so on). In addition a space 18 has been defined relative to the user 10. Here a locus of experience (the space 18) is defined as a two-dimensional rectangle that is centred on the user 10 and does not take into account the user’s orientation (the direction in which the user 10 is facing) or the user’s direction and/or speed of travel. The space 18 is defined relative to the determined location of the user 10. Other definitions of the space could be used, for example with a radius of a constant distance around the user 10. Six content output devices 16 (shown shaded in the Figure) fall within the space 18.
The embodiment shown in Figure 2 may occur while the user 10 is playing a computer game on the mobile phone 12 and walking down a street where there are a number of content delivery devices 16. As the game generates explosion effects, these have additional descriptions that describe flashes of light around the user 10. As the user 10 passes the enabled devices 16, a vector from the phone 12 to the devices 16 is calculated at intervals and their place in the locational model adjusted accordingly. This could use the movement detection and monitoring algorithms in modern smart phones and devices to determine when updates would be appropriate.
An experience system adapts the experience to the available devices 16 with their current place in the model, and so appropriate lighting flashes will go off around the user 10 even though they are moving. Additionally, each user 10 may be carrying a personal rumble or lighting device; again, these continually adjust their relative position in the game arena and so provide located effects within the dynamic group experience. The direction of movement of the player can also be calculated over time to correctly orient the effects. The six content output devices 16 highlighted within the space 18 in Figure 2 can provide additional output to the user as defined by a global content package.
Figure 3 shows a view of the environment 14 that is similar to that shown in Figure 2, with the exception that the user 10 has now moved forward (from right to left in the Figure). The space 18 that is defined with respect to the location of the user 10 is the same size and shape as that shown in Figure 2, but has moved forwards with the user 10. Different content output devices 16 are now within the space 18 and the experience system uses the current set of devices 16 to output any additional augmenting effects to the user 10. As the user 10 moves, content output devices 16 are added to and/or removed from a register of such devices 16, depending upon whether they are within the space 18 or not. As can be seen when compared to Figure 2, a different group of six content output devices 16 is now located within the space 18.
Figure 4 shows a view of the same environment 14 as in Figures 1 to 3, but now there are two users 10 within the same location 14. The experience system determines the location of the further user and accesses a definition of a space 18 that is relative to all of the users 10. Again, the space 18 is rectangular in shape but is large enough to include both users 10. The centre of the rectangle can be the point equidistant from both users 10; in this example the centre is slightly adjusted for the direction of movement of the two users 10, which is from right to left in the view of this Figure. One of the users 10 does not carry a mobile phone, but their location can be detected using a camera, and the space 18 is large enough to encompass eight of the content output devices 16 present in the location 14.
If a group of players 10 are all participating in the same game nearby, for example, then a virtual arena can expand to include content output devices 16 around all of the participants 10. The orientation and layout of the devices 16 will provide a shared experience. As the players 10 move around, expanding the playing area or congregating together, the active device area 18 will adapt accordingly. So, for example, as four friends walk across a park and one of them explodes a virtual bomb off to the right-hand side, players 10 on the right-hand side of the park see a flash on active devices 16 around them. As the players 10 congregate together at the same bench, the corresponding effects become focussed on devices 16 close around them and all the players 10 see the full experience.
Other implementations of the system are possible. For example, a person 10 walking to a meeting in an office building that they do not know, while listening to a personal music system connected to an experience system, can also be catered for. As they pass along a corridor, the content output devices 16 around the user 10 react to the user’s music, creating a lighting visualisation of the audio frequencies; these effects follow them along the corridors, creating a pool of active lighting around them. When passing others, these effects combine with those being generated by the other people’s systems, or mute themselves to avoid interfering with the personal lighting of the other individual.
As this person does not know where they are going, an additional lighting effect can be created as a follow-me light: an effect that continually pulses a gentle green light ahead of the user. At intersections the light pulses in the direction to go, using route-finding software to include devices 16 in the correct direction. When the person reaches the meeting room, the room lighting is turned on and the follow-me light changes to indicate that they have reached their destination. All of this effect is delivered by the devices 16 that are determined to be in the user’s current space 18. As the user moves, content output devices 16 are added to and removed from the set of current output devices that are actually used.
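The follow-me light could, purely as a sketch, be realised by projecting a target point a short distance ahead along the route returned by the route-finding software and pulsing the devices near that point; the lead distance, radius and data shapes below are illustrative assumptions.

import math

def follow_me_devices(user_pos, route_waypoints, devices, lead_m=4.0, radius_m=2.0):
    # 'route_waypoints' is the remaining route as (x, y) points; 'devices' is a list
    # of (device_id, (x, y)) pairs. Returns the ids of the devices to pulse next.
    position, remaining = user_pos, lead_m
    target = position
    for waypoint in route_waypoints:
        segment = math.dist(position, waypoint)
        if segment >= remaining and segment > 0:
            t = remaining / segment
            target = (position[0] + t * (waypoint[0] - position[0]),
                      position[1] + t * (waypoint[1] - position[1]))
            break
        remaining -= segment
        position = waypoint
        target = position  # route exhausted: the destination itself becomes the target
    return [device_id for device_id, device_pos in devices
            if math.dist(device_pos, target) <= radius_m]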
In addition to determining the user’s current location, it is also possible to determine the current orientation of the user and in this case the accessing of a definition of a space 18 relative to the user 10 comprises accessing a definition of a space 18 specifically for the user’s current orientation. This might mean for example that devices 16 that are considered to be behind the user 10 are not included in the register of current output devices 16. Different definitions of the space 18 can be stored and if orientation data is available, then a suitable definition can be used that also takes into account orientation. This prevents effects being delivered that might not be sensed by the user 10. Different definitions can be considered different mappings between a real environment and an abstract locational model used by an experience system.
Figure 5 shows a flowchart of a method of delivering a content package, the method being executed periodically and repeatedly by a suitably connected system. The method comprises, firstly, step S5.1: determining the location of the user 10. The user’s location can be identified indirectly, for example by determining the location of a device that they are known to be carrying, such as the mobile phone 12 shown in Figure 1. Various known technologies exist that can locate a mobile phone to within 1 metre using short range wireless beacons. The location of the user 10 can also be determined using visual recognition techniques, such as identifying an individual using a camera and face recognition software.
The next step in the method is step S5.2, which comprises accessing a definition of a space 18 relative to the user 10. This definition can be stored locally by the system executing the method. As above, the space 18 may be a simple rectangle around the location of the user 10 or may be a circle defined by a radius around the user. Step S5.3 comprises determining one or more content output devices 16 located within the defined space 18 and step S5.4 comprises adding the determined content output devices 16 to a register of content output devices 16. A register is maintained for the specific user 10 containing the currently available devices 16 within their space 18.
The penultimate step in the method is step S5.5, which comprises receiving the content package, and the final step is step S5.6, which comprises outputting the content package via the content output devices 16 defined in the register of content output devices 16. The system supplies the content package in whole or part to the individual devices 16 that then output additional augmenting content to the user 10. The content package that is delivered may be a set of content files (audio, video and so on) or may be more high level instructions (“summer heat” or “cool colours”) depending upon the nature of the experience system being used.
The method shown in Figure 5 is repeated periodically by the system that is performing the method. This ensures that the set of available content output devices 16 is continually updated in real time, with new devices 16 that fall within the space 18 around the user’s current location being added to the register of content output devices 16. In addition, those devices 16 that are no longer within the user’s space 18 are removed from the register, so that an up-to-date list of available devices 16 is constantly maintained for delivery of the content package. The method may be performed every five seconds, for example, or this time period may adapt based on how quickly the user is moving.
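Putting the steps of Figure 5 together, the periodic loop might be shaped roughly as follows; locate_user, space_definition, discover_devices, receive_package and the render call on each device are assumed stand-ins for the location sensing, space definition, device discovery, content source and output mechanisms described above, and the Register sketch given earlier is reused.

import time

def deliver_content(locate_user, space_definition, discover_devices,
                    receive_package, register, period_s=5.0):
    while True:
        user_pos = locate_user()                       # S5.1 determine the user's location
        in_space = space_definition(user_pos)          # S5.2 access the space definition
        register.update(discover_devices(), in_space)  # S5.3/S5.4 keep only devices in the space
        package = receive_package()                    # S5.5 receive the content package
        for device in register.devices.values():       # S5.6 output via the registered devices
            device.render(package)
        time.sleep(period_s)  # the period could adapt to how quickly the user is moving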
Figure 6 shows schematically a system 20 that can be used to deliver a content package to a user 10, working in conjunction with the content output devices 16. The system 20 comprises a communication unit 22, a processor 24 connected to the communication unit 22, a local interface 26 that is connected to the processor 24 and a storage device 28 that is also connected to the processor 24. Other components such as a power supply (not shown) are present in the system 20 but are removed for simplicity of explanation. The interface 26 can receive a computer readable medium such as a CD-ROM 30, as shown in the Figure. The CD-ROM 30 stores a computer program product that comprises instructions for controlling the processor 24. The processor 24 is operated under the instructions in the computer program product stored on the computer readable medium 30.
The communication unit 22 may be a wireless interface that can communicate directly with a mobile device 12 that is being carried by a user 10 (in order to determine their location for example), or may be a wired connection to a series of short range wireless beacons that determine the user’s location indirectly. Although the system 20 is here shown as a standalone server, the functions of the system 20 could be distributed amongst different independent hardware components. The system 20 has sufficient processing and communication bandwidth that the system 20 can communicate without noticeable delay with a mobile device 12 or short range beacons to perform the location determination.
The communication unit 22 is also used to communicate with the content output devices 16. The storage device 28 of the system 20 stores various elements for use in the method of Figure 5, including a content package 32, one or more definitions 34 (of the space 18 around a user 10) and a register 36 (of the content output devices 16 currently within the space 18). The server 20 is the central component that will use the location of the user 10 and the defined space 18 to add one or more content output devices 16 to the register 36. The content package 32 is then outputted by those devices 16 that are listed within the register 36, to the extent that they are able to do so.
The content package 32 could be a set of content files that cover different sensory domains, such as RGB values for lights and audio files for speakers and so on. Information for controlling fans and heating devices may also be present. The information within the content package 32 may be entirely directive, such as specific RGB values (0, 255, 0), or may be defined in more high-level terms (“FOREST”) that have to be interpreted, either by the content delivery devices 16 or by an intermediate layer between the content package 32 and the actual devices 16. This intermediate layer may be considered a self-contained experience system that is also being run by the processor 24 of the system 20.
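A sketch of how such a mixed content package could be mediated is given below; the palette, the device record and the key names are illustrative assumptions rather than part of the application.

# Illustrative palette mapping high-level terms to per-device-type values.
ABSTRACT_PALETTE = {
    "FOREST": {"light": (34, 139, 34), "speaker": "birdsong.ogg", "fan": "low"},
    "SUMMER_HEAT": {"light": (255, 140, 0), "speaker": None, "heater": "on"},
}

def mediate(package, device):
    # Return an instruction the given device type can render, interpreting
    # high-level terms via the palette and passing concrete values through.
    entry = package.get(device["type"])
    if isinstance(entry, str) and entry in ABSTRACT_PALETTE:
        return ABSTRACT_PALETTE[entry].get(device["type"])
    return entry

package = {"light": (0, 255, 0), "speaker": "explosion.wav"}
print(mediate(package, {"id": "hall-lamp-3", "type": "light"}))            # -> (0, 255, 0)
print(mediate({"light": "FOREST"}, {"id": "desk-lamp", "type": "light"}))  # -> (34, 139, 34)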

Claims (15)

1. A method of delivering a content package, the method comprising periodically repeating the steps of: • determining the location of a user, • accessing a definition of a space relative to the user, • determining one or more content output devices located within the defined space, • adding the determined content output devices to a register of content output devices, • receiving the content package, and • outputting the content package via the content output devices defined in the register of content output devices.
2. A method according to claim 1, and further comprising identifying that a content output device defined in the register of content output devices is no longer located within the defined space and removing the identified content output device from a register of content output devices.
3. A method according to claim 1 or 2, wherein the step of accessing a definition of a space relative to the user comprises accessing a definition that comprises a predefined radius of x metres around the user.
4. A method according to claim 1, 2 or 3, and further comprising determining the current orientation of the user and wherein the step of accessing a definition of a space relative to the user comprises accessing a definition of a space specifically for the user’s current orientation.
5. A method according to any preceding claim, and further comprising determining the location of one or more further users and wherein the step of accessing a definition of a space relative to the user comprises accessing a definition of space relative to all users.
6. A system for delivering a content package, the system comprising a processor periodically arranged to: • determine the location of a user, • access a definition of a space relative to the user, • determine one or more content output devices located within the defined space, • add the determined content output devices to a register of content output devices, • receive the content package, and • output the content package via the content output devices defined in the register of content output devices.
7. A system according to claim 6, wherein the processor is further arranged to identify that a content output device defined in the register of content output devices is no longer located within the defined space and remove the identified content output device from a register of content output devices.
8. A system according to claim 6 or 7, wherein the processor is arranged when accessing a definition of a space relative to the user, to access a definition that comprises a predefined radius of x metres around the user.
9. A system according to claim 6, 7 or 8, wherein the processor is further arranged to determine the current orientation of the user and when accessing a definition of a space relative to the user, to access a definition of a space specifically for the user’s current orientation.
10. A system according to any one of claims 6 to 9, wherein the processor is further arranged to determine the location of one or more further users and when accessing a definition of a space relative to the user, to access a definition of space relative to all users.
11. A computer program product on a computer readable medium for delivering a content package, the product comprising instructions for periodically: • determining the location of a user, • accessing a definition of a space relative to the user, • determining one or more content output devices located within the defined space, • adding the determined content output devices to a register of content output devices, • receiving the content package, and • outputting the content package via the content output devices defined in the register of content output devices.
12. A computer program product according to claim 11, and further comprising instructions for identifying that a content output device defined in the register of content output devices is no longer located within the defined space and removing the identified content output device from a register of content output devices.
13. A computer program product according to claim 11 or 12, wherein the instructions for accessing a definition of a space relative to the user comprise instructions for accessing a definition that comprises a predefined radius of x metres around the user.
14. A computer program product according to claim 11, 12 or 13, and further comprising instructions for determining the current orientation of the user and wherein the instructions for accessing a definition of a space relative to the user comprise instructions for accessing a definition of a space specifically for the user’s current orientation.
15. A computer program product according to any one of claims 11 to 14, and further comprising instructions for determining the location of one or more further users and wherein the instructions for accessing a definition of a space relative to the user comprise instructions for accessing a definition of space relative to all users.
GB1603775.6A 2016-03-04 2016-03-04 Content delivery Withdrawn GB2548091A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1603775.6A GB2548091A (en) 2016-03-04 2016-03-04 Content delivery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1603775.6A GB2548091A (en) 2016-03-04 2016-03-04 Content delivery

Publications (2)

Publication Number Publication Date
GB201603775D0 GB201603775D0 (en) 2016-04-20
GB2548091A true GB2548091A (en) 2017-09-13

Family

ID=55859015

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1603775.6A Withdrawn GB2548091A (en) 2016-03-04 2016-03-04 Content delivery

Country Status (1)

Country Link
GB (1) GB2548091A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006100644A2 (en) * 2005-03-24 2006-09-28 Koninklijke Philips Electronics, N.V. Orientation and position adaptation for immersive experiences
US20090328087A1 (en) * 2008-06-27 2009-12-31 Yahoo! Inc. System and method for location based media delivery
US20100042235A1 (en) * 2008-08-15 2010-02-18 At&T Labs, Inc. System and method for adaptive content rendition
US20110069940A1 (en) * 2009-09-23 2011-03-24 Rovi Technologies Corporation Systems and methods for automatically detecting users within detection regions of media devices
EP2950550A1 (en) * 2014-05-28 2015-12-02 Advanced Digital Broadcast S.A. System and method for a follow me television function

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020030303A1 (en) 2018-08-09 2020-02-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An audio processor and a method for providing loudspeaker signals
WO2020030768A1 (en) 2018-08-09 2020-02-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An audio processor and a method for providing loudspeaker signals
WO2020030769A1 (en) 2018-08-09 2020-02-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An audio processor and a method considering acoustic obstacles and providing loudspeaker signals
WO2020030304A1 (en) 2018-08-09 2020-02-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An audio processor and a method considering acoustic obstacles and providing loudspeaker signals
CN113016197A (en) * 2018-08-09 2021-06-22 弗劳恩霍夫应用研究促进协会 Audio processor and method for providing a loudspeaker signal
US11290821B2 (en) 2018-08-09 2022-03-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio processor and a method considering acoustic obstacles and providing loudspeaker signals
EP3996392A1 (en) 2018-08-09 2022-05-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An audio processor and a method for providing loudspeaker signals
CN113016197B (en) * 2018-08-09 2022-12-16 弗劳恩霍夫应用研究促进协会 Audio processor and method for providing speaker signal
US11671757B2 (en) 2018-08-09 2023-06-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio processor and a method considering acoustic obstacles and providing loudspeaker signals
US12309562B2 (en) 2018-08-09 2025-05-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio processor and a method for providing loudspeaker signals

Also Published As

Publication number Publication date
GB201603775D0 (en) 2016-04-20

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)