WO2021101934A1 - Adaptive broadcast media and data services - Google Patents
Adaptive broadcast media and data services
- Publication number
- WO2021101934A1 (PCT/US2020/060963)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- bat
- content
- bts
- broadcast
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/2381—Adapting the multiplex stream to a specific network, e.g. an Internet Protocol [IP] network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26258—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2662—Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/61—Network physical structure; Signal processing
- H04N21/6106—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
- H04N21/6112—Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving terrestrial transmission, e.g. DVB-T
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6582—Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
Definitions
- the present disclosure relates generally to systems and methods for broadcast media and data delivery, e.g., via television spectrum broadcast in compliance with published and candidate standards such as from the advanced television systems committee (ATSC) standard version 3.0 or higher.
- ATSC advanced television systems committee
- Over-the-top (OTT) or point-to-point traffic requires a connection for each instance or copy of data delivery, which does not scale well. But, via broadcasting, many people (e.g., millions) may be reached with one connection and one copy. Without a transmission control protocol / Internet protocol (TCP/IP) basis, though, broadcasting requires overcoming a number of complex challenges when it comes to making sure the data is delivered in a way that is of value to the market (e.g., robust reception characteristics). IP multicast may help resolve this issue, e.g., by taking one packet of data and sending it to a plurality of recipients for more optimal scalability.
- TCP/IP transmission control protocol / Internet protocol
- Legacy viewing devices are known to support such media formats as hypertext transfer protocol (HTTP) live streaming (HLS), but they do not support emissions as described in ATSC 3.0.
- HTTP hypertext transfer protocol
- HLS HTTP live streaming
- OTA over-the-air
- a user turns on the TV, and their receiver either gets the signal or it does not.
- In analog OTA transmissions, the receiver at best obtains the signal (e.g., with a degree of snowiness, since typically some signal is lost due to distance or interference), which has the same amount of data.
- ATSC 1.0 receivers obtain digital content in view of attenuation or other multipath effects on propagation, including absorption, scattering, diffraction, reflection, and refraction.
- Imagery and video, including compressed forms, may be wirelessly obtained at a device by progressively increasing quality (e.g., by improving a fuzzy picture to a sharper picture over time).
- HLS data of 576p resolution may be initially obtained.
- an instruction may cause subsequent obtainment of an independent HLS stream of data at 720p resolution.
- This device may keep working its way up until the buffer is not able to be maintained at an adequate level, and may stay at that resolution (e.g., 1080p).
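The ladder-climbing behavior described above can be sketched roughly as follows. This is a hedged illustration only: the `next_rendition` helper, the resolution ladder, and the buffer threshold are assumptions for exposition, not values taken from the HLS or ATSC 3.0 specifications.

```python
# Hedged sketch of buffer-driven rendition selection. The helper name,
# resolution ladder, and 10-second buffer threshold are assumptions for
# illustration, not values from any standard.

LADDER = ["576p", "720p", "1080p"]  # independent HLS renditions, low to high

def next_rendition(current: str, buffer_seconds: float,
                   min_buffer: float = 10.0) -> str:
    """Step up one rung while the buffer stays healthy; otherwise hold."""
    idx = LADDER.index(current)
    if buffer_seconds >= min_buffer and idx + 1 < len(LADDER):
        return LADDER[idx + 1]
    return current  # buffer too low, or already at the top: stay put
```

A client would call this after each segment download, climbing until the buffer can no longer be maintained and then holding at the last sustainable rendition.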
- ATSC 3.0 schedulers are designed to use padding in baseband packets to prevent an overflow of the ATSC stack, including at the broadcast gateway and the studio-to-transmitter link tunneling protocol (STLTP) feed.
- Unidirectional (e.g., IP/UDP/Multicast) broadcast emissions to receiver devices may be lossy.
- Unidirectional broadcast emissions lack acknowledgments for knowing what and when to re-send missing portions. For example, in a particular designated market area (DMA) of a city at a certain frequency or configuration, a majority of emitted files may be unusable due to reception loss, sometimes with a loss of as little as 1/60th of a second of data resulting in every received file being incomplete and therefore unusable.
- DMA designated market area
- Known data recovery techniques including those used in peer-to-peer communication, such as forward error correction (FEC), are insufficient by themselves to ensure complete file recovery with a high degree of confidence.
- FEC forward error correction
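As a toy illustration of why FEC by itself may be insufficient, the following sketch implements single-parity XOR recovery: one parity packet per group can rebuild exactly one lost packet, so a group losing two or more packets remains unrecoverable without another mechanism (such as the collaborative recovery described later). The helper names and grouping scheme are illustrative, not the FEC scheme used by any particular standard.

```python
# Toy single-parity XOR FEC: one parity packet per group can rebuild exactly
# one lost packet; two or more losses leave the group unrecoverable, which
# is why FEC alone may not ensure complete file recovery. Equal-length
# packets are assumed for simplicity.

def xor_parity(packets: list[bytes]) -> bytes:
    """Compute the byte-wise XOR of all packets in a group."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received: dict[int, bytes], parity: bytes,
            group_size: int) -> dict[int, bytes]:
    """Rebuild a single missing packet by XOR-ing parity with the survivors."""
    missing = [i for i in range(group_size) if i not in received]
    if len(missing) == 1:
        rebuilt = bytearray(parity)
        for pkt in received.values():
            for i, b in enumerate(pkt):
                rebuilt[i] ^= b
        received[missing[0]] = bytes(rebuilt)
    return received  # with two or more losses, parity alone cannot help
```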
- Broadcast emissions typically comprise only a collection of media essence streams (i.e., content such as video and audio) encoded and multiplexed into a single transmission.
- Known emissions, such as ATSC 1.0, are statically configured within rigid bandwidth parameters, e.g., at a time of install. There is no ability to adjust the transmission modulation and coding (MODCOD), encoding bitrate, or other control functions, such as reducing bandwidth utilization by performing programming lineup changes, for example.
- MODCOD transmission modulation and coding
- MVPD multichannel video programming distributor
- a content delivery network typically comprises networks of servers geographically located at points-of-presence (PoPs), interconnecting with traditional IP last-mile networks, such as Cable Internet services, and more frequently with mobile carriers, to reduce latency to user devices that request online content.
- PoPs points-of-presence
- IP last-mile networks such as Cable Internet services
- FIG. 1 is a block diagram of an example broadcast system including a set of broadcast television stations (BTSs) transmitting to a plurality of broadcast access terminals (BATs), with each different instance of “n” representing any natural number.
- BTSs broadcast television stations
- BATs broadcast access terminals
- FIG. 2 is a block diagram of an example broadcast system illustrating options for manipulation of broadcast information prior to transmission, and the use and manipulation of information received in broadcast transmissions.
- FIG. 3A is a block diagram of an example BAT connected to legacy viewing devices.
- FIG. 3B is a block diagram of an example process for data conversion.
- FIG. 3C is a flow diagram of an example data conversion process.
- FIG. 3D is a diagram of an example protocol stack.
- FIGs. 4A-4C are block diagrams showing example data paths for the processing of studio to transmitter link transport protocol (STLTP) broadcast information.
- STLTP studio to transmitter link transport protocol
- FIG. 4D is a block diagram of an example equipment for padding space utilization.
- FIG. 4E is a flow diagram of an example process for STLTP optimization.
- FIG. 4F is a block diagram of example interfaces at or about an upstream
- FIG. 5A is a block diagram of an example BTS.
- FIG. 5B is a block diagram of an example BAT.
- FIG. 5C is a diagram of an example physical layer frame and bootstrap structure.
- FIG. 6 is a flow diagram of an example collaborative object delivery and recovery process.
- FIG. 7 is a flow diagram of an example hybrid data delivery process using fragmentation.
- FIG. 8 is a flow diagram of an example progressive over-the-air (OTA) application download and runtime process.
- OTA over-the-air
- FIG. 9 is a flow diagram of an example OTA application programming interface (API) services delivery process.
- API application programming interface
- FIG. 10A is a block diagram illustrating simultaneous transmission of an item of content in multiple formats/renditions.
- FIG. 10B is a block diagram of an example implementation of progressive video enhancement.
- FIG. 11 is a flow diagram of an example implementation of a content delivery network (CDN) point of presence (PoP).
- CDN content delivery network
- PoP point of presence
- FIG. 12 is a flow diagram of an example implementation of flash channels using dynamic ATSC 3.0 link-layer protocol (ALP) management.
- ALP link-layer protocol
- BTSs broadcast television stations
- BATs broadcast access terminals
- Traditionally, television broadcast transmitters and receivers were distinct sets of stationary equipment which operated on a unidirectional signal essentially in one mode.
- a BTS may be, for present purposes, a mobile computing device which arranges content for high power broadcast.
- a BAT may be a mobile computing device which also acts as a BTS to rebroadcast received signals.
- a BAT may be a mobile device that reconstitutes content from portions received variously via broadcast television signals, Internet communication, and/or peer devices, where the portions are obtained at different times and in different ways as the BAT travels.
- While a BTS and a BAT may be in fixed positions with associated antenna structures, the BTS and BAT functionality described herein may be employed in a variety of combinations in a variety of fixed and mobile computing devices.
- One or more aspects of the present disclosure relate to a method for providing interoperability with existing user equipment (UE).
- the method may comprise: determining, based on a plurality of capabilities at least one of which is orthogonal to at least one input requirement, a set of mechanisms and corresponding processes configured to translate emissions received based on a substantially static plurality of input requirements that include the at least one requirement; obtaining, from user equipment (UE), a request for the translated emissions, the request including information about the capabilities; and responsive to the request, generating, by a broadcast access terminal (BAT), the set of mechanisms and corresponding processes just in time (JIT).
- BAT broadcast access terminal
- One or more other aspects of the present disclosure relate to a method for supporting OTA application programming interfaces (API) services, wherein a device providing this support is operable to: receive OTA broadcast data, the OTA broadcast data being from a broadcast television station (BTS); and provide, via communication circuitry to one or more local devices, the API services, which comprise (i) an OTA capabilities query API, (ii) a broadcast content directory API, (iii) a BAT tuner API, and (iv) a media-content delivery API.
- API application programming interfaces
- One or more other aspects of the present disclosure relate to a method for providing progressive video enhancement.
- the method may comprise: determining an attribute for each of a base layer and one or more enhancement layers to be combined with the base layer such that a stream quality metric satisfies a criterion, the one or more enhancement layers being determined such that the combination is needed for displaying a higher quality stream, wherein each of the enhancement layers is determined based on changes from the base layer.
- One or more other aspects of the present disclosure relate to a method for repurposing unused capacity (e.g., padding) of baseband packets in the STLTP multiplex transport by performing opportunistic data injection (ODI).
- the device may be caused to: obtain a set of non-real-time (NRT) data; obtain a plurality of baseband packets from the STLTP feed; determine an amount of excess capacity within each of the baseband packets (if any); incrementally extract, for each of the baseband packets, portions of the NRT data, each of the portions having a size determined based on the respectively determined amount of unused capacity in each baseband packet; and re-multiplex the extracted portions into the STLTP feed in real-time.
- NRT non-real-time
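The opportunistic data injection steps above may be sketched roughly as follows, using a deliberately simplified packet model (fixed 188-byte packets with zero-byte trailing padding) rather than the actual STLTP baseband packet format; `inject_nrt` and the constants are hypothetical.

```python
# Simplified opportunistic data injection (ODI) sketch: measure each
# packet's unused capacity, carve off a matching slice of NRT data, and fill
# the padding before re-multiplexing. The fixed-size, zero-padded packet is
# a stand-in model, not the real STLTP baseband packet format.

PACKET_SIZE = 188  # assumed fixed packet size for this sketch

def inject_nrt(packets: list[bytes], nrt: bytes) -> tuple[list[bytes], bytes]:
    """Fill trailing padding with NRT slices; return new packets + leftovers."""
    out = []
    for pkt in packets:
        # Strip padding (simplification: assumes the real payload itself
        # carries no trailing zero bytes).
        payload = pkt.rstrip(b"\x00")
        excess = PACKET_SIZE - len(payload)  # unused capacity, if any
        slice_, nrt = nrt[:excess], nrt[excess:]
        out.append(payload + slice_ + b"\x00" * (excess - len(slice_)))
    return out, nrt
```

Any NRT data left over after one pass would simply wait for padding in subsequent packets.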
- One or more other aspects of the present disclosure relate to a method for augmenting data reception integrity via collaborative object delivery.
- the method may comprise coordinating a BTS and a scheduler such that at least one of a set of BATs is operable to emit recovery data via IP multicast to a subset of the BATs in a spatial region and in a licensed portion of a spectrum otherwise utilized by the BTS.
- One or more other aspects of the present disclosure relate to a method for supporting hybrid delivery of fragmented data.
- the method may comprise: obtaining information indicating a number of devices, which satisfies a predetermined criterion, that received all emitted content; and determining whether to rebroadcast, in a carousel, one or more portions of the emitted content based on the information, wherein the emitted content is previously fragmented into a plurality of content portions based on a peer-to-peer file sharing protocol that is decentralized.
- One or more other aspects of the present disclosure relate to a method for supporting progressive OTA application download and runtime.
- the method may comprise: responsive to reception of at least one module of a terminal application, installing the at least one module such that the terminal application begins to run; responsive to reception of at least one other module of the terminal application, integrating into the terminal application additional functionality based on information of the at least one other module, wherein the modules are obtained at the BAT via emissions comprising IP multicast traffic.
- the method may comprise implementing a CDN point of presence (PoP) for ad hoc delivery of OTA data, by providing, via a mobile device, a CDN PoP that comprises a software defined radio on an integrated circuit (IC) that obtains IP multicast traffic broadcasted from an antenna, wherein contents of the CDN are pre-loaded by the broadcast.
- a CDN point of presence PoP
- IC integrated circuit
- One or more other aspects of the present disclosure relate to a method for adding one or more channels, by: obtaining, from each of a plurality of differently-located transmitters, available capacity information; determining, for each of the channels based on the available capacity information of the transmitters, a same size of an emission; and dynamically allocating (or re-allocating) a bandwidth portion to regionally emit a set of linear, audio-video (AV) content dynamically generated based on the size.
- AV audio-video
- Adaptable multichannel television broadcast may support a wide variety of media and data services models via adaptation of streams and systems of streams for transmission, adaptation of receiving client devices, and optional connection of broadcast services and receiving client devices to other networks.
- ATSC 3.0 BATs may support legacy media viewing devices, e.g., via just-in-time (JIT) transcoding and repackaging of new media formats such as moving picture experts group (MPEG) media transport (MMT) and real-time object delivery over unidirectional transport (ROUTE) / dynamic adaptive streaming over HTTP (DASH) into legacy media formats such as HTTP live streaming (HLS).
- JIT just-in-time
- MPEG moving picture experts group
- MMT media transport
- ROUTE real-time object delivery over unidirectional transport
- DASH dynamic adaptive streaming over HTTP
- some embodiments may transcode from MPEG-2 transport stream (TS) to HLS.
- TS MPEG-2 transport stream
- a TS may comprise a sequence of packets via a multiplexing of streams for transmission in noisy environments, and MPEG-TS may implement a digital container format for transmission (e.g., encapsulating multiple MPEG streaming programs in a combination of an emission from a transmitting antenna such as broadcast television station (BTS) 102) and storage of audio, video, and data.
- a transmitting antenna such as broadcast television station (BTS) 102
- BTS broadcast television station
- Such emission may, for example, comprise one or more different television channels and/or multiple angles of a movie.
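To illustrate how such a multiplex is separated back into its constituent streams, the following sketch parses the fixed 4-byte header of one 188-byte MPEG-TS packet and extracts the 13-bit packet identifier (PID) by which a receiver demultiplexes the programs sharing one emission. The field layout follows the MPEG-2 TS container format; the helper name is illustrative.

```python
# Parsing the fixed header of one 188-byte MPEG-TS packet. The 13-bit PID is
# what lets a receiver demultiplex the programs sharing one transmission.
# Field layout per the MPEG-2 TS container; helper name is illustrative.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Extract payload-unit-start, PID, and continuity counter fields."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not an MPEG-TS packet")
    return {
        "pusi": bool(packet[1] & 0x40),                # payload unit start
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit packet ID
        "continuity": packet[3] & 0x0F,                # 4-bit counter
    }
```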
- ATSC 3.0 BATs may support a variety of in-home content viewing and storage systems, for example, by providing APIs to a variety of application clients.
- the application clients may include apparatuses for consuming over-the-top (OTT) Internet content and cable network content, as well as over-the-air (OTA) broadcast content via an ATSC 3.0 BAT.
- OTT over-the-top
- OTA over-the-air
- ATSC 3.0 BATs may achieve progressive video enhancement by combining a first content stream for a media asset/content with other streams, or portions of other streams, representing deltas in content from the base content and from each other for the media asset.
- another stream may include information at a higher frame rate or resolution than the first content stream.
- the viewing quality ultimately delivered to a user may be a function of which streams, or portions thereof, are received intact or may be corrected in time for viewing.
- Progressive video enhancement may be used for OTA streams, OTT streams, other content sources, or combinations thereof.
- ATSC 3.0 transmissions may opportunistically incorporate data into padding baseband packets of studio to transmitter link (STL) transport protocol (STLTP) feeds, allowing real-time data insertions into media feeds, which may be tailored to transmission-tower-specific markets and applications.
- ATSC 3.0 BATs may utilize alternative content available via incorporation into padding baseband packets, and from other sources, for example, to tailor viewing experiences in accordance with observed viewing habits or viewer traits.
- ATSC 3.0 BATs may acquire information to repair damaged transmission streams via collaboration with other ATSC 3.0 BATs or other receiving entities, or via communications on a dedicated return channel or other network connection.
- ATSC transmissions may be used to distribute, e.g., large data files such as application and operating system (OS) upgrades, via rotating and/or scheduled broadcast of data channels, wherein data is coded in a BitTorrent pattern for reassembly of fragments by a client device, whereby, agnostic to content, clients may repair lost fragments via observation of rebroadcasts or IP connection by referring to specific fragmented elements.
- OS operating system
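A content-agnostic repair scheme of the kind described above may be sketched with per-fragment hashes: the client compares received fragments against a manifest and requests, or awaits rebroadcast of, only the missing or corrupt indices. The fragment size and helper names are assumptions for illustration, not details from the BitTorrent protocol itself.

```python
# Content-agnostic fragment repair sketch: identify each fixed-size fragment
# by its hash, then detect exactly which pieces are missing or corrupt so
# only those need be fetched via rebroadcast or an IP connection.

import hashlib

FRAGMENT_SIZE = 4096  # assumed fragment size

def fragment_manifest(data: bytes) -> list[str]:
    """Ordered SHA-256 digest of each fragment of the object."""
    return [hashlib.sha256(data[i:i + FRAGMENT_SIZE]).hexdigest()
            for i in range(0, len(data), FRAGMENT_SIZE)]

def missing_fragments(manifest: list[str],
                      received: dict[int, bytes]) -> list[int]:
    """Indices still needed: absent entirely or failing the hash check."""
    return [i for i, digest in enumerate(manifest)
            if i not in received
            or hashlib.sha256(received[i]).hexdigest() != digest]
```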
- ATSC 3.0 transmissions may be used to support progressive over-the-air application downloads and runtime data, whereby desired applications may be selected by a client on the basis of media consumption, for example, and built in a modular fashion depending upon the degree of involvement of the viewer and availability of data via broadcast on a rotating basis. Desired applications may also be selected by availability. For example, a particular broadcast application extension or addon may not be available upon first viewing of a channel. The broadcast application extension may become available at a later time (e.g., after 15 minutes), at which time it is installed and executed, and its capabilities made available to the audience with no interruption in video viewing.
- An ATSC 3.0 client may serve as a content delivery network (CDN) point of presence (PoP) for a home or other facility, for example, by collecting data from a variety of sources, such as broadcast OTA, OTT, and cable distribution sources, and then storing and redelivering data as required via a variety of mechanisms and application extensions.
- Broadcast data may encompass not just broadcast media, but also public and private data casting, including such disparate information types as ad pre-positioning and emergency broadcast information.
- a CDN PoP may be context-aware.
- the CDN PoP may recognize different kinds of content, e.g., by file type or subcategory within files, and take action accordingly to obtain missing fragments, alert users to conditions, or otherwise act automatically on received data, or lacunae in received data, in accordance with significance or priority of the data and available options or corrective or preventative action.
- ATSC 3.0 transmissions may encompass dynamically generated flash channels which are generated automatically based on observation of client viewing, unfulfilled viewing requests, environmental factors, emergency conditions, and the like.
- the needed bandwidth within an ATSC 3.0 physical layer pipe (PLP) for a new flash channel may be obtained by lowering the bandwidth of other content or application level protocols within the same PLP. For example, a channel with low viewership may be canceled, or the encoding of a media channel may be reduced in terms of resolution, frame rate, robustness, etc., to free up the requisite bandwidth.
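The bandwidth-freeing strategy above may be sketched as follows, first cancelling channels below a viewership floor and then reducing encoding bitrates of the survivors; the viewership floor, the 25% reduction step, and the channel-table shape are all assumptions for illustration.

```python
# Sketch of freeing PLP bandwidth for a new flash channel: cancel channels
# below a viewership floor first, then lower encoding rates of survivors.
# The floor, 25% reduction step, and table shape are assumptions.

def free_bandwidth(channels: dict[str, dict], needed_mbps: float,
                   min_viewers: int = 100) -> float:
    """Mutate the channel table until enough bandwidth is freed; return total."""
    freed = 0.0
    for name in list(channels):              # pass 1: cancel low viewership
        if freed >= needed_mbps:
            break
        if channels[name]["viewers"] < min_viewers:
            freed += channels.pop(name)["mbps"]
    for cfg in channels.values():            # pass 2: lower encoding rates
        if freed >= needed_mbps:
            break
        reduction = cfg["mbps"] * 0.25
        cfg["mbps"] -= reduction
        freed += reduction
    return freed
```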
- Flash channels may be generated automatically, e.g., for purposes of responding to urgent or emergency conditions, optimizing broadcast bandwidth yield management in terms of viewership or revenue, balancing viewer good will, and maintaining viewer brand awareness.
- Data may be carried in PLPs, which are data structures that may be configured for a wide range of trade-offs between signal robustness and channel capacity utilization for a given data payload. Multiple PLPs may be used to carry different streams of data, all of which may, for example, be used to assemble and deliver a complete service. In addition, data streams required to assemble multiple delivered services or products may share PLPs, if those data streams are to be carried with the same level(s) of robustness.
- a PLP may be a network layer protocol, e.g., for the X.25 protocol suite. The PLP may manage the packet exchanges between data terminal devices across virtual calls. And each PLP may have different bit rate and error protection parameters. The PLP may further provide a data and transmission structure of allocated capacity and robustness that may be adjusted to broadcaster needs.
- the maximum number of PLPs in an RF channel may be 64, and each individual service may utilize, e.g., up to 4 PLPs.
- downstream broadcast access terminals (BATs) 104 may be able to simultaneously decode at least four PLPs.
- a PLP may contain a structure of a frame or a series of subframes.
- Example bit rates in an example 6 megahertz (MHz) channel may range from less than 1 megabit per second (Mbps) when using lowest-capacity parameters (e.g., which may be a most robust mode), up to over 57 Mbps when using highest-capacity parameters (e.g., which may be a least robust mode).
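The PLP constraints quoted above (at most 64 PLPs per RF channel, up to 4 PLPs decodable per service, and roughly 57 Mbps of total capacity in a 6 MHz channel at the least robust settings) may be checked with a sketch like the following; the dictionary shapes and the use of 57 Mbps as a hard aggregate cap are assumptions for illustration.

```python
# Sketch validating a PLP allocation against the constraints quoted above.
# Data shapes and the aggregate-capacity cap are illustrative assumptions.

MAX_PLPS_PER_CHANNEL = 64
MAX_PLPS_PER_SERVICE = 4
CHANNEL_CAPACITY_MBPS = 57.0  # upper figure quoted for a 6 MHz channel

def validate_allocation(plps: dict[str, float],
                        services: dict[str, list[str]]) -> list[str]:
    """Return a list of constraint violations (empty means valid)."""
    errors = []
    if len(plps) > MAX_PLPS_PER_CHANNEL:
        errors.append("too many PLPs in RF channel")
    if sum(plps.values()) > CHANNEL_CAPACITY_MBPS:
        errors.append("aggregate bit rate exceeds channel capacity")
    for svc, used in services.items():
        if len(used) > MAX_PLPS_PER_SERVICE:
            errors.append(f"service {svc} uses more than 4 PLPs")
    return errors
```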
- a service may include a set of content elements which, when taken together, provide a complete listening, viewing, or other experience to a viewer. It may comprise audio, base level video, enhancement video, captioning, graphic overlays, web pages, applications, emergency alerts as well as other signaling or required metadata.
- targeted ad insertion may be achieved by selecting between a live OTA broadcast advertisement and one or more alternative advertisements pre-placed in a client via non-real-time (NRT) methods, according to rules also delivered in an NRT fashion, and in accordance with observations of viewing habits on the client system.
- NRT non-real-time
- NRT may be file content (e.g., comprising continuous or discrete media and belonging to an app-based feature, comprising electronic service guide (ESG) or emergency alert (EA) information, or comprising opaque file data to be consumed by a digital rights management (DRM) system client) and/or applications delivered non-contemporaneously (generally ahead of time) with respect to their intended use, e.g., where the delivery is performed via ROUTE.
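The selection between a live OTA advertisement and pre-placed NRT alternatives may be sketched as a rule match against observed viewing habits; the rule and habit dictionary shapes below are assumptions, standing in for the NRT-delivered rules described above.

```python
# Sketch of targeted ad insertion: prefer a pre-placed NRT alternative whose
# NRT-delivered rule matches observed viewing habits, else fall back to the
# live OTA advertisement. Rule/habit shapes are illustrative assumptions.

def select_ad(live_ad: str, nrt_ads: list[dict], habits: dict) -> str:
    """First pre-placed ad whose rule fully matches the habits, else live ad."""
    for ad in nrt_ads:
        rule = ad["rule"]  # e.g. {"genre_watched": "sports"}
        if all(habits.get(k) == v for k, v in rule.items()):
            return ad["asset"]
    return live_ad
```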
- ESG electronic service guide
- EA emergency alert
- a CDN PoP function may be used to gather information necessary for the targeted ad insertion via a variety of pathways, such as by observation of OTA data channels on which materials are placed on a rotating basis, from data delivered in fragmented form within multiple padding packets of live transmissions, or from OTT, cable, local, or other networks.
- the techniques described herein may support viewing text overlays or interactive applications in a similar fashion, for example.
- Broadcast may encompass a wide variety of media and data delivery functions. Evolving broadcast standards such as ATSC 3.0 allow for multiple channels to be delivered within a given portion of spectrum in a highly adaptable fashion. Through the use of service discovery bootstrap signaling and other data channels, BATs may be informed of new conditions and formats, and the means by which to demodulate, decode, and utilize the broadcast information.
- the format of transmissions may be tailored on a tower-by-tower and minute-by-minute basis, e.g., to accommodate current demands or compensate for reception conditions via changes to encoding, diversity, and error correction techniques.
- Multi-channel digital broadcast may be adapted in a number of ways to provide experiences similar to those provided by terrestrial bi-directional data wired, cable, and fiber networks, for example, while also taking advantage of bandwidth advantages available in wireless broadcast spectrums.
- legacy devices and viewing styles may be supported through conversion of new broadcast digital formats, and broadcast data channel information may be used to support over-the-air transmission API services to support contemporary viewing platforms.
- Progressive video enhancement may be achieved efficiently via multiple encodings of different aspects of media. Idle padding baseband packets in media broadcast streams may be utilized to provide timely, localized data services. Collaborative object delivery and recovery mechanisms may be implemented to ease receipt of large transmissions.
- broadcast may be used in hybrid data delivery using fragmentation mechanisms to make it easy to detect and repair missing portions of transmissions in a way that is agnostic to the transmitted content.
- Run time environments such as applications, API services, and user experience aspects may be provided via progressive broadcast modules which are adaptively accumulated and exploited.
- content delivery network point of presence data staging may be achieved in the home via broadcast transmissions.
- Channels may be generated automatically based on statistical observations of viewing or other consumption, requests for viewing or information, and environmental and emergency conditions, for example.
- system 100 implements advanced television systems committee version 3 (ATSC 3) television broadcasting, which is guided by about 25 standards (several of which are directly referenced herein).
- ATSC 3.0 standards offer a substantially static plurality of broadcast requirements for supporting next generation technologies, e.g., including high efficiency video coding (HEVC) for video channels of 4K resolution at 120 frames per second (FPS), wide color gamut (WCG), high dynamic range (HDR), Dolby AC-4 audio, MPEG-H 3D audio, datacasting capabilities, and/or more robust mobile television support.
- the ATSC 3.0 standards are limited in their implementation prescriptions due to an emphasis on voluntary implementation.
- system 100 implements a voluntary adoption marketplace model, e.g., without requiring a codified or referenced architecture implementation. Indeed, this requirement is absent in the following ATSC 3.0 standards, each of which forms a basis for this disclosure and is incorporated by reference in its entirety herein: “System Discovery and Signaling (A/321);” “Physical Layer Protocol (A/322);”
- FIG. 1 illustrates an example system 100 having at least one BTS 102 transmitting to a plurality of BATs 104.
- a broadcast system may have any number of BTSs and BATs.
- with an adaptive protocol such as ATSC 3.0, the specific transmissions from each BTS may be tailored to accommodate a set of BATs that are in range for reception of the BTS.
- transmissions 108 from BTS 102 to BATs 104 may utilize the ATSC 3.0 standards, such as user datagram protocol (UDP) over IP (UDP/IP) multicast (or UDP/IP broadcast) over a broadcast physical layer and/or TCP/IP unicast over the broadband physical layer.
- transmissions 108 from BTS 102 may use a layered architecture that may include at least three layers, e.g., a physical layer, a management and protocols layer, and an application and presentation layer.
- emissions 108 may comprise error correction and synchronization pattern features for maintaining transmission integrity.
- UDP is a data delivery standard (RFC 768), which delivers its payload as datagrams (header and payload sections) to devices on an IP network.
- UDP provides checksums, for data integrity, and port numbers, for addressing different functions. There is no handshaking, so UDP may be used in single-direction communications.
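The handshake-free, single-direction character of UDP can be sketched in a few lines of Python; the loopback address and payload are illustrative only.

```python
import socket

# Minimal sketch of single-direction UDP delivery: no handshake, no connection
# state; the sender only needs the receiver's address and port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
recv_sock.settimeout(5)
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Fire and forget: the datagram is simply emitted toward the address.
send_sock.sendto(b"one-way payload", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(2048)  # datagram delivered as header + payload
send_sock.close()
recv_sock.close()
```

Because no acknowledgment path exists at this layer, a broadcast system relies on retransmission and error correction (discussed elsewhere herein) rather than per-datagram recovery.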
- transmissions 108 from BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services and contents over broadcast, broadband, and hybrid broadcast/broadband networks, along with the mechanism to signal the language(s) of each provided service, including audio, captions, subtitles (if present), any emergency delivery service, and the like.
- the IP-based delivery of ATSC 3.0 services and contents may be broadcast over network 106, which may be the Internet, a hybrid broadcast network, a broadband network, or the like.
- transmissions 108 from BTS 102 may use the MMT protocol, ROUTE protocol, or the like.
- MMT may be specified by ISO/IEC 23008-1 (MPEG-H Part 1), and MMT protocol may be used to deliver media processing units (MPUs).
- MMT may utilize a digital container standard developed by MPEG that supports HEVC video.
- the ROUTE protocol may be used to deliver content packaged in MPEG DASH Segments.
- MMT may be used to transfer data using an all-Internet protocol (All-IP) network.
- Transmissions 108 from BTS 102 may use Dolby AC-4 technology, MPEG- H 3D audio system technology, and the like.
- Dolby AC-4 may implement audio compression.
- Dolby AC-4 bitstreams may contain audio channels, audio objects, and/or the like.
- Dolby AC-4 bitstreams may include a 5.1-channel core that Dolby AC-4 decoders may decode.
- MPEG-H 3D audio, specified by ISO/IEC 23008-3 (MPEG-H Part 3), may utilize an audio coding standard developed by the ISO/IEC MPEG to support coding audio as audio channels, audio objects, higher-order ambisonics (HOA), and/or the like.
- MPEG-H 3D audio may support up to 64 loudspeaker channels and 128 codec (coder-decoder) core channels.
- BATs 104 may include an ATSC 3.0 receiver, and each ATSC 3.0 receiver may include a dedicated return channel (DRC) terminal module or an equivalently DRC-enabled ATSC 3.0 receiver, or the like.
- System 100 supports TV content being directly received (e.g., and potentially viewed) at many different fixed and portable video devices, such as next generation TVs 103, BATs 104, and in certain instances UE 170.
- System 100 further enables enhanced advertising capabilities, including tailored messaging for specific audience segments in the form of ads, pop ups, or other messaging based on user preferences. These services could be a value-add for public broadcasters who want to point viewers to related content based on their viewing habits, provide member-only viewing options, or offer other ways for viewers to access support services.
- Example data service opportunities may include public safety, education, and member services arenas.
- system 100 may facilitate delivery of media- rich public alerts and mission-critical video and images to local and regional first responders during emergencies.
- System 100 may, e.g., additionally or instead deliver localized AMBER Alerts with accurate, detailed information about victims and/or suspects leveraging real time images and video.
- system 100 may facilitate delivery of customized and targeted distance learning programs to rural and remote areas without access to the Internet.
- System 100 may, e.g., additionally or instead deliver localized training and workforce development programs.
- system 100 may combine video on demand (VOD) with conditional access, and, for example, give viewers specialized access similar to membership video on demand from within an ATSC 3.0 receiver.
- System 100 may, e.g., additionally or instead use the ATSC 3.0 broadcast platform with native HTML 5 such that users may enjoy interactive games and adventures along with a favorite show, with or without the use of broadband.
- BTS 102 may carve traditional 30-second advertisement (ad) segments down to the microsecond level to determine the best revenue extraction model for a time-limited resource that depreciates to zero almost instantaneously (e.g., when the window for emission passes). For example, in a 30 second window, there may be 6.144 million windows of opportunity in this forward spectrum, and, in a 24-hour window, there may be 2 billion opportunities to monetize each one of those microsecond series of emissions. BTS 102 thus supports a technology model that may pare down the available inventory to sell time over spectrum.
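The inventory-carving arithmetic can be illustrated with a simple calculation. The window duration below (5 microseconds) is a hypothetical parameter chosen for illustration, not a value taken from the disclosure, so the resulting counts are illustrative rather than a reproduction of the figures above.

```python
# Illustrative arithmetic only: the micro-window duration is an assumed
# parameter, not a quantity defined by the ATSC 3.0 standards.
US_PER_SECOND = 1_000_000

def emission_windows(duration_s: int, window_us: int) -> int:
    """Number of saleable micro-windows in a span of broadcast time."""
    return (duration_s * US_PER_SECOND) // window_us

# A traditional 30-second ad slot carved into hypothetical 5-microsecond windows:
per_slot = emission_windows(30, 5)
# The same carving applied across a 24-hour broadcast day:
per_day = emission_windows(24 * 60 * 60, 5)
```

Varying the window duration shows how finely inventory can be pared: halving the window doubles the count of monetizable emissions.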
- the model may support delivering a piece of data over an amount of leased or licensed spectrum that is available by balancing how densely populated a market is for the delivery based upon the tower footprint.
- Each BTS 102 may thus support a marketplace exchange model for data distribution services across many different demand regions.
- system 100 may support data capacities about or in excess of 25 or 30 Mbps, e.g., for live and/or non-real time (NRT) data.
- system 100 may support more streams, including multiple high definition (HD) ones.
- the quality of the content may satisfy one or more criteria not achievable by otherwise known broadcasters.
- the spectrum may be flexibly used, including, e.g., use of orthogonal frequency-division multiplexing (OFDM) modulation and different coding choices.
- System 100 may further support multiple simultaneous operating points (e.g., physical layer pipes) and development of a single frequency network (SFN).
- the robustness of the physical layer of system 100 may allow stations the flexibility to target hard-to-reach areas (e.g., penetrating buildings, even down to the fourth floor of a parking garage) and/or moving vehicles (e.g., traveling 60 miles per hour).
- An SFN is a broadcast network where several transmitters simultaneously send substantially the same signal over the same frequency channel.
- a simplified form of SFN may be achieved by a low power co-channel repeater, booster, or broadcast translator, which is utilized as a gap filler transmitter.
- Embodiments of system 100 implementing one or more SFNs may efficiently utilize radio spectrum, allowing a higher number of radio and TV programs in comparison to traditional multi-frequency network (MFN) transmission.
- BATs 104 may implement gateway devices or tuners to extend broadcast viewing from traditional TV to mobile devices. In some embodiments, BATs 104 may implement second screen devices and synchronized content to allow individuals to access related content and services without disrupting the display on a large communal screen.
- system 100 may support hybrid delivery, due to a basis in an IP transport. That is, system 100 may implement simultaneous broadcast and broadband delivery.
- system 100 may support new types of hybrid services, such as alternate languages, camera angles, NRT content, and/or localized inserts.
- system 100 may support advanced audio/visual (A/V) compression, including an advanced compression scheme, such as HEVC Main 10 profile specified as core, and performance gains over known systems.
- system 100 may support immersive audio, ultra-high definition (UHD) / 4K video, expanded audio including immersive and personalized audio, alternate languages, and other personalized audio choices, up to 22.2 channels of rendered audio elements, HDR video, and WCG video.
- Example implementations of UHD formats may include 4K UHD (e.g., 4,096 × 2,160 pixels) and 8K UHD (e.g., 7,680 × 4,320 pixels). Implementations of WCG may provide video with more color saturation and/or richer quality than traditional video.
- each BTS 102 comprises just a tower and a transmit antenna.
- BTS 102 may comprise a TV station and a transmitter site.
- the TV station and the transmitter site may, for example, comprise a studio (e.g., of FIG. 5A) and/or an ATSC 3.0 downlink gateway (e.g., of FIG. 5A).
- the TV station may comprise sources 202 and/or 206 that provide 4K/UHD video and next-generation audio (e.g., with captioning) and that provide HD video and audio (e.g., with captioning as well).
- a master controller may obtain the UHD and HD production and then this obtained data may be encoded and multiplexed such that ESG IP packets are emitted via the STL to the transmitter site.
- the transmitter site may comprise an ATSC 3.0 exciter, which generates the ATSC 3.0 waveform such that a transmitter and a mask filter may operate on the emission.
- the ATSC 3.0 downlink gateway or another component of BTS 102 may perform the BitTorrent fragmentation.
- system 100 may support interactivity, personalization, including use of known tools to create interactive experiences (HTML 5), advanced emergency alerting (AWARN), enhanced alerting capabilities (e.g., rich media including evacuation routes and radar images) for first responders and consumers, and receiver wake-up-on-alert to a far larger audience than is currently possible, e.g., in times of a crisis.
- system 100 may support new services without obsolescence, e.g., by making possible a future version ATSC 3.1 or higher that provides new services or transmission schemes without interfering with 3.0 users.
- EA wake-up bits of the bootstrap may further be used by BTS 102 for emergency alerting such that BAT 104 transitions from a quiescent or low power mode into a high power mode, where it then performs L1-Basic (L1B) and L1-Detail (L1D) processing, which may provide information to process the actual PLPs.
- L1B may be part of the preamble following the bootstrap, and it may carry the more fundamental signaling information as well as data necessary to decode L1D.
- L1D may be part of the preamble following L1B, and it may carry the information necessary to decode subframes, including their MODCOD, number of PLPs, pilot pattern, FEC, etc. Whereas signaling may be delivered over MMT and/or ROUTE, the bootstrap information may be provided by means of the service list table (SLT).
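The staged decode order described above (bootstrap, then L1-Basic, then L1-Detail, then the PLPs) can be sketched schematically. The dictionary fields below are illustrative stand-ins, not the normative L1 signaling syntax.

```python
# Schematic sketch of the receiver's preamble processing order; field names
# are illustrative, not the actual ATSC 3.0 L1 signaling structures.
def process_preamble(frame: dict) -> list:
    bootstrap = frame["bootstrap"]          # fixed, robustly encoded entry point
    assert bootstrap["version_known"]       # receiver must recognize the bootstrap
    l1_basic = frame["l1_basic"]            # carries data needed to decode L1-Detail
    l1_detail = frame["l1_detail"]          # MODCOD, pilot pattern, FEC, PLP count...
    return [f"PLP{i}" for i in range(l1_detail["num_plps"])]

frame = {
    "bootstrap": {"version_known": True},
    "l1_basic": {"l1_detail_size": 200},    # hypothetical field
    "l1_detail": {"num_plps": 3},
}
plps = process_preamble(frame)
```

The point of the staging is that each layer is encoded robustly enough to be decoded with only the knowledge gained from the previous layer.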
- feed sources 202 and/or supplemental sources 206 of FIGs. 1-2 may operate as the data sources of FIG. 4A such that data is then forwarded (e.g., via progressive video enhancer 205) to MODCOD unit 204.
- MODCOD unit 204 of FIG. 2 may implement: (i) the ALP transport protocol (ALPTP) formatting, the STLTP formatting, and the error correction coding (ECC) encoding of FIG. 4A; (ii) the ECC decoding and STLTP demultiplexing of FIG. 4B; and/or (iii) the coded modulation of FIG. 4C.
- FIG. 2 illustrates how equipment may be arranged before and after the broadcast pipeline to achieve a variety of functions.
- system 200 includes feed sources 202, which provide media and data to MODCOD unit 204.
- MODCOD unit 204 may convert different feeds into different broadcast channels, and may further encode a single feed into multiple encodings on multiple channels. The multiple encodings may correspond to media channels at different resolutions for the same content, or portions thereof.
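The single-feed, multiple-encoding behavior resembles an adaptive-bitrate rendition ladder. The resolutions and channel names below are hypothetical, chosen only to illustrate the mapping of one feed onto several broadcast channels.

```python
# Hypothetical rendition ladder: one source feed encoded at several
# resolutions, each mapped to its own broadcast channel.
LADDER = [(3840, 2160, "ch-uhd"), (1920, 1080, "ch-hd"), (1280, 720, "ch-sd")]

def encode_feed(feed_id: str):
    """Produce one logical encoding record per rendition of the same content."""
    return [
        {"feed": feed_id, "width": w, "height": h, "channel": ch}
        for w, h, ch in LADDER
    ]

renditions = encode_feed("national-feed-1")
```

A receiver (or downstream repackager) may then select whichever rendition suits its display and reception conditions, which is the basis for the progressive video enhancement discussed elsewhere herein.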
- the feed sources 202 may be national, while the supplemental sources 206 may be more regional or local in nature.
- MODCOD unit 204 may process both sources 202 and 206 and provide encoded information for transmission by BTS 102.
- a BTS may provide identical transmissions from multiple towers, or provide unique transmissions from each tower.
- feed sources 202 and/or supplemental sources 206 provide ads.
- supplemental sources 206 may provide a collection and data analysis service.
- data to be transmitted may enter a broadcast gateway using either the ALPTP or data source transport protocol (DSTP).
- Other inputs to the broadcast gateway may be instructions from a system manager.
- a scheduler, which may be internal to the broadcast gateway, may control the pre-processing functions that occur before delivery of the data and various control information to the transmitter(s) via the STLTP (e.g., with optional ECC applied).
- ECC may, for example, be applied to the STLTP outer layer, which improves reliability of delivery of a complete package of STL data to each transmitter.
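One classic way an outer code can protect a package of packets is erasure coding. The XOR-parity sketch below illustrates the principle only; it is not the actual ECC specified for the STLTP, which is more elaborate.

```python
# Minimal erasure-coding sketch: one XOR parity packet per group lets the
# receiver rebuild any single lost packet of that group.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    """XOR of all packets in a group (assumed equal length)."""
    p = bytes(len(packets[0]))
    for pkt in packets:
        p = xor_bytes(p, pkt)
    return p

group = [b"pkt0", b"pkt1", b"pkt2"]
p = parity(group)
# Packet 1 is lost in transit; XOR of the survivors and the parity restores it.
recovered = xor_bytes(xor_bytes(group[0], group[2]), p)
```

The same idea scales up: stronger codes trade more parity overhead for the ability to recover multiple lost packets, improving delivery of a complete STL package to each transmitter.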
- the data and instructions from the studio may be separated, buffered, and used to control the transmitter(s) and to construct the waveform to be emitted to downstream receivers.
- ALP packets may either be segmented or concatenated so that they fill the allocated space in the BBPs carrying them as completely as possible without overflowing the available space.
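The fill-without-overflow behavior can be sketched as a greedy packing step: whole ALP packets are concatenated into the baseband packet (BBP) payload, and the packet that would overflow is segmented, with its tail carried into the next BBP. The capacity and packet sizes below are illustrative.

```python
# Greedy sketch: concatenate ALP packets into one baseband packet (BBP)
# payload, segmenting the packet that would overflow. Sizes are illustrative.
BBP_CAPACITY = 10

def fill_bbp(alp_packets):
    """Return (bbp_payload, leftover_packets) for one baseband packet."""
    payload, leftovers = b"", []
    for i, pkt in enumerate(alp_packets):
        room = BBP_CAPACITY - len(payload)
        if room == 0:
            leftovers.extend(alp_packets[i:])
            break
        if len(pkt) <= room:
            payload += pkt                      # concatenate whole packet
        else:
            payload += pkt[:room]               # segment: head fills this BBP...
            leftovers.append(pkt[room:])        # ...tail carries into the next
            leftovers.extend(alp_packets[i + 1:])
            break
    return payload, leftovers

bbp, rest = fill_bbp([b"aaaa", b"bbbb", b"cccccc"])
```

Filling every BBP completely is what makes the idle-padding opportunities mentioned above scarce and therefore valuable for ancillary data services.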
- BTS 102 may comprise an ATSC 3.0 packager, encoder, signaler, and scheduler. At least some of this functionality may similarly be implemented at BAT 104 (e.g., for the DRC). The packager, encoder, signaler, and scheduler of BTS 102 may deliver fragments (e.g., to differentiate core transport of MMT versus ROUTE).
- the channels of emissions 108 may be numerous, and each channel may be encoded in any number of ways. How much of transmission 108 is received at antenna 231 of BAT 104 depends on a variety of factors, such as noise, path occlusions, environmental factors, range, and implementation technology.
- the detection, demodulation, and decoding of a complex transmission may require foreknowledge of the broadcast formats.
- Such foreknowledge may be provided, e.g., by a bootstrap signal for an ATSC 3.0 transmission (see, e.g., FIG. 5C).
- different services may be time-multiplexed together within a single RF channel, which is indicated, at a low level, by the type or form of a signal transmitted during a particular time period so that a receiver discovers and identifies the signal (e.g., which in turn indicates how to receive the services that are available via that signal).
- the bootstrap signal may precede, in time, a longer transmitted signal that carries some form of data.
- the bootstrap employs a fixed configuration (e.g., sampling rate, signal bandwidth, subcarrier spacing, time-domain structure) known to all receiver devices and carries information to enable processing and decoding of the (e.g., potentially new type) signal associated with a detected bootstrap.
- the bootstrap may provide a universal entry point and, as such, may be more robustly encoded, but only a minimum amount of information may be transmitted for system discovery (e.g., identification of the associated signal) and for initial decoding of the following signal.
- FIG. 5C shows an overview of the general structure of a physical layer frame, the bootstrap signal, and the bootstrap position relative to the post bootstrap waveform (e.g., the remainder of the frame).
- the bootstrap comprises a number of symbols, beginning with a synchronization symbol positioned at the start of each frame period to enable signal discovery, coarse synchronization, frequency offset estimation, and initial channel estimation.
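Because the synchronization symbol is known to all receivers, signal discovery and coarse synchronization can be framed as sliding-correlation against that known sequence. The toy sequence and noiseless samples below stand in for the actual bootstrap sync symbol, which is a defined OFDM waveform rather than a short bipolar pattern.

```python
# Coarse synchronization sketch: slide a known synchronization sequence over
# received samples and pick the offset with the highest correlation. The
# sequence here is a toy stand-in for the actual bootstrap sync symbol.
SYNC = [1, -1, 1, 1, -1]

def coarse_sync(samples, sync=SYNC):
    """Return the sample offset where the sync sequence best matches."""
    best_offset, best_score = 0, float("-inf")
    for off in range(len(samples) - len(sync) + 1):
        score = sum(s * x for s, x in zip(sync, samples[off:]))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset

# Noise-free illustration: the sync pattern embedded at offset 3.
rx = [0, 0, 0] + SYNC + [0, 0]
offset = coarse_sync(rx)
```

Having located the frame start, the receiver can proceed to frequency offset estimation and initial channel estimation as described above.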
- the functionality for receiving, deciphering, and utilizing the broadcast transmission may be arranged in any number of ways in a broadcast access terminal or system encompassing broadcast access terminal functionality.
- such functionality is depicted as residing in a single apparatus, BAT 104.
- fifteen components are shown as part of BAT 104 in FIG. 2; each is optional, and thus any suitable combination of them is contemplated herein.
- Transmission 108 is received at one or more antennas 231, and amplification and filtering of the transmission 108 signals may be performed by circuitry at component 232.
- a decoding unit, such as component 233, may then further process transmission 108 signals and perform correction by detecting and filling gaps in channel information, e.g., via error correction or diversity techniques.
- Extraction component 234 may extract individual media and data channels from an output of component 232. Fragmentation component 237 may reach out to the Internet (e.g., network 106) or to another network (e.g., 292, 294, or 296) to locate or request missing pieces of information.
- These other networks may be a local area network, for example, or an ad-hoc network of devices arranged to share broadcast information, such as a mesh network of BAT devices.
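The collaborative recovery described above can be sketched as a priority-ordered fallback: a BAT consults its own broadcast-fed cache first, then mesh peers, then a broadband backhaul. The source names, ordering, and dictionary stores are illustrative assumptions, not an actual API of the disclosure.

```python
# Hedged sketch of collaborative fragment recovery across fallback sources.
def recover_fragment(frag_id, sources):
    """Query fallback sources in priority order until one holds the fragment."""
    for name, store in sources:
        if frag_id in store:
            return name, store[frag_id]
    return None, None

# Illustrative source stores, keyed by fragment index.
sources = [
    ("local-cache", {0: b"f0"}),
    ("mesh-peer", {1: b"f1", 2: b"f2"}),
    ("broadband", {0: b"f0", 1: b"f1", 2: b"f2", 3: b"f3"}),
]
origin, data = recover_fragment(3, sources)
```

Preferring local and peer sources keeps backhaul traffic to the minimum needed to complete a large broadcast object.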
- data and media are shown as being shared among internal components of BAT 104 via a data bus, to which are connected a storage function 236 and a network (N/W) interface (I/F) function implemented via collaborative object delivery service 245.
- BAT 104 may function fully without any Internet or other backhaul network connection. Via a bootstrap or other information channel in the transmission, BAT 104 may receive all the information necessary to decode and interpret emissions 108, as well as to update its own operations and applications. Missing portions of channel information, for example, may be filled by monitoring transmission 108 for retransmissions.
- Corrected channel information may be used in a variety of ways. It may be stored in the storage function 236, which is, e.g., a digital media storage array.
- Corrected media information may be immediately shared via one or more local user networks 294 and 296, which may be, for example, Wi-Fi, Bluetooth, NFC, universal serial bus (USB), infrared, or another link to viewing devices, for example. Wire or fiber connections, not shown, may also be used.
- Streaming media from component 233 or NRT media drawn from storage unit 236 may be processed by re-encoding component 238 and repackaging component 239.
- media feeds may be translated from one format to another or resized, e.g., for display on handheld devices.
- storage 236 (e.g., for supporting BAT 104 as a whole and/or for implementing CDN PoP 242) may comprise at least 1 terabyte of space.
- BAT 104 may support, via component 243, the download and installation of applications, e.g., programs received via broadcast or other means, and related data that are of use for operation of BAT 104 or users of BAT 104.
- Applications 246 may, for example, augment media viewing, e.g., via video display overlays, or provide access to associated content, information, or other entertainment or useful functions.
- Upstream electronic storage 422 of FIG. 4D may be located at or near BTS 102, and downstream electronic storage 236 of FIG. 3A may be located at or near BAT 104.
- Upstream electronic storage 422 and downstream electronic storage 236 may each comprise electronic storage media that electronically stores information.
- the electronic storage media of storage 422 and/or storage 236 may comprise system storage that is provided integrally (e.g., substantially non-removable) with system 100 and/or removable storage that is removably connectable to system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
- Electronic storage 422 may be (in whole or in part) a separate component within system 100, or such storage may be provided (in whole or in part) integrally with one or more other components of system 100 (e.g., a user interface (UI) device 418, processors 420, etc.).
- electronic storage 236 may be (in whole or in part) a separate component within system 100, or such storage may be provided (in whole or in part) integrally with one or more other components of system 100 (e.g., a UI device 118, processors 230, etc.).
- storage 422 may be located in a computer together with processors 420, in a computer that is part of external resources 424, in UI devices 418, and/or in other locations.
- storage 236 may similarly be located in a computer together with processors 230, in a computer that is part of external resources 124, in UI devices 118, and/or in other locations.
- Each of storage 422 and storage 236 may comprise a memory controller and one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
- Storage 422 may store software algorithms, information obtained and/or determined by processors 420, information received via UI devices 418 and/or other external computing systems, information received from external resources 424, and/or other information that enables system 100 to function as described herein.
- storage 236 may store software algorithms, information obtained and/or determined by processors 230, information received via UI devices 118 and/or other external computing systems, information received from external resources 124, and/or other information that enables system 100 to function as described herein.
- Each of upstream external resources 424 and downstream external resources 124 may include sources of information (e.g., databases, websites, etc.), external entities participating with system 100, one or more servers outside of system 100, a network, electronic storage, equipment related to wireless fidelity (Wi-Fi) technology, equipment related to Bluetooth technology, data entry devices, a power supply (e.g., battery powered or line-power connected, such as directly to 110 volts AC or indirectly via AC/DC conversion), a transmit/receive element (e.g., an antenna configured to transmit and/or receive wireless signals), a network interface controller (NIC), a display controller, a graphics processing unit (GPU), and/or other resources.
- processors 230, external resources 124, UI device 118, electronic storage 236, and/or other components of system 100 may be configured to communicate with each other via wired and/or wireless connections, such as a network (e.g., a local area network (LAN), the Internet, a wide area network (WAN), a personal area network (PAN), a radio access network (RAN), a home area network (HAN), a campus network, a metropolitan network, an enterprise private network, a virtual private network (VPN), an internetwork, a backbone network (BBN), a global area network (GAN), an intranet, an extranet, an overlay network, etc.), near field communication (NFC), cellular telephony (e.g., global system for mobile communications (GSM), UMTS/HSPA, long term evolution (LTE), fifth and/or fourth generation (5G/4G) technologies), and the like.
- Each of upstream UI device(s) 418 and downstream UI device(s) 118 may be configured, in system 100, to provide an interface between one or more users and system 100.
- Each of UI upstream devices 418 and downstream UI devices 118 may be configured to provide information to and/or receive information from the respective one or more users.
- Each of UI devices 418 and UI devices 118 may include a UI and/or other components. Each of these UIs may be and/or include a graphical UI (GUI) configured to present views and/or fields configured to receive entry and/or selection with respect to particular functionality of system 100, and/or provide and/or receive other information.
- the UI of UI devices 418 may include a plurality of separate interfaces associated with processors 420 and/or other components of system 100.
- the UI of UI devices 118 may include a plurality of separate interfaces associated with processors 230 and/or other components of system 100.
- Examples of interface devices suitable for inclusion in UI device 418 and/or UI device 118 may include a touch screen, a keypad, touch sensitive and/or physical buttons, switches, a keyboard, knobs, levers, a display, speakers, a microphone, an indicator light, an audible alarm, a printer, and/or other interface devices.
- the present disclosure also contemplates that each of UI device 418 and UI devices 118 may include a removable storage interface.
- information may be loaded into UI device 418 and/or UI devices 118 from removable storage (e.g., a smart card, a flash drive, a removable disk) that enables users to customize the implementation.
- each of UI devices 418 and UI devices 118 may be configured to provide a UI, processing capabilities, databases, and/or electronic storage to system 100.
- UI devices 418 may include processors 420, electronic storage 422, external resources 424, and/or other components of system 100, and UI devices 118 may similarly include processors 230, electronic storage 236, external resources 124, and/or other components of system 100.
- each of UI devices 418 and UI devices 118 may be connected to a network (e.g., the Internet).
- UI devices 418 do not include processors 420, electronic storage 422, external resources 424, and/or other components of system 100, but instead communicate with these components via dedicated lines, a bus, a switch, a network, or other communication means.
- UI devices 118 do not include processors 230, electronic storage 236, external resources 124, and/or other components of system 100, but instead communicate with these components via dedicated lines, a bus, a switch, a network, or other communication means. The communication may be wireless or wired.
- each UI device 418 and/or UI device 118 may be a laptop, desktop computer, smartphone, tablet computer, and/or another UI device.
- Data and content may be exchanged between the various components of the system 100 through a communication interface and communication paths using any one of a number of communications protocols.
- data may be exchanged employing a protocol used for communicating data across a packet-switched internetwork using, for example, the Internet protocol suite, also referred to as TCP/IP.
- the data and content may be delivered using datagrams (or packets) from the source host to the destination host solely based on their addresses.
- the Internet protocol (IP) employed may be, for example, IP version 4 (IPv4) or IP version 6 (IPv6).
- each of processor(s) 420 and processor(s) 230 may form part of one or more user devices (e.g., in a same or separate housing), including a consumer electronics device, desktop computer, mobile phone, smartphone, personal data assistant (PDA), digital tablet/pad computer, cloud computing device, wearable device (e.g., watch), augmented reality (AR) goggles, virtual reality (VR) goggles, reflective display, personal computer, laptop computer, notebook computer, work station, server, high performance computer (HPC), vehicle (e.g., an embedded computer, such as in a dashboard or in front of a seated occupant of a car or plane), game or entertainment system, set top box, monitor, television (TV), smart TV, panel, spacecraft, digital video recorder (DVR), digital media receiver (DMR), internal tuner, satellite receiver, router, hub, switch, or any other device.
- Each UE 170 may, e.g., be one of these devices and/or communicate with BAT 104 via network 294 and/or 296.
- UE 170 and/or TV 103 may be a Roku device, GoogleTV, Apple TV, Fire TV, gaming system (e.g., Xbox 360, Xbox One, PS3, PS4, WiiU, etc.), another IP-based system, or another legacy device.
- Next generation TVs 103 may be purchased off the shelf or may be pre-modified to incorporate one or more components of BATs 104.
- each of processors 420 and processors 230 may be configured to provide information processing capabilities in system 100.
- processors 420 and processors 230 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although each of processors 420 and processors 230 is shown in FIGs. 4D and 3A, respectively, as a single entity, this is for illustrative purposes only. In some embodiments, processors 420 may comprise a plurality of processing units; these processing units may be physically located within the same device, or processors 420 may represent processing functionality of a plurality of devices operating in coordination (e.g., one or more computers, UI devices 418, devices that are part of external resources 424, electronic storage 422, and/or other devices).
- processors 230 may comprise a plurality of processing units; these processing units may be physically located within the same device, or processors 230 may represent processing functionality of a plurality of devices operating in coordination (e.g., one or more computers, UI devices 118, devices that are part of external resources 124, electronic storage 236, and/or other devices).
- each of processors 420 and 230 may be configured via machine-readable instructions to execute one or more computer program components.
- the computer program components of processor 420 may comprise one or more of STLTP extraction component 430, NRT extraction component 432, real-time (RT) yield evaluation component 434, RT injection component 436, MODCOD control component 438, and/or other components.
- processor 230 may comprise one or more of front-end/baseband component 233, extraction component 234, request handling component 235, fragmentation component 237, JIT repackaging component 238, JIT transcoding component 239, data services component 240 (e.g., including flash content handler 241, CDN PoP 242, application download and runtime 243, etc.), API services component 244, applications 246, and/or other components.
- processors 420 and 230 may be configured to execute their respective components through: software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on the respective processor.
- although components 430, 432, 434, 436, and 438 are illustrated in FIG. 4D as being co-located within a single processing unit, in embodiments in which processors 420 comprise multiple processing units, one or more of components 430, 432, 434, 436, and/or 438 may be located remotely from the other components.
- although components 233, 234, 235, 238, and 239 are illustrated in FIG. 3A as being co-located within a single processing unit, in embodiments in which processors 230 comprise multiple processing units, one or more of components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246 may be located remotely from the other components.
- each of processor components 233, 234, 235, 237, 238, 239, 240, 244, and 246 may comprise a separate and distinct set of processors.
- components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246 may provide more or less functionality than is described.
- processors 230 may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246.
- any or all of the systems, methods and processes described herein may be embodied in the form of computer executable instructions (e.g., program code) stored on a computer-readable storage medium which instructions, when executed by a machine, such as a BTS apparatus, BAT apparatus, or equipment associated with a BTS or BAT, perform and/or implement the systems, methods and processes described herein.
- a device may operate a web application in conjunction with a database.
- the web application may be hosted in a browser-controlled environment (e.g., a Java applet and/or the like), coded in a browser-supported language (e.g., JavaScript combined with a browser-rendered markup language (e.g., Hyper Text Markup Language (HTML) and/or the like)) and/or the like such that any computer running a common web browser (e.g., Internet Explorer™, Firefox™, Chrome™, Safari™, or the like) may render the application executable.
- a web-based service may be more beneficial due to the ubiquity of web browsers and the convenience of using a web browser as a client (e.g., thin client).
- aspects of the disclosure may be implemented in any type of mobile smartphone that is operated by any type of advanced mobile data processing and communication operating system, such as, e.g., an Apple™ iOS™ operating system, a Google™ Android™ operating system, a RIM™ Blackberry™ operating system, a Huawei™ Harmony OS, a Nokia™ Symbian™ operating system, a Microsoft™ Windows Mobile™ operating system, a Microsoft™ Windows Phone™ operating system, a Linux operating system, or the like.
- Techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
- the techniques may be implemented as a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, in a machine-readable storage medium, in a computer-readable storage device, or in a computer-readable storage medium for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
- a computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
- Method steps of the herein disclosed techniques may be performed by one or more programmable processors executing a computer program to perform functions of the techniques by operating on input data and generating output. Method steps may also be performed by, and apparatus of the techniques may be implemented as, special purpose logic circuitry, e.g., an FPGA or an application-specific integrated circuit (ASIC).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read-only memory, a random-access memory, or both.
- the essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from and/or transfer data to, one or more tangible mass storage devices for storing data, such as a solid-state medium, magnetic disk, magneto-optical disks, or optical disks.
- Information carriers suitable for embodying computer program instructions and data include all forms of volatile memory and/or non-volatile memory, including by way of example semiconductor memory devices, such as, EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the processor and memory may be supplemented by or incorporated in special purpose logic circuitry.
- the word “may” is used in a permissive sense, meaning “having the potential to,” rather than the mandatory sense, meaning “must”.
- the words “include,” “including,” and “includes” and the like mean “including, but not limited to”.
- the singular form of “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
- the term “number” shall mean one or an integer greater than one.
- HLS (HTTP live streaming) is a legacy media streaming protocol, e.g., delivered via the Internet, for supporting over-the-top (OTT) applications.
- Some embodiments may resolve this issue by repackaging and optionally transcoding one or more of the broadcast protocols (e.g., from MMT video and AC4 audio) to HLS.
- client devices including ATSC 3.0 receivers (e.g., BATs 104) may provide support to devices (e.g., legacy devices 170 of FIGs. 1 and 3A).
- BAT 104 may support live television viewing via JIT repackaging of new media formats, such as MMT and ROUTE/DASH, into legacy media formats such as HLS.
- HLS is designed to withstand unreliable network conditions without causing user- visible playback stalling.
- MPEG-TS is a standard digital container format (e.g., encapsulating packetized elementary streams) for broadcast transmission systems, such as DVB, ATSC 1, and Internet protocol television (IPTV).
- MPEG-TS includes error correction and synchronization patterns for unreliable network conditions.
- MPEG-TS supports multiple programs in a stream, e.g., including transmission and storage of audio, video, program information, and system information data.
- Transport streams differ from the similarly-named MPEG program stream in several important ways: program streams are designed for reasonably reliable media, such as discs (like DVDs), while transport streams are designed for less reliable transmission, namely terrestrial or satellite broadcast. Further, a transport stream may carry multiple programs.
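- The container concepts above can be made concrete with a short sketch. The following Python parses the fixed 4-byte header of a 188-byte MPEG-TS packet per the ISO/IEC 13818-1 field layout; the function and variable names are chosen here for illustration, not taken from the disclosure.

```python
# Sketch: parse the 4-byte header of a 188-byte MPEG-TS packet.
# Field layout per ISO/IEC 13818-1: sync_byte (8 bits), transport_error_indicator (1),
# payload_unit_start_indicator (1), transport_priority (1), PID (13),
# transport_scrambling_control (2), adaptation_field_control (2), continuity_counter (4).

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid MPEG-TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "payload_unit_start": bool(b1 & 0x40),
        "pid": ((b1 & 0x1F) << 8) | b2,          # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x03,
        "adaptation_field_control": (b3 >> 4) & 0x03,
        "continuity_counter": b3 & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with payload-only adaptation control.
pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
hdr = parse_ts_header(pkt)
```

The multi-program capability noted above comes from the PID field: a receiver demultiplexes a stream by routing packets to elementary streams according to their PIDs.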
- MMT specified as ISO/IEC 23008-1 (MPEG-H part 1), is a digital container standard that supports high efficiency video coding (HEVC) video (MPEG-H part 2) and IP transmission.
- MMT supports UHD TV, HTML5, multiplexing of various streaming components from different sources, simplified conversion between storage and streaming formats, multiple devices, hybrid delivery, and one or more quality of experience (QoE) and/or quality of service (QoS) levels.
- QoE may be a measure of the overall level of customer satisfaction with a vendor, and QoS may embody the notion that hardware and software characteristics may be measured, improved, and perhaps guaranteed.
- ROUTE is a protocol for transferring media over IP.
- ROUTE packets are sent via UDP for real-time or NRT transmission of packetized DASH segments.
- ROUTE defines a packet format, a source protocol (sending/receiving), a repair protocol, and sessions for metadata and object transmission.
- ROUTE/DASH includes an AV interface unit for receiving an HEVC/MPEG-H 3DA encoded stream, a DASH packetizer for generating DASH segments by analyzing input media, and a ROUTE packetizer for converting the generated DASH segments into ROUTE packets.
- Media files in the DASH-IF profile may be based on the ISO base media file format (ISO BMFF) and used as the delivery, media encapsulation, and synchronization format for both broadcast and broadband delivery.
- MPEG-DASH enables delivery of media content from such servers to HTTP clients, including content caching in view of changing network conditions and without causing stalling or re-buffering.
- DASH is audio/video codec agnostic.
- DASH implements adaptive bit rate (ABR) streaming, enabling high quality delivery of media content, e.g., from conventional HTTP web servers.
- MPEG-DASH works by breaking the content into a sequence of small HTTP-based file segments, each segment containing a short interval of playback time of content (e.g., audio, video, and data) that is potentially many hours in duration, such as a movie or the live broadcast of a sports or news event.
- implementations of DASH may comprise multimedia files being partitioned into segments and delivered to UE 170 using HTTP, with API support, DRM, captioning, experience criteria, and/or other complementary features.
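- As a minimal sketch of the segmentation idea described above (content broken into short HTTP-deliverable segments), the following Python builds a fixed-duration segment list; the URL template and field names are hypothetical, chosen only for illustration.

```python
# Sketch: split a media timeline into short fixed-duration segments, as in
# MPEG-DASH, and list them for HTTP retrieval. Durations are in seconds; the
# URL template is an assumed naming scheme, not one from the disclosure.

def build_segment_list(total_duration: float, segment_duration: float,
                       url_template: str = "seg_{index}.m4s") -> list:
    segments = []
    start, index = 0.0, 0
    while start < total_duration:
        dur = min(segment_duration, total_duration - start)
        segments.append({"url": url_template.format(index=index),
                         "start": start, "duration": dur})
        start += dur
        index += 1
    return segments

# A 10-second clip cut into 4-second segments yields 4 s + 4 s + 2 s.
segs = build_segment_list(10.0, 4.0)
```

Because each segment is an ordinary HTTP resource, a client can request the next segment at whichever bitrate variant suits current network conditions.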
- protocols may be, for example, used for: real-time streaming of broadcast media; efficient and robust delivery of file-based objects; support for fast service acquisition by receivers (fast channel change); support for hybrid (broadcast/broadband) services; highly efficient FEC; compatibility within the broadcast infrastructure using formats and delivery methods developed for (and in common use within) the Internet; support for DRM, content encryption, and security; support for service definitions in which all components of the service are delivered via the broadband path (e.g., where acquisition of such services may require access to the signaling delivered in the broadcast); signaling to support state-of-the-art audio and video codecs; NRT delivery of media content; non-multiplexed delivery of service components (e.g., video and audio in separate streams); support for adaptive streaming on broadband-delivered streaming content; and/or appropriate linkage to application-layer features such as ESG and interactive content.
- BAT 104 may support one or more legacy devices 170 (e.g., television, smartphone, laptop, etc.) by wirelessly obtaining ATSC 3.0 emissions and then repackaging (and optionally transcoding) this data for HLS playback of audio and video or for another protocol, such as the H.264/MPEG-4 advanced video coding (AVC) standard.
- This re-translatable ATSC 3.0 data may comprise MMT, ROUTE/DASH, reliable internet streaming transport (RIST), and/or secure reliable transport (SRT) data.
- BAT 104 may perform demodulation, error correction, decompression, AV synchronization, and media reformatting to match display parameters (e.g., interlacing, aspect ratio conversion, frame rate conversion, image scaling, etc.). BAT 104 may further perform MPEG-TS demultiplexing, e.g., when obtaining legacy ATSC 1 emissions. The repackaging and transcoding activities of BAT 104 may be based on a set of attributes of legacy device 170, including color gamut, audio codec, video codec, high dynamic range (HDR) video, standard dynamic range (SDR) video, interlacing, segmentation, compression, and/or jitter. More particularly, request handling component 235 may obtain a request from each legacy device 170.
- This request may be generated based on this set of attributes of the legacy device.
- Use of HDR may describe an ability to display a wider and richer range of colors (e.g., much brighter whites, and much deeper, darker blacks) and to give the TV picture a more dynamic look. For example, such use may provide greater bit depth, luminance, and color volume than SDR video, which uses a conventional gamma curve and lower resolutions.
- for SDR video, a transmitter may encode that service at a lower frame rate, reserving encoding resources (bits) for the encoding of another service in the broadcast stream (or another component of the same service, such as audio).
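- As a generic illustration of the HDR-to-SDR conversion discussed in this context, the following sketch applies a simple Reinhard-style tone-mapping operator followed by a conventional gamma curve. This is a textbook operator chosen for illustration, not the specific mapping used by BAT 104.

```python
# Sketch: map normalized linear HDR luminance into SDR range with a simple
# Reinhard-style operator, then apply a conventional gamma curve. A generic
# illustration of HDR-to-SDR tone mapping, not a particular standard's curve.

def tone_map_hdr_to_sdr(luminance: float, gamma: float = 2.2) -> float:
    """luminance: linear HDR value >= 0; returns gamma-encoded SDR in [0, 1)."""
    compressed = luminance / (1.0 + luminance)   # Reinhard: compress highlights
    return compressed ** (1.0 / gamma)           # conventional SDR gamma encode

# Bright HDR highlights compress toward (but never reach) SDR white.
mid = tone_map_hdr_to_sdr(1.0)     # mid-level input
bright = tone_map_hdr_to_sdr(9.0)  # strong highlight
```

The operator preserves relative brightness ordering while fitting the HDR range into the narrower SDR gamut described above.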
- BAT 104 may implement a set top box (STB), a networking gateway, a dongle, or another device.
- BAT 104 may not operate some software functionality disclosed herein, since this functionality may instead be embedded as a software application in legacy device 170 to which the dongle is coupled or in a hardware device of BAT 104.
- the demodulation of ATSC emissions may be performed by software (e.g., front end and baseband component 233) via a software-defined radio implementation or by a hardware demodulator.
- This radio may be implemented (e.g., on an IC that consumes 0.1 Watts) in a same or different enclosure as processors 230, and it may implement both a receiver operable to receive traffic from BTS 102 and a transmitter operable to transmit supplemental traffic to a set of peers in a regional subset of the broadcast (292).
- any functionality of software components 233, 234, 235, 238, and/or 239 may be implemented in legacy device 170.
- radio frequency (RF) and intermediate frequency (IF) component 232 may be implemented in hardware and/or with programmable logic.
- RF and IF component 232 may perform passband processing such that an RF signal is mixed to an IF and then to baseband.
- This component may comprise radio frequency (RF) amplifier and filter 140, digital-to-analog (D/A) converter 150, and analog-to-digital (A/D) converter 152, which are, for example, depicted in FIG. 3A.
- A/D converter 152 may be used for the forward link (e.g., from BTS 102 or from BAT 104-2 to BAT 104-1), whereas D/A converter 150 may be used for the return link (e.g., from BAT 104-1 to BTS 102 or to BAT 104-2).
- BAT 104 may implement an environment comprised of a standard World Wide Web Consortium (W3C) user agent with certain characteristics, a WebSocket interface for obtaining information from the receiver and controlling various receiver functionality, and an HTTP interface for accessing files or interactive content delivered over emissions 108.
- a substantial majority of emissions 108 may comprise forward link data, as, for example, depicted by the solid arrows of FIGs. 1-2 having differing widths, but emissions 108 may additionally comprise a relatively small bandwidth of return link data, as, for example, depicted by the dotted arrow of the same FIGs.
- front-end / baseband component 233 may perform MODCOD activity, e.g., for return-link emission 108 of data into local network 292 (e.g., via the DRC).
- RF and IF component 232 may, for example, perform baseband processing such that a digital signal converted to analog by converter 150 is mixed to an IF and then to RF, while being amplified by amplifier 140.
- the digital baseband signals may then, for example, be provided to front-end / baseband component 233 for further processing.
- front end and baseband component 233 may demodulate content received OTA, which may comprise audio, video, and/or data.
- Front end and baseband component 233 may thus obtain, via a plurality of different physical layer pipes (PLPs) of ATSC emissions 108, a plurality of different sets of content.
- each PLP may obtain one or more distinct sets of content.
- Legacy devices 170 may each request (e.g., via the legacy request signaling depicted in FIG. 3B) at least some of this content. These requests brought down to baseband may be digitized by A/D converter 152.
- the request may be passed to processor 230 via a direct, wired connection. That is, although antenna 160 is, for example, depicted in FIG. 3A interconnecting processor 230 with UE 170, this is not intended to be limiting, as this interconnection is further contemplated as being a wired connection.
- output data from the disclosed repackaging and/or transcoding may be emitted in a wired protocol (e.g., Ethernet, USB, etc.) to legacy device 170.
- front-end / baseband component 233 of BAT 104 may perform a demodulation and a decoding of modulated and encoded data (e.g., emitted from BTS 102 or another set of antennas).
- front-end / baseband component 233 may perform digital channelization and sample rate conversion.
- front-end / baseband component 233 may perform functionality that includes data control, ECC, and/or cryptography (e.g., encryption, decryption, key handling, etc.). In these embodiments, this component may further perform modulation and demodulation. In these or other embodiments, at least some of this functionality is performed using an embedded field programmable gate array (FPGA) or via digital signal processing (DSP).
- extraction component 234 may select or extract at least one set of received forward-link content based on a request obtained from legacy UE 170. Such requests and/or other information from UE 170 may be obtained from network 294 using antenna 160.
- each of RF amplifier and filter 140 and RF amplifier and filter 142 may comprise an amplifier that differently amplifies low-power signals (e.g., without significantly degrading a signal-to-noise ratio (SNR)).
- each of RF amplifier and filter 140 and RF amplifier and filter 142 may comprise a bandpass filter (BPF) and one or more low-pass filters (LPFs) for signals being input at BAT 104.
- Each of these components may further comprise a mixer, e.g., one which operates from an output of a phase-locked loop (PLL) / voltage-controlled oscillator (VCO).
- antennas 231 and 160 may each be configured to transmit and/or receive radio waves in all horizontal directions (e.g., as an omnidirectional antenna) or in a particular direction (e.g., as a directional, beam antenna). As such, antennas 231 and 160 may each, for example, include one or more components, which serve to direct the radio waves into a beam or other desired radiation pattern.
- although FIG. 2 depicts antenna 231 being within BAT 104, this is not intended to be limiting, as this or another antenna used by this BAT may be installed elsewhere (e.g., on a roof of a house).
- the mobile application may be the source (e.g., via the request) of capabilities for a subsequent media essence conversion.
- the mobile device may not support the AC-4 audio codec, but BAT 104 may be able to provide a real time media essence conversion (e.g., real-time transcoding and multiplexing) to a compatible audio format that is specified by the mobile application, such as AAC audio.
- mobile UE 170 may not support HEVC Main-10 video essences (10-bit video samples), and it may only support H.264 hardware-based video decoding.
- BAT 104 may perform an essence conversion (e.g., real-time transcoding, tone mapping from HDR-to-SDR, and multiplexing) that would allow for optimal media playback.
- the local network (e.g., Wi-Fi) bandwidth may be adjusted to maximize the visual bitrate (or other variables, such as screen resolution/orientation) to accommodate a less effective media codec needed for device rendition.
- BAT 104 may optionally perform JIT transcoding via component 239, which may include formatting changes of content, such as a down-stepping to a mobile environment that is operably coupled to the device.
- Transcoding is the direct digital-to-digital conversion of one encoding to another, such as for multimedia or other data.
- BAT 104 may thus implement JIT transcoding such that a legacy display device may display broadcast or multicast data in a seamless fashion for on demand or real-time needs.
- JIT repackaging component 238 and/or JIT transcoding component 239 may adapt playback characteristics such that UE 170 is ensured the ability to properly unpackage and/or decode emissions.
- these component(s) may adjust a color gamut, whether HDR or SDR is used, an audio codec format, a video codec format, etc., such that at least an HD rendition is met or exceeded.
- a resolution may be adjusted.
- JIT repackaging component 238 and/or JIT transcoding component 239 may be configured to JIT-generate, from a single master essence, one of a plurality of different renditions or permutations, the generation of each of which being selectable for potential compatibility with an actively requesting one of a widely varying set of UE 170.
- Request handling component 235 may thus potentially generate such a manifest (e.g., for HLS) that identifies and/or describes attributes of the plurality of different renditions or permutations for a subsequent resolving into an encoding parameter set.
- JIT transcoding component 239 may perform a JIT encode to prepare the derivatives as needed for that specific device’s proper playback.
- JIT repackaging component 238 and/or JIT transcoding component 239 may thus configure a set of mechanisms and corresponding processes for a device’s needs by performing a selection based on the device’s characteristics for playback. As such, fewer or no renditions are generated until UE 170 requests one or more of them, which reduces the processor and storage resource load.
- request handling component 235 may make the selection with previously obtained information that describes UE 170. In other embodiments, request handling component 235 may obtain (e.g., from UE 170) the selection as part of the request.
- the generated master manifest may provide a possible rendition or permutation set (e.g., which may include attributes or parameters) such that UE 170 is operable to pick one or more that is, among the options, most relevant for its capability. For example, the master manifest may provide a dozen different variations (e.g., with a dozen different unique resource identifiers). In this or another example, UE 170 may specify which parameters or attributes it supports such that the selection is made for those resources.
- That activity of requesting those resources may then trigger BAT 104, operating as a home gateway, to prepare one variant to be consumed by UE 170.
- the set of mechanisms and corresponding processes may accordingly be determined without having to generate other sets of mechanisms and processes for another set of UE different from UE 170.
- BAT 104 thus makes malleable the ATSC 3.0 broadcast by allowing for matching of the UE’s technological capability into a JIT-prepared essence.
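- The selection flow just described (master manifest, UE capability match, one JIT-prepared variant) can be sketched as follows; the rendition and capability field names are hypothetical stand-ins for manifest attributes, not identifiers from the disclosure.

```python
# Sketch: pick, from a master-manifest-like list of renditions, the entry most
# relevant to a UE's declared capabilities. Fields are illustrative only.

renditions = [
    {"uri": "a.m3u8", "video_codec": "hevc", "audio_codec": "ac-4", "height": 2160},
    {"uri": "b.m3u8", "video_codec": "h264", "audio_codec": "aac",  "height": 1080},
    {"uri": "c.m3u8", "video_codec": "h264", "audio_codec": "aac",  "height": 480},
]

def select_rendition(renditions, ue_caps):
    """Keep renditions the UE can decode, then prefer the highest resolution."""
    compatible = [r for r in renditions
                  if r["video_codec"] in ue_caps["video_codecs"]
                  and r["audio_codec"] in ue_caps["audio_codecs"]]
    if not compatible:
        return None  # no direct match; a JIT essence conversion would be needed
    return max(compatible, key=lambda r: r["height"])

# A legacy UE that only decodes H.264/AAC gets the 1080p variant.
choice = select_rendition(renditions, {"video_codecs": {"h264"},
                                       "audio_codecs": {"aac"}})
```

Only the chosen variant then needs to be prepared, consistent with generating fewer or no renditions until requested.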
- JIT transcoding component 239 may determine the set of mechanisms and corresponding processes by giving precedence to hardware-accelerated decoding support. By supplying a preferentially ordered list to BAT 104, this gateway may then, e.g., determine which relevant encoder codecs it may produce that would best align with UE 170. As such, from a fixed set of input requirements and a derived set of receiver capabilities, processor 230 may determine the optimal adaptation and transposition function for that media essence conversion.
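- A minimal sketch of that preference-ordered matching, with codec identifiers assumed for illustration:

```python
# Sketch: given a UE's preferentially ordered decoder list and the encoder
# codecs the gateway can produce, return the first match. The identifiers are
# illustrative; the disclosure leaves the exact ordering policy open.

def best_codec(ue_preference_order, gateway_encoders):
    """First UE-preferred codec (e.g., hardware-accelerated first) the gateway supports."""
    for codec in ue_preference_order:
        if codec in gateway_encoders:
            return codec
    return None

# UE prefers hardware H.264 over software HEVC; gateway can produce both.
chosen = best_codec(["h264_hw", "hevc_sw"], {"hevc_sw", "h264_hw", "mpeg2"})
```

Putting hardware-accelerated entries first in the UE's list realizes the precedence described above.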
- the JIT generation of the set of mechanisms and corresponding processes may comprise a determination of a suitable (e.g., optimal) package classifier, label, profile template, or set of configurations (e.g., involving transcoding, repackaging, and/or trans-multiplexing) such that UE 170 is JIT- provided a unique essence or payload.
- Such identifying fingerprint or representation of capabilities of each local UE 170 may be used, e.g., in the JIT provisioning, without needing to generate all possible capabilities’ permutations of all possible UE 170 and/or recreate a set of capabilities for other devices requiring a same set of capabilities.
- components 238 and/or 239 may generate, store, search, and/or provide a one-to-many (e.g., which scales with growth in an amount of UEs) capabilities’ assignment for outputted JIT service(s).
- the request or a response to a discovery call may include a hint or list of supported capabilities or an adaptation set, e.g., sorted by receiver preference.
- components 238 and/or 239 may compute a permutation and fulfill service delivery of that permutation based on the UE (e.g., of network 294 or 296).
- JIT repackaging component 238 and/or JIT transcoding component 239 may determine the set of mechanisms and corresponding processes based on a computation cost (e.g., based on hardware-accelerated encoding being supported), a current encoding utilization load level, target frame criteria dimensions (e.g., vertical video would most likely be in SD down-sampled, while horizontal video would be an HD or Full HD rendition), licensing cost (e.g., for a specific codec or decoder), and/or another such cost for performing the translation.
- component 238 may perform the determination based on a type of local network 294 (e.g., Wi-Fi or another suitable network).
- Components 238 and/or 239 may similarly determine these translation mechanisms and corresponding processes based on (i) an amount of available bandwidth between BAT 104 and UE 170, (ii) whether the outputting is performed over a specific type of Wi-Fi connection (e.g., 802.11n versus 802.11ac), (iii) another attribute of network 294, or (iv) the existence of a specific UE capability (e.g., HDR).
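- The cost-based determination described above can be sketched as a simple scoring function over candidate configurations; the weights and fields here are illustrative assumptions, not values from the disclosure.

```python
# Sketch: score candidate translation configurations by the cost factors the
# text lists (computation cost, current encoder load, licensing cost), then
# pick the cheapest. Weights and candidate fields are illustrative only.

def cheapest_config(candidates, load_level: float):
    """Lower score is better; load penalizes software (non-accelerated) encodes."""
    def score(c):
        compute = 1.0 if c["hw_accelerated"] else 3.0 * (1.0 + load_level)
        return compute + c["licensing_cost"]
    return min(candidates, key=score)

candidates = [
    {"name": "hevc_sw", "hw_accelerated": False, "licensing_cost": 2.0},
    {"name": "h264_hw", "hw_accelerated": True,  "licensing_cost": 0.5},
]
best = cheapest_config(candidates, load_level=0.8)
```

Additional factors from the text (target frame dimensions, network type) could be folded into the same score without changing the structure.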
- JIT transcoding component 239 may obtain an output of JIT component 238 and perform transcoding of that data.
- This transcoding may include obtaining the manifest file that specifies the location of the media and its format.
- the manifest may be a text file that serves as a directory for the content renditions and includes parameters of the repackaging and/or re-encoding for each file.
- the manifest may further include references to other associated, complementary files.
- Request handling component 235 may select, based on the set of attributes of the legacy request, playback characteristics for legacy device 170 from among such master manifest optionally received in the ATSC emissions. Request handling component 235 may determine an encoding parameter set based on these playback characteristics.
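- A manifest of the kind described (a text-file directory of renditions and their parameters) can be sketched in the HLS master-playlist style; the bandwidths, resolutions, codec strings, and URIs below are placeholders for illustration.

```python
# Sketch: emit a minimal HLS master-playlist-style text manifest that lists
# renditions and their attributes. Values are placeholders, not real content.

def build_master_manifest(variants):
    lines = ["#EXTM3U"]
    for v in variants:
        lines.append('#EXT-X-STREAM-INF:BANDWIDTH={bw},RESOLUTION={res},CODECS="{codecs}"'
                     .format(bw=v["bandwidth"], res=v["resolution"], codecs=v["codecs"]))
        lines.append(v["uri"])
    return "\n".join(lines) + "\n"

manifest = build_master_manifest([
    {"bandwidth": 5_000_000, "resolution": "1920x1080",
     "codecs": "avc1.640028,mp4a.40.2", "uri": "hd/playlist.m3u8"},
    {"bandwidth": 1_000_000, "resolution": "854x480",
     "codecs": "avc1.64001E,mp4a.40.2", "uri": "sd/playlist.m3u8"},
])
```

Each attribute line advertises one rendition; a UE resolves its request into an encoding parameter set by picking a listed variant it supports.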
- System 100 may implement MPD and/or MPU payload delivery.
- An MPD or manifest may be a playlist of video and audio segments comprising control mechanisms of DASH-encoded content.
- the MPD may contain timing information and have one or more periods, each period describing a time span and related media files for that time span.
- Alternative content may, e.g., be selected by replacing a period with an alternative period referencing different content segments.
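- The period-replacement mechanism can be sketched with a simplified in-memory MPD model (a stand-in for real MPD XML); the period IDs, timings, and segment paths are illustrative assumptions.

```python
# Sketch: model an MPD as an ordered list of periods and point one period at
# alternative content of the same time span, as described above.

mpd = {"periods": [
    {"id": "p1", "start": 0,   "duration": 300, "segments": "main_a/"},
    {"id": "p2", "start": 300, "duration": 60,  "segments": "ad_break/"},
    {"id": "p3", "start": 360, "duration": 300, "segments": "main_b/"},
]}

def replace_period(mpd, period_id, new_segments):
    """Swap in alternative content segments without touching the timeline."""
    for period in mpd["periods"]:
        if period["id"] == period_id:
            period["segments"] = new_segments
            return True
    return False

# Substitute a targeted ad for the generic ad break; timing stays intact.
ok = replace_period(mpd, "p2", "targeted_ad/")
```

Because the period's start and duration are untouched, playback continuity across the substitution is preserved.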
- An MPU may be an MMT construct that encapsulates a single ISO BMFF file, e.g., containing timed content media such as audio or video.
- Emissions 108 may comprise a continuous stream of MPUs delivered to receivers (e.g., one or more BATs 104) for rendering.
- encoder constitution of an ISOBMFF fragment may be the correct model for ROUTE delivery and may restrict the design objective of MMT with regard to optimal media fragment unit (MFU) transmission latency in non-uniform GOP scenarios, including SHVC, where base and enhancement layers may be delivered at spatial, temporal, and closed or open long-GOP for optimal codec efficiency.
- the utilization of a complete MPU (e.g., an ISO BMFF box including trun data) from an encoder to a packager, rather than fragmented MFU emission for MMT delivery into the signaler and scheduler, may prevent adoption of two more critical ATSC 3.0 use cases.
- SCTE-35 signal messages may be present over the logical MPU, but may be close to (or preceding) the MFU emission from the packager. This enables compatibility of both SCTE-35 messages in which the presentation time stamp (PTS) execution timestamp is not provided, and where the PTS execution is provided and multiplexed-in as a pre-roll or multiple times in a single GOP. A splice may then be performed.
- an MMT hint track may provide the information to convert encapsulated MPU to MMTP payloads and MMTP packets.
- JIT component 238 may repackage content for HLS or another protocol (e.g., from MMT, ROUTE/DASH, or MPEG-TS), the HLS or other protocol being used for legacy support.
- FIG. 3B illustrates an aspect of this repackaging, since ATSC emissions may be obtained and forwarded to a particular pipeline before final emission to legacy device 170.
- the JIT repackaging of component 238 may include trans-multiplexing, trans-muxing, or re-encoding.
- JIT component 238 may implement an MPEG-TS to HLS JIT converter, an MMT to HLS JIT converter, a ROUTE/DASH to HLS JIT converter, or another converter from emissions 108 to UE 170.
- JIT repackaging component 238 may, for example, implement a separate pipeline for each of the conversions.
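The per-conversion pipelines could be organized as a simple registry, as in the following Python sketch. The pipeline step names and the registry shape are illustrative assumptions; they are not the actual interfaces of JIT repackaging component 238.

```python
# Hypothetical registry mapping (input transport, output protocol) to a
# JIT conversion pipeline; step names are illustrative placeholders.
PIPELINES = {
    ("MPEG-TS", "HLS"): ["demux-ts", "segment", "write-m3u8"],
    ("MMT", "HLS"): ["depacketize-mmtp", "remux-fmp4", "segment", "write-m3u8"],
    ("ROUTE/DASH", "HLS"): ["extract-segments", "rewrite-manifest"],
}

def select_pipeline(input_transport, output_protocol="HLS"):
    """Return the separate pipeline for a given conversion, or None
    when no dedicated pipeline exists for that input transport."""
    return PIPELINES.get((input_transport, output_protocol))

steps = select_pipeline("MMT")
```

Keeping one pipeline per conversion lets each input transport be handled by steps tuned to its encapsulation, while the request path only needs to pick the right key.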
- JIT repackaging component 238 may include stream segmentation into files such that BAT 104 serves as a CDN for delivering the segments to a legacy device. JIT repackaging component 238 may obtain encrypted data, decrypt it, repackage it, and then re-encrypt it before emitting to legacy device 170 via antenna 160 or via a wired interface.
- JIT repackaging of component 238 may operate based on another request obtained from another legacy device (e.g., 170-n, n being any natural number) of a type different from a type of legacy device 170 (e.g., 170-1) that sent a previous request.
- JIT component 239 may optionally transcode for HLS based on the legacy device having a compatible or matching decoder set for the ATSC emissions (e.g., from MMT, ROUTE/DASH, RIST, SRT, or MPEG-TS).
- the JIT transcoding may be performed by component 239 before or after component 238 performs JIT repackaging. That is, in implementations where both JIT transcoding and JIT repackaging are required, an order of performing these functionalities may depend on which device made the request.
- request handling component 235 may implement a decisioning touchpoint on the request to make a determination as to what the appropriate output delivery characteristics would be for that device.
- JIT component 239 may perform the herein disclosed functionality, while supporting processing power and a link speed achieved by legacy device 170.
- JIT component 239 may perform transcoding based on HEVC, scalable video coding extensions of HEVC (SHVC), or another suitable standard.
- the JIT transcoding of component 239 may provide an additional layer of compatibility for legacy device 170 (e.g., including resource-poor devices).
- the disclosed transcoding may optionally be performed via hardware and/or software to adjust formats, codecs, resolutions, and/or video parameters.
- the optional transcoding may be performed from a native format to an intermediate codec without losing quality.
- JIT component 239 may transcode to proxy files.
- the re-encoding performed by component 239 may be based on processing power and a link speed achieved by the legacy device.
- JIT transcoding component 239 may obtain audio data encoded using AC-4 or MPEG-H 3D audio. The JIT transcoding may thus re-encode this audio to AC-3, EC-3, AAC, MP3, or another suitable codec for the legacy device supporting a specific delivery protocol, such as HLS. Similarly, JIT transcoding component 239 may obtain video encoded using HEVC. The JIT transcoding may thus re-encode this video to AVC (H.264) or another suitable codec for the legacy device supporting HLS.
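The codec fallback described above can be sketched as a small mapping. The fallback tables and function below are illustrative assumptions consistent with the codecs named in this section, not the actual logic of component 239.

```python
# Illustrative codec remapping for legacy HLS delivery; the fallback
# targets (AAC, AVC) are hypothetical lowest-common-denominator choices.
AUDIO_FALLBACK = {"AC-4": "AAC", "MPEG-H": "AAC", "AC-3": "AC-3", "AAC": "AAC"}
VIDEO_FALLBACK = {"HEVC": "AVC", "SHVC": "AVC", "AVC": "AVC"}

def target_codecs(audio_in, video_in, device_supports):
    """Pick output codecs: pass through when the legacy device already
    decodes the input codec, otherwise fall back to a widely supported one."""
    audio = audio_in if audio_in in device_supports else AUDIO_FALLBACK[audio_in]
    video = video_in if video_in in device_supports else VIDEO_FALLBACK[video_in]
    return audio, video

# A legacy HLS device that only decodes AVC video and AAC audio:
out = target_codecs("AC-4", "HEVC", device_supports={"AAC", "AVC"})
```

A device whose decoder set already matches the emission would pass through unchanged, in which case JIT repackaging alone suffices and no re-encode is triggered.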
- Video formats supported by components 238 and 239 of processor 230 may include Adobe HTTP dynamic streaming (HDS), Microsoft Silverlight smooth streaming (MSS), common media application format (CMAF), 3GPP's adaptation HTTP streaming (AHS), HTTP adaptive streaming (HAS), or another standard.
- BAT 104 may perform ABR via JIT transcoding component 239 when streaming HLS to more computationally capable legacy devices 170.
- JIT repackaging component 238 and JIT transcoding component 239 may perform translation, e.g., by configuring the aforementioned set of mechanisms and corresponding processes to adapt when features are not properly implemented or actually specified in underlying documentation that governs functionality of BAT 104 and/or UE 170.
- components or requirements may be ambiguous or lacking in definition; the translation between systems may nevertheless meet most of every need from any UE 170.
- a transport and/or encoding mechanism may be generated JIT to provide a rendition for UE 170 that does not meet the same universe of requirements that BTS 102 provides.
- this provision may include use of HEVC Main 10 for 10-bit resolution support and may require a hardware-based decoder to process the additional bits of data that may be rendered by the device.
- BAT 104 may thus mediate or adapt technology requirements, when interpretation or expectation of standards or specifications fails, by determining an interoperability matrix and then providing as needed (e.g., JIT) the ability to map the requirements from an input (e.g., forward emission of BTS 102) to the opportunistic capabilities of an orthogonal output (e.g., local emission to UE 170).
- This capability orthogonality may be due, e.g., to UE 170 implementing a set of decoding capabilities that are restricted usually by the hardware or by the manufacturer in their ability to decode broadcast (e.g., emissions 108) media essences.
- UE 170 may not be able to support one or more attributes of HEVC 8-bit, HEVC 10-bit, AV1, VP9, H.264, VC-5, MPEG5, or another codec, e.g., due to a relatively recent emergence as a standard.
- JIT transcoding component 239 may perform a service that eliminates need to re-encode all existing content of a publisher from origination and through distribution.
- component 239 may cause a reduction in bandwidth consumption and/or video decoding computation resources (e.g., using a different matrix of potential output formats), while monolithically creating support for content being emitted in forward transmission 108 that is in a newer transport medium (e.g., MMT) and encoding format (e.g., HEVC Main 10 for the video and MPEG-H or AC4 for the audio).
- the JIT transcoding and/or JIT repackaging of components 238 and/or 239, respectively, may include supporting different presentation periods and ad insertions, including when each is being streamed at different bit rates.
- D/A converter 152 may obtain and forward this data to RF amplifier 142 before antenna 160 finally emits the data to legacy device 170 such that it suitably presents multimedia (e.g., live, streaming content) and suitably receives other types of data (e.g., software application update downloads).
- Component 142 may further include a filter, when receiving requests and other signals from legacy devices 170.
- front-end / baseband component 233 may perform MODCOD activity, e.g., for local emission of data into networks 294, 296 (e.g., via known networking techniques).
- RF and IF component 232 may, for example, perform baseband processing such that a digital signal converted to analog by converter 152 is mixed to an IF and then to RF, while being amplified by amplifier 142.
- BAT 104 may implement a home gateway device that is interoperable with a traditional OTT device such as Roku, Apple TV, or Fire TV.
- ATSC 3.0 encoding characteristics like AVC video and AAC audio may be repackaged JIT.
- BAT 104 may still allow playback of ATSC 3.0 content for legacy devices. That is, JIT repackaging may be sufficient for devices that have a compatible decoder set, and a JIT re-transcode may be needed for compatibility for devices that do not have matching capabilities of what ATSC 3.0 requires.
- FIG. 3C illustrates method 180 for providing legacy data translation support, in accordance with one or more embodiments.
- Method 180 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 180 presented below are intended to be illustrative. In some embodiments, method 180 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 180 are illustrated in FIG. 3C and described below is not intended to be limiting.
- method 180 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 180 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 180.
- content may be extracted from ATSC 1 and/or ATSC 3.0 emissions.
- headers and other encapsulation may be stripped to identify and obtain content of a payload.
- operation 182 is performed by a processor component the same as or similar to extraction component 234 (shown in FIG. 3A and described herein).
- a content request generated based on a set of attributes of a legacy device may be obtained from the legacy device.
- a request from each of one or more UEs 170 may be obtained such that a set of mechanisms and corresponding processes may be determined for translating broadcasted emissions. This determination may be based on computation power necessary for performing the translation.
- the request may include information about the technological capabilities of the one or more requesting UEs, e.g., including (i) transport attributes of a network connecting BAT 104 and UE 170 and/or (ii) characteristics of the UE.
- operation 184 is performed by a processor component the same as or similar to request handling component 235 (shown in FIG. 3A and described herein).
- the set of mechanisms and processes may be JIT-generated by selecting (e.g., from among a manifest) playback characteristics based on the request.
- BAT 104 may perform this generation responsive to the request. In this or another example, BAT 104 may perform this generation based on a determination that the request indicates that UE 170 does not have a compatible or matching decoder set for emissions 108. BAT 104 may, for example, select, from among a master manifest, playback characteristics for UE 170 based on the information of the request. In implementations where the manifest is not received in emissions 108, BAT 104 may generate the manifest.
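The selection step can be sketched as follows. The rendition fields and the selection heuristic (highest bitrate within the device's decoder set and link speed) are assumptions for illustration; they are not the actual data model of BAT 104's master manifest.

```python
# Hypothetical master-manifest renditions; fields are illustrative.
RENDITIONS = [
    {"codec": "HEVC", "height": 1080, "bitrate": 6_000_000},
    {"codec": "AVC", "height": 720, "bitrate": 3_000_000},
    {"codec": "AVC", "height": 480, "bitrate": 1_200_000},
]

def select_rendition(renditions, device_codecs, link_bps):
    """Choose the highest-bitrate rendition the legacy device can decode
    and its link can sustain, per the request's attribute set."""
    candidates = [r for r in renditions
                  if r["codec"] in device_codecs and r["bitrate"] <= link_bps]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r["bitrate"])

# A legacy device that decodes only AVC, on a 4 Mbps link:
choice = select_rendition(RENDITIONS, device_codecs={"AVC"}, link_bps=4_000_000)
```

An empty result (no compatible rendition) corresponds to the case where JIT transcoding, rather than selection alone, would be needed.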
- operation 186 is performed by a processor component the same as or similar to request handling component 235 in combination with components 238 and/or 239 (shown in FIG. 3A and described herein).
- JIT-repackaging (e.g., to HLS) of the content may be performed based on the selection.
- BAT 104 may perform a transport translation and/or a packaging translation of emissions 108 such that UE 170 is operable to play back at least a portion of the emitted content.
- operation 188 is performed by a processor component the same as or similar to JIT repackaging component 238 (shown in FIG. 3A and described herein).
- JIT-transcoding may be performed, e.g., when a decoder of the requesting legacy device is incompatible with the ATSC emissions.
- BAT 104 may perform an encoding translation of emissions 108.
- operation 190 is performed by a processor component the same as or similar to JIT transcoding component 239 (shown in FIG. 3A and described herein).
- operations of BAT 104 may be performed in a manner that remedies an implemented divergence from a standard or protocol such that the determined set of mechanisms and corresponding processes adapts to the divergence.
- BAT 104 may support revisions to the substantially static plurality of input ATSC 3.0 requirements, which may occur about once per year.
- An OTA receiving apparatus may support a variety of in-home content viewing and storage systems, for example, via APIs connected to the apparatus via Wi-Fi or other network connections.
- BAT 104 may host API services 244 to support other devices in the same structure (e.g., home) in which BAT 104 is located, for example.
- APIs may be provided for devices such as other broadcast reception devices, televisions, computers, and mobile devices to access resources available to BAT 104, such as broadcast streams, stored media, data, and applications.
- API services 244 may be hosted and configured to include functionality, for example, that enables devices connected to BAT 104 to access files on BAT 104, e.g., by: obtaining a list of files with file identifications, e.g., in timestamp order; subscribing to file changes; and/or receiving notifications for file changes.
- API services 244 may be configured to include functionality such as implementing: file cache management; integrated testing; OTA API integration; channel polling; error handling; and/or conditional business logic.
- integrated testing may relate to opportunities for equipping APIs 244 with capabilities to help BAT 104 (and UE 170 in the proximity of the BAT) to perform a variety of autonomous operations, e.g., without having to back connect to a central station to achieve the functionality.
- integrated testing may refer to test procedures within the API or associated with the API to verify correct operation of the API and the equipment at either end, e.g., without merely assuming that the equipment is working and messages are getting through. While IP backchannels are possible, use of broadcast receiver equipment may minimize reliance on those back connections.
- error handling may relate to arming BAT 104 with everything possible to help it anticipate and resolve problems.
- BATs 104 may, e.g., gracefully degrade to a base state and may, e.g., obtain a firmware update over emissions 108 or via an IP-based backchannel.
- the APIs may include means for devices connected to BAT 104 to determine the OTA capabilities of BAT 104, obtain a list of channels available for live viewing, adjust a tuner operation of BAT 104, receive content metadata, and receive video data in a viewable form.
- APIs may be provided to support viewing of stored content, selectively incorporating advertisements or viewing enhancements, or accessing applications or data hosted on BAT 104.
- API services may be provided OTA, e.g., in a home Wi-Fi environment. For example, they may be provided as a device casting service at home or elsewhere. Implementations including home-casting may be performed via a client (e.g., of the home Wi-Fi environment).
- BAT 104 and/or UE 170 may provide OTT services via Wi-Fi.
- OTA services may be provided via API services components 244 (e.g., of FIG. 2) to a user at BAT 104 and/or to a user at UE 170. That is, a user may be suddenly provided a set of broadcast channels (e.g., 2, 5, and 6), each being selectable for consumption.
- OTA services may be, for example, facilitated to OTT platforms.
- BAT 104 may operate as a switch, hub, or router (e.g., DHCP server), when operating as a home gateway for network 294 or 296.
- BATs 104 when BATs 104 boot or start up, they may communicate (e.g., via the Internet or another network) with a central discovery server for becoming registered.
- An application on a legacy device may utilize OTA APIs 244 by initially calling the central discovery server to determine (e.g., with its local network ID) (i) whether a registered BAT 104 exists on the same local (e.g., Wi-Fi) network and (ii) the local IP address of this BAT. If the central discovery server determines that there is a registered home gateway 104 on network 294 (e.g., Wi-Fi), this server may pass back a local IP address of BAT 104; then, UE 170 may call that address to start making the calls to get to APIs 244.
- the UE may be able to interact with the BAT’s APIs without need of the remote server.
- gateway 104 may be discovered by an application running on UE 170, and available OTA channels may be displayed by the application (e.g., in a live section of a home page as individual cards), e.g., including logos of the channels and any available information about an event currently being broadcast.
- an application running on UE 170 may call the local IP of this BAT. More particularly, API 244-1 may inform this UE that home gateway 104 is powered-on, operable, and present. Knowing the availability of OTA services, the UE may then call API 244-2, which provides the list of services available in each of the 6 MHz bands, including the corresponding metadata of each service. That is, because there is a tuner there and because it has these streams coming into repackaging and transcoding components 238 and 239, respectively, the data may be converted to a legacy format (e.g., HLS).
- an API client of UE 170 may be returned a list of channels (e.g., ATSC 1 and ATSC 3.0 emissions) and their corresponding metadata.
- the application of UE 170 operating such API client may control the tuner of BAT 104, e.g., by setting the tuner to channel 6 such that the HLS stream arrives with the content of this channel.
- API 244-3 may be called for adjusting the tuner from one band or frequency to another; and API 244-4 may be called to provide an IP stream of data and/or video (e.g., from data obtained OTA at BAT 104 to a UTF-8-encoded MPEG audio layer 3 URL (M3U8) video playlist fed locally to UE 170).
- the list of channels provided from BAT 104 to UE 170 may include the M3U8 URL for each channel, but the M3U8s may not work until API 244-3 tunes into a channel; and then, that M3U8 may return with a manifest for the UE to play HLS video.
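The tune-before-play behavior can be sketched as follows. The class, method names, and URL are hypothetical illustrations of how APIs 244-2 through 244-4 might interact; they are not the actual BAT interfaces.

```python
# Sketch of tune-before-play: each channel's M3U8 URL is listed up
# front, but a manifest is only served once the tuner is set.
class BatTuner:
    def __init__(self, channels):
        self.channels = channels      # channel name -> M3U8 URL
        self.tuned = None

    def channel_list(self):
        return dict(self.channels)    # cf. directory API 244-2

    def tune(self, channel):          # cf. tuner API 244-3
        if channel in self.channels:
            self.tuned = channel

    def manifest(self, channel):      # cf. media-content API 244-4
        """Return the playable manifest URL only for the tuned channel."""
        return self.channels[channel] if channel == self.tuned else None

bat = BatTuner({"ch6": "http://bat.local/ch6.m3u8"})
before = bat.manifest("ch6")   # not tuned yet: no manifest returned
bat.tune("ch6")
after = bat.manifest("ch6")    # tuned: manifest is now available
```

This mirrors the described flow: the directory hands out URLs eagerly, but only the tuned channel's M3U8 resolves to a playable HLS manifest.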
- BAT 104 may implement two additional APIs for NRT retrieval, including one for retrieving a list of NRT files (including their IDs, in timestamp order) and another for retrieving an NRT file (file ID).
- BAT 104 may, for example, implement an API provider (e.g., operating as a server), whereas UE 170 may, for example, implement an API consumer (e.g., operating as a client).
- APIs simplify ways for developers to interact with other types of software, e.g., by abstracting the underlying implementation and only exposing objects or actions the developer needs. This abstraction may be performed through information hiding, whereby APIs 244, for example, enable modular programming that allows interface use independently of the implementation.
- each of APIs 244 may be provided contractually, e.g., with providers and consumers conforming to a standardized interface or format.
- a client program may make an API request to a data source or server, which responds to the request.
- BAT 104 may implement representational state transfer (REST), e.g., an architecture style based on a stateless client-server communications protocol (e.g., HTTP).
- a client may make a call to a server via HTTP (e.g., using an API with a get, put, or post for consuming, writing, or overwriting information, respectively, and/or another request, such as head, options, connect, delete, and patch).
- the REST style may define a set of constraints for creating web services that provide interoperability between online systems. RESTful web services allow the requesting systems to access and manipulate textual representations of web resources by using a uniform and predefined set of stateless operations.
- requests made to a resource's URI will elicit a response with hypertext links and/or a payload formatted in HTML, XML, JavaScript object notation (JSON), e.g., structured data organized according to key-value pairs, simple object access protocol (SOAP), or via another protocol for exchanging structured information in the implementation of RESTful web or online services.
- APIs 244 may operate as a messenger and REST allows HTTP requests to format those messages.
- RESTful web services may have modifiable components to meet changing needs (e.g., even while the application is running). Other non-functional properties of such implementation may include performance gains, scalability, simplicity, visibility, portability, and reliability.
- REST constraints may include a client-server architecture, statelessness, cache-ability, layered system, code on demand, and a uniform interface.
- UE 170 may implement a service usage data client, including service consumption data collection, storage, and transmission to a server over a broadband channel; and BAT 104 or another system may implement a service usage data server, including service provision either individually or in groups and client consumption data accessing.
- APIs 244 may, for example, provide a reliable and consistent experience for UEs 170 to use (e.g., without making developers of such devices re-implement functionality in UIs). These APIs may be implemented in BAT 104, or next-generation TVs 103 may be modified to support the functionality of these 4 APIs. Accordingly, TV content may be instantaneously delivered over an IP network. Although API services 244 are depicted as residing within BAT 104, they may alternatively be installed in next generation TVs 103. That is, gateway devices may be, for example, used to integrate OTA ATSC 3.0 broadcasts with the traditional broadband capability of the Internet. For example, viewers may extend their broadcast viewing from next generation TVs 103 to UEs 170 (e.g., mobile phones, tablets, PCs, etc.).
- APIs 244 may extend functionality of an application and provide security features (e.g., blocking access).
- data may be targeted to users by BAT 104 using different, conditional dimensions, such as time-frequency dimensions (e.g., being able to consume content at certain times and at certain channels), geographic dimensions (e.g., being only able to consume content if in a certain region or zip code), and/or entitlement dimensions (e.g., by having a particular channel subscription or by having a certain profession or job title for consuming confidential or encrypted content).
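The conditional dimensions above can be sketched as a rule evaluator. The rule schema (hours, regions, entitlements) and the example values are assumptions chosen to illustrate the three dimensions named in this section.

```python
# Sketch of conditional targeting across time-frequency, geographic,
# and entitlement dimensions; the rule schema is hypothetical.
def may_consume(rule, hour, region, entitlements):
    """Evaluate each dimension present in the rule; a dimension absent
    from the rule means 'no restriction' on that axis."""
    if "hours" in rule and hour not in rule["hours"]:
        return False
    if "regions" in rule and region not in rule["regions"]:
        return False
    if "entitlement" in rule and rule["entitlement"] not in entitlements:
        return False
    return True

# Content viewable 6-10 pm, only in one zip code, with a subscription:
rule = {"hours": range(18, 23), "regions": {"60601"}, "entitlement": "sports"}
ok = may_consume(rule, hour=20, region="60601", entitlements={"sports"})
blocked = may_consume(rule, hour=9, region="60601", entitlements={"sports"})
```

Whether this evaluation runs at the UE (with the gateway passing through encrypted data) or at the BAT itself matches the two placements described in the following bullets.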
- conditional logic may be implemented at UEs 170, e.g., with APIs 244 merely passing through encrypted data.
- the conditional logic may be implemented at the gateway (e.g., BAT 104).
- BAT 104 may implement a home-caster having a web service; navigating to that service via a browser or application 246 may facilitate watching a video.
- an integration may be performed with API services 244.
- Applications or services on UE 170 may similarly integrate, via BAT 104, for accessing emissions 108.
- VOD may be implemented as a function of broadcast application 246, e.g., with links to particular VODs.
- for a BAT 104 having an IP backchannel (e.g., via network 106 or network 294, 296), such a link may obtain the VOD asset.
- for a BAT 104 not having the IP backchannel, such a link may obtain the VOD asset via cache management of CDN 242. For example, if there is a uniform resource identifier (URI), BAT 104 may first attempt to obtain that particular content via CDN 242; otherwise, if the content is not locally stored at the CDN, BAT 104 may use the Internet.
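The cache-first retrieval just described can be sketched as follows. The stores and the URI scheme here are stand-ins; the function is an illustrative assumption, not the actual cache management of CDN 242.

```python
# Cache-first VOD retrieval sketch: try the local CDN PoP, then fall
# back to the Internet backchannel; both stores are hypothetical.
def fetch_vod(uri, cdn_cache, internet_fetch):
    """Return (asset, source): 'cdn' when the URI is locally cached,
    otherwise 'internet' via the IP backchannel fetcher."""
    if uri in cdn_cache:
        return cdn_cache[uri], "cdn"
    return internet_fetch(uri), "internet"

cache = {"vod://highlights": b"segment-data"}
hit = fetch_vod("vod://highlights", cache, lambda u: b"remote")
miss = fetch_vod("vod://other-show", cache, lambda u: b"remote")
```

Preferring the CDN PoP keeps playback working even when no IP backchannel is available, and saves broadband bandwidth when one is.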
- BAT 104 may implement the Wi-Fi protocols (e.g., IEEE 802.11 a, b, g, n, ac, ax, and/or another standard) for local area networking with UE 170.
- devices of network 294 or 296 may network through a wireless access point to each other as well as to wired devices and the Internet.
- BAT 104 may continually (e.g., every 10, 30, or 60 seconds) be performing a scan in the background.
- application(s) on UE 170 may use APIs 244, e.g., for (e.g., regularly) checking OTA services’ availability.
- Such application may be headless, the APIs, for example, being the heads.
- a user of UE 170 may think they are only using an application on UE 170, but the application may be using APIs 244 to get the different data it needs, including the video stream itself, from various sources within the UE 170 and/or from other devices which the UE 170 can access directly or via network connection.
- UE 170 may collect important information, such as cached copies of content and/or related metadata, forward-error-correcting symbols, etc., from various other devices to supplement what UE 170 receives from a first BAT.
- the various other devices may be other BATs, mobile devices, personal computers, gaming consoles, and/or consumer video content systems.
- APIs may be arranged for configured or ad hoc networks of devices to exchange requests and responses for content items, portions of content items, missing symbols, enhancement layers, and/or associated metadata.
- the UE 170 may receive data from a BAT in a home with which it is affiliated, from other devices in the home, from nearby devices, and/or, in the case of a mobile UE 170, from other devices and/or networks which the mobile UE 170 encounters as it travels.
- FIG. 9 illustrates an example method 900 for OTA API services delivery.
- Method 900 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 900 presented below are intended to be illustrative. Method 900 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 900 are illustrated in FIG. 9 and described below is not intended to be limiting.
- Method 900 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 900 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 900.
- the following operations may each be performed using a processor component the same as or similar to APIs 244 (shown in FIG. 2).
- a broadcast may be received.
- BAT 104 may obtain at least a portion of emissions 108.
- metadata may be part of emissions 108.
- metadata may be packaged in the channel information supplied by directory API 244-2.
- packets may be reconstructed based on missing data pieces via another network rebroadcast or local peer’s DRC.
- CDN PoP 242 may temporarily (e.g., in cache) and/or semi-permanently (e.g., in flash or other non-volatile random-access memory (NVRAM)) store the obtained packets.
- a payment for the content therein may be determined based on the storage duration.
- the OTA capabilities of the BAT may be determined, via an OTA capabilities query API.
- an application of UE 170 may call BAT 104 and be informed in a response by API 244-1.
- an application running on one of the one or more UEs may coordinate data reception via a set of the APIs.
- a list of channels available for live viewing may be emitted, via a broadcast content directory API.
- the application of UE 170 may call BAT 104 and be informed in a response by API 244-2.
- API service 244-2 may, for example, provide to UE 170 metadata about all the services (e.g., within a signal among a plurality of signals), e.g., by initially scanning such that the UE knows to what channel it may turn.
- a tuner operation may be adjusted, via a BAT tuner API.
- the application of UE 170 may control API 244-3 of BAT 104.
- tuner API 244-3 may facilitate a tuning to a federal communications commission (FCC) channel, e.g., a block of spectrum (e.g., at 578 MHz) rather than a single video or TV channel.
- metadata of emissions 108 may be obtained for accessing all the virtual channels (e.g., TV channel 4) of a 6 MHz band.
- video data may be filtered based on a region of intended recipient UE.
- a user of UE 170 may only be able to obtain Internet-like content OTA, when this device is bound to the same subnet or LAN, and thus to the same DMA or region that would otherwise receive the TV broadcast.
- BAT 104 may, for example, stream, via local Wi-Fi, a Chicago Cubs game to the user who is in Chicago.
- BAT 104 may discard content based on the BAT being outside of a relevant region.
- an output of the filter and corresponding metadata may be emitted, via a media-content delivery API, to the UE.
- the application of UE 170 may obtain and display streaming content obtained OTA via API 244-4.
- an accessibility conflict between tuners and channels requested by the UE may be resolved.
- the number of tuners needed for a requested channel may be greater than the number of existing tuners (e.g., which are presently supporting other, various channels).
- a receiver e.g., TV 103, BAT 104 with one tuner may be operable to only watch, at a time, the services available from one of the 6 MHz ATSC 3.0 channels.
- this channel may provide 10 standard definition (SD) video channels, and the receiver’s user(s) (e.g., of a household) may watch all 10.
- the receiver may allow the user(s) to tune into two 6 MHz channels and thus to any services of those two channels. If another person attempts to watch a third channel, the receiver may be configured to terminate some channel consumption for the people currently watching or may inform the other user that the channel cannot be changed.
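The tuner-contention policy above can be sketched as follows. The class and its grant/deny policy are illustrative assumptions: a request within an already-tuned 6 MHz channel always succeeds, a new channel succeeds only while a tuner is free, and anything beyond capacity is refused (rather than preempting a current viewer).

```python
# Tuner-contention sketch: services within a tuned 6 MHz RF channel
# share a tuner; a new RF channel needs a free physical tuner.
class TunerPool:
    def __init__(self, num_tuners):
        self.num_tuners = num_tuners
        self.tuned_rf = set()   # currently tuned 6 MHz RF channels

    def request_service(self, rf_channel):
        """Grant a service if its RF channel is already tuned or a
        free tuner remains; otherwise report the conflict."""
        if rf_channel in self.tuned_rf:
            return True
        if len(self.tuned_rf) < self.num_tuners:
            self.tuned_rf.add(rf_channel)
            return True
        return False

pool = TunerPool(num_tuners=2)
a = pool.request_service("RF-27")  # first 6 MHz channel: grant
b = pool.request_service("RF-27")  # another service, same channel: grant
c = pool.request_service("RF-33")  # second tuner: grant
d = pool.request_service("RF-41")  # third channel: conflict, refuse
```

A receiver could instead terminate an existing channel consumption on conflict; that alternative policy would replace the final `return False` with a preemption step.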
- a 480p stream received from the BTS may be provided to both the mobile device display and the HD display of the home user. This may be due to a relatively small group of pictures (GOP) length of the 480p stream, and therefore lower time to establish a first frame or viewing.
- each BAT may seek to optimize the viewing experience of each user by seeking to establish feeds of higher viewing quality, e.g., at higher resolution or frame rate.
- a multichannel transmission such as an ATSC 3.0 transmission, may include multiple streams at different resolutions or frame rates, etc., for the same content.
- the robustness of the streams may vary widely, e.g., based on the modulation, encoding, diversity, error correction, and other schemes applied to each stream. Further, transmission penetration qualities may differ for different users, e.g., for antennas located outdoors versus indoor locations.
- the 480p stream may be the best that is available for use by the mobile device.
- the home BAT checks whether a higher quality stream is available for the same content.
- the home system, being stationary and having a larger antenna, may be able to take advantage of less robust transmissions from the BTS that carry information for higher resolutions or frame rates. It may be that both 720p and 1080p streams are available, but too much of the 1080p stream is missing and unrepairable for it to be usable; the broadcast access terminal therefore reverts to showing the next highest quality stream available, e.g., the 720p stream.
- a BTS may provide separate streams at different viewing qualities, such as a 480p stream, a 720p stream, and a 1080p stream. Further, exploiting the myriad options for encoding streams on a multichannel broadcast system and for communicating details of necessary demodulation and decoding of each stream, streams may be divided into resolution components. For example, rather than sending separate streams at 480p, 720p, and 1080p resolution for given content, the broadcast transmission may include a first layer providing a 480p base stream, a second layer containing the differences between the base and 720p streams, and a third layer with the additive information to constitute a 1080p stream by combining its differences with the information in the first and second layers.
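The layered decomposition described above can be illustrated with a toy model in which a "rendition" is just a list of integer samples. This glosses over the fact that real layers differ in resolution and are produced by a video codec; it only shows the additive base-plus-deltas structure.

```python
# Toy sketch of splitting content into a base layer plus additive
# difference layers, and reconstructing a higher-quality rendition by
# summing layers (all values illustrative; real layers are codec output).

def make_layers(renditions):
    """renditions: same-length sample lists, lowest quality first.
    Returns [base, delta_1, delta_2, ...] whose partial sums reproduce
    each rendition."""
    layers = [list(renditions[0])]
    for prev, cur in zip(renditions, renditions[1:]):
        layers.append([c - p for p, c in zip(prev, cur)])
    return layers

def reconstruct(layers, upto):
    """Sum the base layer and the first `upto` enhancement layers."""
    out = list(layers[0])
    for layer in layers[1:upto + 1]:
        out = [o + d for o, d in zip(out, layer)]
    return out

r480, r720, r1080 = [10, 20], [14, 26], [15, 28]
layers = make_layers([r480, r720, r1080])
assert reconstruct(layers, 0) == r480    # base layer alone
assert reconstruct(layers, 1) == r720    # base + first delta layer
assert reconstruct(layers, 2) == r1080   # all three layers combined
```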
- the stream that is ultimately displayed may be the highest quality, e.g., at the highest resolution, highest frame rate, etc., that may be achieved by combining arriving streams of sufficient quality for a given media asset.
- progressive video enhancement may include improving aspects of a stream by incorporating elements of another stream. It is not necessary to combine the entirety of two streams.
- a first stream may be enhanced, for example, by incorporating information in, e.g., any coding tree unit (CTU) of a second stream.
- CTU coding tree unit
- progressive video enhancer 205 may validate, by the structure of the CTU, that the correct data field has been received for that CTU, and may then render out what that one CTU is (e.g., an 8x8 pixel block or a 64x64 pixel block). In an example, a 1080p stream may be missing pieces or be unrepairable such that the stream as a whole is unavailable. But even if BAT 104 obtains just one CTU, which may be very small, the BAT may still render it as part of the enhancement layer, e.g., after validating by the structure of the CTU that the correct payload was received for that CTU.
- the objective of the enhancement layer may be to provide a high frequency rendition of detail and artifacts that would otherwise have been lost in the base layer of coding.
- any degree of enhancement layer data may be opportunistically extracted to provide a usable essence for decoding, effectively improving upon all-or-nothing approaches (e.g., of ABR, where either the whole GOP is obtained or nothing at all).
- the CTU may thus operate with respect to how an image is broken down, e.g., for motion estimation and prediction of HEVC.
- progressive video enhancer 205 may support different resolutions in the base and enhancement layers, e.g., ranging from 8K UHD (7,680 x 4,320 pixels), 4K (4,096 or 3,840 x 2,160 pixels), 2K, WUXGA, 1080p (1,920 x 1,080 pixels), 720p (1,280 x 720 pixels), or 480p (640 x 480 pixels), to lower known resolutions.
- the p signifies progressive scan, e.g., non-interlaced, and the pixels are listed as oriented on a display, horizontally by vertically.
- progressive video enhancer 205 may individually set the picture rate for each of the base and enhancement layers, e.g., to 25 FPS, 50 FPS, 60 FPS, 120 FPS, 250 FPS, or another suitable rate.
- bit-stream for an enhancement layer may be modulated at 256-QAM, e.g., in a first PLP, and the bit-stream for the base layer may be modulated at QPSK or 16-QAM, e.g., in another PLP. But these configurations are not intended to be limiting.
- BTS 102 may implement modulation such that multiple PLPs are emitted in a simulcast (e.g., in one TV spectrum), including use of MMT protocol and SHVC video layering.
- BTS 102 may emit to receivers, penetrating hard-to-reach areas (e.g., lower levels of a building garage), e.g., by sending many fewer bits than when targeting LOS receivers.
- BTS 102 may, for example, emit to receivers traveling at 250 MPH, e.g., by sending many fewer bits to deal with the Doppler effect.
- one PLP may be set with properties for reception of 480p resolution video inside buildings and/or when moving at speed, e.g., with a mobile device’s antenna that is substantially small.
- a different PLP may be set with properties for reception of 720p resolution video at stationary TVs 103. As such, all receivers of a region of BTS 102 may obtain the robust emissions of 480p video, but only those receivers with more optimal reception characteristics may obtain the less robust emissions of 720p video.
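One way to picture the receiver-side consequence is a sketch in which each PLP's MODCOD implies a minimum usable SNR, and the receiver keeps the highest-quality rendition whose threshold its measured SNR meets. The thresholds below are illustrative assumptions, not values from the ATSC 3.0 standard.

```python
# Hypothetical sketch: each PLP carries one rendition with a minimum
# SNR (dB) implied by its MODCOD; the receiver selects the best
# rendition it can actually demodulate.

PLPS = [  # (rendition, required SNR in dB) -- illustrative values only
    ("480p", 2.0),    # robust MODCOD, reaches mobile/indoor receivers
    ("720p", 12.0),   # less robust, for stationary TVs with good antennas
    ("1080p", 18.0),
]

def best_rendition(receiver_snr_db):
    usable = [r for r, need in PLPS if receiver_snr_db >= need]
    return usable[-1] if usable else None

assert best_rendition(5.0) == "480p"     # phone inside a garage
assert best_rendition(25.0) == "1080p"   # stationary rooftop antenna
assert best_rendition(0.5) is None       # below even the robust PLP
```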
- one SHVC layer may comprise 720p
- another SHVC layer may be at 1080p that is more efficient (e.g., by 20-30%) than a regular 1080p coming over the Internet.
- 3GPP 3rd Generation Partnership Project
- GSM Global System for Mobile communications
- UMTS Universal Mobile Telecommunications System
- HSPA High Speed Packet Access
- LTE Long Term Evolution
- 4G 4th generation
- 5G 5th generation
- multiple renditions of content over multiple ATSC 3.0 PLPs of emissions 108 and/or IP distributions may be provided such that receiving devices are able to consume the content.
- Progressive video enhancer 205 may, for example, manage a distribution profile, e.g., by combining the base and enhancement layers, such that the bitrate is less than the peak utilization of the representative enhancement layers.
- Example implementations using SHVC may provide support for spatial, SNR, bit depth, and color gamut scalability. Such use may provide a high-level syntax only extension to allow reuse of existing decoder components.
- a receiver, e.g., BAT 104, may demultiplex an SHVC bitstream received from emissions 108 such that the base and enhancement layers are individually provided to decoders (e.g., HEVC).
- the receiver may collate, e.g., by providing a virtual surface or rendering surface to be able to composite the base and enhancement layer.
- the BAT may, for example, then take the two separate flows and combine them into one based upon the capabilities of the receiving device. It may produce a derivative transcoding of that media essence for that device as needed.
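The demultiplex-then-combine step might be sketched as routing packets by layer id, keeping only the layers the device can handle. The packet shape and layer-id scheme here are assumptions for illustration, not the actual SHVC bitstream syntax.

```python
# Sketch of receiver-side demultiplexing: packets tagged with a layer id
# are routed to per-layer decoder queues; layers beyond the device's
# capability are dropped (illustrative model, not a real decoder API).

def demux(packets, max_layer):
    """packets: iterable of (layer_id, payload). Returns a dict mapping
    each usable layer id to its ordered payload list."""
    queues = {}
    for layer_id, payload in packets:
        if layer_id <= max_layer:            # drop layers the device can't use
            queues.setdefault(layer_id, []).append(payload)
    return queues

stream = [(0, "b0"), (1, "e0"), (0, "b1"), (2, "x0"), (1, "e1")]
q = demux(stream, max_layer=1)               # device handles base + 1 layer
assert q == {0: ["b0", "b1"], 1: ["e0", "e1"]}
```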
- progressive video enhancer 205 may combine the base and enhancement layers such that the emission is not larger than what just the 720p layer emission is by itself, or if it is, it may be within some degree of tolerance (e.g., 10%).
- the 4K emission obtained by combining the base and enhancement layers may be smaller than a stand-alone 4K emission. This may be because the stand-alone 4K emission uses a small GOP window; by adjusting the GOP distribution interval on the enhancement layers, a high degree of codec efficacy may be leveraged.
- Another objective is to have a combination of media essences representing a distribution profile that is smaller than the net output of a single distribution profile. Due to a hard limit on bandwidth, emissions 108 may not be able to fit multiple copies of the same content instance at different quality (e.g., resolution) configurations, as is known to be performed OTT.
- content (e.g., video or other data) may be compressed using different algorithms, the amount of compression depending on the algorithm (e.g., if the compression is too extreme, blocky or blurry images will result).
- These different algorithms for video frames are called picture types or frame types.
- I frames e.g., which are intra coded and least compressible, not requiring other video frames to decode by being independently coded
- P frames e.g., which are predictive and use data from previous frames to decompress, being more compressible than I frames by containing motion-compensated difference information relative to previously decoded pictures
- B frames e.g., which are bi-predictive or bidirectional and which use both previous and forward frames for data reference to achieve the highest amount of data compression, by containing motion-compensated difference information relative to previously decoded pictures. If a transmission error occurs, the type of frame lost may determine the propagation time of the error.
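The error-propagation point can be made concrete with a simplified model in which B frames are never themselves used as references (real encoders may use referenced B frames, so this is only a sketch):

```python
# Simplified model of how long a transmission error persists, given the
# frame types above: losing an I or P frame corrupts everything that
# follows it in the GOP, while losing a non-reference B frame affects
# only itself.

def frames_affected(gop, lost_index):
    """gop: display-order string like "IPPBBP"; returns the count of
    frames made unusable by losing the frame at lost_index."""
    if gop[lost_index] == "B":
        return 1                     # no other frame references a B here
    # I and P frames anchor everything from the loss to the end of the GOP
    return len(gop) - lost_index

gop = "IPPBBP"
assert frames_affected(gop, 0) == 6  # lost I frame: the whole GOP
assert frames_affected(gop, 2) == 4  # lost P frame: the rest of the GOP
assert frames_affected(gop, 3) == 1  # lost B frame: just itself
```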
- the emission frequency of the I frame may be determined differently for each of the base layer and the one or more enhancement layers.
- content of 480p, 720p, and 1080p may be obtained OTA via emissions 108.
- the deltas for an enhancement layer (e.g., 4K or another resolution) may be obtained OTA or via an IP-based backchannel, the latter alternative being a hybrid OTA/OTT configuration such that at least one of the enhancement layers is obtained unicast.
- hybrid mixing may allow connected receivers to obtain a 4K SHVC over the Internet that instills upon or complements content being received OTA.
- an NBC broadcast only in Las Vegas may be performed via a base layer sent OTA (or over network 106) that is complemented with a 4K layer sent via network 106 (or OTA) such that digital rights are managed (e.g., by only being able to get that 4K version or rendering on devices in Las Vegas).
- progressive video enhancer 205 may produce only one variant, which is the base layer, but that base layer may then be complemented by enhancement layers, which are dependent emissions.
- Such receiver-driven resolution and quality of experience may be based on the receiver's wireless reception capabilities, rather than on what combinations of streams are available from the distribution perspective or the encoding perspective.
- OTT platforms necessarily have a different base stream for every resolution they wish to provide, because their bandwidth is not limited in the way a licensed spectrum, e.g., of 6-7 MHz, is.
- the higher quality stream may not be stand-alone, thus requiring the extra, delta elements that progressive video enhancer 205 adds to the lower quality stream(s) for creating the higher quality stream.
- progressive video enhancer 205 may combine two or more enhancement layers, prior to emission, for downstream reconstitution of the 4K content.
- One or more of these deltas or differential data may be obtained over IP broadband at BAT 104.
- the 4K or another enhancement layer cannot be played alone, e.g., by needing at least the base layer underneath it.
- the GOP structure specifies the order in which intra- and inter-coded frames are arranged.
- the GOP is a collection of successive pictures within a coded video stream.
- Each coded video stream may comprise successive GOPs from which the visible frames are generated. Encountering a new GOP in a compressed video stream signifies that the decoder does not need any previous frames in order to decode the next ones, and allows fast seeking through the video.
- Each GOP of emissions 108 may begin (in decoding order) with an I frame, e.g., which may represent a full, compressed video frame. Afterwards, several P and B frames follow. Generally, the more I frames the video stream has, the more editable it is. However, having more I frames substantially increases the bit rate needed to code the video.
- the GOP structure may be referred to by two numbers, for example, M (e.g., telling the distance between two anchor frames, I or P) and N (e.g., telling the distance between two full images, I-frames).
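Given the M and N parameters above, the display-order frame pattern of a GOP can be sketched as:

```python
# Sketch generating a GOP's frame pattern from M (anchor spacing) and
# N (GOP length): one I frame, a P anchor every M frames, B frames in
# between (a common convention; real encoders may vary the pattern).

def gop_pattern(m, n):
    frames = []
    for i in range(n):
        if i == 0:
            frames.append("I")        # GOP begins with a full frame
        elif i % m == 0:
            frames.append("P")        # anchor every M frames
        else:
            frames.append("B")
    return "".join(frames)

assert gop_pattern(3, 12) == "IBBPBBPBBPBB"
assert gop_pattern(1, 4) == "IPPP"    # M=1: no B frames at all
```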
- progressive video enhancer 205 may combine frames into GOPs (e.g., during the encoding process of MODCOD component 204) to remove redundant frames and output a highly compressed data object.
- a compression function may assign properties (e.g., the frame rate, which determines the GOP structure and GOP size) based on the properties of the source object.
- progressive video enhancer 205 may use multiple GOP sizes, additional frame rates, different dynamic ranges (e.g., HDR versus SDR), a different color gamut (e.g., WCG instead of international telecommunication union radiocommunication sector (ITU-R) recommendation BT.709), and/or other configurations for the different SHVC layers.
- the enhancement layer may provide spatial (e.g., from a resolution of 480p to a rendition of 1080p resolution) and/or temporal (e.g., from a 30 FPS source to a 120 FPS source) enhancement. Rather than being a function of interpolation, they may be functions of decomposition.
- the highest enhancement layer may be the highest resolution and input for the video decoding process, and from there it may down-sample to make the base layer output.
- FIG. 10A is a block diagram illustrating a system 1000 for simultaneous transmission of an item of content in multiple formats.
- a broadcast system may simulcast a plurality of renditions of an item of content at different resolutions.
- the content is sent by a broadcast television station 1002 separately in 480p, 720p, 1080p, and 4K/UHD formats 1004, 1006, 1008, and 1010 respectively.
- Each of the broadcast television receivers 1020, 1022, 1024, and 1026 may select any of the renditions, and/or switch between them depending on the intended use, current reception, or other issues.
- BTS 1032 sends four different layers of information including a 480p base layer 1034, a 720p enhancement layer 1036, a 1080p enhancement layer 1038, and a 4K/UHD enhancement layer 1040.
- the 480p base layer 1034 of FIG. 10B may be the same as the 480p content version 1004 of FIG. 10A, or it may be something different which is constructed for easier use with the other layers in constructing higher resolution renditions for example.
- BAT 1050 simply makes use of the 480p base layer to obtain a 480p rendition.
- BAT 1052 uses progressive video enhancement to derive a 1080p rendition by combining information provided in each of the 480p base layer 1034, the 720p enhancement layer 1036, and the 1080p enhancement layer 1038. That is, each layer may consist of different video compression information based, for example, on analysis of foreground vs. background, motion, color, textures, etc., such that higher resolution and/or higher quality renditions may be obtained by combining information at different levels of detail, with one or more enhancement layers providing higher levels of detail.
- BAT 1052 is using a base layer and two enhancement layers to achieve the desired rendition.
- each enhancement layer may be used independently, e.g., in combination with information in the base layer, but not information in other enhancement layers.
- the approach used herein may have a top layer (e.g., 1080p or 4K) that is lighter (e.g., with the 4K layer being 20 to 30 percent lighter than it would be on its own) because of the inner layers.
- Known approaches have receivers switching altogether from one feed to another, but the herein-disclosed approach reconstructs, when displaying, content of the higher layers by using the lower layers.
- a base layer may be 480p
- an upper layer may comprise deltas between 480p and 720p
- a layer above that may comprise deltas from 1080p to 4K. Deltas may thus be provided between the layers, and a combination of the lower layers may be needed to reconstruct the upper layers, while the GOP size may vary across layers.
- the different layers may be separately provided such that a geographic fence is formed, the upper layer being, e.g., provided via an IP-based backchannel while the base and potentially other lower layer(s) are being, e.g., provided via emissions 108; but the combination of layers is needed before the upper layer is operable for viewing.
- GOP sizes are known to be predetermined, the GOP size being the number of frames between successive I frames.
- the GOP may, in other words, be the number of delta frames before a new baseline (I frame) is set.
- Implementation of emissions 108 comprising 480p resolution content may be considered mobile-oriented rendition, e.g., the base layer may be optimized for mobile reception.
- there may be a PLP wherein all mobile channels are placed and that is emitted with the highest level of durability for reception at the receiver (e.g., due to a particular codec permutation set, power management decoder efficacy, and/or another attribute).
- In this 480p example, there may be a 15 frame GOP. That is, every quarter of a second, there would be a new baseline frame.
- 480p or 576p with a 30 frame GOP may allow a user to switch the channel and consume video instantaneously (e.g., without the user observing a latency).
- a channel may be changed downstream, and content may be quickly (e.g., within a second) consumed.
- on OTT devices (e.g., Apple TV), a switch from Hulu to Netflix may take anywhere from 4 to 8 seconds because HLS chunks are typically 2 to 4 seconds, the latency being independent of Internet speed.
- the GOP size for a base layer rendition may be 1 second, 3 seconds, 15 seconds, 30 seconds, or another suitable value.
- the GOP size of enhancement layer renditions (e.g., 720p) may be based on a scene (e.g., 30 seconds, 60 seconds, or another duration larger than that of the base layer) of the content therein, this upper layer comprising the baseline and deltas.
- the GOP size may thus be adjusted for different reasons, such as for accessing the content quickly when changing the channel. For example, an anchor at a desk may not require many deltas, e.g., only for their lips moving and eyes blinking.
- With the base layer, changing the channel is tolerable, with the user not having to otherwise wait a minute when consuming content at 720p. That is, the base layer facilitates a quick channel change; the resolution may then change 30 seconds later, when a full GOP at 720p arrives. GOP sizes substantially affect efficiency, which is significant in view of a limited (and potentially shrinking) 6-7 MHz TV spectrum.
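The dual-GOP trade-off above can be expressed as a worst-case channel-change calculation; the frame rate and GOP lengths below are illustrative values taken from the examples in this description.

```python
# After a channel change, the decoder must wait for the next I frame in
# each layer, so time to first picture tracks the base-layer GOP while
# time to full quality tracks the enhancement-layer GOP.

def worst_case_wait(gop_frames, fps):
    """Seconds until the next I frame, in the worst case."""
    return gop_frames / fps

base_wait = worst_case_wait(15, 60)      # 15-frame base GOP at 60 FPS
enh_wait = worst_case_wait(30 * 60, 60)  # ~30 s scene-length enhancement GOP
assert base_wait == 0.25                 # a picture appears within 1/4 s
assert enh_wait == 30.0                  # full 720p quality up to 30 s later
```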
- any problems of data loss or fidelity may be a function of the receiver (e.g., SNR, receiver aggregate power, receiver velocity, or another characteristic), e.g., in moving from point A to point B. Then, there may potentially be some data loss, e.g., due to Doppler shift.
- Progressive video enhancer 205 may, for example, provide the base layer emission via a transport as robust as economically viable; ultimately, use of an IP backchannel may augment a receiver's reception of the ATSC 3.0 emission. Because different PLPs may have different MODCODs, a downstream device may have no way to receive the enhancement layer for an essence over the ATSC 3.0 transport. If the MODCOD carrying the enhancement layer is not robust enough for the device to receive it, there may be no easy way to recover it over the air; but if the device has an IP backchannel, the enhancement layer can be delivered through that other transport medium available to the receiving device. Accordingly, different device-specific mechanisms for recovery, remediation, or best-effort consumption of that content are contemplated.
- model 203 of system 100 may be trained to learn ways for optimal reception on all receivers.
- a current problem with many network affiliation agreements for ATSC 1 is that the visual quality metrics of network originated programming are based upon a target value bitrate when using the MPEG-2 video codec, which is currently pegged at about 8 Mbit/sec for HD encoding.
- the codec performance for video quality encoding from MPEG-2 to H.264 to HEVC usually doubles at each generation, e.g., the same visual quality may be achieved at half the previous generation's bitrate.
- a simple calculation would then estimate the original 8 Mbit/sec in MPEG-2, which would be 4 Mbit/sec in H.264, and finally approximately 2 Mbit/sec for HEVC.
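That estimate can be written as a one-line calculation:

```python
# The halving estimate above as a worked calculation: each codec
# generation (MPEG-2 -> H.264 -> HEVC) is assumed to need roughly half
# the bitrate for equal visual quality.

def equivalent_bitrate(mpeg2_mbit, generations):
    """Bitrate (Mbit/sec) after `generations` codec steps past MPEG-2."""
    return mpeg2_mbit / (2 ** generations)

assert equivalent_bitrate(8, 1) == 4.0   # H.264
assert equivalent_bitrate(8, 2) == 2.0   # HEVC
```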
- use of HEVC-coded video may allow such emission formats and features as spatial scalable coding (e.g., emission of a base layer with one resolution and a separate emission of an enhancement layer that, together with the base layer, provides a higher resolution result), HDR, wide color gamut, 3D, temporal sub-layering, legacy SD video, interlaced HD video, and/or progressive video (e.g., progressive scan, picture rate, or pixel aspect ratio, as defined in the A/341 standard).
- the base stream may be 576p at 30 FPS, e.g., for a sporting event.
- the enhancement stream may be 720p, e.g., at 120 FPS, HDR, wide color gamut, larger GOP size, and/or another attribute.
- the lower resolution may have a 15 or 30 frame GOP.
- Such enhancements or a target bitrate may be defined or required by a third party.
- video quality may be prescribed in terms of fidelity (e.g., how closely does a processed or delivered signal noticeably match the original source or reference signal).
- the score may be on a scale (e.g., ranging from 0 being very satisfied to 4 being most users dissatisfied), and the video quality may be defined subjectively, e.g., by picture quality (e.g., an index of eyes’ ability to understand the picture), audio quality (e.g., an index of the ears’ ability to discern the audio), and/or lip sync (e.g., a measurement of the audio to video synchronization) rather than via objective metrics based on noise, such as peak SNR (PSNR) or mean squared error (MSE).
- PSNR peak SNR
- MSE mean squared error
- the perceptual scores used in DMOS implementations may be averaged, e.g., from an audience being delivered best and worst cases of test video.
- with a GOP size of 30 or 60 frames, there may be a one second GOP, e.g., a one second window of video that represents the GOP and that may be independently decoded.
- the decoder may, for example, be rudimentary (e.g., without a look-back buffer), so the decoder may only start processing the media essence when it finds an I frame.
- An I frame is what represents a full frame of video. It may be a compressed frame of video, but it is the reference frame that all subsequent P frames and B frames use as their anchor. B-frames may be a little different because the anchor could be before or after it.
- upon a channel switch, the decoder may have to wait for an I frame.
- the tight cadence of an I frame every 60 frames means that it may take approximately one second before the user can consume the next frame of video. The receiver may compensate by playing the audio essence early, because the audio essence is not dependent upon an I frame as a reference frame, effectively starting decoding relatively soon.
- the efficacy in video codec and compression is somewhat in the I frame. More important for newer codecs is that the size of the GOP may allow the codec to be far more efficient. Accordingly, progressive video enhancer 205 may have to balance the trade-off, e.g., by being able to emit a video essence that has 25 to 75 percent less data utilization if the GOP is longer.
- a GOP may be in the five second range, which may cause a challenge for receiving devices because they now potentially have to wait up to five seconds for that GOP.
- the intent may not be just to adjust the spatial or temporal resolution of the media essence, but also the frequency of the I frame used as an anchor on the lower emission, since the trade-off this allows is that the lower emissions may inherently have lower bandwidth utilization.
- rather than a 720p emission with an I frame every 60 frames, progressive video enhancer 205 may determine a similar metric at a smaller frame size: if this component of BTS 102 puts it down to a 480p emission carrying the I frames, about 50% of bandwidth may be saved. The enhancement layer then knows where those I frame bases are. And because the enhancement layer may gain its bandwidth efficiency from the scene-change component of the rendition, progressive video enhancer 205 may extend out the GOP sizes for the enhancement layers, so that they are not co-dependent on channel change the way the lower, base layer is.
- Consider, for example, a weather segment handled by progressive video enhancer 205: BTS 102 may sit on a shot of the five-day weather forecast for 15 seconds while the on-air talent moves from the news desk to the weather map. There is thus no value in re-emitting that same image every second of those 15 seconds if it has not materially changed.
- Progressive video enhancer 205 may, for example, do that on the base layer, only to support a fast channel change use case.
- the decoder may then render the image relatively quickly, but, for an enhancement layer, that utilization may be almost close to zero over time. It may take a few seconds for devices to upscale their resolution, but this is typical (e.g., with ABR content on OTT devices).
- the first five to ten seconds may pass, when changing to a new VOD asset, before that video increases in sharpness or other quality.
- a downstream receiver may mitigate this by rendering, as needed, GOPs whose enhancement layers carry additional B frame data that provides relevant picture enhancement over time, eventually converging to fully combined base and enhancement layers.
- progressive video enhancer 205 may provide an MMT video service that allows a channel change to be performed very quickly because it has a very small GOP size (e.g., 10 or 15 frames); and it may progressively upscale to 720p with very large GOP sizes, which substantially reduces the amount of bandwidth used of the overall spectrum, making it much more efficient.
- emissions 108 may be performed using the MMT protocol, which may comprise timing information.
- MMT may have timing information encoded in the actual encoded video.
- progressive video enhancer 205 may have put these streams together in a layering.
- the user may be obtaining a main channel OTA at BAT 104 (or TV 103) and several auxiliary channels over the Internet (e.g., using a backchannel) such that the content displayed from different camera angles would all be synchronized time-wise.
- the disclosed progressive video enhancement may be performed for OTA content, such as emissions 108. In these or other embodiments, this enhancement may be performed for OTT content.
- progressive video enhancer 205 may perform interlayer prediction (ILP) for improving the coding efficiency of enhancement layers by predicting an enhancement layer picture using a base layer (or another reference layer picture).
- ILP interlayer prediction
- BAT 104 or another downstream receiver may make a determination as to which set of capabilities the receiving device has.
- progressive video enhancer 205 may be able to provide a corresponding OTA NRT payload (or an IP-based delivery of a payload may be provided) to provide additional codecs or additional capabilities on that player to be able to support functionality that it does not natively support.
- An example would be a mobile device lacking hardware support for a specific encoding of audio.
- a mobile device’s application may be configured with the ability to add in this codec for the decoder as needed, so that way the player may expand its capabilities as needed for different media essences, which would be part of any ATSC 3.0 transmission which it might not normally or natively support.
- progressive video enhancer 205 may make decisions based on availability of content at the different encodings.
- a corresponding device’s capability may be either potentially a part of service usage reporting (e.g., per the A/333 standard) or a measurement message.
- Some of that metadata may be application specific.
- that metadata may be delivered via an IP-based backchannel, indicating what the universe of receiving devices is, what the audience of actively consuming devices is, and whether the material would warrant the additional cost basis for capacity utilization.
- an enhancement layer distribution over ATSC 3.0 forward transmission 108 may not be warranted, but making the enhancement layer available over an IP-based transport may be beneficial for the audience and viewing experience.
- the return-link telemetry of the network (not just from transmission but from reception) and the capabilities of the receivers may influence how the network responds to opportunities and the flexibility with which it delivers content.
- ATSC 3.0 transmissions may opportunistically incorporate data into padding of baseband packets (BBPs) of STLTP feeds, allowing real-time data insertions into media feeds, which may be tailored to transmission-tower specific markets and applications.
- the padding may be replaced with such opportunistic data and/or the padding may be used to adjust MODCOD for a scheduler of BTS 102 and for a more robust receiver profile (e.g., with the MODCOD being more robust by utilizing the available bits of the padding, effectively replacing them).
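Opportunistic replacement of BBP padding might be sketched as follows; the fixed packet size and zero-byte padding are simplifying assumptions for illustration, not the actual baseband packet format.

```python
# Hypothetical sketch of opportunistic insertion: a fixed-size baseband
# packet's unused padding bytes are replaced with as much opportunistic
# data as fits, and the remainder stays queued for the next packet.

def fill_padding(payload, packet_size, opportunistic):
    """Returns (packet_bytes, leftover_opportunistic_bytes)."""
    room = packet_size - len(payload)          # bytes that would be padding
    used, leftover = opportunistic[:room], opportunistic[room:]
    packet = payload + used + b"\x00" * (room - len(used))
    return packet, leftover

pkt, rest = fill_padding(b"media", 10, b"datafeed!")
assert len(pkt) == 10
assert pkt == b"media" + b"dataf"              # 5 padding bytes replaced
assert rest == b"eed!"                         # carried to the next packet
```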
- a full potential or yield of data utilization may be achieved or managed using STLTP device 405, and device 405 may extract revenue from a data distribution service in a remnant capacity without having to be part of the broadcaster’s origination by working as a data overlay in this ecosystem.
- the STL may comprise a transmission link between a broadcaster’s studio location and a transmitter which carries the station’s content to be transmitted.
- the STL may, for example, comprise radio means (microwave) or direct digital connection, such as fiber.
- the STLTP provides an STL transmission interface between the broadcast gateway (which is, for example, depicted in FIGs. 4A, 4F, and 5A and which may be communicably coupled to the studio) and an exciter/modulator of the transmitter(s).
- the STLTP may, for example, encapsulate payload data using user datagram protocol (UDP), provide synchronization time data and control, and perform FEC.
- UDP user datagram protocol
- Some embodiments of the upstream portion of system 100 may virtually implement one or more components of FIGs. 4A, 4C, 4F, and 5A (e.g., the broadcast gateway), without needing such components to be implemented via hardware techniques.
- ATSC 3.0 link-layer protocol (ALP) packets are encapsulated for transmission at a broadcast gateway in BBPs, with BBP data being sent across the STL in a real-time protocol (RTP) / UDP / Internet protocol (IP) multicast stream for each PLP.
- RTP real-time protocol
- IP Internet protocol
- These streams may be multiplexed into a single RTP/UDP/IP multicast stream, e.g., with the same IPv4 multicast destination address (e.g., in a range from 224.0.0.0 to 239.255.255.255) and port.
- Each individual PLP may convey a series of BBPs.
- the broadcast gateway may, for example, encapsulate ALP packets from studios into BBPs and send them to the ATSC 3.0 transmitter.
- Each PLP configured at such gateway may be mapped to a different port of an IP connection.
- RTP/UDP/IP multicast stacks may be used in both of the ALPTP and STLTP structures, e.g., with specific UDP port numbers being assigned to particular PLP identifiers and used in both protocols.
- an ALP packet stream designated to be carried in PLP 07 may be carried in an ALPTP stream with a UDP port value ending in 07
- the baseband packet stream derived from that ALP stream and to be carried in PLP 07 may be carried within an STLTP stream with a UDP port value also ending in 07.
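The PLP-to-port convention described above can be sketched as follows. This is a purely illustrative example, not an implementation from the application: the base port value (30000) is a hypothetical choice, and only the property that the port value ends in the PLP identifier is taken from the text.

```python
# Illustrative sketch: assign a UDP port whose value ends in the PLP
# identifier, as in "PLP 07 ... a UDP port value ending in 07" above.
# The base port 30000 is an assumption for illustration only.

def plp_to_udp_port(plp_id: int, base_port: int = 30000) -> int:
    """Return a UDP port whose last two digits match the PLP identifier."""
    if not 0 <= plp_id <= 63:
        raise ValueError("ATSC 3.0 PLP identifiers range from 0 to 63")
    return base_port + plp_id  # e.g., PLP 07 -> port 30007, ending in 07

assert plp_to_udp_port(7) == 30007
```

Because the same port numbering is used in both the ALPTP and STLTP structures, a single mapping function like this could serve both protocol layers.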
- each PLP stream may have a different tradeoff of data rate versus robustness, and data streams may be assigned to appropriate combinations (e.g., by the system manager of FIG. 4F).
- the inner STLTP stream may provide addressing of BBP streams to their respective PLPs through use of UDP port numbers.
- the outer protocol, STLTP, may provide maintenance of packet order through use of RTP header packet sequence numbering.
- the STLTP may also enable use of ECC to maintain reliability of stream delivery under conditions of imperfectly reliable STL networks.
- At the transmitter(s) may be an input buffer for each PLP to hold BBP data until it is needed for transmission. There may also be FIFO buffers for the preamble stream and the timing and management stream.
- the A/330 standard may define aspects of the ALP implemented by the disclosed approach.
- the ALP of emissions 108 may deliver IP packets, link layer signaling packets, and MPEG-2 TS packets down to the RF layer and, e.g., back after reception.
- the ALP of emissions 108 may optimize a proportion of useful data in the ATSC 3.0 physical layer by means of efficient encapsulation and overhead reduction mechanisms, e.g., for IP or MPEG-2 TS transport.
- Such ALP may include a packet format, support header compression, and support link layer signaling, and provide extensible headroom for future use.
- the link layer may be the layer supporting traffic between the physical layer and a network layer (e.g., OSI layer 3) such that input packet types are abstracted into a single format for processing by the RF physical layer, ensuring flexibility, efficiency (e.g., via compression of redundancies in input packet headers), and future extensibility of input types.
- Services provided by the ALP of system 100 include packet encapsulation (e.g., of IP packets and MPEG-2 TS packets), concatenation, and segmentation (e.g., which may be performed to use the physical layer resources efficiently when input packet sizes are particularly small or large).
- the physical layer need only process one single packet format, independent of the network layer protocol type.
- MPEG-2 TS packets may be transformed into the payload of a generic ALP packet.
- An ALP packet comprises a header followed by a data payload.
- the header of an ALP packet may have a base header, an additional header depending on the control fields of the base header, and an optional extension
- a BBP may comprise a header and payload.
- This header may comprise a base field, an optional field, and an extension field, and this payload may comprise a set of ALP packets.
- a payload of a BBP may encapsulate one or more ALP packets and/or a portion of another ALP packet.
- a base field may have a pointer to where the next ALP packet starts.
- the excess capacity may be determined based on (i) a previous BBP, (ii) the next BBP header, and/or (iii) a pointer value attribute of a current BBP header.
- BTS 102 may emit content from multiple ALP creators (e.g., which may lack visibility of the final multiplex).
- the creation of the STLTP and those ALP emission flows may occur on a per-tenant basis, and each tenant may lack visibility into the full utilization of a limited RF spectrum.
- Device 405, which is discussed further herein, may thus be operated by a third party.
- the BBP may provide encapsulation of the ATSC 3.0 ALP, but when there are not enough ALP packets in place to fill a BBP, padding is needed in the BBP.
- the BBP may have a fixed payload length, resulting in use-it-or-lose-it utilization; in the framing from the ALP packet into the BBP, an initial pointer may indicate where the beginning is, and if there is no value for that pointer, such an implementation may indicate that the whole BBP is padding (or conversely that there is no excess capacity).
- RT yield evaluation component 434 of processor 420 may determine an array index having a value from 0 to 1,023 such that an opportunity is fulfilled for injecting data therein.
- Some implementations may have a plurality of mechanisms for extracting the excess capacity, one being (e.g., if the pointer at the BBP header is a fixed value or if there is no pointer) that the full BBP is padding, and another being that there is an optional field that indicates how much at the trailing end of the BBP would be used as padding.
- RT yield evaluation component 434 may determine the size of the payload inside of the BBP and, from that, this component may derive what the excess capacity is for opportunistic utilization.
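The two padding-extraction mechanisms just described can be sketched as a single decision routine. This is a hedged illustration only: the field names (`pointer`, `trailing_padding`) and the sentinel value are assumptions introduced for the sketch, not field definitions from the ATSC specifications or the application.

```python
# Hypothetical sketch of the two mechanisms above: (1) a fixed/absent pointer
# value marks the whole BBP as padding; (2) an optional field gives the
# amount of trailing padding. Names and sentinel are illustrative only.
from dataclasses import dataclass
from typing import Optional

ALL_PADDING_SENTINEL = 0x1FFF  # illustrative "no next ALP packet" pointer value

@dataclass
class BBPHeader:
    pointer: int                     # offset to where the next ALP packet starts
    trailing_padding: Optional[int]  # optional field: padding bytes at the tail

def excess_capacity(header: BBPHeader, payload_len: int) -> int:
    """Bytes of the BBP payload available for opportunistic injection."""
    if header.pointer == ALL_PADDING_SENTINEL:
        return payload_len               # mechanism 1: entire BBP is padding
    if header.trailing_padding is not None:
        return header.trailing_padding   # mechanism 2: trailing padding only
    return 0                             # fully utilized BBP
```

From this excess-capacity figure, a yield evaluator can derive how much opportunistic data each BBP could carry.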
- the configuration manager / scheduler may be a component of the broadcast gateway, which determines which ALP emissions may be multiplexed into an STLTP feed.
- This FIG. combined with FIGs. 4B-4C illustrate an example process for producing an STLTP feed.
- the scheduler may be designed to prevent an overflow of the ATSC 3.0 stack, including the broadcast gateway and the STLTP feed. Such an overflow scenario would be catastrophic for data delivery needs.
- Some embodiments of system 100 may incorporate a trans-multiplex (trans-mux) for converting the HEVC and AC4/MPEG-H video and audio flows into a QAM/IP compatible MPEG-TS for direct multiplexing of essence content into the MVPD's network.
- An example flow may include, in order, a TS, an encoder, a WEBDAV push (chunked) to packager/signaler, a scheduler, the STLTP, an ATSC 3.0 exciter, a tuner, a demodulator, and an A/331 receiver, at least some of which are, for example, depicted in FIGs. 4A-4C, 4F, and 5A-5B.
- FIGs. 4A-4C show a single path carrying the ALP packet stream(s) and then the PLP packet stream(s) on its (their) way(s) from the ALP generator(s) to the transmitter(s).
- the processing for each of the ALP packet streams and/or BBP streams may be applied separately to each of the streams destined for a different PLP.
- a scheduler function may be included in the broadcast gateway.
- the scheduler may manage the operation of a buffer for each ALP stream, control the generation of BBPs destined for each PLP, and create the signaling data transmitted in the preamble as well as signaling data that controls creation of bootstrap signals by the transmitter(s) and the timing of their emission.
- the scheduler may thus communicate with a system manager to receive instructions and with the source(s) of the ALP packets both to receive necessary information and to control the rate(s) of their data delivery.
- the RTP/UDP/IP traffic may comprise an outer ALP frame with inner encapsulations, a baseband frame, actual ALP data units (DUs), and the IP UDP emission.
- STLTP extraction component 430 and/or RT yield evaluation component 434 may need to navigate several moving pieces to go five layers deep. To arrive at all of this content, one or more of these components may flatten these layers down to one flow. By doing so, the one or more components may address the broadcast gateway, the modulator, the transmission, and the receiver device.
- the BBP may be at a third or fourth layer of encapsulation.
- Each microsecond of emission from BTS 102 may be represented by one BBP that is then multiplexed in conjunction with other BBPs, all coming together and going through a touchpoint depicted in FIG. 4B.
- the network may be transmitting carriers (e.g., a source of where those microsecond emissions would be modulated on) all the time such that there are continual oscillations to which a receiver may tune.
- padding may be sent as part of these continual oscillations.
- these continual emissions may not need to be emitted from the transmitter/antenna-tower of FIGs. 4F and 5A; rather, they may be emitted through the STL.
- unutilized capacity in the system between touchpoints of FIG. 4B may only be necessary for facilitating the transmission of the broadcast.
- RT yield evaluation component 434 may analyze this in real time, identify where unutilized capacity is, and then dynamically inject, via component 436, opportunistic data effectively replacing a BBP. For example, 3,752 bytes of data payload may be flattened from the STL because there may be no need to send it, sending instead how long a padding frame should be; the exciter may then have to honor that padding frame and emit 3,752 empty bytes in that transmission to make sure that the carrier oscillations are still there. Without padding frames, BATs 104 and next generation TVs 103 would lose the signal, taking them two or three seconds to come back online. Once the opportunity has passed, those 3,752 bytes are forever lost as unsold.
- RT yield evaluation component 434 may specifically identify signatures that are padding. There may be padding at the end of the frame or in the encapsulation of another frame behind it to keep it aligned. System 405 may understand on the microsecond level what is potentially available there and then fulfill that opportunity with NRT data delivery and/or MODCOD adjustment. For example, one week before the Super Bowl, the opportunistic data emitted into the STLTP feed may include all potential advertisers that have bought OTA time for digital preemption use spaces.
- RT yield evaluation component 434 may feed information back into the broadcast network (which is, for example, depicted in FIGs. 4A and 4F) such that, upon understanding the tolerances therein, MODCOD control component 438 may direct parameter adjustment (e.g., in conjunction with MODCOD unit s204 of FIG. 2) for improved configuration and provisioning decisions.
- RT yield evaluation component 434 may perform data collection and/or aggregation, e.g., as a decisioning engine that operates in real time after the dataset was computed and obtained to understand the characteristics of emission durability, resiliency, and robustness.
- Yield evaluation component 434 may, for example, analyze the ALC flow in RT to see where the excess capacity is such that RT injection component 436 and/or MODCOD control component 438 can better determine its optimization, e.g., respectively via data injection and/or PLP configuration parameter adjustment. Yield evaluation component 434 may, for example, perform yield management using RT injection component 436.
- system 405 may thus fulfill another class of media buying where, e.g., advertisers 206 may be put in a pool with remnant inventory such that they cannot be informed if they are actually going to get a slot filled until RT yield evaluation component 434 has seen the as-runs that have been fulfilled.
- content of advertisers 206 may not be preempted with another supplemental source (e.g., paying a higher price point), effectively stratifying out their audience impression base.
- BAT 104 may employ demographic data to provide different targeted ads (e.g., via different PLPs having different versions for the different demographics). BAT 104 may know what the demographic is of its user(s) to pick the respective version for an individual user based on included metadata attributes (e.g., the filter code). In this example, the younger 18-34 user may be provided the Ford Focus ad, the 35-54 divorced male may be provided the Ford Mustang, and the 55 and up demographic may be provided the crossover ad. A media buyer and an advertiser may still get the position paid for and the brand recognition, except that its impact is potentially ten times more relevant by reaching the right audience, without having the risk of being preempted by not paying what the market would pay.
- some implementations of a broadcaster determining its content to be emitted may identify an excess capacity in their emission that they want to optimize. At an output of their national network touch point, they may then use the same models to provide opportunistic data injection, the forward link being then re-multiplexed into a localized transmission STLTP.
- STLTP extraction component 430 and RT yield evaluation component 434 may identify for extraction a theoretical maximum throughput value such that there are substantially no underutilized resources.
- the excess capacity may be utilized by a third party in the marketplace exchange process. For example, Microsoft and/or Google may deliver at least portions of an application update (e.g., for Office 365 and/or Chrome, respectively) upon resolving one or more new bugs previously identified for correction.
- such delivery may reach substantially all operating BATs 104 (e.g., in a market or region) in one series of emissions 108, where the update portions are collected at all of these receivers over time. Although this may take an hour or more, this is preferable to having to wait a few weeks for each of the users to undergo the update process individually.
- RT yield evaluation component 434 and RT injection component 436 may be able to support at least a guaranteed proof of performance by studying the heuristics of ongoing emissions to approximately identify when and/or how much excess capacity may be utilized. This approach may result in software-driven data deliveries for little or no cost.
- RT yield evaluation component 434 may capture the flow of emissions at the STL, from not just an instantaneous perspective; rather, this component may respond to historical analyses from every data point in a market. For example, there may be 10,000 towers across a country, providing 10,000 exchange markets in which RT yield evaluation components 434 would have an opportunistic position to determine an extent of value that may be extracted at a point in time.
- component 434 may perform yield evaluation in real-time such that RT injection component 436 fulfills a commercial commitment or contract, e.g., by reemitting a set of opportunistic messages with at least a predetermined frequency. For example, with additional input from upstream systems, a more deterministic delivery may be performed.
- A portion of ATSC 3.0 data emissions may be made up of empty padding (e.g., overhead) packets due to temporal non-linearity of live ATSC 3.0 data emissions.
- An example cause for such waste may be video overshoots and undershoots.
- BTS 102 may not otherwise be robust enough to manage the temporal non-linearity of the live data emission. Therefore, there is the opportunity for opportunistic data insertion or injection, e.g., to add fragmented video or other data information into the padding packet spaces of the STLTP feed.
- Probe 440 may thereby detect an over-capacity in real-time and dynamically inject certain data via stratified replacement transmissions. Some implementations may include changing the MODCOD, such as an amount of forward error correction (FEC) via ECC.
- Device 405 may opportunistically inject data via the STLTP by obtaining a multiplexed, IP-based stream of primary content, analyzing use of baseband padding packets with respect to transmission of this content, and replacing one or more in-transit baseband padding packets with opportunistic secondary content.
- This padding data may each be a baseband ATSC 3.0 packet (e.g., comprising about 1 µs of duration).
- the secondary content may be non-real-time (NRT) and thus may be of a different type from the primary content, which may be RT emissions.
- primary data may be time-sensitive (e.g., live) content
- secondary data may be non-time-sensitive (e.g., application updates).
- the primary and/or secondary data may be transmitted in cyclic or carousel fashion, and this data may support one or more QoS levels.
- the secondary and/or primary data may be transmitted in a multicast.
- Injector 440 may perform at scale based on use of multicast networking comprising tens, hundreds, thousands, or even millions of ATSC 3.0 receivers.
- the baseband packets may comprise inner and outer ALP encapsulations. That is, an outer ALP encapsulation may encapsulate an inner ALP encapsulation, which comprises a baseband frame to be emitted via UDP/IP.
- the inner packet payload may represent either a timing and management (T&M) packet, a preamble packet, or a BBP.
- Each of these types of packets is encompassed in the STLTP multiplex, as, for example, depicted in FIGs. 4A-4B.
- only the BBPs may be utilized, these types of emitted packets having actual data (and thus may have some replaceable padding therein).
- the T&M and preamble packets may be metadata that define example aspects of the emission parameters for the exciter and the system timing and synchronization.
- All inner layer and outer layer packets delivered over the STL using a tunneling process may depend upon an RTP/UDP/IP multicast IPv4 protocol stack. All tunneled packets may, for example, have defined port numbers within a single IP address.
- the RTP protocol may be used with redefined headers. Segmentation and reassembly of large tunneled payload packets and concatenation of small tunneled payload packets within the tunnel packets may be performed using RTP features.
- a segment sequence number within an outer RTP header may indicate a position of a segment within a larger source packet that supports segmentation and reassembly, and a value in an outer RTP header may indicate the offset of the first inner packet segment within the payload of the associated outer packet that supports concatenation.
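The segmentation-and-reassembly role of the outer RTP header can be sketched minimally. This is an illustrative assumption-laden example: the function and variable names are hypothetical, and only the idea that a segment sequence number restores the order of segments of a larger tunneled packet is drawn from the text above.

```python
# Illustrative sketch: segments of a large tunneled payload packet arrive
# tagged with a segment sequence number (the dict key here); reassembly
# joins them in sequence-number order, per the description above.

def reassemble(segments: dict) -> bytes:
    """Join outer-packet payload segments in segment-sequence-number order."""
    return b"".join(segments[seq] for seq in sorted(segments))

# Segments may arrive out of order; sequence numbering restores the order.
parts = {1: b"ALP-mid", 0: b"ALP-head", 2: b"ALP-tail"}
assert reassemble(parts) == b"ALP-headALP-midALP-tail"
```

Concatenation works in the opposite direction: an offset value in the outer RTP header locates the first inner packet within a combined payload.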
- STLTP extraction component 430 and/or RT yield evaluation component 434 may remove encapsulations of both the inner and outer encapsulations of the RTP to decode the BBP. For example, some embodiments may determine the presence of a complete padding frame or what portion of the frame is padding. In this or another example, STLTP extraction component 430 may determine where a series of padding bits terminates, and actual data payload begins. This information may, for example, indicate one or more opportunistic data injection locations. In another example, such determination may indicate excess capacity available for subsequent network optimization.
- system 405 may accomplish this padding repurposing improvement without having to make any changes within a broadcast network, e.g., in-flight at the STL between the broadcast gateway and the modulator or exciter of BTS 102.
- Device 405 may thus be installed at the STL between the broadcast gateway and the modulator/exciter (e.g., without having to touch anything internally in the network operations or configuration) to provide supplementary content (e.g., ad creatives). More particularly, device 405 may, for example, be located anywhere at or between the broadcast gateway’s STLTP formatting and ECC encoding and the transmitter’s ECC decoding and STLTP demultiplexing, as illustrated in the examples of FIGs. 4A-4C.
- the ECC (e.g., per the SMPTE 2022-1 standard) may be applied to maintain reliable delivery of the STL data between sites.
- NRT extraction component 432 may utilize a buffer with pointer to an NRT object (e.g., of a size on the order of 1 megabyte (MB)).
- NRT extraction component 432 may enable delivery of such content over an extended period of time (e.g., a 24-hour period for 128 BBPs, each providing up to 8,192 bytes (B) of availability such that the ~1 MB is delivered) via network 108.
- RT injection component 436 may accordingly inject that amount of NRT into the BBP and then increment the pointer to the NRT object.
- the NRT object may comprise a content data delivery type that is not a live video and/or audio emission. Owners of the NRT object may thus tolerate delay (e.g., minutes, hours, etc.) in the data delivery through system 100.
- delivery of the NRT data may be performed at intermittent intervals depending on network configuration at BTS 102. For example, returning to the 1 MB example of NRT data, if full BBPs were utilizable then this NRT traffic may be quickly emitted, but if STLTP extraction component 430 identifies only smaller blocks of padding (e.g., a few hundred bytes) then this 1 MB of data may be emitted spread-out in padding replacement over about 8,000 BBPs. The padding may be replaced by multiplexed portions of data.
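The delivery arithmetic in the 1 MB example above can be made explicit. This worked example uses the document's own figures (128 BBPs of up to 8,192 B each, versus padding blocks of only a few hundred bytes); the specific small-block size of 128 B is an illustrative assumption chosen to land near the "about 8,000 BBPs" figure.

```python
# Worked example (illustrative) of spreading a ~1 MB NRT object over BBPs.
NRT_SIZE = 128 * 8_192         # 1,048,576 B, matching the 128-BBP example

# If full BBPs (up to 8,192 B each) are utilizable, few packets suffice:
full_bbps = NRT_SIZE // 8_192  # 128 BBPs

# If only small padding blocks (assumed 128 B each) are found, the same
# object spreads across thousands of BBPs, consistent with "about 8,000":
small_blocks = NRT_SIZE // 128  # 8,192 BBPs
```

The gap between 128 and ~8,000 BBPs is why delivery may occur at intermittent intervals depending on the network configuration at BTS 102.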
- RT injection component 436 may multiplex any number of NRT objects (e.g., video, ads, and/or other content). For example, these objects may be emitted several times repeatedly over an extended period of time (e.g., overnight) and available for a time segment (e.g., for dynamic advertising preemption and insertion using prepositioned content). And some implementations may, for example, deliver NRT objects through network 108 without impacting local configurations from every single facility or region in operation.
- the STLTP may not carry zeros for padding but rather a marker indicating that a certain amount of flagged data will be transmitted from the exciter in the output of the RF emission.
- NRT data extracted by NRT extraction component 432 may be, for example, re-multiplexed into the STLTP emission by injecting the exact length of such opportunistic data, which would otherwise be null emissions from the exciter.
- device 405 may cause an STLTP flow coming from the broadcast gateway to be received and reflected out as a duplicate copy of the STLTP flow in real-time.
- the exciter/modulator of BTS 102 may receive an output of probe 440, as, for example, depicted in FIGs. 4B and 4D, such that it receives the original, unmodified STLTP flow only in a failure mode (e.g., of STLTP extraction component 430). That is, the exciter or modulator may receive, rather than receiving both copies, only one STLTP flow.
- the original STLTP emission may be squelched by an output of the engine, which may be the STLTP flow with any padding packets transposed into data emission packets.
- This transposition may be performed, e.g., on the order of microseconds.
- a buffer may be used such that a few seconds is buffered for this activity.
- device or system 405 may comprise probe 440 and/or another software-based touchpoint.
- Probe 440 may comprise an injector and receiver.
- device 405 may comprise a plurality of differently housed/enclosed portions (e.g., which separate receive, processing, storage, interfacing, and/or transmit portions).
- Probe 440 may, for example, assist in performing padding replacement at the STLTP outer streamed ALP encapsulation/layer and/or at the inner streamed ALP encapsulation/layer.
- RT injection component 436 may inject opportunistic data at one or more STL transmitters.
- these transmitters may be regional, e.g., with different transmitters in different regions receiving (e.g., via a backhaul link) and transmitting (e.g., via another tower) the same original content or different versions of the same content.
- the transmitted content at different regions may be different.
- the opportunistic data may be selected based on the region or demographics of intended recipients.
- RT injection component 436 may stratify audience impression data by sending different versions of an ad for different demographics at a same time whereupon a receiver selects one of those ads to display to the user, since the receiver knows the demographic of the user.
- each of these baseband packets may have header information (e.g., RTP header packet sequence numbering or another identifier (ID)) such that the receiver may reliably reassemble a full 30 second ad using all of the 1 µs segments corresponding to this selected ad.
- a selective advertisement insertion may be based on any other content transport mechanism.
- the objective is not to preempt a forward linear insertion (e.g., a first position in a series) per se, but to provide audience segmentation for a traditional media buyer in a digital marketplace.
- the correlating attribute may be from an original ad ID.
- the correlation may be performed by a device (e.g., probe 440 or any other device associable with the ATSC emissions).
- a traditional ad insertion may have a record that reflects a plurality of derivative insertions that may be allowed for linear preemption.
- metadata supplied via ATSC 1 may be obtained and used for an ad decisioning process that applies the profile/demographic and that preempts only those in a certain set of available creatives to match the suitable audience.
- the device may correlate that linear insertion to the additional plurality of preemptible creatives that are compatible with it for digital distributions.
- the baseband padding has its genesis in needing to modulate any data onto a carrier wave so that the receivers may stay synchronized with the tower’s emissions, e.g., to prevent an out of sync state of 2-3 seconds, which would otherwise occur if the carriers were not used.
- the ATSC 3.0 standard does not require these emissions through the STL.
- Yield evaluation component 434 may analyze baseband packets in real-time and identify unutilized capacity (e.g., where the packet header has a padding signature) to dynamically replace padding with real data. For example, RT yield evaluation component 434 may obtain flattened BBPs of the STLTP feed to determine a flag that indicates how long padding will be without transmitting the padding.
- RT yield evaluation component 434 may further determine a size of the padding or whether this packet is a complete padding segment.
- Each null packet replacement with opportunistic data may have a dynamic size but at most 8,192 bytes, in some implementations. In this fashion, baseband packets may each be analyzed for their availability to replace null data with NRT content.
- the to-be-replaced padding may be at an end of a packet or in the encapsulation of another packet behind it to keep alignment.
- the opportunistic NRT data may be sourced differently from the main broadcast data, thereby comprising a form of data overlay. For example, the NRT data may be obtained via network 106 and/or from electronic storage 422.
- Evaluation component 434 may evaluate RT yield of opportunistic data via non-invasive probe 440, which may be any tool or device suitable to obtain STLTP formatted and ECC encoded data.
- MODCOD control component 438 may determine optimal MODCOD characteristics.
- the adjusted MODCOD may cause improved emission durability and reception robustness of emissions 108.
- the MODCOD configuration changes may be sent to an upstream device and/or a downstream device via external resources 424 or via another suitable means.
- MODCOD control component 438 may provide a self-optimizing network configuration parameter set based upon external demand/feed and yield sources (e.g., 202, 206, and/or another source).
- RT yield evaluation component 434 may determine an amount of time needed to opportunistically transmit a certain amount of NRT content.
- the NRT data may comprise a plurality of different versions or types of content.
- RT injection component 436 may deliver the determined amount of content based on its type.
- a number of baseband packets needed to transmit the NRT content may be dynamically determined based on an amount of each baseband packet utilizable for padding replacement.
- RT injection component 436 may multiplex into the STLTP feed a plurality of different versions of NRT data within a predetermined time period.
- RT yield evaluation component 434 may analyze STLTP feed emissions to such a high level of quality that the NRT data may be sent not just opportunistically but also deterministically, at a quality-of-experience level that satisfies a criterion.
- RT injection component 436 may rely on historical analysis and other heuristics of RT yield evaluation component 434.
- carousel transmissions may be analyzed such that a subsequent iteration emits less padding overhead and/or with improved MODCOD characteristics.
- RT injection component 436 may incrementally extract, e.g., for each of the baseband packets, portions of the NRT data obtained by NRT extraction component 432.
- Each of these portions may have a size that is equal to an amount of excess capacity within each baseband packet.
- the excess capacity may be represented by a corresponding amount of padding when the disclosed approach is not implemented.
- RT injection component 436 may perform this incremental extraction by incrementing a pointer into the NRT data as each portion of the NRT content is transmitted out from probe 440. More particularly, STLTP extraction component 430 may decode each of the BBPs to determine injection locations. RT injection component 436 may obtain portions of the NRT data extracted from a content source, and then multiplex the portions into the STLTP feed at the respective, determined locations, including with proper formatting and protocol characteristics.
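The pointer-increment loop just described can be sketched compactly. This is a hedged illustration: the function name and the representation of per-BBP excess capacity as a plain list are assumptions for the sketch, not structures defined by the application.

```python
# Illustrative sketch of incremental extraction: consume the NRT object
# portion by portion, each portion sized to the excess capacity of the
# current BBP, incrementing a pointer into the NRT data as described above.

def inject_nrt(nrt_object: bytes, capacities: list) -> list:
    """Carve the NRT object into per-BBP portions, advancing a pointer."""
    pointer = 0
    portions = []
    for capacity in capacities:
        if pointer >= len(nrt_object):
            break  # object fully delivered
        portion = nrt_object[pointer:pointer + capacity]
        portions.append(portion)  # this portion replaces the BBP's padding
        pointer += len(portion)   # increment the pointer into the NRT data
    return portions

chunks = inject_nrt(b"x" * 10, [4, 4, 4])
assert [len(c) for c in chunks] == [4, 4, 2]
```

Each emitted portion would then be multiplexed into the STLTP feed at its determined location with proper formatting and protocol characteristics.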
- probe 440 may implement STLTP electrical (e.g., for microwave or satellite transmissions) or optical (e.g., for fiber transmissions, including two connectors: one for an ingress from the STLTP feed that includes the flag and the other for an egress back into the STLTP feed) characteristics.
- BTS 102 may adjust low level signaling or emit a service location table (SLT) that identifies this NRT data payload in transmission 108.
- device 405 may emit an initial portion of the data capacity with excess to provide the service level signaling to identify that there is an asynchronous layered coding (ALC) transmission that would then contain the data for opportunistic delivery.
- RT injection component 436 may implement, for opportunistic delivery, an underlying transport mechanism that uses, e.g., the FLUTE standard. Inside a FLUTE there may be a series of objects that will be delivered to an IP multicast receiver. To ensure a high confidence of delivery (e.g., including a full and accurate payload representation), FLUTE may interoperate, e.g., with a series of carousel object deliveries. For example, FLUTE may support a recovery model by providing carousel delivery in which the receiver then would keep a set of the objects that are being recovered with a map of the bytes that have been recovered.
- FLUTE may then (e.g., compliant with the A/331 standard) apply received data to those missing byte ranges to reconstitute the fully recovered object over a series of carousels.
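The carousel recovery model above can be sketched as a receiver keeping, per object, a map of the bytes recovered so far and reporting the byte ranges still missing for later carousel passes. Class and method names are assumptions for illustration, not part of FLUTE or A/331.

```python
# Hedged sketch of carousel object recovery with a per-byte recovery map.
class ObjectRecovery:
    def __init__(self, toi: int, length: int):
        self.toi = toi
        self.buffer = bytearray(length)
        self.recovered = [False] * length  # per-byte recovery map

    def apply_block(self, offset: int, data: bytes):
        """Write a received block into the object and mark its bytes."""
        for i, b in enumerate(data):
            self.buffer[offset + i] = b
            self.recovered[offset + i] = True

    def missing_ranges(self):
        """Half-open (start, end) byte ranges still needed from the carousel."""
        ranges, start = [], None
        for i, ok in enumerate(self.recovered + [True]):
            if not ok and start is None:
                start = i
            elif ok and start is not None:
                ranges.append((start, i))
                start = None
        return ranges

    def complete(self) -> bool:
        return all(self.recovered)

obj = ObjectRecovery(toi=1, length=10)
obj.apply_block(0, b"hello")   # first carousel pass
obj.apply_block(7, b"rld")     # partial block from a later pass
# bytes 5..6 remain missing and can be filled on the next carousel
```

Only the still-missing ranges need to be applied from subsequent carousels, which is what allows a high confidence of eventual full delivery.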
- BAT 104 may be configured well enough to provide a highest fidelity for object reception.
- the ALP transmission may be in the FLUTE; there may be a layered coding transport (LCT). Then, on top of the LCT, there may be individual blocks that are added into the IP multicast transmission.
- the LCT may comprise ALC.
- ALC then may have other components, e.g., a style descriptor which is the file transport information (FTI), additional layers of the FEC, and/or the FLUTE layer that contains what that object identifier is. Inside of it may be the offset of the object (e.g., the data may start at byte 128), and it also may contain the length of the data payload, which is a function of the opportunistic data padding from the STLTP.
- disclosed embodiments may inform the receiver device where in that object this block of data should be placed and the length of the data that this block may contain.
- the opportunistic insertion (e.g., with the SLS emission that occurs in a carousel model)
- at least 49 B of padding are obtained (e.g., due to a 40 B header for the IP/UDP and about an 8 B FLUTE header)
- disclosed embodiments may deliver an extra byte of NRT opportunistic data because there may be an indication in the FLUTE header in that emission where that 1 B should be placed and how long that 1 B is for the receiver to add that in.
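The offset/length mechanism above, which lets even a single opportunistic byte be placed correctly, can be sketched with a simple block header carrying the object identifier, byte offset, and payload length. The field layout here is hypothetical, chosen only to illustrate the idea; it is not the normative A/331 ALC/LCT wire format.

```python
import struct

# Hedged sketch: a block header with TOI, object byte offset, and payload
# length, so the receiver knows where each block (even 1 B) belongs.
def pack_block(toi: int, offset: int, payload: bytes) -> bytes:
    header = struct.pack("!III", toi, offset, len(payload))  # hypothetical layout
    return header + payload

def unpack_block(block: bytes):
    toi, offset, length = struct.unpack("!III", block[:12])
    return toi, offset, block[12:12 + length]

# A single opportunistic byte destined for offset 128 of object TOI 7:
wire = pack_block(7, 128, b"\x2a")
toi, offset, data = unpack_block(wire)
# the receiver writes `data` at `offset` within the partially recovered object
```

Because the header states both where the block goes and how long it is, the sender can size each block to whatever padding the STLTP happens to expose.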
- MODCOD control component 438 may cooperate with MODCOD unit 204, MODCOD being the modulation and coding characteristics (e.g., which may be quadrature phase shift keying (QPSK), quadrature amplitude modulation (QAM), or another suitable modulation scheme). For example, a signal in which two carriers are shifted in phase by 90 degrees may be modulated and summed, and the resultant output may comprise both amplitude and phase variations.
- constellations resulting from QAM modulation may range, by broadcaster choice, from QPSK to 4096-QAM. High spectral efficiencies may be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the channel.
- QPSK may, for example, comprise a digital modulated signal comprising a two bit (4 point, or quadrature) QAM constellation, which is usually used for low bit rate, highly robust transmission.
- QPSK may thus be the more reliable modulation format, and QAM may have different derivatives of 16, 256, 1024 non-uniform, and 4K non-uniform.
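The capacity side of the robustness/capacity trade-off above follows directly from the constellation size: each symbol carries log2(M) bits for an M-point constellation. A small sketch, treating QPSK as 4-point QAM:

```python
import math

# Bits carried per symbol for the constellations mentioned above.
def bits_per_symbol(constellation_points: int) -> int:
    return int(math.log2(constellation_points))

for m in (4, 16, 256, 1024, 4096):
    print(m, bits_per_symbol(m))
# QPSK carries 2 bits/symbol; 4096-QAM carries 12, trading robustness
# (SNR required at the receiver) for channel capacity.
```

This is why moving from QPSK toward 4096-QAM raises bandwidth but shrinks the set of receivers that can still demodulate the signal.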
- a problem may be, when changing from the more reliable QPSK to 4096 non-uniform QAM, that the bandwidth increases but the ability to receive decreases. In forward emission 108 using QPSK, a lower bandwidth may be met but with the highest reception potential for that modulation.
- the coding of RF emissions 108 may imply a level of FEC to ensure robust reception, but there may also be a loss of how much of the IQ vector may be properly inferred by the receiving device based on the SNR and thus the received power of BAT 104.
- the modulation necessarily reduces the bit rate or the channel capacity.
- Component 438 may adjust the MODCOD (e.g., to adjust the capacity of network 108), which may result in an adjustment of the number of ATSC 3.0 receivers able to properly receive the payload of emissions 108.
- the broadcast gateway of BTS 102 may provide small incremental channel capacity modifications by adjusting the FEC ratio.
- the channel capacity of the configured MODCOD may become fixed, affecting real-time delivery requirements of sources 202 and/or 206.
- FIG. 4E illustrates an example method 480 for injecting NRT data.
- Method 480 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 480 presented below are intended to be illustrative. Method 480 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 480 are illustrated in FIG. 4E and described below is not intended to be limiting.
- Method 480 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 480 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 480.
- a set of opportunistic NRT data may be obtained. Operation 482 may be performed using network 106, external resources 424, user interface device 418, and/or a processor component the same as or similar to NRT extraction component 432 (shown in FIG. 4D and described herein).
- a plurality of BBPs from the STLTP feed may be obtained. Operation 484 may be performed by a processor component the same as or similar to STLTP extraction component 430 (shown in FIG. 4D and described herein).
- an amount of excess capacity within each of the obtained BBPs may be determined. Operation 486 may be performed by a processor component the same as or similar to RT yield evaluation component 434 (shown in FIG. 4D and described herein).
- at operation 488 of method 480, portions of the NRT data each having a size equal to the respective determined amount may be incrementally extracted for each BBP. As an example, NRT extraction component 432 may incrementally extract, for each of the BBPs, portions of the NRT data, each of the portions having a size determined based on excess capacity. Operation 488 may be performed by a processor component the same as or similar to NRT extraction component 432 (shown in FIG. 4D and described herein).
- null data may be replaced by multiplexing the extracted portions into the STLTP feed. This padding data would otherwise be emitted by BTS 102. Operation 490 may be performed by a processor component the same as or similar to RT injection component 436 (shown in FIG. 4D and described herein). Alternatively, MODCOD control component 438 may improve the MODCOD when replacing the null data.
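The operations of method 480 can be sketched end to end: determine each BBP's excess capacity, extract a matching NRT portion, and use it in place of null padding. All names and the (payload, capacity) frame representation are assumptions for illustration.

```python
# Hedged sketch of method 480's loop (operations 486-490). Each BBP is
# modeled as (committed payload, total capacity); names are illustrative.
def inject_nrt(bbps, nrt_data: bytes):
    pointer = 0
    out = []
    for payload, capacity in bbps:
        excess = capacity - len(payload)              # operation 486
        portion = nrt_data[pointer:pointer + excess]  # operation 488
        pointer += len(portion)
        padding = b"\x00" * (excess - len(portion))   # null only if NRT runs out
        out.append(payload + portion + padding)       # operation 490
    return out, nrt_data[pointer:]                    # frames, unsent NRT data

frames, remaining = inject_nrt(
    [(b"route", 10), (b"av", 6)], b"NRTDATASTREAM")
```

Padding that BTS 102 would otherwise emit as null bytes is replaced by the next slice of the NRT payload; whatever NRT data does not fit waits for later BBPs.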
- a highly complex video essence emission may require more transport utilization, which may ultimately be mapped in the ALP packet.
- some embodiments may require a reencoding of the video to meet the target objective. This is what a statistical multiplex (stat-mux) may do, e.g., by compressing the GOP.
- a stat-mux may thus overcome such a bit rate overshoot and require the video encoder to go back and recompute that GOP, to meet the target requirements of the multiplexer.
- the multiplexer may drop packets because there may be a time component to them. For example, if a packet cannot be shifted in the time window that is required for the receiver, the receiver device may underflow on this packet, resulting in a discard; but an upstream scheduler indiscriminately dropping data (e.g., which is part of an I-frame rendition) may be very negatively impactful for a downstream video decoder because it may not have the ability to determine what the reference frame is.
- the GOP size may be based on a scene. For example, if it has low motion, then usually it will have a much lower utilization of the channel capacity for that GOP. If the scene has high motion, then there may be compression artifacts, but it may meet or exceed the channel configuration capacity. This is thus a function of time of the video encoding input where the encoder may try its best, but it may not be guaranteed that it will be exactly n bits per second or n bits per frame. There may thus always be a degree of over or under shoot somewhere in the transport of the media essence into the ALP and BBP.
- Adjustment of the MODCOD may affect both the capacity and ATSC receiver reach of emissions 108.
- MODCOD unit 204 may thus ultimately be controlled, to provide incremental or quantized additional capacity, or to adjust incremental or quantitative reach of that forward transmission.
- Increasing channel capacity may reduce the universe of receiving devices because the inputs may be uncontrollable, and reducing the channel capacity may increase the universe reach because the inputs to control the transmission characteristics may have a large capital cost associated with them.
- the physical layer allows BTS 102 to choose from among a wide variety of physical layer parameters for personalized broadcaster performance that may satisfy many different needs. For example, some configurations may cause a high capacity but low robustness, and others may cause a low capacity but high robustness. Some selections may support SFNs, multiple input multiple output (MIMO) channel operation, channel bonding, robustness (e.g., guard interval lengths, FEC code lengths, code rates, etc.), and other suitable configuration decisions. With such flexibility, a used signaling structure may allow the physical layer to change technologies and evolve over time, while maintaining support of other ATSC systems operating in different modes. BTS 102 may provide flexibility to choose among many different operating modes depending on desired robustness/efficiency tradeoffs.
- Emissions 108 may comprise OFDM modulation with a suite of LDPC FEC codes, of which there may be 2 code lengths and 12 code rates defined. There may be three basic modes of multiplexing: time, layered, and frequency, along with three subframe types: single input single output (SISO), multiple input single output (MISO), and MIMO.
- the combined ALC NRT emission rate (e.g., the sum of maximum NRT payload bitrate) may never overflow the PLP channel capacity after all ROUTE emissions are framed.
- a committed ROUTE payload length calculation may be the total summation of all non-ROUTE emissions in the PLP for the current baseband frame.
- the excess PLP baseband frame payload size may be used to assign NRT data transmissions across one or more NRT sessions by computing the remaining FEC frame size (see A/322 Kpayload (Bose, Ray-Chaudhuri and Hocquenghem (BCH)) for the applicable FEC code rate), subtracting the committed ROUTE payload length, and subtracting either: (i) 1, resulting in an A/322 baseband frame with a one byte header of base field mode equals 0 and least significant byte (LSB) pointer equals 0, when no preceding fragmented ALC frame is present; or (ii) the preceding fragmented ALC packet length used at the start of the baseband frame, plus 1 byte (if the preceding ALC fragment is less than or equal to 127 bytes) or 2 bytes (if the preceding ALC fragment is greater than or equal to 128 bytes) for proper pointer length of the start of the first ALP frame, when a preceding fragmented ALC frame is present.
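The excess-capacity computation above can be sketched as a small function. The formula follows the text's two cases; the concrete Kpayload(BCH) and lengths used in the example call are made-up illustrative values, not normative A/322 figures.

```python
# Hedged sketch of the excess PLP baseband frame payload computation.
def excess_nrt_capacity(kpayload_bch: int,
                        committed_route_len: int,
                        preceding_alc_fragment_len: int = 0) -> int:
    remaining = kpayload_bch - committed_route_len
    if preceding_alc_fragment_len == 0:
        # case (i): one-byte header, base field mode = 0, LSB pointer = 0
        return remaining - 1
    # case (ii): preceding ALC fragment plus a 1- or 2-byte pointer to the
    # start of the first ALP frame
    pointer_len = 1 if preceding_alc_fragment_len <= 127 else 2
    return remaining - preceding_alc_fragment_len - pointer_len

print(excess_nrt_capacity(5000, 3200))        # no preceding fragment
print(excess_nrt_capacity(5000, 3200, 200))   # fragment needs a 2-byte pointer
```

The result is the number of bytes available for NRT allocation under conditions A through D below.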
- the NRT payload(s) may, for example, be allocated along the following conditions: condition A, condition B, condition C, and/or condition D.
- Condition A may be allocated if the excess capacity is less than the ALP ALC header length (e.g., 7 + 20 + 8 + 16 + sum(HET/HEL) for the ALP header, IP header, UDP header, ALC block header, and additional LCT extension headers), the baseband frame being only padding (as no un-fragmented ALC frame with ALP header may be injected in the remaining payload).
- Condition B may be allocated if the excess capacity is less than 1500 bytes but more than the above lower bound condition, a single ALC packet being based on a weighted round-robin selection strategy of the set of ALC transmission object identifiers (TOIs), and comprising an ALC data unit (DU) length to the final byte available in Kpayload(BCH) being emitted for the current baseband frame to ensure all bytes are consumed for the baseband frame length of Kpayload(BCH), resulting in a baseband frame of 0, which may indicate A/322 base field mode 0 and LSB pointer 0.
- Condition C may be allocated if the excess capacity is greater than 1500 bytes but less than 1500 + a header length of the ALP ALC + 1; the condition B ALC frame may then be emitted, and an A/322 baseband frame with a one byte header of base field mode equals 0 and LSB pointer to the start of the first ALP frame may complete the baseband frame.
- the excess baseband frame payload capacity may not be long enough for at least one byte of the next ALC DU; not fragmenting, e.g., avoids increasing the risk of consuming an allocation in the next baseband frame that may be needed for ROUTE delivery.
- Condition D may be allocated when any excess baseband frame capacity is allocated across a weighted round-robin strategy over the ALC TOIs, wherein condition C is repeated as needed to ensure full utilization of excess frame capacity without causing the ALP packet to be fragmented over to the next baseband frame, avoiding increasing the risk of consuming an allocation in the next baseband frame that may be needed for ROUTE delivery.
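The four allocation conditions above can be summarized as a small classifier over the excess capacity. The 1500-byte boundary and the header-length lower bound follow the text; the concrete header total in the example (no LCT extension headers) is an assumption for illustration.

```python
# Hedged sketch: which allocation condition (A-D from the text) applies
# for a given excess baseband frame capacity.
def allocation_condition(excess: int, alp_alc_header_len: int) -> str:
    if excess < alp_alc_header_len:
        return "A"  # padding only: no un-fragmented ALC frame fits
    if excess < 1500:
        return "B"  # single ALC packet sized to the final Kpayload(BCH) byte
    if excess < 1500 + alp_alc_header_len + 1:
        return "C"  # condition-B frame plus a base header completing the frame
    return "D"      # weighted round-robin over ALC TOIs, repeating condition C

hdr = 7 + 20 + 8 + 16   # ALP + IP + UDP + ALC block header, no LCT extensions
print(allocation_condition(40, hdr))    # A
print(allocation_condition(800, hdr))   # B
print(allocation_condition(1520, hdr))  # C
print(allocation_condition(4000, hdr))  # D
```

In every branch the frame is filled without fragmenting an ALP packet into the next baseband frame, protecting capacity that ROUTE delivery may need.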
- BTS 102 may support a mapping of ANSI/SCTE-35 signaling into the MMT signaling information, SCTE-35 being used in the cable ecosystem and MSL world to provide triggering for ad break information across an IP network.
- embodiments may use the SCTE-35 markers as a signaling mechanism in the transport layer (e.g., MMT) to identify when an ad break occurs and when an ad break terminates.
- an ad break may be signaled using ROUTE/DASH, e.g., using an emsg (event message) box that contains a SCTE-35 message or some other signaling.
- Using an emsg with ROUTE/DASH moves what is, in SCTE-35, a transport-level signal into the application layer, which is in theory a layer violation.
- a placement opportunity inventory system may be used to define which portions of that ad break, e.g., the inventory, are preemptable or replaceable for a targeted digital ad insertion (DAI) impression opportunity.
- An ad decisioning server may obtain from a client a video ad serving template (VAST) request and then supply back a VAST for ad decisioning based on demographics and other characteristics of the requestor to compute what the highest yield is of the ad impression.
- VAST is a specification released by the interactive advertising bureau (IAB), which sets a standard for communication requirements between ad servers and video players, including a data structure declared using XML.
- the placement of the ad break may be a function of the decisioning, but to initiate the decisioning process BTS 102 may be required to know where the ad breaks are and where the initial switch from linear programming to the linear insertion would occur, going thus back to the SCTE-35 model.
- Some implementations may comprise five or more different DAI use cases (e.g., compliant with the A/337 (application event delivery) and A/334 (interactive content) standards), along with opportunistic data delivery for positioning of linear creatives across an ATSC 3.0 network without impacting the ATSC 3.0 encoding, packaging, and/or scheduling system touchpoints.
- device 405 or another component of BTS 102 may perform an analysis of encoder output to determine correct payload placement. Some embodiments may perform multiplexing in the MPD handoff for ROUTE, which may limit an ability to emit an MMT signaling message to represent a transport-specific signal in SCTE-35 without the requirement of an application aware signaling message.
- MMT protocol (MMTP) with MPU re-assembly may be a function of the decoder, not the encoder to scheduler interface.
- disclosed embodiments may provide a signaling format with a common format and adoption of multiple fulfillment use cases aligning with each distribution mechanism, to reach the right audience (e.g., with the right content on any device).
- BTS 102 may rely on in-band signaling and ISOBMFF box definitions, MMT signaling information, and alignment into a baseline A/344 runtime application such that a common message identifier is adopted. That is, one message may, for example, be used to support multiple technology, connectivity, and interactivity use cases.
- Some implementations may have a DASH-specific mechanism based on XML (e.g., XLink).
- This format for ad break triggering may support segmented content transmission, e.g., VOD content of DASH.
- the ad break signaling indicator may be moved from the traditional transport layer into a more complex and constrained application layer.
- live linear programming delivered via MMT and a corresponding A/344 runtime app may switch video playback to a locally-decisioned and pre-positioned creative from the SCTE-35 trigger. That is, multiple descriptors, along with multiple triggers in the same ad break (using tiers) may be used to trigger selections of proper ad selected copy and ad replacement opportunities.
- the SCTE-35 trigger may be multiplexed early as a pre-roll using a multiplex and a PTS execution time, as needed, and may contain a URL resource to a VAST ad decision server for decisioning and creative download.
- the multiplex time to prepare for the ad break may be long enough for ad decisioning and download/streaming of the replaced content.
- a broadcast- quality experience for addressable digital ad insertion may be provided without negatively impacting broadcast latency. Feedback information to the ad network over the Internet backchannel may give broadcasters insight into any ads that were not run because of network buffering or delays.
- impression beacons for non-replaced (e.g., rolled-thru) network or barter insertions may be reported by Internet-connected devices with proper metadata provided via the SCTE-35 message.
- the ad may begin and finish with ad beacons (e.g., containing ad-ID), and a subset of audience impression data for traditional eGRP/share-of-voice linear buyers for demographic data enrichment may be developed for media buyers.
- the IAB standard supports companion ads, overlays, and snipes/bugs/L-bar/J-bar with linear content.
- VAST resource with a non-linear creative overlay (no video emission)
- a low overhead solution may be delivered to provide enhanced graphic or textual information along with a traditional linear creative spot placement.
- Some embodiments of BTS 102 may support fully addressable VAST digital ad insertion with a pre-positioned creative copy in partnership with an ad network. In this model, only the ad decisioning call and response may be required to be fulfilled by the Internet backchannel, and any other linear content would be pre-positioned via NRT/carousel.
- devices may use a non-Internet delivered creative, reducing the time and latency for effective digital ad insertion at scale.
- Some embodiments of BTS 102 may reduce per-impression transmission cost of DAI, increasing value with increased insight for advertisers, and enhancing the advertising experience by delivering the right content, on any device, to the right audience.
- With the SCTE-35 message available for, e.g., linear MMT, disclosed embodiments may map the signaling information message, which is on the transport level, into an application layer event, and may maintain the original spirit and integrity of the transport layer signal of the SCTE-35 message to define when an ad break starts.
- an application at an ATSC 3.0 receiver may be like an HTML5 web app, so the media essence may be playing from the broadcast via a receiver media player (RMP), which renders the MMT or ROUTE/DASH emission on the receiver device.
- the application may then process that SCTE-35 message.
- This SCTE-35 message may, for example, be obtained at the exact frame that the ad break starts, such that the decisioning process has to occur instantaneously, because otherwise it would miss that first opportunity window for an impression.
- there may also be provisos in the SCTE-35 message so that the message has a PTS execute time, which tells the receiver of the SCTE-35 message when the ad break will start; this may occur by a pre-roll of the SCTE-35 message (e.g., three or four seconds beforehand), to allow BAT 104 (e.g., a splicer) to prepare for an ad break.
- This example preparation process may allow for an Internet-connected device to initiate a VAST ad decisioning request, which would then execute an HTTP call over such back channel for a connected device.
- This back channel request may be the VAST payload request to an ad decisioning server, and the ad decisioning server may then correlate a number of different variables, e.g., what channel the receiver is watching, potentially if it has programmatic metadata of what the programming is, and/or what the ad break number is (e.g., for news programming there is a sequence of blocks), the first ad break having a different value to advertisers.
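The pre-roll VAST request flow above can be sketched as the receiver assembling a decisioning call from the SCTE-35 trigger plus local context. The URL and parameter names here are hypothetical, for illustration only; VAST itself does not mandate a query-string layout.

```python
# Hedged sketch: building an ad decisioning request from an SCTE-35 trigger
# and local context. All names/URL are illustrative assumptions.
from urllib.parse import urlencode

def build_vast_request(trigger: dict, context: dict) -> str:
    params = {
        "channel": context["channel"],        # what the receiver is watching
        "break_no": trigger["break_number"],  # e.g., a first break may be worth more
        "duration": trigger["break_duration"],
        "program": context.get("program_metadata", ""),
    }
    return "https://ads.example.com/vast?" + urlencode(params)

url = build_vast_request(
    {"break_number": 1, "break_duration": 120},
    {"channel": "WJLA-7", "program_metadata": "evening-news"},
)
```

The pre-roll window gives the receiver time to issue this call and receive the VAST response before the splice executes.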
- some embodiments may know from the inventory how that ad break is segmented out (e.g., being four 30 second spots for a two-minute break or eight 15 second spots, or some combination). Subsequent decisioning may be based on what the traffic manager has made available for inventory and what the sellers are able to sell into a market.
- a problem is that, in the digital ecosystem, especially with tools like VFP or Spot-X, those sources of programmatic data (the context of what a viewer is watching, which ad break they are in, what the content is, what the audience demographics are, what the content genre is, and what the contextual relevancy is) may not be components of the digital linear ad ecosystem for video inventory.
- some embodiments may, for a digital ad insertion, make better informed decisions about the context of the content that the user is watching for contextual relevancy, especially in live systems.
- Metadata may be extracted and then used in a subsequent ad decisioning process, but for live content (e.g., where the ad sellers have some context of what programming to buy into, and potentially in priority placements), the ad sellers may know that a certain newscast (e.g., at 5 PM of a B-block) is going to open up with a healthcare segment. It may thus be beneficial, when that block is over, to provide a complementary advertisement opportunity to a local seller that would be for healthcare related services to dovetail on the messaging experience.
- Some disclosed embodiments may thus need to facilitate additional sources of contextual metadata for the digital ad decisioning process, which starts when the application (e.g., compliant with the A/344 standard) receives an application event delivery (e.g., compliant with the A/337 standard) as a pre-roll SCTE-35 message.
- This application may, for example, provide additional metadata in the ad decisioning calls to the ad decisioning server to provide context as to what content the viewer is watching and usually to glue it to the entertainment identifier registry (EIDR), which is programming information, or to other broadcaster specific metadata.
- That broadcaster specific metadata in this case may be injected into that SCTE-35 payload that originates from the broadcaster. As such, information may be added via a series of descriptors that are allowed to be added to the SCTE-35 message.
- the only object that may be in the SCTE-35 message is a flag called out-of-network, the out-of-network value being either true or false.
- the application may then be aware that it has switched from linear programming into an ad break, and from there, the interactive content application may make a decision of how it should handle that message.
- There may be a PTS execute time, which may tell the interactive application how much time it has to make that decision. In the out-of-network-equals-true model, the time may be zero; it may have to act essentially from the next video frame. If the interactive content application is going to insert a linear insertion for content replacement, it may have to do it with a content or resource that is either local to the device or in a tertiary ATSC 3.0 media essence flow.
- the forward transmission may have, for live linear programming, one MMT emission; in the ad break there may be a plurality of linear insertions.
- If the primary MMT emission is, for instance, KOMO 4's newscast or WJLA's channel 7 newscast, it may be MMT emission 7, and in the ad break the broadcaster may provide a plurality of additional media flows that contain what the targeted ad replacement payload should be as a live linear story.
- the application in one use case could switch from the WJLA channel 7 to behind-the-scenes content when the splice occurs (e.g., when a SCTE-35 splice message is present with out of network equals true), making a decision from context information that is local to the device, e.g., a user's zip code or some other demographic attribute that the broadcaster wants to reach.
- By providing additional segmentation descriptors in the SCTE-35 message, that message may inform the ATSC 3.0 receiver device in a non-connected (e.g., non-Internet connected) model of what channel it should switch to for the linear replacement.
- This may thus be a model that is a disconnected device that may support what would be traditional zonal, or with a possible demographic targeting, where a broadcaster provides a plurality of alternative renditions of that linear ad insertion break.
- the whole linear ad break may be provided by a second, third, or fourth stream in the decisioning characteristics provided in the content of the SCTE-35 splice command by additional descriptors, in which the application may then apply those descriptors to see if there is a matching set of characteristics. If those characteristics match, then the device may make a switch from the receiver media player, which is playing the base, traditional linear insertion provided by the broadcaster, to a tertiary alternative content replacement, or secondary content, for that audience viewer.
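The disconnected, descriptor-matching decisioning above can be sketched as matching descriptor characteristics against attributes local to the device to select an alternative rendition of the ad break. Descriptor fields and names here are illustrative assumptions, not the SCTE-35 segmentation descriptor syntax.

```python
# Hedged sketch: apply segmentation descriptors from the SCTE-35 splice
# command against local device context to pick an alternative flow.
def select_rendition(descriptors: list, local_context: dict):
    for d in descriptors:
        # switch only if every characteristic in the descriptor matches
        if all(local_context.get(k) == v for k, v in d["match"].items()):
            return d["channel"]   # switch the RMP to this alternative flow
    return None                   # keep the base linear insertion

descriptors = [
    {"match": {"zip": "98101"}, "channel": "alt-seattle"},
    {"match": {"zip": "20001"}, "channel": "alt-dc"},
]
print(select_rendition(descriptors, {"zip": "20001"}))  # matches alt-dc
print(select_rendition(descriptors, {"zip": "99999"}))  # no match: base break
```

No Internet back channel is involved; the alternative renditions are themselves broadcast flows, so even a disconnected device can perform zonal or demographic targeting.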
- This may be a mechanism that is incremental in its capabilities to provide an alternative linear insertion, but it may be something that would be most likely for the entirety of the SCTE-35 out of network equals true ad break.
- A SCTE-35 message with out of network equals false may then signal the receiving device and application to switch back to the original primary content emission.
- This may thus be one model where the device may be disconnected but still has the ability to run the interactive content application because the interactive content application may be delivered as an NRT payload in the ATSC 3.0 forward transmission, and all of these pieces may connect together.
- One such additional model may be that the SCTE-35 message is delivered with the PTS execute in a pre-roll window.
- the device may make a call out to an ad decisioning server, which with supplemental data from the SCTE-35 trigger may provide contextually relevant metadata for programming information for the ad decisioning server to return back one or more linear creatives for that ad in ad break opportunities to fill one or more impression opportunities.
- It may selectively determine the VAST response using a video ad placement map (e.g., video multiple ads playlist (VMAP)), or there may be another mechanism in VAST 3.0 or VAST 4.0 (e.g., ad podding).
- Ad podding may signify a definition, for the receiver, of a plurality of ads that may be placed in this impression avail window.
- the problem with that is that, in traditional digital distribution, the VAST ad decisioning request and response call flow may have some implicit latency in it. It may have to make a network request through the IP back channel. There may be some time that the ad decisioning server will require to compute what, out of the universe of demand, will meet the available supply and any other categorical restrictions, competitive exclusions, frequency capping, or whatever else, and fulfillment to any third party demand sources to maximize the yield opportunity of that inventory. It may take a degree of latency to fulfill that.
- a challenge in implementing the A/344 standard for interactive content may be to fulfill a dynamic ad insertion.
- Another challenge may be, for that ad decisioning response, inclusion of resources for these linear creatives that need to be played out. This may be commonly why, when watching ad-supported content, there may usually be a three to four second buffering window where the ad decisioning is occurring, and then the receiving device may be buffering the ad creative content. This model may work well for asynchronous content or VOD consumption experiences, but it may be problematic for live linear experiences because once that SCTE-35 PTS execute or out-of-network-equals-true flag is perceived by the device, the next frame of video may be the linear insertion.
- Embodiments that pre-roll the SCTE-35 trigger may not be able to perform that insertion because it may then come back late on the splice, resulting in the underlying linear insertion. However, this may come in late to the creative play out, which may then potentially return late to the programming.
- That latency window may be problematic because the receiver may have to make a series of network calls: one for ad decisioning, which has one set of latency characteristics, and then a second call, which may be for the linear creatives, which is a different and usually higher degree of latency for delivery of those creative resources.
- the ad decisioning payload may be usually a few hundred KBs.
- the ad creatives are usually on the order of MBs of data, and so that time has to map into the window of time for the pre-roll message before the splice execute occurs, and if an embodiment does not make it, the embodiment cannot fulfill that impression for the receiver.
- With NRT and opportunistic data delivery, there may be a model in which those functional components are combined to solve that second latency attribute.
- the ad decisioning characteristic may be solved locally by disclosed embodiments, but there may be a hybrid model which allows for the IP back channel to execute the ad decisioning request in a hybrid model with NRT or opportunistic data delivery or even a provisioned network NRT delivery, where an ad network could pre-position its linear creatives throughout the NRT emission for caching on the local device. Fulfillment of the linear ad insertion may thus be only a function of the ad decisioning latency, rather than a function of the ad decisioning latency plus the ad creative delivery to the receiving device.
- This may require an integration with a first party ad network where those linear creatives are distributed out through the ATSC 3.0 network in an NRT emission, but that NRT emission, when cached, may then provide the creative resources that would be part of the broadcaster’s flight or what their potential universe of advertisements are such that, when the splice then occurs, the ad decisioning request may be made.
- the ad decisioning response may include a series of linear creatives with the first linear creative being a representation, a reference to the locally cached NRT emission of that linear creative, thus allowing for the device to, in the interactive content model, use the application media player (AMP) to playback that preset locally cached linear creative, fulfill the ad impression request, and/or optionally provide tracking and metadata back to the ad decisioning server.
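The resolution of a decisioning response against a locally cached NRT emission might be sketched as follows; the field names (`id`, `url`) and cache layout are assumptions for illustration only:

```python
def resolve_creatives(ad_response, nrt_cache):
    """For each creative in the decisioning response, prefer the locally
    cached NRT copy; fall back to the network URL otherwise."""
    resolved = []
    for creative in ad_response["creatives"]:
        cached = nrt_cache.get(creative["id"])
        resolved.append({
            "id": creative["id"],
            "source": cached if cached is not None else creative["url"],
            "local": cached is not None,
        })
    return resolved
```

A creative resolved to a local source can be handed directly to the application media player, so fulfillment depends only on decisioning latency, not on creative delivery.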
- AMP application media player
- This may thus provide a model for a preemptible linear insertion that may map into a hybrid ATSC 3.0 data delivery model with a true one-to-one digital ad decisioning process and ecosystem that may map into a live linear transport level set of temporal requirements to fulfill that opportunistic ad impression opportunity.
- Known approaches may include addressable content replacement (ACR), which fingerprints the broadcast emission and then adds a 10-second delay into the broadcast transmission.
- ACR addressable content replacement
- This 10-second delay may allow a long enough temporal window for a system at the broadcaster’s side, having identified from the traffic systems that the next ad break is coming, to send the fingerprint of that video frame ahead to the device, so that the device may make the ad decisioning request and response, download the creative, and, at the proper time, use the fingerprint of that video image to determine where the splice should occur.
- this may add a degree of latency and a degree of system complexity that pushes the receiver into not just processing messages but moving into, e.g., a layer violation of having to process the media essence (e.g., to execute that impression opportunity).
- Such a system may require a lot of computation power.
- embodiments may merely process a series of data payloads that provide the ad break splice information and any additional segmentation descriptors or broadcaster-originated metadata needed for the ad decisioning request. That ad decisioning request may be executed locally on the device, to match a set of demographics or profiles that are stored locally on the device and execute the content replacement locally.
- the linear creatives may be served from the IP back channel network or a hybrid model, which allows for the ad decisioning request to occur over the network and the fulfillment of that ad placement opportunity to locally cache NRT linear creatives for content replacement.
- the interactive content application may not fulfill that individual ad impression but may fulfill other ad impressions that were returned back from the ad decisioning response in an ad hoc model.
- the disclosed approach may have to be able to solve this problem with multiple different capabilities, not just for the immediate contextual relevancy of the ad placement opportunity, but also for different systems of record that would have the ad splice or the ad demand sources to match up with the corresponding supply opportunity for an ad insertion placement.
- ATSC 3.0 BATs or receivers may acquire missing data objects to compensate for failures in reception of data transmitted in broadcast streams via collaboration with other ATSC 3.0 BATs or other receiving entities. Such collaboration may be directed by a transmitter (BTS), signaling the re-emission of the missing data, where re-emission may be carried out via communications on a dedicated return channel (DRC) or any other network connection.
- BTS transmitter
- DRC dedicated return channel
- ATSC 3.0 standard A/323 provides for a DRC through which a receiver may be able to communicate back to a transmitter or forward to other receivers.
- the DRC may enable technologies that rely on interactivity among receivers and between receivers and transmitters.
- DRC-enabled receivers may collaborate with each other through opportunistic communications when they are in proximity to each other.
- the A/323 standard, in addition to a downlink broadcast channel, provides for a DRC that operates in a dedicated frequency, utilizing a frequency division duplexing modulation mode.
- the system architectures of a DRC-enabled transmitter (e.g., a BTS) and a DRC-enabled receiver (e.g., a BAT) are shown in FIGs. 5A-5B, respectively.
- BTS system 102 may transmit downlink data (broadcast service data) in a first frequency, f0, and may receive uplink data, via the DRC, in a second frequency, f1.
- BAT system 104 may receive the downlink data in frequency f0 and may transmit the uplink data in frequency f1.
- the broadcast gateway (ATSC 3.0 downlink gateway of FIG. 5A) may encapsulate ALP packets into BBPs and may then send them to the transmitter to be buffered in a PLP associated with a certain IP port connection, while the DRC uplink data may be received (at the DRC uplink receiver of FIG. 5A).
- the BAT system may process a received PLP (at the PLP processing unit of FIG. 5B) to extract from the broadcast service data DRC-related controls, such as synchronization and signaling data. These controls are sent to the DRC uplink gateway to regulate the transmission of uplink data (e.g., re-emissions) out of the BAT.
- a grid of DRC-enabled receivers may operate in a collaborative and opportunistic manner to assist each other in recovery and relay of data.
- the transmission range of downlink and uplink communications between BTS 102 and BAT 104 may be significantly reduced when a line of sight (LOS) is obstructed by urban structures.
- LOS line of sight
- a BAT lacking an LOS to a BTS may transmit data to another BAT having an LOS, for the latter to relay that data to the BTS.
- a BTS lacking an LOS to a BAT may transmit data to that BAT via another BAT with which there is an LOS.
- transmitters and receivers may be equipped with a real time localization system (RTLS) that may allow measurements of relative locations or proximity among the transmitters and receivers on the grid.
- Emissions 108 may comprise a plurality of content portions determined based on a peer-to-peer file sharing protocol that is decentralized.
- a transmitter is broadcasting such a service (e.g., TV programs, NRT data, or other data)
- a first receiver in the grid may request a second receiver in its proximity for missing packets.
- the first receiver may request the second receiver in its proximity to send a message on its behalf to the transmitter, requesting a repeat of transmission to the first receiver.
- Such a procedure may be useful when the first receiver lacks the power to communicate directly with the transmitter.
- a first receiver that is not DRC-enabled may communicate and may transmit data to a second receiver that is DRC-enabled, using any available local communication links.
- the second DRC-enabled receiver may then send the received data to the transmitter on behalf of the first receiver.
- BTS 102 may, for example, send out, in its broadcast stream’s PLPs, signals directing the re-emission of certain data objects of a service component.
- a BTS may request BATs within range to re-emit critical data objects that may require transmission in high fidelity.
- these BATs may re-emit such critical data objects to other BATs in their locality that may not properly receive these data objects.
- a BTS may identify specific data objects (e.g., audio segments or video key frames) that may require transmission in high resiliency, as opposed to other video segments where a less reliable transmission may yield unnoticeable degradation in perceived quality.
- a BTS may use modulation modes that allow for a robust reception only by a certain type of BATs (e.g., stationary or mobile device) and/or only by BATs at a certain locality (e.g., outdoor or indoor).
- BATs within the robust reception range, in response to a re-emission signal from the BTS, may re-emit the required data objects to other BATs.
- the other BATs may listen for re-emissions and may be able to acquire and recover damaged or missing data objects from the re-emitted data, instead of having to acquire such data directly from the BTS.
- BTS 102 may signal to listening BATs 104 about the opportunity of receiving missing data from BATs within its robust reception range and may send signals that facilitate synchronization of the listening and the re-emission among BATs. Furthermore, a BTS may initiate a signaling in response to a message received from a BAT that is in need of data recovery. In another aspect, a BTS may prioritize the re-emission of data objects, for example, with the effect that data objects with higher priority will be re-emitted using transmission parameters that allow for a more reliable transmission.
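Priority-driven selection of more reliable transmission parameters might look like the following sketch; the MODCOD ladder shown is a hypothetical ordering for illustration, not a table from the standard:

```python
# Hypothetical MODCOD ladder, ordered from most robust (lowest spectral
# efficiency) to least robust; an illustrative assumption only.
MODCODS = [("QPSK", "2/15"), ("QPSK", "6/15"),
           ("16QAM", "8/15"), ("64QAM", "10/15")]

def modcod_for_priority(priority: int):
    """Map a re-emission priority (0 = highest) to transmission parameters,
    so higher-priority data objects get the more reliable MODCOD."""
    return MODCODS[min(priority, len(MODCODS) - 1)]
```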
- BTS 102 may coordinate with a plurality of BATs 104 for one or more of such BATs to transmit recovery data to its peers, in a spatial proximity and in a licensed portion of a spectrum otherwise utilized by the BTS.
- BATs 104 may be configured by a scheduler of the BTS 102 to not interfere with or prevent licensees from utilizing their spectrum.
- BTS 102 may open up portions of its licensed spectrum for BATs 104 to then re-emit back into it, e.g., in a scheduled window using a software-defined radio chip, without out-of-band ad hoc networking or re-encapsulation of the IP multicast emission.
- OFDM is a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation method that copes well with severe channel conditions, including attenuation, interference, and multipath fading.
- FDM frequency-division multiplexing
- a large number of closely spaced orthogonal sub-carrier signals are used to carry data on several parallel data streams or channels.
- Each sub-carrier is modulated with a conventional modulation scheme (such as QAM or PSK) at a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation schemes in the same bandwidth.
- Disclosed embodiments may use OFDM to facilitate SFNs, e.g., where several adjacent transmitters send the same signal simultaneously at the same frequency for constructive combination of signals from multiple, distant transmitters.
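The parallel sub-carrier arithmetic described above can be illustrated with a small calculation; the parameter values in the usage note are arbitrary examples:

```python
def ofdm_data_rate(n_carriers: int, bits_per_symbol: int,
                   code_rate: float, t_useful_s: float, t_guard_s: float) -> float:
    """Aggregate OFDM payload rate: each closely spaced orthogonal
    sub-carrier runs at a low symbol rate, but in parallel the total matches
    a single-carrier scheme occupying the same bandwidth."""
    symbol_rate = 1.0 / (t_useful_s + t_guard_s)  # symbols/s per sub-carrier
    return n_carriers * bits_per_symbol * code_rate * symbol_rate
```

For example, 1000 sub-carriers carrying QPSK (2 bits/symbol) at code rate 1/2, with a 1 ms useful interval and a 0.25 ms guard interval, yield roughly 800 kbit/s in aggregate.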
- BATs 104 may be aware of recovery data (e.g., in the middleware stack of the service discovery and signaling) to determine whether or not they should actually facilitate processing of that data. It may be an opportunistic determination on the BAT as to whether it needs a recovery opportunity, or whether it may provide a recovery opportunity.
- a forward 6 MHz band may be shrunk down to 5 or even 4 MHz, as a function of the scheduler.
- This RF transmission bandwidth may be one adjustable parameter.
- Other adjustable characteristics may include the MODCOD, capacity, and durability. For example, if a receiving device has a high enough SNR resolution, the receiving device may potentially receive data processed via layer division multiplexing (LDM).
- LDM layer division multiplexing
- LDM may be a modulation on top of modulation, e.g., a superposition capability of the constellation which allows a superposition of one reference emission (e.g., the forward ATSC 3.0 modulation at BTS 102) with a tertiary constellation to provide supplemental data if there is enough SNR between a more robust core layer and a less robust enhanced layer. This may be performed when in proximity to the receiver or when there is a higher forward transmission power to the receiver.
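The superposition of a core and an enhanced layer at an injection level might be sketched as below; the injection-level convention (dB below the core layer) and the power normalization are assumptions for illustration:

```python
import math

def ldm_superpose(core, enhanced, injection_db):
    """Superpose a less robust enhanced layer under a more robust core layer
    at a given injection level (dB below the core), normalizing so the
    combined emission keeps unit power for unit-power inputs."""
    g = 10.0 ** (-injection_db / 20.0)      # enhanced-layer amplitude gain
    norm = 1.0 / math.sqrt(1.0 + g * g)     # power-normalization factor
    return [norm * (c + g * e) for c, e in zip(core, enhanced)]
```

A receiver with enough SNR can decode the core layer, subtract it, and then decode the weaker enhanced layer underneath.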
- a spatially located transmitter may, for example, modulate on top of or superimpose ATSC 3.0 features in a forward modulation and not use any additional channel capacity or bandwidth of the forward spectrum emission by being managed by the BTS.
- multiple physical data streams e.g., with different power levels, channel coding, and modulation schemes
- Example embodiments using LDM may be practical because the COFDM used may include transmission carriers that support transmit (TX) carrier offset in an SFN of multiple transmitters. That is, the TX carrier offset may shift the emission of multiple SFN transmitters on the microsecond level to avoid any artifact of echo in the SFN network, because multiple transmitters will be in the same RF spectrum block. These transmitters may work collaboratively in the transmission, and the TX carrier offset may move the bootstrap earlier and change the frequency offset of the carrier emission.
- In a topology and deployment of networks 108-1 through 108-n, one network may, for example, have the highest SNR for a receiving device, and that network will most likely be the one that the device picks up.
- the TX carrier offset may allow a shifting as to where those carriers are used inside of that channel bandwidth. Such embodiments may thus transmit in the same forward spectrum (e.g., channel), without utilizing or impacting the ATSC 3.0 forward transmission channel capacity. That is, some implementations may be able to shift where those carriers are, for the BAT retransmission.
- an opportunity to use LDM may be identified, e.g., with respect to implementation of a set of collaborating APIs 244 or for a configuration in which LDM is dynamically added for certain data frames among a plurality.
- stationary BATs 104 having a higher gain antenna and not suffering from the effect of a doppler shift, transient impulse noise, or deep fade may operate as collaborative anchors or collectors of the data transport, comprising the LDM emission of recovery symbols, for making it available to spatially local devices (e.g., other BATs 104 and/or UE 170) that did not previously receive certain payload portions.
- emissions 108 not having LDM may reach a wider set of receivers than emissions 108 having the LDM, but the latter emissions may be of a higher capacity or bandwidth useful for recovery purposes at different time sensitivities.
- optimal determination of a configuration of emissions 108 may be based, for BTS 102, on an excess capacity arbitrage with respect to the value of delivering data to different segments of potential receivers and by a certain amount of time (e.g., by this evening, by tomorrow morning, or in 2 days, etc.); proper reception of emissions 108 may be based on reception characteristics (e.g., SNR, proximity to the BTS, multipath effects, clutter, etc.) of each receiver in each moment.
- BAT 104 may implement a scale and footprint size reduction (micro), e.g., with respect to network BTS 102, and may be under the direction and guidance of a macro scheduler: the BTS.
- BAT 104 may use windows of opportunity identified by BTS 102 as precision time protocol (PTP) timing references to facilitate its initial set of transmission requests.
- PTP precision time protocol
- BAT 104 may be configured to receive a transmission in which it knows what the carrier offset is from the ATSC 3.0 transmission (e.g., 0 Hz).
- when a receiving BAT identifies presence on TX carrier zero, it may know that it potentially has -1 and +1 to use under the guidance of the BTS’ window. That is, BTS 102 may, for example, inform other BATs 104 when to listen so that the originating BAT may then use the opportunity to transmit; if there is another BAT nearby, that BAT may follow the same procedure in reverse, knowing via the guidance of the BTS when to listen, since each BAT may know which one is primarily receiving (e.g., as a function of the SNR and received signal strength indication (RSSI)).
- RSSI received signal strength indication
- a BAT may, for example, become a receiving BAT of these messages, e.g., only being responsive to ALP application messaging rather than network topology, to be able to handle this fulfillment exercise.
- each BBP may comprise an encapsulated payload, e.g., the ALP or the IP data. So, by standardizing on that portion for potential re-emission into BATs, to be used as the source of truth for timing and RF parameters, the obligation of the BAT may be just to prepare a relevant BBP payload in preparation for emission.
- BAT 104 may, for example, under the direction of BTS 102 for transmission, operate as any other BTS transmitting ATSC 3.0 signals in an SFN.
- the BTS may thus facilitate collaborative listening, e.g., by applying ATSC 3.0 ALP signaling, to initiate an IP multicast emission so that other BATs 104 may know what is needed from its distributed and spatially-located autonomous peers.
- the physical layer protocol may be used to identify an ability to use LDM on top, or an ability in the RF emission to use the carrier offset, for whatever opportunity a BAT has to transmit under the direction of the BTS.
- the BAT may be informed of when it should transmit and, therefore, when it should listen for other complementary devices; when it does find one, it may be able to identify it based on the ATSC 3.0 ALP, which could then be emitted by a BAT device to again facilitate a unidirectional transport mechanism for this use case under the guidance of BTS 102.
- collaboration for distributed object recovery may rely only on the BTS to provide guidance as to which ROUTE object shall be re-emitted.
- RaptorQ may be used for enabling the collaborative object recovery to emit only FEC recovery blocks, which may be substantially more efficient than a re-carousel of the original transmission data of the source block, as a receiver that supports the fountain-code RaptorQ model would only need to receive N+1 combined blocks of any combination to successfully recover the object, N being a natural number.
- the RaptorQ FEC model, when applied to implementations of emissions having missing pieces of content via hints from BATs in a region, may allow for the emission and generation of FEC recovery blocks from other BATs in the region, regardless of the number of source blocks received; it may perform a full object recovery with N+1 packets and then generate additional RaptorQ FEC recovery blocks for local re-emission without a substantive coordination model.
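The fountain-code property described above, recovery from any sufficient combination of coded blocks, can be illustrated with a toy random-linear code over GF(2); this is a stand-in to demonstrate the principle, not the RaptorQ algorithm itself:

```python
import random

def encode_block(source, rng):
    """One rateless coded block: a random XOR combination of source blocks,
    tagged with its coefficient vector (random linear coding over GF(2))."""
    coeffs = [rng.randint(0, 1) for _ in source]
    if not any(coeffs):
        coeffs[rng.randrange(len(source))] = 1
    data = 0
    for c, s in zip(coeffs, source):
        if c:
            data ^= s
    return coeffs, data

def recover(received, k):
    """Gaussian elimination over GF(2); succeeds once k independent coded
    blocks arrive, in any combination, from any mix of peers."""
    rows = [(list(c), d) for c, d in received]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # not enough independent blocks yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           rows[r][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]
```

Any peer can generate fresh coded blocks without coordinating which source blocks the receiver already holds; the receiver simply collects blocks until the system becomes solvable.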
- BTS 102 may, for example, operate as the glue between a potentially autonomous BAT network (e.g., where the BATs need guidance as to when they should listen for other BATs or when they should identify themselves to other BATs).
- the TX and RX portions of BAT 104 may be configured to not receive its own emission.
- BAT 104 may, for example, not need to know which other BAT 104 is emitting to it and instead just need to emit the message back out to network 292. Then, a receiving BAT that is able to listen based on the guidance of the BTS, if it finds a matching emission, may make a determination whether it is actionable in the scope of the ALP, which may perform some other activity in system 100.
- BAT 104 may be its own arbiter of what data it needs, what data it is missing, and how to recover the missing data.
- BTS 102 may be a facilitator for frequencies used in the DRC.
- the BTS may hint this information to BATs 104, in a sender originated model.
- other BATs 104 may know collaboratively which flows they should potentially be listening to for any of those recovery services. This may be encapsulated by the re-emission of low level signaling (LLS), which may comprise the SLT that would have that information present for receivers. That way, it would be received by BATs like any other emission and flow, while rather just being provided by a peer BAT under the guidance instruction of a BTS.
- LLS low level signaling
- the BTS 102 may not know what the distribution density is of that object.
- the BTS would have to rely on other higher order parameters like carousel cadence and frequency of delivery across the network to provide a projection of what the NRT reception is across the network.
- the BTS may provide portions or hints of what data is most important. For example, in a video emission where the media fragmentation is identified, the BTS may recommend what ranges of the data unit payload for BATs 104 to reemit to provide a higher degree of confidence of reception.
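Selecting the hinted high-importance byte ranges from identified media fragmentation might be sketched as follows; the fragment fields (`kind`, `offset`, `length`) are assumed for illustration:

```python
def select_reemit_ranges(fragments):
    """From identified media fragmentation, pick the byte ranges the BTS
    hints as most important (e.g., audio segments and video key frames) for
    BATs to reemit with a higher degree of confidence of reception."""
    return [(f["offset"], f["offset"] + f["length"])
            for f in fragments if f["kind"] in ("audio", "keyframe")]
```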
- the herein-disclosed DRC use cases may be based on BTS 102 being a spectrum resource coordinator, e.g., which opens up windows of opportunity in the ATSC 3.0 licensed spectrum.
- the herein-disclosed DRC use cases may alternatively be based on BTS 102 being a spectrum resource coordinator and facilitator, the facilitation implying a level of obligation and responsibility.
- a BTS may weigh an importance of a retransmission from BATs 104 to other BATs 104.
- Resource coordination may, for example, include knowledge of what is missing (e.g., in anticipation of a spatially located BAT device to provide that information again).
- a BTS-scheduled reemission comprises emitting, by the at least one BAT, content portions based on an emission time interval determined by the BTS.
- BATs 104 may be informed in messaging which objects to reemit for recovery or higher network durability. And, when BTS 102 is operating only as a resource coordinator, BATs 104 may operate more autonomously in determining missing portions and in emitting repair message requests to other BATs 104. The management may be based on an identification of at least one data portion, among data emitted by BTS 102, that requires a data reception integrity satisfying a criterion.
- an integrity of data to be received is based on a function of at least one of: a reemission from the BTS, a retransmission from a BAT on demand, or a retransmission via a BAT as scheduled by the BTS.
- the relationship may be proportional such that a higher level of reception integrity may be achieved as the rate of recovery reemission from BTS 102 and/or BAT 104 increases.
- the herein-disclosed DRC use cases may be based on BTS 102 being a transport coordinator, e.g., without needing a BTS-managed transport mechanism.
- the recovery request that is emitted back into peer BAT network 292 may be segments, and then the BAT fulfilling the recovery may initiate parameters for an out of spectrum (e.g., Wi-Fi direct) transport of the data unit. This may provide a higher throughput point of data recovery through the network.
- a BTS-managed transport may comprise TDM and FDM of capacity for BATs with a guided reemission and selection of object(s) to be re-emitted in an ecosystem.
- a BTS-orchestrated transport may comprise facilitating the TDM and FDM but instead letting the BATs determine what object(s) should be reemitted based on the spatially located peers.
- the emission of recovery data by BAT 104 (i) may be performed via Wi-Fi direct such that use of a wireless access point (WAP) is rendered unnecessary and (ii) may comprise NRT VOD that was stored from previous emissions of the BTS.
- WAP wireless access point
- BATs 104 may form DRC network 292, which may be an ad-hoc, distributive mesh or configured into another topology.
- BATs 104 may operate as transmission devices under the command and control of the broadcast gateway (which is, for example, shown in FIGs. 4A and 5A).
- BTS 102 may thus, for example, allocate a time slot for BATs 104 to communicate out to network 108 and/or network 292.
- BATs 104 may each be configured to emit to BTS 102 in a proximity; and, in this or another example, BATs 104 may each be configured to emit to other BATs 104 in its proximity.
- Such proximity may be based on SNR, a distance (e.g., between 0.25 and 0.5 miles when clutter and/or unfavorable terrain are present, or up to about 62 miles when an LOS path exists), or another suitable criterion.
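A minimal sketch of such a proximity criterion, assuming a hypothetical SNR threshold of 15 dB and using the distance figures above:

```python
def in_proximity(snr_db: float, distance_miles: float,
                 line_of_sight: bool, min_snr_db: float = 15.0) -> bool:
    """Judge whether a peer is reachable: usable range collapses from
    ~62 miles on an LOS path to roughly a half mile in cluttered terrain."""
    max_range_miles = 62.0 if line_of_sight else 0.5
    return snr_db >= min_snr_db and distance_miles <= max_range_miles
```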
- excess channel capacity may be used for such BAT-driven (i.e., BTS orchestration as opposed to BTS management) data delivery.
- additional information e.g., what data was transmitted, what data is higher priority, what data is of higher business value, etc.
- Networks 108 and 292 may be substantially the same, except that reference herein to emissions 108 may involve BTS 102 whereas emissions 292 may only involve BATs 104.
- BATs 104 may emit data (e.g., for providing recovery and/or interactivity services), via DRC network 292 and/or another connection (e.g., via a server, spun-up peer-to-peer Wi-Fi connection, 4G or 5G broadband backchannel, or another suitable means).
- Forward emissions 108 and DRC emissions 108, 292 may, for example, be implemented using different RF frequencies (e.g., frequency division duplexing). In some implementations, at least some of these emissions may be supported via use of relays.
- relay stations may transparently forward a received signal to BTS 102 through high-speed wired or wireless networks. And such forwarded signal from the relay to BTS 102 may comprise raw data from A/D conversion or the decoded MAC PDU.
- this information may create a greater understanding of what data is being reflected through the network, for example.
- the broadcast gateway may become aware of how to better reconfigure forward emissions 108 to ensure a more effective distribution configuration (e.g., MODCOD or other settings inside of the broadcast gateway from the physical layer side) to ensure either the highest durability reception, the lowest latency, or the highest bandwidth.
- DRC emissions 292 may comprise data of higher value, e.g., an initial set (e.g., the first 30 seconds or 1 minute) of prepositioned content. This may, for example, be preoptimized, as a first touch point in receiving content for other devices.
- a low durability forward transmission 108 may be delivered, in an initial period (e.g., the first 30 seconds of a long form VOD asset).
- the peer network 292 may complementarily provide a higher degree of durability for that object emission, without requiring a fuller degree of forward spectrum emission and utilization.
- BAT 104’s DRC emissions 292 may be performed before or at the same time as BTS 102’s emissions 108 comprising a same content.
- DRC emissions 292 may be performed soon after the original emission of the same content by BTS 102, e.g., upon one or more BATs 104 being triggered to emit, in a spatial region, after being identified as having received needed portions. And this need may be, for example, due to there not being an IP backchannel for the needy device.
- the emissions in these examples may be any media segment, such as a newscast. But rather than sending a whole recording or interview, which may, for example, be 5 minutes long, the emission may comprise an edited and pared-down version, which may, for example, be only 15 seconds long. To provide a larger relevance of that content experience, another emission of the full VOD segment may be performed in NRT beforehand. Because this content has high temporal value (it becomes valuable when the newscast is available and does not have a high value beforehand), this longer content may be made available for downstream devices that would be receiving the shorter newscast in conjunction with it.
- BTS 102 may implement a cyclical cadence of an associated procedure description (APD) per the A/331 standard. That APD may inform devices how to receive back any missing portions of said object. But one or more of the downstream devices may not have an IP backchannel to make corresponding data requests. Requests may instead be sent from each such BAT 104 to its peer BATs 104, which may have received a complete set of portions. The missing portions may be aggregated into subsequent NRT forward emission 108.
- APD associated procedure description
- the missing portions may be obtained by needy BATs 104 from another BAT 104 in its vicinity that did receive the missing portions, via DRC network 292 or another suitable network such as the ad-hoc peer-to-peer Wi-Fi connection, without overutilization of the forward spectrum.
- peer BATs 104 may forward a missing data request to other BATs (or to BTS 102), or they may directly fulfill the request by emitting the missing data.
- Collaborative delivery may thus be initiated with a request (e.g., with aggregable metadata) for missing fragments.
- the request may be delivered to BTS 102 via DRC network 292 or an IP backchannel such as network 106, and emissions of DRC network 292 may be in a PLP, for example, such as a PLP for return channel (PLP-R).
- PLP-R PLP for return channel
- DRC downlink signaling and data may be transferred to the broadcast gateway in ALP packets with a specific IP port and mapped to the designated PLP-R for return channel application.
- a device utilizing DRC network 292 may itself store and then forward when it has a window of opportunity to do so under the guidance of BTS 102. Turning up an opportunity for devices to collaborate may not imply that the devices eventually have to transmit all the way back to the BTS using the DRC. Some implementations may thus store and forward a message or request such that once a peer BAT 104 is reached (e.g., as part of consecutive storing and forwarding) that has an IP backchannel, then this peer may facilitate a delivery of the message or request to the BTS via the backchannel.
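The consecutive store-and-forward procedure described above might be sketched as follows; the `has_backchannel` flag and hop list are hypothetical structures for illustration:

```python
def store_and_forward(message, hops):
    """Pass a recovery request from peer BAT to peer BAT; each hop stores it
    and forwards in its own window of opportunity, until a hop with an IP
    backchannel can deliver it to the BTS on the originator's behalf."""
    path = []
    for bat in hops:
        path.append(bat["id"])
        if bat["has_backchannel"]:
            return {"message": message, "delivered_via": bat["id"], "path": path}
    # No connected peer reached yet: the message remains stored downstream.
    return {"message": message, "delivered_via": None, "path": path}
```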
- BATs 104 may receive PLPs and separate out DRC synchronization, DRC signaling, and related data from traditional broadcast service data.
- Such synchronization and signaling data may be sent to the DRC uplink gateway of FIG. 5A, e.g., for processing and maintaining system operation.
- the DRC uplink gateways may regulate how the uplink data is transmitted.
- Collaborative object recovery in the BAT-centric model may operate with one BAT 104 being the arbitrator of the re-emission into network 292, and thus it may operate as an IP uplink gateway, including implementations having an IP backchannel such that an unconnected device can be served by a connected device (having the IP backchannel) fulfilling a store and forward data payload emission.
- the physical and medium access control (MAC) layers of DRC network 292 may be based on the A/323 standard, which includes specification of the (i) uplink framing, baseband signal generation, random access, and downlink synchronization scheme and (ii) MAC procedures, MAC PDU formats, and signaling schemes between ATSC 3.0 forward and return emissions 108.
- Controls of the DRC may come from the BTS, which, for example, include the synchronization data and signaling data. These controls may be extracted from the broadcast service data.
- a video to transmitter protocol may include a re-encapsulation.
- Example characteristics include timing and management (T&M) and the preamble packet.
- T&M timing and management
- the preamble packet defines what the waveform emission should look like when converted from the physical layer values into the RF domain, which may be a function of the modulator.
- the same configuration parameters used by a BTS may be allowed for use at a BAT, with the BTS applying those parameters into the BAT for localized data recovery.
- the controls may comprise a new component in the system to handle the actual return data payload and a slightly different component to handle the object reconstitution and control parameters. Another approach is for the BTS to signal directly to the listening BATs when it may expect to receive the missing data.
- the BTS may specify to the BATs how it may receive this missing data from other BATs that are in its range.
- the BATs may synchronize how to listen and re-emit with each other via the BTS’ control and management based on already defined components of the specification.
- BATs 104 may be communicative peers, e.g., for storing, processing, and/or forwarding opportunistic data for optimal object availability, QoS, durability, and completeness. As such, BATs 104 may be able to self-recover data in a collaborative fashion. BATs 104 that have larger storage capacity, increased RF reception characteristics, or more ideal spatial locality attributes (e.g., while not temporarily reachable in a subway, elevator, tunnel, etc.) may be used to support data delivery needs of other peer BATs 104. Indeed, each receiving device may have different RF characteristics and not just because of its location but also because of how it is designed (e.g., from an antenna gain perspective).
- BTS 102 may be directed to alter its carousel reemissions. For example, if there is a threshold number of devices (e.g., in one or more regions) that did not receive a full (or substantially full) set of portions, then the BTS may improve its intended durability based on spectrum utilization over time by applying more robust transmission characteristics (e.g., modulation and/or coding). That is, by varying configuration of network 108 using an aggregation of the DRC telemetry (e.g., including store and forward metrics from peer BATs 104), BTS 102 may ensure a greater reception probability of the missing portions via a reemission.
- a BAT 104 that did receive all portions may be triggered to so reemit in its proximity.
- These forms of redelivery may be opportunistic, e.g., since carousel reemission models are not known to factor in any receiving characteristics, being rather just a schedule or a cadence for reemission (e.g., every 10 minutes, every 4 hours, etc.).
- Information via DRC network 292 or an IP backchannel may be used to influence both the schedule/cadence and the network configuration to ensure that a higher set of receivers actually receive the NRT content as intended from the broadcast gateway.
- when multiple data objects are identified by the BTS as requiring more robust transmission, the multiple data objects may be prioritized and at least one BAT may be directed to reemit a high-priority data object from among them using transmission parameters that correspond to higher-reliability transmission relative to the transmission parameters used for a lower-priority data object.
- Objects delivered NRT via ALC and via the FLUTE mechanism may have a carousel over time that provides a higher confidence of recovered reception.
- Such objects may be part of a service component or a service identified in the A/331 standard, e.g., in the low level signaling of the SLT.
- ALC or FLUTE may have a representative service location, for its service discovery.
- the APD may define a series of post file repair elements, which may comprise a resource server that may be accessed via an HTTP request to fulfill a byte-range object request for the data units that are missing from the FLUTE transmission.
- BAT 104 may thus emit a repair request, which an opportunistic BTS could then fulfill in a next transmission window opportunity.
- BTS 102 may enhance its emissions network (e.g., via application level FEC), when it wants to ensure a higher durability.
- BTS 102 may implement a machine learning model that determines (i) the largest reach of the universe of devices (e.g., BATs 104, next generation TVs 103, etc.) that may receive a delivered object and (ii) the highest integrity of the object that has been received in its constitution based on the function of a retransmission from the BTS or retransmission from a BAT on demand. For example, with return channel information, the AI-enabled BTS 102 may adjust a retransmission schedule, the FEC, and/or another parameter for more efficient delivery. And such BTS-driven determination may be autonomously based on a criticality of the data being emitted.
- FIG. 6 illustrates an example method 600 for collaborative object delivery.
- Method 600 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 600 presented below are intended to be illustrative. Method 600 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 600 are illustrated in FIG. 6 and described below is not intended to be limiting.
- Method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600.
- the following operations may each be performed using a processor component the same as or similar to collaborative object delivery component 245 (shown in FIG. 2).
- a plurality of fragments which are fragmented from data in a way compliant with the decentralized BitTorrent protocol, may be broadcasted by a BTS.
- data may be fragmented into identically sized portions (e.g., with byte sizes of a power of 2 and/or between 32 kB and 16 MB each).
- a hash may be created for each piece.
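The fragmentation and per-piece hashing described in the two operations above can be sketched as follows; the piece-size heuristic and target piece count are illustrative assumptions, with SHA-1 used as in conventional BitTorrent:

```python
import hashlib

PIECE_MIN, PIECE_MAX = 32 * 1024, 16 * 1024 * 1024  # 32 kB .. 16 MB, per the ranges above

def choose_piece_size(total_len: int, target_pieces: int = 1000) -> int:
    """Pick a power-of-2 piece size in [PIECE_MIN, PIECE_MAX] near total_len/target_pieces."""
    size = PIECE_MIN
    while size < PIECE_MAX and total_len / size > target_pieces:
        size *= 2
    return size

def fragment(data: bytes, piece_size: int):
    """Split data into fixed-size pieces and hash each one (SHA-1, as BitTorrent does)."""
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    hashes = [hashlib.sha1(p).hexdigest() for p in pieces]
    return pieces, hashes

pieces, hashes = fragment(b"x" * 100_000, choose_piece_size(100_000))
print(len(pieces), len(hashes))  # one hash per piece
```

The per-piece hashes serve as the descriptors referenced later, letting a receiver verify each piece independently of the others.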
- a request (e.g., with aggregable metadata) for one or more fragments missing from the broadcast may be obtained (e.g., from one or more BATs).
- a BAT or the BTS may obtain this request.
- available presence of an IP backchannel may be determined. If the answer is yes, then operation 614 may be executed; if not, then operation 608 may be executed.
- a preference for using DRC may be determined. If the answer is yes, then operation 618 may be executed; if not, then operation 610 may be executed.
- At operation 610 of method 600, whether a rebroadcast will occur (e.g., in a certain timeframe) may be determined. If the answer is no, then operation 618 may be executed; if yes, then operation 612 may be executed.
- the one or more requested fragments may be obtained via a rebroadcast of the BTS.
- a carousel may be used.
- the one or more requested fragments may be obtained.
- this obtainment may be performed via a spun-up Wi-Fi connection.
- the obtained data may be consumed or stored, at the requesting BAT(s).
- At operation 618 of method 600, at least one other BAT 104, near or in a same region as requesting BAT(s) 104, that acknowledges storage of the requested data may be determined.
- the spatial region may be determined based on a transmit power of the at least one other BAT.
- the one or more requested fragments may be obtained (e.g., from the determined BAT’s storage).
- a portion of spectrum utilized by BTS 102, which is to be made available for recovering the one or more requested fragments, may be determined.
- the one or more requested fragments may be spatially emitted into the determined spectrum portion, from the determined BAT towards the requesting BAT(s).
- BTS 102 may operate as a core to determine exactly what is being transmitted, which may include the retransmission of failed packets.
- the BTS may be configured to utilize some type of yield management or monetary maximization algorithm to determine what the monetary impact on the company would be if it does or does not retransmit, weighing that against other uses of the spectrum at that time. For example, if a thousand people watching a certain TV channel are lost, a determination of that may be balanced with sending, e.g., an app’s one or more update files such as for Microsoft Office 365.
- ATSC 3.0 may be used to distribute large video or data files via rotating and/or scheduled broadcast of data channels, wherein data is coded in a BitTorrent pattern for reassembly of pieces by a client device. Agnostic to content, clients may repair lost pieces via observation of rebroadcasts or IP connection by referring to specific fragmented elements.
- BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services, contents, and the like over broadcast networks, broadband networks, hybrid broadcast networks, and the like to one or more of BATs 104 that may each be implemented as and/or with an ATSC 3.0 receiver.
- the contents provided from BTS 102 to one or more of BATs 104 may include data.
- the data may include software, applications, application updates, operating system updates, information, documents, and the like.
- the data may include a collection of interrelated documents intended to run in an application environment and perform one or more functions, such as providing interactivity, targeted ad insertion, software upgrades, executable files, or the like.
- the documents of an application may include HTML, JavaScript, CSS, XML, multimedia files, programs, and the like.
- An application may be configured to access other data that are not part of the application itself.
- the broadcast from BTS 102 to one or more of BATs 104 may fail to fully deliver the data.
- the broadcast transmissions from BTS 102 to one or more of BATs 104 may be lost during transmission due to issues with location, certain frequencies used, congestion, radio interference, electromagnetic interference, antenna problems, and the like.
- data may be broadcast with a plurality of packets, frames, pieces, parts, segments, or the like. One or more of the packets, frames, pieces, parts, segments, or the like may be lost during broadcast transmission.
- the data may be broadcast as ROUTE/DASH-based services, MMT-based services, and/or the like.
- BTS 102 may determine whether to re-broadcast any portion of emitted content based on a set of receiver devices having received a threshold percentage (e.g., 90%) of the content.
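The threshold-based rebroadcast determination above might be sketched as follows; the device-quorum value and the report format are illustrative assumptions, not values from the document:

```python
def should_rebroadcast(reports, completion_threshold=0.90, device_quorum=0.95):
    """Decide whether BTS 102 should reemit a portion of content.

    reports: mapping of receiver-device id -> fraction of the object's pieces received.
    Rebroadcast when fewer than `device_quorum` of reporting devices reached
    `completion_threshold` (e.g., 90% of the content, per the example above).
    """
    if not reports:
        return False  # no telemetry, nothing actionable
    satisfied = sum(1 for frac in reports.values() if frac >= completion_threshold)
    return satisfied / len(reports) < device_quorum

# Only 3 of 4 devices reached 90% completion, so a rebroadcast is indicated.
print(should_rebroadcast({"a": 1.0, "b": 0.5, "c": 0.95, "d": 1.0}))  # True
```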
- BTS 102 may utilize a carousel rebroadcast process or the like to provide missing packets, frames, pieces, parts, segments, or the like as further described below.
- one or more of BATs 104 may utilize a hydration process to obtain missing packets, frames, pieces, parts, segments, or the like as further described below.
- the data may be transmitted from BTS 102 to one or more of BATs 104 and may utilize a BitTorrent pattern, BitTorrent protocol, or similar data distribution protocol.
- the data being transmitted may be divided into packets, frames, pieces, parts, segments, or the like called pieces.
- Each piece may be protected by a cryptographic hash contained in a descriptor.
- Pieces may be broadcast sequentially or non-sequentially and may be rearranged into a correct order by a client implemented by BATs 104.
- BATs 104 may monitor which pieces they need and which pieces they have based on data in the received pieces such as metadata, cryptographic hashes, and the like.
- each fragment may have a hash descriptor, which may be a header or preamble that indicates which fragment this is among the plurality (e.g., no. 42 of 6 million). As such, these descriptors may inform exactly how many fragments there are, and which ones have been obtained and which ones have not yet been obtained.
- contents of an ALP header may comprise a length and a source block number or start offset, which may be used to determine what data is missing. That is, headers for the transfer object information may comprise a length field that informs what the complete length is (e.g., 1 GB).
- One approach may include creating a sparse array and for every byte in the transmission, its presence may be marked with a flag.
- Another approach may comprise creating a series of block ranges that would inform what has been stored or received. Any of those block ranges that are not defined may inform that a fragment is missing. For example, a gap between pieces 100 and 102 may inform that piece 101 is missing. As such, block recovery may be performed.
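The block-range approach above can be sketched as a gap scan over the sorted set of received piece indices; the function and variable names are illustrative:

```python
def missing_ranges(received, total_pieces):
    """Collapse received piece indices into runs and report the gaps between them.

    A gap between runs (e.g., between pieces 100 and 102) identifies missing
    piece 101, enabling targeted block recovery.
    """
    have = sorted(set(received))
    gaps, expected = [], 0
    for idx in have:
        if idx > expected:
            gaps.append((expected, idx - 1))  # inclusive range of missing pieces
        expected = idx + 1
    if expected < total_pieces:
        gaps.append((expected, total_pieces - 1))  # tail of the object never arrived
    return gaps

print(missing_ranges([0, 1, 2, 100, 102], 103))  # [(3, 99), (101, 101)]
```

Compared with the sparse-array approach, the block ranges stay compact even for very large objects, since only the run boundaries are stored.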
- the broadcast of any data may be halted at any time, resumed at a later date, missing pieces obtained, missing pieces determined, and/or the like without the loss of previously downloaded data.
- This also may enable BATs 104 to determine missing pieces, obtain missing pieces, and/or to seek out missing pieces and download them, which may reduce the overall time of the download.
- BTS 102 may utilize a carousel rebroadcast process so that downstream receivers obtain missing packets, frames, pieces, parts, segments, or the like.
- the carousel rebroadcast process may implement a data and object carousel that may be used for repeatedly delivering data in a continuous cycle.
- the carousel rebroadcast process may allow data to be pushed from BTS 102 to one or more BATs 104 by transmitting a data set repeatedly in a standard format.
- BTS 102 may periodically rebroadcast data to BATs 104.
- BATs 104 may monitor which pieces they need and which pieces they have, and obtain the needed pieces during the periodic rebroadcast of the data by BTS 102.
- one or more BATs 104 may utilize a hydration process to obtain missing packets, frames, pieces, parts, segments, or the like. BATs 104 may monitor which pieces they need and which pieces they have. Thereafter, BATs 104 may connect to network 106 to obtain the needed pieces from BTS 102.
- Network 106 may be the Internet, a hybrid broadcast network, a broadband network, or the like.
- BATs 104 may utilize hybrid hydration process 700 to obtain missing packets, frames, pieces, parts, segments, or the like.
- BAT 104 may monitor which pieces are needed and which pieces it has, and from this determine whether to utilize the carousel rebroadcast process or the hydration process to obtain missing packets, frames, pieces, parts, segments, or the like.
- each data object or file to be transmitted may be fragmented compliant with the BitTorrent protocol.
- this division may be made upstream (e.g., at BTS 102).
- BAT 104 may implement the herein-disclosed hybrid data delivery.
- This CDN may be implemented on a desktop, laptop, or tablet computer, on a user’s phone, or wherever an ATSC 3.0 receiver is. Continuing with the example, this CDN may be configured to obtain the 2 missing fragments over the Internet, when connected to an IP backchannel.
- the originally emitted data may be reemitted via a carousel.
- the probability of losing the same 2 fragments in two different emissions is very low so in the next emission, the CDN may obtain those missing fragments.
- the original, emitted data is entirely received.
- the CDN may miss different fragments, but this is insignificant since they had previously been obtained.
- the received pieces may include data, such as metadata, indicating if and when there will be a carousel rebroadcast process to obtain missing packets, frames, pieces, parts, segments, or the like. Accordingly, BATs 104 may determine the process to obtain missing packets, frames, pieces, parts, segments, or the like of broadcast data.
- BATs 104 may determine to wait for the carousel rebroadcast process based on (i) a predetermined time threshold until the carousel rebroadcast, (ii) an urgency for the data, (iii) a user set time threshold until the carousel rebroadcast, (iv) a cost of obtaining the data, (v) user preferences set via UI devices 118, and/or another factor.
- BATs 104 may determine to connect to network 106 to obtain the pieces needed from BTS 102 based on a predetermined time threshold until the carousel rebroadcast, an urgency for the data, a user set time threshold until the carousel rebroadcast, a cost of obtaining the data, user preferences, and the like.
- fragmentation component 237 of FIG. 2 may determine whether to obtain a set of missing fragments via an IP backchannel 106, carousel reemissions 108, or peers of DRC network 292. This determination may be based on (i) a percentage of the fragments previously obtained via emissions 108, (ii) a known time when the carousel (e.g., at the BTS or via a BAT’s DRC) reemission will take place, (iii) a corresponding cost of using the IP backchannel when available, (iv) a QoS level for a user of BAT 104, (v) urgency or importance of the data (e.g., with the data being a map of ongoing fires in California), and/or (vi) whether a user or application has more recently queried presence of all fragments of the data object.
- the known time of carousel reemission may be obtained via metadata. For example, if the receiver obtained more than 50% of the fragments, it may determine to get the rest over Internet, if connected; a vice versa determination is contemplated such that the receiver determines that a rebroadcast is instead preferred. In some implementations, the determination may be different per data object or file that is emitted. For example, cache header rules implementing parameters (e.g., a time to live (TTL) mechanism or metadata) may be used to indicate lifetime of data at the CDN on a file by file basis. These parameters may be broadcast centric such that the CDN manages itself. For example, the CDN may determine how to obtain missing fragments based on the parameter(s).
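The multi-factor determination described in the two bullets above might be sketched as a simple policy function; the specific weights, thresholds, and parameter names are assumptions, not values from the document:

```python
def recovery_path(fraction_received, minutes_to_reemission, backchannel_available,
                  backchannel_cost_per_mb=0.0, urgency=0.5, wait_budget_minutes=60):
    """Decide how a BAT recovers missing fragments: IP backchannel or carousel.

    Mirrors the factors above: percentage already obtained, known reemission
    time, backchannel cost, and urgency of the data. Thresholds are illustrative.
    """
    if not backchannel_available:
        return "carousel"
    # More than half received and a free IP path favors fetching the remainder OTT.
    if fraction_received > 0.5 and backchannel_cost_per_mb == 0.0:
        return "backchannel"
    # Urgent data, or a reemission too far out, also goes over IP.
    if urgency > 0.8 or minutes_to_reemission > wait_budget_minutes:
        return "backchannel"
    return "carousel"

print(recovery_path(0.6, 240, True))  # backchannel: >50% received, free IP path
```

Per the TTL discussion above, such a policy could be driven per file by cache-header parameters carried in the broadcast itself, letting the CDN manage its own recovery behavior.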
- fragmentation component 237 may reassemble the obtained fragments (e.g., based on individual tags pre-added before the emission). And these fragments may, for example, be obtained unicast (e.g., OTA via emissions 108) and/or bidirectionally (e.g., OTT via a TCP/IP backchannel).
- a network mesh of BATs 104 may be accessed, which ties several gateways together to transmit the missing pieces.
- CDN PoP component 242 of FIG. 2 may alert a user upon receiving the missing fragments that complete reception of the original, emitted data (which may be NRT data).
- BAT 104 may know how to recover lacunae (e.g., whether to wait for a main rebroadcast or DRC rebroadcast or to immediately fetch via backchannel) based on metadata obtained in a corresponding, original emission 108 of the data. As such, orchestration of the data at BTS 102 may determine how downstream BATs 104 are to perform missing data recovery.
- the upstream data orchestration may further determine how soon retransmissions of a carousel will occur based on the type of data. For example, an important cryptographic object may be reemitted every few minutes versus less important metadata being reemitted far less often.
- a broadcast app for interfacing or accessing broadcast data may be sent as NRT every five minutes.
- Other variables that determine a periodicity of the carousel and a duration of the carousel may be based on options selected by the content owner (e.g., by paying more for a more frequent and longer lasting carousel).
- parameters of the carousel may be overridden or superseded.
- a machine learning model of BTS 102 may stop sending the Microsoft Office 365 update when there is emergency alerting to send out to the universe of receivers. And, when the duration of that alerting is past, the BTS may resume sending the app update file(s).
- an artificial intelligence (AI) module of BTS 102 may determine an optimal size of the fragments based on a tradeoff balanced using impacts learned over a wide variety of past permutations. That is, the overhead involved in having too many fragments (e.g., because each fragment is very small) may be unacceptable for bandwidth efficiency, thus resulting in larger fragments.
- Some embodiments may comprise an event-driven process and subsystem, for the BAT to fulfill, and a mechanism for proof of work and verification that certain activities are completed in a distributed ledger of proof of work for validation.
- the event-driven process and subsystem may facilitate the automatic hydration of a dictionary or cache, with a facilitating-distributed-ledger to ensure transactional completion rather than a commit log per se, the objective being to provide traceability and auditability that those activities occurred on those devices in a non-centralized and distributed ecosystem.
- the Kafka event cycle may be the originator of those activities and the mechanism as to how they are synchronized; the transactional commit log may be an as-run or an audit log.
- an appropriate distributed ledger may provide proof of work that these activities were completed and fulfilled on behalf of a decentralized architecture.
- FIG. 7 illustrates method 700 for hybrid data delivery using fragmentation, in accordance with one or more embodiments.
- Method 700 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 700 presented below are intended to be illustrative. In some embodiments, method 700 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 700 are illustrated in FIG. 7 and described below is not intended to be limiting.
- method 700 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 700 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 700. The following operations may each be performed using a processor component the same as or similar to fragmentation component 237 (shown in FIG. 2).
- data that is broadcasted may be received from at least one BTS.
- these pieces of data may be obtained (e.g., from BTS 102) non-sequentially and rearranged into a correct order by BitTorrent client 104, which monitors which pieces it needs and already has.
- BitTorrent client 104 may then upload pieces it has to those peer devices that need them (e.g., using DRC network 292 or any other suitable technology such as Wi-Fi or broadband cellular).
- the receiver device may analyze and determine missing packets, frames, pieces, parts, segments or the like of broadcasted data.
- the process for subsequent provision of this missing data may be based on a BAT connecting to a network to request missing data.
- BAT 104 may perform the request to BTS 102 or to other peers using DRC network 292 or any other suitable networking technology.
- BAT 104, which may miss pieces, may implement the BitTorrent protocol by making many small data requests (e.g., over different network connections to different machines).
- a cryptographic hash contained in a descriptor in the packets, frames, pieces, parts, segments, or the like received may be utilized to determine any missing packets, frames, pieces, parts, segments, or the like of broadcasted data.
- the received pieces may include data, such as metadata, indicating a total number, a sequential numbering of, or the like of packets, frames, pieces, parts, segments, or the like to determine any missing packets, frames, pieces, parts, segments, or the like of broadcasted data.
- the rebroadcast periodicity may be determined based on a rebroadcast schedule cost, urgency of the missing pieces, a number of missing pieces, cost of an IP backchannel, user preferences, and/or other information.
- BTS 102 may determine to reemit missing pieces once per month rather than once per week when an IP backchannel is prohibitively expensive for reemitting too many pieces.
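A periodicity determination of the kind described in the two bullets above might be sketched as follows; the scaling factors and parameter names are illustrative assumptions:

```python
def reemission_period_minutes(urgency, missing_piece_count, ip_cost_per_mb,
                              base_period=60.0):
    """Adapt the carousel reemission period to the factors listed above.

    Urgent objects and widely missing pieces shorten the period; a free IP
    backchannel lets the carousel relax, since repairs can be absorbed OTT.
    """
    period = base_period
    period /= max(urgency, 0.1)              # urgent objects come around more often
    period /= 1 + missing_piece_count / 1000  # many missing pieces: reemit sooner
    if ip_cost_per_mb == 0.0:
        period *= 2                           # cheap backchannel: carousel can slow
    return period

print(reemission_period_minutes(urgency=1.0, missing_piece_count=0, ip_cost_per_mb=0.5))
```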
- a determination may be performed as to whether to obtain the missing pieces by a rebroadcast over network 108.
- If BAT 104 determines that it is acceptable to wait (e.g., a whole day or another predetermined or user-configured time period) before informing the user of full reception of certain NRT data, then operation 710 may be performed; otherwise, if this BAT determines that a level of urgency of this NRT data is too high, then operation 708 may be performed.
- a rarest-first approach may be used to ensure a high availability.
- operation 708 may be performed for a certain subset of the missing pieces whereas operation 710 may be performed for the remaining subset.
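The rarest-first selection above is a standard BitTorrent heuristic for keeping every piece available in the swarm; it might be sketched as:

```python
from collections import Counter

def rarest_first(missing, peer_holdings):
    """Order locally missing pieces so the rarest piece across peers is requested first.

    missing: iterable of piece indices this BAT still needs.
    peer_holdings: mapping of peer id -> set of piece indices that peer holds.
    """
    counts = Counter()
    for pieces in peer_holdings.values():
        counts.update(pieces)
    # Pieces held by the fewest peers sort first; ties break on piece index.
    return sorted(missing, key=lambda p: (counts[p], p))

peers = {"bat1": {0, 1, 2}, "bat2": {0, 2}, "bat3": {0}}
print(rarest_first([0, 1, 2], peers))  # [1, 2, 0] — piece 1 is held by only one peer
```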
- packets, frames, pieces, parts, segments, or the like that are missing from broadcasted data may be obtained by the BAT connecting to an IP backchannel.
- packets, frames, pieces, parts, segments, or the like that are missing from broadcasted data may be obtained by the BAT awaiting a rebroadcast from the at least one BTS.
- ATSC 3.0 may be used to support progressive over-the-air (OTA) application download and runtime data, whereby application feature(s) may be (i) built in a modular fashion depending upon the degree of involvement of the viewer and availability of data via broadcast on a rotating basis and (ii) selected by a receiver’s user (e.g., based on desired media consumption).
- application 246 of FIG. 2 may comprise basic or rudimentary multichannel video programming (MVP) functionality.
- application 246 may comprise a terminal or main application, for content distribution opportunities via a set of channels and/or via application extensions or as otherwise discussed herein. But a user, for example, watching content thereat for a time (e.g., ten minutes) may be informed by an indication or notification of a newly available, selectable set of menu items.
- BTS 102 may determine a respective periodicity for emitting, in a carousel, the base application and its modular extensions, the periodicity being, for example, greater than a periodicity for emitting, in the carousel, any other type of data.
- At least one module of base or main application 246 may be pre-installed at the downstream receiver before obtaining, or at least before installing, further modules.
- the at least one module and extension modules may be obtained via emissions 108.
- modules of application 246 may arrive in emissions 108 at different times, intervals, and/or rotations such that a user of BAT 104 is continually provided more menu items (e.g., starting with 0 or 1, then 2, then 10, then 20, and then 50).
- application 246 may be a base HTML5 application with a number of modular extensions.
- these applications may be all broken up into different NRTs.
- the base application may be transmitted on the data carousel once a minute, and an extension (e.g., the weather or sports component) of the application may come across emissions 108 as its own NRT file.
- the extension(s) may, for example, add functionality to the application and be emitted on a longer frequency (e.g., once every ten minutes or once an hour, based on importance or urgency of that module).
- a user of BAT 104 may thus turn on the device and within a minute get the base application.
- one or more of the extensions may pop up on the menu or in a displayed overlay.
- running the newly downloaded extension may cause a picture on the display to shrink and be squeezed.
- an L-shaped bar may display an alert; and, by the user clicking or selecting that alert, they may be taken into a micro webpage that has all the different artifacts about that alert (e.g., different videos, information, pictures, linear channels, flash channels, etc.).
- some implementations of the application extensions may comprise display of weather forecasts, sports statistics, emergency alerting, or other news content that are at least temporarily co-displayed with main video content.
- application 246 may comprise a plurality of applications executable at each of BATs 104.
- one such application may be the broadcast application; and the broadcast application may immediately or, upon being loaded with complementary modules, monitor content consumption behavior.
- the application extensions may be installed at BAT 104 to update (e.g., functionally extend or replace) the broadcast application.
- the base application and the modular extensions may each be a different file or type that informs, via metadata, the OS (or the basic or rudimentary MVP app functionality) of the BAT when and/or how to run. And the metadata may, for example, inform what type of module it is.
- Base or main application 246 may be very small (e.g., on the order of a few megabytes) such that BAT 104 may run it very quickly once received over emissions 108.
- the extensions may be obtained later over a period of time.
- the whole broadcast application may be much larger and also sent in a carousel, but far less frequently.
- a user may tune to a second channel to obtain at least a portion of application 246 quicker than by tuning to a first channel within which a carousel rebroadcast of this application and/or its extensions module is performed less frequently.
- separate modules may be provided via different channels to which a downstream BAT is operable to tune.
- the base runtime may be provided by the industry and preinstalled by the manufacturer.
- This base runtime may run the MMT signal, but there may be no Chrome, DAI, or other special capabilities, this device operating just like a legacy TV does today.
- the application extensions to this base runtime may thus arrive over time such that those capabilities appear (or become available via an indication) before a user’s eyes as content is being consumed.
- broadcast application 246 may be stored via CDN PoP 242. And this storage may be for a much longer TTL than regular content (e.g., up to a year and thus of a different scale or order of magnitude for the main application with respect to its extensions or other content downloaded via emissions 108), for example, because a user of the receiver previously consumed such content such that there may be a predicted demand for that application tomorrow or next week, when the user switches back to this same channel.
- a broadcast application may be stored in cache of next generation TV 103 or BAT 104 such that, when a user tunes away from a channel, the cache may be flushed, and the app may be lost. So, when the user tunes back to this channel, the application and its modules may need to be received anew.
- the broadcast app may be semi-permanently or permanently stored in another type of memory of the receiver.
- files or modules associated with application 246 may be delivered in ROUTE packages.
- Such a broadcaster application may execute inside a worldwide web consortium (W3C)-compliant user agent accessing some of the graphical elements of the receiver to render the user interface or accessing some of the resources or information provided by the receiver (e.g., BAT 104).
- the broadcaster application may send a request to a WebSocket utilizing a set of JSON-remote procedure call (RPC) messages to provide the APIs that are required by the broadcaster application to access the resources that are otherwise not reachable.
- the receiver may use its user agent to launch or terminate the broadcaster application referenced by a URL provided in broadcast signaling.
- the broadcaster application package may be downloaded, signaled, launched, and managed.
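The WebSocket/JSON-RPC mechanism described above can be illustrated by constructing such a message. This sketch assumes JSON-RPC 2.0 framing; the method name `org.atsc.query.service` is shown as an example of the naming style used for receiver APIs, not a guaranteed endpoint.

```python
import json

def make_rpc_request(method, params, req_id):
    """Build a JSON-RPC 2.0 message of the kind a broadcaster
    application would send over the receiver's WebSocket to reach
    resources that are otherwise not reachable."""
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

# Hypothetical query for the currently selected service:
msg = make_rpc_request("org.atsc.query.service", {}, 1)
```

The receiver-side endpoint would dispatch on `method` and return a response object carrying the same `id`, per standard JSON-RPC request/response pairing.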
- BAT 104 may be configured to obtain the modular extensions via an IP backchannel, if such Internet or other networked access is available and if an amount of time needed to wait for the next carousel window breaches a threshold.
- one or more of application 246 and its extensions may be associated with an expiration date (e.g., for subsequent overwriting or immediate purge).
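The backchannel fallback described above (fetch an extension over IP only when such access is available and the wait for the next carousel window breaches a threshold) reduces to a small decision rule. The 60-second threshold below is an illustrative assumption.

```python
def choose_delivery(seconds_until_next_carousel, backchannel_available,
                    threshold_seconds=60):
    """Prefer the IP backchannel only when it exists and the wait for
    the next carousel window breaches the threshold; otherwise wait
    for the carousel rebroadcast."""
    if backchannel_available and seconds_until_next_carousel > threshold_seconds:
        return "backchannel"
    return "carousel"
```

A disconnected receiver always resolves to the carousel, which is consistent with the OTA-only operation described elsewhere in this section.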
- BAT 104 may obtain another application 246 that is co-branded content with different components and/or skin formatting.
- another application may be used by the receiver for the DMA. But, in this example, if the user is tuned into the NBC service, they may see the KSNB logo and the KSNB menu items with corresponding branding and colors. Continuing with this example, if the user instead tunes into the CW station, they may see the KVCW menu items, branding, and colors.
- BAT 104 may obtain one or more other applications and modules, and the other applications and modules may be sourced differently from the broadcaster application.
- supplemental sources 206 may obtain content from the Internet and thus third-party users may pay for its broadcasting.
- BTS 102 may implement an ecosystem like the Apple iPhone or Google Android app stores where entities may be innovative and build their own apps for their own broadcasted services.
- the terminal/main/base application or modules thereof may comprise third party applications, which may for example implement functionality emulating an application download store or platform for digital distribution on behalf of different third parties.
- This service may constitute a broadcast service made available for third parties and may include periodic rebroadcasts.
- FIG. 8 illustrates an example process 800 for progressive OTA terminal application download and runtime.
- process 800 of FIG. 8 is described with reference to BTS 102 and BATs 104 of FIGs. 1-2.
- Method 800 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 800 presented below are intended to be illustrative. In some embodiments, method 800 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 800 are illustrated in FIG. 8 and described herein is not intended to be limiting.
- method 800 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 800 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 800.
- the following operations may each be performed using a processor component the same as or similar to application download and runtime component 243 (shown in FIG. 2).
- BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services, contents, and the like over broadcast network 108, broadband networks 106,294,296, hybrid networks, and/or the like to one or more of BATs 104 that may each be implemented as and/or with an ATSC 3.0 receiver.
- the contents provided from BTS 102 to one or more of BATs 104 may include data.
- the data may include software, applications, information, documents, a terminal application, modules for the terminal application (e.g., to implement additional functionality, as described herein), and the like.
- the data may include a collection of interrelated documents intended to run in an application environment and perform one or more functions, such as providing interactivity, targeted ad insertion, software upgrades, executable files, or the like.
- the documents of an application may include HTML, JavaScript, CSS, XML, multimedia files, programs, and the like.
- An application may be configured to access other data that are not part of the application itself.
- the data provided from BTS 102 to one or more of BATs 104 may include the terminal application, modules for the terminal application, and the like.
- One or more of BATs 104 may include the terminal application.
- the one or more BATs may include the terminal application that initially only has basic or rudimentary functionality.
- the terminal application or modules thereof may facilitate content consumption by the BAT and/or facilitate provisioning of information about the content being consumed, for example, as described in further detail below, or otherwise.
- BTS 102 may provide to the one or more BATs modules for the terminal application.
- BTS 102 may provide to the one or more BATs modules for the terminal application to add functionality, features, and the like.
- the data provided from BTS 102 to one or more of BATs 104 may include a new module for implementation in the terminal application.
- the data may be broadcast as ROUTE/DASH-based services, MMT-based services, and/or the like.
- BAT 104 may, for example, receive, in a carousel rebroadcast having a first periodicity, a terminal application from at least one BTS 102.
- one or more of BATs 104 may execute the terminal application.
- the terminal application may be implemented in one or more components of one of BATs 104.
- the terminal application may be implemented with modules, as described herein.
- the terminal application implementing the modules may include one or more features or functionalities, as described herein.
- one or more of BATs 104 may collect and transmit characteristics of use of BAT 104 by a user to at least one BTS 102.
- BAT 104 running the broadcast application implementing its modular extensions may be configured to collect user-interaction data for transmission to at least one of BTS 102 and associates thereof. That is, one or more of BATs 104 may, for example, collect use data.
- the use data may include viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
- the use data may be utilized for selecting content, such as by filtering an advertisement (ad).
- the ad filtering may be, for example, implemented to provide pre-positioning of various ads. More specifically, the ad filtering and/or the pre-positioning of various ads may include ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like.
- ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like may be based on use data.
- ad selection functionality and ad insertion functionality may be based on one or more of viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
- BAT 104 may lack connection to an IP backchannel.
- emissions 108 may comprise pre-positioned ads. That is, rather than a vehicle vendor like Ford Motor Company merely providing the same F150 ad for everyone everywhere, it may provide many (e.g., ten) different ads (e.g., based on a same general theme). For example, there may be a default ad for the F150 whereas another ad may be for the Mustang or GT. But all the different Ford products may be pre-positioned at the receiver. When this receiver is connected, Internet decisioning may be used; when not so connected, the decisioning may be performed at the receiver with little or no information other than previous content consumption from emissions 108.
- a filter may be emitted as part of OTA emissions 108.
- the filter may comprise a set of rules (e.g., like a JSON file) or logic used by the app to play a segmented ad (e.g., rather than a targeted ad). That is, based on information about the user of the receiver, a segment to which the user belongs may be identified and used to play an appropriate ad.
- Certain content items may be flagged for preferential transmission in particular time periods or under particular conditions. For example, content requiring a high bandwidth or low latency may be flagged for transmission at a time when network conditions permit.
- content may be slated to play (e.g., an hour later) via a marker in the real-time stream.
- the marker may manage, when content is about to be played, characteristics associated with the client terminal such that if applicable characteristics are identified then the corresponding content is selected and played; otherwise, the viewer may be displayed the default content.
- a set of default ads may be embedded into a live stream whereas alternative ads may be transparently provided beforehand via carousel.
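The segment-based decisioning sketched above, in which pre-positioned creatives are selected locally using a broadcast filter when no IP backchannel is available, might look like the following. The segment names, file names, and rules are hypothetical.

```python
PREPOSITIONED_ADS = {
    "default": "f150_default.mp4",    # default creative in the live stream
    "performance": "mustang_gt.mp4",  # alternative, delivered via carousel
}

# Broadcast filter: ordered (segment, predicate) rules evaluated over
# locally known consumption history; names and logic are hypothetical.
FILTER_RULES = [
    ("performance", lambda history: "motorsports" in history),
]

def select_ad(history):
    """Identify the segment the viewer belongs to and return the
    matching pre-positioned ad; otherwise fall back to the default."""
    for segment, predicate in FILTER_RULES:
        if predicate(history):
            return PREPOSITIONED_ADS[segment]
    return PREPOSITIONED_ADS["default"]
```

Because both the rules and the creatives arrive over emissions 108 in advance, the selection runs entirely at the receiver, matching the disconnected case described above.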
- a linear service via emissions 108 may comprise (i) an app-based enhancement that runs in the background and manages the insertion of targeted ads and (ii) another app-based enhancement that contains a collection of apps that provide an interactive viewing experience to enhance the audio/video program.
- Each app-based enhancement may be separately signaled so that the creators of diverse apps do not need to coordinate their signaling.
- One or more of BATs 104 may, for example, transmit consumption data to at least one BTS 102.
- the use data may be used for implementation of real-time analytics.
- the real-time analytics may track revenue, profit, subscribers, growth, a success of any individual piece of content based on viewership of video content, hits for a given piece of video content, revenue data for a given video content, breakdowns by geographic region of viewers, average length of viewership, and the like.
- BTS 102 may transmit to one or more of BATs 104 a new module for implementation in the terminal application, for example, via emissions 108 comprising IP multicast traffic.
- the one or more BATs may receive and store in memory the new module. Thereafter, the one or more BATs may determine as illustrated by operation 806, that the BAT system has a new module for the terminal application.
- the process may advance to operation 808. On the other hand, if the one or more BATs determine it has not received a new module for the terminal application, the process may return to operation 802 and continue to execute the terminal application.
- the one or more BATs 104 may (i) install the new module in, for, and/or with the terminal application and (ii) advantageously integrate into the terminal application additional functionality based on information of the at least one other module.
- the one or more new modules may be compiled with the terminal application.
- the new module may be compiled separately, via separate compilation with and/or for the terminal application.
- the new module may be pre-compiled before transmission to the BAT.
- the new module may be compiled upon reception or soon after being received at the BAT.
- the installation step may comprise execution of the module. Accordingly, implementation of a particular feature may be seen as being realized by compiling or installing the relevant module or modules.
- the new module may be linked by a linker.
- the new module may utilize a JIT compiler that may perform construction on-the-fly, e.g., at run time.
- a JIT compiler may operate upstream of BAT 104, where artifacts are combined for distribution into a packaged broadcast app distribution.
- the JIT compiler may be used to perform on-the- fly construction at runtime.
- Other processes and approaches of implementing, compiling, updating, installing, and/or the like the new module with the terminal application are contemplated as well.
- the one or more of BATs 104 may execute the terminal application implementing the new module as illustrated by operation 810. Thereafter, process 800 may return to operation 802.
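Operations 802-810 form a loop: execute the terminal application, check for a newly received module, install it, and continue. A minimal sketch, with carousel reception stood in for by a simple mapping (an assumption for illustration):

```python
def run_terminal_application(incoming_modules, cycles=3):
    """Sketch of operations 802-810 as a loop. incoming_modules maps a
    cycle index to a module name, standing in for carousel or IP
    reception of a new module."""
    installed, log = [], []
    for cycle in range(cycles):
        log.append("execute")                  # operations 802/810
        new_module = incoming_modules.get(cycle)
        if new_module is not None:             # operation 806: new module?
            installed.append(new_module)       # operation 808: install it
            log.append(f"installed {new_module}")
    return installed, log
```

When no new module arrives in a cycle, the loop simply continues executing the terminal application, mirroring the return to operation 802 described above.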
- the modules of the terminal application may be implemented during a runtime or execution time of the terminal application.
- a loader may perform the necessary memory setup and link the terminal application with any dynamically linked libraries and modules the terminal application may need. Thereafter, execution may begin from an entry point of the terminal application.
- a feature provided by the terminal application implementing the modules may include channel scanning, channel list creation, signal standard type determination (ATSC 1 standard, ATSC 3.0 standard, or the like), channel logo presentation, audio track switching capabilities, subtitle display capabilities, gateway connection capabilities, gateway connection discovery capabilities, information presentation regarding current broadcast events, full-screen player capabilities, and the like.
- application 246 may be downloaded. Upon an initial scan, such a device may generate a list of all available channels. The channels may each be virtual and at any suitable frequency or band. Without this application yet running, another more general application preloaded at the receiver may display information of previous tunes or content consumption. This content may be based on MMT emissions.
- Application 246 may be previously obtained via emissions 108, e.g., of a carousel in periodic emissions (e.g., every ten minutes). In this example, these example emissions may not comprise the other, more-general application.
- a feature provided by the terminal application implementing the modules may include obtaining a list of files, with file identifications, in timestamp order.
- a feature provided by the terminal application implementing the modules may include functionality to subscribe to file changes, a functionality to receive notification for file changes, implementation of file cache management, implementation of integrated testing, and/or the like.
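The file-listing and change-subscription features above might be sketched as a small catalog that orders file identifiers by timestamp and notifies subscribers of changes; all names here are illustrative.

```python
class FileCatalog:
    """Sketch of file-list and change-notification features: list file
    identifiers in timestamp order and notify subscribers on change."""

    def __init__(self):
        self._files = {}        # file_id -> timestamp
        self._subscribers = []  # callbacks invoked on file changes

    def add(self, file_id, timestamp):
        self._files[file_id] = timestamp
        for callback in self._subscribers:
            callback(file_id)   # notification for file changes

    def list_by_timestamp(self):
        return sorted(self._files, key=self._files.get)

    def subscribe(self, callback):
        self._subscribers.append(callback)


catalog = FileCatalog()
changes = []
catalog.subscribe(changes.append)
catalog.add("b.mp4", 20)
catalog.add("a.mp4", 10)
```

A file cache manager could build on the same catalog to drive eviction decisions, since timestamp order is already available.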
- a feature provided by the terminal application implementing the modules may include player modification, channel polling, error handling, platform optimization, Apple TV platform utilization, testing and bug fixing, stream testing, and the like.
- a feature provided by the terminal application implementing the modules may include distribution of device software updates such as distribution of macOS and iOS software updates.
- a feature provided by the terminal application implementing the modules may include distribution of application software updates (e.g., for Microsoft Windows or Office) for devices.
- a feature provided by the terminal application implementing the modules may include distribution of streaming content to devices such as Apple TV, including Apple TV+ streaming video (VOD, live events, and the like).
- a feature provided by the terminal application implementing the modules may include income determination and/or allocation from distribution of other OTT content within implementations such as Apple TV (Netflix, HBO, Showtime, etc.)
- a feature provided by the terminal application implementing the modules may include distribution of emergency responder information and alerting on a more robust infrastructure.
- a feature provided by the terminal application implementing the modules may include menus, graphical user interfaces, interactive menus, and the like.
- a feature provided by the terminal application implementing the modules may include emergency messaging functionality.
- a feature provided by the terminal application implementing the modules may include collection of use data.
- a feature provided by the terminal application implementing the modules may include ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like.
- ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like may be based on use data.
- ad selection functionality and ad insertion functionality may be based on one or more of viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
- a feature provided by the terminal application implementing the modules may include OTT content or functionality including OTT television, OTT messaging, OTT voice calling, and the like.
- a feature provided by the terminal application implementing the modules may include video streaming platform functionality.
- the video streaming platform functionality may include a video hosting platform to organize video content including receiving video content, uploading video content, hosting video content, managing video content, tagging video content, recognizing tagged video content, delivering video content, and/or the like.
- a feature provided by the terminal application implementing the modules may include closed captions for video content.
- a feature provided by the terminal application implementing the modules may include implementation of chapter markers that may enable navigation within video content.
- a feature provided by the terminal application implementing the modules may include implementation of real-time analytics.
- the real-time analytics may track revenue, profit, subscribers, growth, a success of any individual piece of content based on viewership of that video, hits for a given piece of content, revenue data for a given video, breakdowns by geographic region of viewers, average length of viewership, and the like.
- a feature provided by the terminal application implementing the modules may include customer support, professional assistance, and the like support options.
- a feature provided by the terminal application implementing the modules may include statistics for sports betting.
- the modules may generate a graphical user interface that may be utilized by users to input or select various teams, matches, and the like and the terminal application implementing the statistics for sports betting may provide statistics based on the input.
- the statistics for sports betting may include statistics regarding total (Over/Under) values based on the total score between two teams; the statistics for sports betting may include statistics regarding a proposition on a specific outcome of a match; the statistics for sports betting may include statistics on parlays, which involve multiple bets and reward successful bettors with a greater payout only if all bets in the parlay win; and the statistics for sports betting may include statistics regarding other forms of betting.
- a feature provided by the terminal application implementing the modules may include interactive modules for betting.
- the interactive modules may include betting for sports, games, and/or the like.
- the modules may generate a graphical user interface that may be utilized by users to input or select various teams, matches, games, and the like.
- the interactive modules for betting may utilize the graphical user interface for implementing funds transfers based on credit card, electronic check, certified check, money order, wire transfer, cryptocurrencies, and/or the like. For example, bettors may upload funds, make bets, play the games offered, and the like through the graphical user interface. Thereafter, bettors may cash out any winnings through the graphical user interface.
- betting may include total (Over/Under) values based on the total score between two teams, propositions on a specific outcome of a match, parlays that involve multiple bets and reward successful bettors with a greater payout only if all bets in the parlay win, and the like.
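As a concrete example of the parlay arithmetic mentioned above: the payout on a winning parlay is the stake multiplied by the product of the legs' decimal odds, and nothing is paid unless every leg wins. The odds values below are illustrative.

```python
def parlay_payout(stake, decimal_odds):
    """Payout on a winning parlay: the stake times the product of the
    legs' decimal odds. A parlay pays only if all legs win, which is
    why the combined payout is greater than any single bet's."""
    combined = 1.0
    for odds in decimal_odds:
        combined *= odds
    return stake * combined
```

For example, a 10-unit stake on two legs at decimal odds 1.5 and 2.0 returns 30 units if both legs win, versus at most 20 units for either bet alone.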
- a feature provided by the terminal application implementing the modules may include interactive modules for purchasing items represented through product placement.
- the modules may generate a graphical user interface that may be utilized by users to input or select products.
- the modules may include electronically buying items represented through product placement through electronic commerce.
- the electronic commerce may utilize mobile commerce, electronic funds transfer, supply chain management, marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, automated data collection systems, and the like.
- a feature provided by the terminal application implementing the modules may include interactive modules for social media use.
- the modules may generate a graphical user interface that may be utilized by users for social media.
- the social media may be interactive computer-mediated technologies that facilitate the creation and sharing of information, ideas, career interests, and other forms of expression via virtual communities and networks.
- the variety of social media services may include social media interactive Internet-based applications, user-generated content, such as text posts or comments, digital photos or videos, and data generated through all online interactions.
- the variety of social media services may include generation of user service- specific profiles and identities for a website, an application, and/or the like that are designed and maintained by a social media organization.
- BTS 102 may utilize a carousel rebroadcast process to deliver one or more new modules and/or the terminal application to one or more BAT systems 104.
- the carousel rebroadcast process may implement a data and object carousel that may be used for repeatedly delivering one or more new modules in a continuous cycle.
- the carousel rebroadcast process may allow the one or more new modules to be pushed from BTS 102 to the one or more BATs by transmitting the one or more new modules repeatedly in a standard format.
- BTS 102 may periodically rebroadcast the one or more new modules to the BAT.
- the BAT may monitor for the one or more new modules and obtain them during the periodic rebroadcast of the data by BTS 102.
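The carousel timing implied above (an object rebroadcast on a fixed period, with the receiver waiting for its next delivery window) can be computed directly; the period and offset values are illustrative.

```python
def next_carousel_delivery(now, period, offset=0.0):
    """Return the time of the next rebroadcast of an object carried in
    a carousel with the given period in seconds, where offset is the
    time of some known past delivery."""
    elapsed = (now - offset) % period
    return now + (period - elapsed) % period
```

A module carried every 600 seconds and last seen at t = 0 is next available at t = 1200 for a receiver tuning in at t = 650, which is the kind of wait that motivates the IP-backchannel threshold described earlier.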
- the terminal application may implement modular programming.
- the terminal application may implement modules that separate various functionalities of the terminal application into independent and/or interchangeable modules.
- each of the modules of the terminal application may contain everything necessary to execute only one aspect of the desired functionality of the terminal application.
- the modules of the terminal application may include a module interface.
- the module interface may express various elements that may be provided and required by the module.
- the elements may be defined in the interface and may be detectable by other modules of the terminal application.
- the modules of the terminal application may include an implementation.
- the implementation may contain working code that corresponds to the elements.
- the elements may be declared in the interface.
- the modules of the terminal application may include programming utilizing structured type programming, object-oriented type programming, and the like.
- the modules of the terminal application may facilitate construction of one or more of the features as described herein for the terminal application by decomposition of various features.
- the modules of the terminal application may refer to high-level decomposition of the code of the terminal application into pieces having structured control flow, object-oriented programming that may use objects, assemblies, components, packages, and/or the like.
- the modules of the terminal application may utilize programming that may include one or more of the following languages Ada, Algol, BlitzMax, C#, Clojure, COBOL, D, Dart, eC, Erlang, Elixir, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, IBM i Control Language (CL), IBM RPG, Java, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal, Perl, PL/I, PureBasic, Python, Ruby, Rust, JavaScript, Visual Basic .NET, WebDNA, and/or the like.
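The interface/implementation split described above could be expressed in any of the listed languages; here is a minimal Python illustration in which the interface declares the elements other modules program against and the implementation supplies the working code. The class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class ModuleInterface(ABC):
    """Interface half of a module: declares the elements the module
    provides and requires; other modules program against this."""

    @abstractmethod
    def feature_name(self):
        ...

    @abstractmethod
    def run(self, context):
        ...

class ClosedCaptionModule(ModuleInterface):
    """Implementation half: working code for the declared elements,
    containing everything needed for this one aspect of functionality."""

    def feature_name(self):
        return "closed-captions"

    def run(self, context):
        return f"captions enabled for {context}"
```

Because each module exposes only its interface, modules remain independent and interchangeable, which is the property the decomposition above is after.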
- a client receiver such as a BAT 104, may serve as a content delivery network (CDN) point of presence (PoP) for one or more TVs 103 and/or UE 170 connected to the BAT (e.g., via network 294, 296) at a home or other facility, e.g., by collecting OTA broadcast data from a variety of channels, then storing and redelivering data as required via a variety of mechanisms.
- Such a receiver may thus act as a CDN PoP for ad hoc delivery of OTA data.
- Broadcast data may encompass not just broadcast media, but also application extensions and public and private data casting, including such disparate information types as ad pre-positioning and emergency broadcast information.
- CDN PoP 242 may be established, for example, in a home receiver unit, or in a mobile device such as a phone or tablet, for example, via the use of a software defined radio receiver or transceiver chip or integrated circuit (as described with reference to FIG. 3, for example), optionally along with associated software.
- a single PoP and/or multiple PoPs may service all devices, including Internet of things (IoT) devices.
- Candidate integrations may include various devices such as iPhone, Macs, Apple TVs, Third-party Home Gateway devices, wireless devices, and the like.
- BAT 104 may act as a PoP by hosting a data services function 240.
- the data services function 240 may act more generally than either API services 244 or applications 246, inasmuch as the data services function 240 may provide general functionality for managing the reception and distribution of data in the manner, for example, of a PoP in a CDN comprising the transmission 108 and downstream devices.
- the PoP functionality may underlie many of the data services of BAT 104. For example, by coordinating the processing and storage of received transmission data, supplementing received transmission data with data gathered by retransmission from other BATs and locally available networks, managing received data queries from users, handling dedicated return-channel communications with the BTS, implementing data retention policies, and supporting applications and APIs, the PoP functionality may directly or indirectly support the support of legacy devices, OTA API services, progressive video enhancement, the gleaning of data packaged in video baseband padding packets, collaborative object delivery and recovery, data delivery via fragmentation, progressive OTA application download and runtime, and flash channel requests and processing.
- CDN PoP 242 may be context-aware (e.g., in terms of what the fragments represent). For example, the CDN PoP may recognize different kinds of content, e.g., by file type or subcategory within files (such that ancillary metadata is deemed to be of lower importance whereas a content decryption key may be deemed of substantially higher importance, for making sure the latter is received or recovered), and take action accordingly to obtain missing fragments, alert users to conditions, or otherwise act automatically on received data, or lacunae in received data, in accordance with the significance or priority of the data and available options for corrective or preventative action.
- PoP 242 may not make available any file or object that is not valid (e.g., based on a checksum, another integrity checker, signature, Widevine encryption, or another criterion). As such, contents of the CDN may be preloaded, e.g. by the broadcast.
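The context-aware validation described above (serve nothing invalid, and chase missing or corrupt objects in priority order, with a decryption key outranking ancillary metadata) might be sketched as follows. The priority labels and the SHA-256 integrity check are illustrative choices, not mandated by the source.

```python
import hashlib

class PopObject:
    """A received object with an integrity reference and a priority
    class; priority labels here are illustrative."""

    def __init__(self, name, payload, expected_sha256, priority):
        self.name = name
        self.payload = payload
        self.expected_sha256 = expected_sha256
        self.priority = priority  # "key" outranks "content" and "metadata"

    def is_valid(self):
        # The PoP makes nothing available that fails an integrity check.
        return hashlib.sha256(self.payload).hexdigest() == self.expected_sha256

def triage(objects):
    """Return invalid objects ordered by priority, so the PoP chases
    the most important missing or corrupt fragments first."""
    order = {"key": 0, "content": 1, "metadata": 2}
    bad = [o for o in objects if not o.is_valid()]
    return sorted(bad, key=lambda o: order[o.priority])
```

A corrupt content decryption key would thus be queued for recovery ahead of corrupt ancillary metadata, reflecting the relative importance described above.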
- CDN PoP 242 may be a potentially potent entity that can be expert in understanding users' needs.
- because this CDN PoP may be allowed to remediate data, e.g., by reactively plugging holes in data, it could also be arranged to proactively attend to a user's experience.
- the aforementioned preventative action may mean that, if the PoP notes a bad trend in connection quality, it may shift functionality to pre-positioning objects that would otherwise be expected to appear in a normal rotation of broadcasted content.
- the CDN PoPs may intelligently and automatically determine deduplications of objects requested in the region (e.g., without requiring user involvement) by being able to transparently preposition the content to the audience set via excess capacity.
- the PoP may act as an intermediary for many types of data received from various sources and used by various users, APIs, and applications supported by a BAT. This may include, for example, traditional DVR functions of semi-permanent media content, and permanent storage and support of tools and data for various applications, and various data needs of users, such as storage of home computer and personal device application and OS upgrades.
- PoP 242 may opportunistically identify and acquire various information broadcast in rotation by BTS 102, even before any user or application of BAT 104 is aware of the need to acquire such information, e.g., for the distribution of a large periodic upgrade (e.g., annual Microsoft Office 365 update). In this way, the PoP may provide, via a substantially (or even exclusively) unidirectional OTA broadcast data service, many of the advantages traditionally associated with terrestrial bi-directional networks.
- one or more CDNs 242 may perform file cache management for local UE 170 consumption and/or consumption via UI device 118 at BAT 104.
- broadcast application 246 and other data may be stored via CDN PoP 242 with a much longer TTL than the other data.
- CDN PoP may interoperate with a file cache manager to have a substantially complete or whole database (e.g., an encoded copy or snapshot of the Netflix library). For example, if plugged in and watching for several months, BAT 104 may locally emit to one or more UE 170 and/or visually display any of the movies previously obtained via emissions 108 during those several months.
- Such consumption may be necessary for the user, e.g., when BAT 104 and/or the UE are located or configured such that they have no other viable means of access (e.g., via the Internet or another network), or only resource-poor or otherwise limited means (e.g., dial-up or 2G cellular).
- a receiving device may possess sufficient storage, e.g., when implementing aspects of the A/331 standard, including a distribution window descriptor (DWD), to provide relevant start/end times of NRT emissions.
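The DWD start/end times above might be applied as a simple window check; this is an illustrative sketch (the function name and times are assumptions, not from the A/331 standard's exact schema):

```python
# Sketch (assumption): testing whether an NRT emission falls inside a
# DWD-style distribution window given its announced start/end times.
from datetime import datetime, timezone

def in_distribution_window(now: datetime, start: datetime, end: datetime) -> bool:
    """True if 'now' lies within the [start, end] distribution window."""
    return start <= now <= end

# Hypothetical window announced for an overnight NRT file delivery.
start = datetime(2020, 11, 18, 2, 0, tzinfo=timezone.utc)
end = datetime(2020, 11, 18, 4, 0, tzinfo=timezone.utc)
print(in_distribution_window(datetime(2020, 11, 18, 3, 0, tzinfo=timezone.utc), start, end))
```

A receiver could use such a check to decide when to keep a tuner listening for the announced NRT emission.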
- filter codes may be provided, e.g., for selective application-specific caching of resources.
- an overall object expiration mechanism may be provided for management in an extended file delivery table (EFDT) FDT-Instance element.
- a conditional access mechanism for client-side DRM may be provided, e.g., a Widevine content license key acquisition lifecycle.
- content provided by CDN 242 may be digitally signed via Widevine DRM or SSL/TLS.
- Widevine's DRM solution provides the capability to license, securely distribute, and protect playback of content at or in relation to BAT 104.
- CDN PoP 242 may perform file cache management using storage policies obtained OTA (e.g., with NRT files), e.g., for informing the local, home gateway of BAT 104 how (e.g., in terms of TTL) to persist each of those files. For example, when storage 236 nears its capacity, an internal storage policy may operate (e.g., as first-in-first-out (FIFO), least-commonly-used, or another algorithm) so as not to overflow its own cache. Implementations using DWD may thus provide guidance on when to persist, when to purge, and other attributes.
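The TTL-plus-FIFO policy above might be sketched as follows; the class and field names are illustrative assumptions, not part of the disclosure:

```python
# Sketch (assumption): a home-gateway file cache honoring per-file TTLs from
# OTA storage policies, with FIFO eviction when storage nears capacity.
import time
from collections import OrderedDict

class FileCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = OrderedDict()  # name -> (size, expires_at); insertion order = FIFO

    def put(self, name, size, ttl_s, now=None):
        now = time.time() if now is None else now
        self._purge_expired(now)
        while self.used + size > self.capacity and self.files:
            self._evict_oldest()  # FIFO eviction to avoid overflowing the cache
        if self.used + size <= self.capacity:
            self.files[name] = (size, now + ttl_s)
            self.used += size

    def _evict_oldest(self):
        _, (size, _) = self.files.popitem(last=False)
        self.used -= size

    def _purge_expired(self, now):
        for name in [n for n, (_, exp) in self.files.items() if exp <= now]:
            size, _ = self.files.pop(name)
            self.used -= size

cache = FileCache(100)
cache.put("app.html", 60, ttl_s=3600, now=0)
cache.put("movie.seg", 60, ttl_s=3600, now=1)  # evicts app.html (FIFO)
print(sorted(cache.files))
```

A least-commonly-used variant would differ only in eviction order; the DWD-derived TTL would replace the fixed `ttl_s` here.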
- a single BAT 104 may implement a CDN. In other embodiments, a plurality of geographically distributed BATs 104 may implement the CDN.
- the CDN may provide high availability by being spatially proximate to end users (e.g., users of BATs 104, UE 170, or next generation TVs 103).
- CDN 242 may, for example, provide web objects (e.g., text, graphics, and/or scripts), downloadable objects (e.g., media files, software, and/or documents), applications (e.g., e-commerce, portals, etc.), live streaming media, on-demand streaming media, and/or social media sites.
- CDN 242 may be, for example, configured to receive payment from content owners to deliver their content to the end users.
- CDN 242 may be implemented standalone.
- CDN 242 may be at least partially hosted at a datacenter of an Internet service provider (ISP).
- CDN 242 may perform such content delivery services as video streaming, software downloads, web and mobile content acceleration, license management, transparent caching, and performance measurement (e.g., load balancing, switching and analytics, and cloud intelligence). CDN 242 may further perform security, such as distributed denial-of-service (DDoS) protection, a web application firewall (WAF), and WAN optimization.
- CDN PoP 242 may be optimally positioned at the edge to serve content (e.g., over network 294). As such, CDN PoP 242 may implement a demarcation point or interface point between communicating entities. In doing so, CDN PoP 242 may implement a router, a network switch, a multiplexer, and/or other network interface equipment otherwise located in a server or datacenter. And CDN PoP 242 may implement decompression at the edge.
- CDN PoP 242 may operate as an ISP or as equipment that enables users to connect to the Internet via a more typical ISP.
- CDN PoP may, for example, implement one or more unique IP addresses and a set of other, assignable IP addresses for the end users.
- CDN 242 may operate responsive to application calls it intercepts (e.g., from application 246 or from local UE 170). For example, a URL request may be responded to with a hit if CDN 242 stores the requested content (and an error 404 miss otherwise). As such, CDN 242 may create a facade simulating an Internet connection, which is filled mostly from live broadcasts, including carousel emissions 108, VOD, etc. Thus, contrary to known CDNs, which are filled actively responsive to URL misses, CDN 242 may implement a home gateway that is passively pre-filled via broadcasted emissions 108. For example, BAT 104 may operate with two tuners, one that the user tunes to a station, and the other may be always tuned to another station for continual storage of that data.
- CDN 242 may respond with any suitable response code depending on the requested content being in cache or another scenario, such as a 204 no content, a 305 use proxy, a 400 bad request, a 404 not found, or a 5XX gateway error.
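The hit/miss behavior above might be sketched as follows; this is an illustrative assumption showing only the response-code selection, not a full HTTP server:

```python
# Sketch (assumption): a facade over a passively pre-filled cache, answering
# URL requests with a 200 hit or a 404 miss; codes are illustrative.
def respond(cache: dict, url: str):
    if url in cache:
        return 200, cache[url]  # hit: serve content pre-positioned via broadcast
    return 404, None            # miss: content was not (yet) broadcast/stored

cache = {"/vod/movie1.mp4": b"...segment bytes..."}
print(respond(cache, "/vod/movie1.mp4")[0])  # 200
print(respond(cache, "/vod/movie2.mp4")[0])  # 404
```

Contrary to a conventional CDN, a miss here triggers no origin fetch; the cache fills only as further emissions arrive.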
- CDN PoP 242 may be filled via live and NRT broadcast (e.g., including MMT MPUs and application extensions) and facilitate data casting, ad prepositioning (e.g., by superseding or overlaying an ad received with broadcasted content by a targeted ad previously obtained by emissions 108 and stored at CDN 242), and emergency information delivery.
- CDN 242 may minimize latencies in users loading web content and may offload traffic from content servers to seamlessly improve users’ web experience.
- ads may be obtained via emissions 108 in real-time or via previous such emissions. These ads may be displayed similarly to regular content via UI devices 118. Ads, though, may also be displayed in an L-bar (or another shape) of broadcast app 246, e.g., by not being part of the regular broadcast but rather by being an NRT ad. Alternatively or additionally, an alert may be displayed in the L-bar.
- the regular video may be shrunken down to fit the L-bar.
- the ad may be a video ad, or it could be a static ad.
- CDN PoP 242 may be filled with fragmented content via OTA but also OTT. This PoP may, for example, determine how to obtain content requested at the CDN. For example, due to the BitTorrent fragmentation, a portion may be available from a previous broadcast, but the remaining portion may be obtained via carousel reemission, a collaborative peer's DRC, or an available IP (e.g., broadband) backchannel. As such, PoP 242 itself may determine to obtain all of these files' fragments based on different rules for different files. For example, some files may warrant taking advantage of a different distribution pass, whereas others may wait for the next round on the carousel, thereby taking a less expedited approach.
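The per-file rules above might be sketched as a source-selection function; the rule names ("urgent", "bulk") and source labels are hypothetical, not from the disclosure:

```python
# Sketch (assumption): choosing where to obtain a missing fragment based on a
# per-file rule and which paths are currently available.
def choose_source(fragment_id, rule, peer_available, backchannel_available):
    """Decide the acquisition path for a missing fragment."""
    if rule == "urgent":
        if backchannel_available:
            return "ip_backchannel"    # expedited: fetch over broadband
        if peer_available:
            return "peer_drc"          # expedited: fetch from a collaborative peer
    return "carousel_reemission"       # default: wait for the next carousel round

print(choose_source("f1", "urgent", peer_available=True, backchannel_available=False))
print(choose_source("f2", "bulk", peer_available=True, backchannel_available=True))
```

A low-priority file thus simply waits for reemission even when faster paths exist, matching the "less expedited approach" above.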
- BTS 102 may perform delivery and synchronization of media and non-timed data in system 100.
- the delivery functionality may include mechanisms for the synchronization of media components delivered on the same or different transport networks, and application-layer FEC methods that enable error-free reception and consumption of media streams or discrete file objects.
- some FEC may be implemented at the software defined radio of BAT 104, whereas other FEC may be implemented at the application layer (e.g., in the layered coding transport (LCT), as per the A/331 standard).
- FIG. 11 illustrates an example method 1100 for implementing a CDN PoP.
- Method 1100 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 1100 presented below are intended to be illustrative. Method 1100 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1100 are illustrated in FIG. 11 and described below is not intended to be limiting.
- Method 1100 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 1100 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1100.
- the following operations may each be performed using a system the same as or similar to CDN PoP 242 (shown in FIG. 2).
- Metadata may be determined, at an orchestration unit of a BTS.
- the metadata may be associated with content in the broadcast.
- CDN PoP 242 may be pre-specified (e.g., via metadata from an orchestration unit of BTS 102) with policies on how the obtained data is to be retained, stored, and locally managed.
- the CDN PoP may transparently mirror elements, including HTML, CSS, software downloads, and media objects originally from third party servers. And this CDN PoP may be automatically chosen based on a type of requested content and a location of a user making the request.
- data may be fragmented via the BitTorrent protocol at BTS 102 (e.g., via fragmentation component 207) such that downstream BAT 104’s fragmentation component 237 is operable to attempt the fragments’ reassembly.
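The fragment/reassemble pair above might be sketched as follows; this is an illustrative assumption using per-piece SHA-256 hashes in the spirit of BitTorrent, not the protocol's actual wire format:

```python
# Sketch (assumption): BitTorrent-style fragmentation at the BTS and
# out-of-order reassembly at the BAT, keyed by piece index and piece hash.
import hashlib

def fragment(data: bytes, piece_size: int):
    """Split data into (index, sha256, chunk) pieces of piece_size bytes."""
    pieces = []
    for i in range(0, len(data), piece_size):
        chunk = data[i:i + piece_size]
        pieces.append((i // piece_size, hashlib.sha256(chunk).hexdigest(), chunk))
    return pieces

def reassemble(pieces):
    """Reorder received pieces by index and verify each piece's hash."""
    out = b""
    for idx, digest, chunk in sorted(pieces, key=lambda p: p[0]):
        assert hashlib.sha256(chunk).hexdigest() == digest, f"bad piece {idx}"
        out += chunk
    return out

original = b"large NRT object delivered over emissions"
pieces = fragment(original, 8)
pieces.reverse()  # simulate out-of-order reception over the multicast
print(reassemble(pieces) == original)
```

Because each piece is independently verifiable, missing or corrupt pieces can be re-requested individually via carousel reemission or a peer.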
- the fragments may be broadcasted, in an IP multicast network (e.g., using the MMT protocol), at BTS 102.
- the broadcast may be temporarily preempted, with a higher priority broadcast (e.g., an alert or other content).
- it may be determined whether the CDN PoP is to supplement a subset of the content actually received at the BAT, in which case a determination as to whether to obtain the data via carousel rebroadcast network 108, DRC peer network 292, or another IP-based connection 106 may be performed at micro CDN PoP 242 based on the metadata.
- the supplementing may be on-demand and may be performed using data stored at the CDN PoP, as discussed above.
- the data may be reconstructed, at micro CDN PoP 242 using component 237, by reordering the fragments (e.g., to provide VOD and/or application extension updates) in NRT.
- a user may be notified when all the fragments are obtained, and/or a subset of the content may be forwarded to the user via UI devices 118.
- An original file may be fragmented into a plurality of content portions at the BTS (e.g., as described above based on a peer-to-peer file sharing protocol that is decentralized), and portions broadcast via the IP multicast traffic (or otherwise) may be reconstituted by the CDN PoP.
- data casting, ad prepositioning, and/or emergency information delivery may be performed from BAT 104.
- ATSC 3.0 transmissions 108 may thus encompass dynamically generated flash channels.
- a flash channel is a channel that may be generated and automatically be added to the broadcast stream based on events occurring in real-time, such as the unexpected prolonged coverage of a live event (e.g., a football game running in overtime) or the unexpected need to cover breaking news or warnings to the public (e.g., a tornado warning).
- ATSC 3.0 transmissions may also encompass dynamic reallocation of transmission bandwidth among channels (or components of services) based on the manner the offered services are consumed.
- a client’s viewing mode may involve heavy consumption (e.g., for a prolonged period of past time) such that the dynamic reallocation only mildly affects quality of the emission (e.g., as observed by the user), or the viewing mode may involve light consumption (e.g., for a brief period of past time) such that the dynamic reallocation is more aggressive.
- Dynamic reallocation of bandwidth - both for allowing the flashing in (adding) of a new channel (service) into the broadcast stream and for reprioritizing service components by reassigning each a different transmission resiliency level - may be carried out by automatically reconfiguring the PLP, including reconfiguring the modulation modes, coding, or bitrates used to encode each service component.
- This dynamic reallocation of resources may be performed by the transmitter based on metrics that may be aggregated in real-time from viewers’ receivers or from third parties; these metrics may then facilitate the transmitter’s real-time decision-making of bandwidth reallocation.
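The metric-driven reallocation above might be sketched as follows; the proportional split and the floor value are illustrative assumptions, not a method from the disclosure:

```python
# Sketch (assumption): aggregate viewing-mode reports from receivers and split
# a fixed transmission budget among services proportionally to their audience.
from collections import Counter

def allocate_bandwidth(reports, total_kbps, floor_kbps=200):
    """reports: list of {'service': name} viewing-mode reports.
    Returns per-service kbps proportional to viewer counts, with a floor."""
    counts = Counter(r["service"] for r in reports)
    total = sum(counts.values())
    return {s: max(floor_kbps, int(total_kbps * c / total)) for s, c in counts.items()}

# Eight receivers report watching ch1, two report ch2.
reports = [{"service": "ch1"}] * 8 + [{"service": "ch2"}] * 2
print(allocate_bandwidth(reports, total_kbps=10000))
```

In practice the reports would arrive via the DRC or a third party, and the resulting allocation would drive encoder and PLP reconfiguration.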
- content, including linear audio/video (AV) services for example, may be distributed by one or more transmitters and in more than one RF channel (e.g., broadcast stream).
- Multiple transmitters emitting broadcast streams may be operating from one facility or may each be operating from one of multiple facilities, in accordance with the ATSC 3.0 standard.
- Each broadcast stream may comprise one or more PLPs, each having its own modulation mode and coding parameters.
- a PLP may contain one or more service components (e.g., video, audio, and NRT data); however, a service may be delivered in more than one PLP.
- a transmitter may use information collected in real-time from receivers on a grid to manage the allocation of bandwidth among services provided in its broadcast stream.
- Such information may be a viewing mode used by a user of a receiver - e.g., what service (e.g., station) or service component is currently being viewed and on what platform.
- a real-time data distribution of users’ viewing modes may affect the transmitter’s bandwidth allocation when encoding each component of service in its broadcast stream.
- a video component’s encoding parameters - e.g., frame- rate, frame-resolution, or bit-depth - or parameters of FEC techniques may be determined based on the current viewing modes of receivers on the grid.
- Real-time data, such as data received from users of receivers on the grid (e.g., viewing modes), may be used by a transmitter to reconfigure the modulation modes that are applied to the transmission of each service in the PLP.
- ATSC 3.0 allows for dynamic adjustment of modulation characteristics that may control the reach of the transmitted data.
- a transmitter, based on real-time data received from receivers on the grid or a third party, may decide to use a modulation method that is set to better perform with respect to receivers at a certain locality (e.g., in a car outdoors versus in a home indoors) or with respect to a type of receiver (e.g., mobile receivers versus stationary receivers).
- a transmitter may use real-time data to decide whether to give preference to certain devices by using a modulation method or mode that delivers a more reliable data transmission to those devices.
- modulation and/or coding of a PLP may be determined based on a manner in which to-be-replaced content is previously consumed such that preference is given to first devices by delivering a more reliable data emission than that delivered to second devices.
- viewing mode data may be available to the transmitter in real time by means of a dedicated return channel (DRC) as defined by the ATSC 3.0 A/323 standard.
- any other communication link may be used by a receiver to inform the transmitter about its current viewing mode.
- the viewing mode data of receivers on the grid may be sent to an associated server, where the data will be aggregated and analyzed; recommendations (or controls) may then be sent forward to the transmitter for the latter to base its resource allocation on as it encodes the services in its broadcast stream.
- a transmitter may be transmitting a broadcast stream containing several services (e.g., stations) each providing one or more linear, AV services - where each service may comprise multiple video components and corresponding one or more audio components.
- the transmitter may find that a large number of users of receivers on the grid consumes a certain service using platforms with limited display capabilities - e.g., a low-resolution display or a standard dynamic range (SDR) display.
- the transmitter may encode that service at low frame-rate, or at low resolution, or at low bit-depth, thereby reserving encoding resources (bits) for the encoding of another service in the broadcast stream (or another component of the same service, such as audio).
- the viewing mode data may show that a certain service is not being viewed at all by the majority of the users of the receivers in the grid, in which case the transmitter may encode that service at a reduced bitrate, by reducing the frame-rate, the resolution, or the bit-depth for example.
- the viewing mode data may reveal that most users do not view a provided service on a 3DTV; therefore, the transmitter may decide to transmit in the broadcast stream only one component of a stereoscopic or multi-view content.
- a similar approach may be used with respect to a service providing, in addition to a video component, multiple audio components, each associated with a different language.
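The capability-driven down-encoding above might be sketched as a profile selector; the thresholds and profile values are illustrative assumptions, not taken from the disclosure:

```python
# Sketch (assumption): pick a service's encoding profile from the share of
# receivers reporting limited (e.g., low-resolution or SDR) displays.
def pick_profile(low_capability_share: float) -> dict:
    """Encode down when most viewers cannot benefit from the full profile,
    reserving bits for other services or components."""
    if low_capability_share >= 0.8:
        return {"resolution": "720p", "frame_rate": 30, "bit_depth": 8}
    if low_capability_share >= 0.5:
        return {"resolution": "1080p", "frame_rate": 30, "bit_depth": 8}
    return {"resolution": "2160p", "frame_rate": 60, "bit_depth": 10}

print(pick_profile(0.9)["resolution"])  # mostly limited displays -> encode down
print(pick_profile(0.1)["resolution"])  # mostly capable displays -> full profile
```

The bits saved by a lower profile become the "reserved encoding resources" available to another service or to an audio component.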
- feed sources 202 may cause resolution reduction, e.g., for content emitted via network 108.
- feed sources 202 may perform corrective behavior and/or increase the codec efficacies (e.g., by increasing the GOP length, by increasing the codec complexity by switching to a more computationally intensive codec in-flight, and/or by another suitable approach).
- a license cost or intrinsic tax may be substantial.
- alliance for open media (AOMedia) video 1 (AV1) may be used (e.g., which may be more compute-heavy in comparative analysis but open source). That is, an AV1 video codec computation cost may, for example, be less than a cost of HEVC licensing.
- VP9 video coding format may be used (e.g., which may be substantially more efficient at decoding than the aforementioned and/or other formats).
- video encoder MPEG-5 may be used.
- feed sources 202 may determine a set (e.g., matrix) of options for optimal ALP management (e.g., a more efficient change to an upstream configuration of different portions of the encoding and the ecosystem), to meet one or more objectives of adding in a new resource. That is, an example implementation may include selective discard of data units that may be below a visual perception quality metric. But some embodiments may, for example, optimize for content-aware use cases (e.g., by knowing that an extra couple hundred kilobits out of the reference emission flow are needed), via a deterministic, flow-management determination to perform selective discard (e.g., of a compressed data essence).
- a visual analysis engine at BTS 102 may make a determination that a channel falls above an SNR threshold and/or satisfies another reception quality criterion, so there is room for this channel to have its modulation and/or coding pared down to a lower quality for squeezing in the new channel.
- a number of different methodologies are contemplated (e.g., as learned by trained model 203-2) for supporting the extra channel, which is significant because previously, in ATSC 1.0, the only degrees of freedom were adjustment of the tower's height and output power level.
- dynamic ALP management may include configuring parameters of the encoder and leveraging, from a yield perspective, non-monetizable content such as emergency alerting or other breaking news coverage.
- An arbitrage model may have provisos for data emission that does not have a directly monetizable unit of value but may be represented in a monetizable unit of value by defining it as a goodwill emission, which effectively balances its weight as to its value as content.
- An example prediction of AI model 203-2 may cause a dropping of data files that are otherwise to be transmitted (e.g., based on respective importance).
- Another example prediction of model 203-2 may cause a dropping of one or more ALPs (e.g., another video asset) or channels.
- emissions 108 may comprise a plurality of subchannels, e.g., with one including an emergency event; the machine learning of feed sources 202 may, for example, determine not to make an adjustment for the new channel.
- a request (e.g., from feed sources 202 or supplemental sources 206) may be handled for creating room for an extra channel (e.g., by means besides only compressing and squeezing).
- certain content transmission may be placed on a much more robust PLP so, even though a receiver may only get one out of ten video frames (e.g., due to the MODCOD not being robust enough or this device is in motion), reception of the audio frames may be uninterrupted and without loss.
- the matrix of parameters/options may thus be, for example, adjusted for durability reasons.
- the flash channel may be, for example, added for a sporting event's overtime, e.g., when the 11 o'clock news is otherwise supposed to start. In these implementations, the game may be continued while the 11 o'clock news, emitted as the flash channel, is not preempted, e.g., without disturbing either audience. Such delivery of a plurality of services may be continued indefinitely.
- the flash channel may comprise emergency alerting or the like, e.g., while concurrently emitting regular programming (e.g., without having to destroy a media essence in full-blown or fully- filled coverage of an event).
- ads may be displayed; in another example, ads may be added as part of emissions 108 (e.g., by compressing or adjusting configurations of existing emissions).
- the flash channel may be contractually (monetarily or via another type of value) provided as a service (e.g., for a local community or geographic region). This service may be of higher quality (e.g., by being near the event) for, for example, enhancing goodwill and brand awareness.
- Multiple transmitters may transmit multiple RF channels (or broadcast streams).
- Receivers on a grid within coverage of multiple broadcast streams may send viewing mode data to a transmitter including viewing modes with respect to the services transmitted by another transmitter.
- data that include the viewing modes with respect to services transmitted by multiple broadcast streams (or transmitters) may be aggregated and may be analyzed by a server in communication with one or more of the transmitters.
- the server may then manage and optimize bandwidth allocation for all the broadcast streams emitted from the different transmitters, possibly located at different broadcast facilities.
- Such a server may utilize the temporal availability of bandwidth in a certain transmitter and direct the distribution of packets of NRT data through that transmitter’s broadcast stream.
- the server may operate to predict a future optimal bandwidth allocation based on statistics of the viewing modes - computed, for example, based on viewing mode data that had been received within a preceding window of time.
- the server may predict the optimal bandwidth allocation for a certain transmitter at a time t2 (> t1) based on statistics of viewing mode data formed within a window preceding t1, for example.
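The windowed prediction above might be sketched as follows; the per-service share computed over a preceding window stands in for the "statistics of the viewing modes" and is an illustrative assumption:

```python
# Sketch (assumption): estimate each service's future bandwidth share from
# viewing-mode samples received in a window preceding time t1.
def predict_share(samples, window, t1):
    """samples: list of (timestamp, service) viewing-mode reports.
    Returns per-service audience share over t1 - window <= timestamp < t1."""
    recent = [svc for t, svc in samples if t1 - window <= t < t1]
    if not recent:
        return {}
    return {svc: recent.count(svc) / len(recent) for svc in set(recent)}

# Hypothetical reports: three fall in the window [0, 5), one arrives later.
samples = [(1, "news"), (2, "news"), (3, "sports"), (9, "news")]
print(predict_share(samples, window=5, t1=5))
```

The resulting shares could then be scaled by the pipe capacity to yield the predicted allocation for time t2.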
- aspects disclosed in this disclosure may also be utilized to optimize bandwidth utilization across multiple broadcast streams that cover high-scale live events (e.g., the Super Bowl).
- Delivering content covering a live event may involve large numbers of content providers each distributing services of multiple live feeds - such as video feeds from multiple cameras, audio feeds of commentaries, and event-dependent computer-generated feeds.
- a server based on viewing mode data received from receivers of viewers of the live event may, for example, prioritize the services (the various live feeds) and may recommend optimal resource allocation to the transmitters of the broadcast streams that encode these services.
- a server may also use other sources of information to prioritize the services.
- other sources may include manual or automatic means indicating the priority of a certain feed at a certain time based on analysis of its content or based on other context derived from the live event’s activities.
- the server may identify temporal segments of a service with low priority and may utilize that to distribute NRT data.
- feed sources component 202 may increase data availability (e.g., at the facility through network 108).
- this component may algorithmically implement the A/331 standard to dynamically bring up and emit a linear, AV service.
- the bring-up and/or emission of this service may be expedited via cloud computing.
- the dynamic allocation of a linear AV service may be facilitated between an on-premise facility and the virtualized cloud environment.
- Components of system 100 may thus facilitate the automatic coordination and collaboration for an emission on network 108.
- feed sources component 202 may implement wide- scale data distribution, e.g., via a linear AV service and/or other MMT services. That is, an automation between service provisioning and data emission/delivery across network 108 may be performed in a single facility and/or in the cloud.
- feed sources component 202 via a statistical multiplex, may distribute bitrates between channels including the new, flash channel. For example, this component may determine bandwidth availability, distributing one data emission at one facility.
- herein contemplated is a distribution of a data emission at all of a plurality of facilities. That is, the data of the new channel may fit for distribution into a remaining, available bandwidth of each of the plurality of different facilities together serving a larger region or nation.
- Such adaptation to particularly available capacities may be implemented with the distributive statistical multiplex to create emissions that may be managed and then multiplexed back in according to the PLP characteristics or the additional bandwidth configuration and utilization throughout, e.g., from a national distribution feed. For example, feedback indicating resource availability from the plurality of locations may be provided back, to dynamically manage delivery of a content emission that may fit in through a plurality of locations.
- Some embodiments may optimize resources in a linear, AV service of one facility, e.g., as a set of allocations that may not match another facility’s set of allocations.
- data may be provided into opportunistic windows that are available.
- a quantized projection of what that data availability looks like from the network perspective may be provided as a whole, being, for example, part of the distributive statistical multiplex.
- a leading indicative signal may be provided to a network encoder, which may then produce derivative outputs that may be relevant for each one of those quantized units of channel capacity and availability, e.g., in real-time.
- a statistical multiplex may include communication link sharing and adaptations to instantaneous traffic demands of the data streams that are transferred over each channel. That is, a communication channel may be, for example, divided into a set of variable-bitrate, digital channels or data streams. Example statistical multiplexing may provide a link utilization improvement or gain.
- a statistical multiplex may operate on a group-of-pictures by group-of-pictures (GOP-by-GOP) basis, e.g., with an active feedback mechanism from the broadcast scheduler. In these or other embodiments, the statistical multiplexer may be used to optimize or reallocate channel utilization in a fixed model.
- this statistical multiplexer may be operable to fill the PLPs when everything is stable, e.g., getting from 95 to 99 percent utilization of a pipe.
- applied machine learning of the core may complement its functionality to manage what to decrease, what to adjust, or any other control parameter at BTS 102. Dynamic allocation may thus be performed by a statistical multiplexer of a respective transmitter.
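The GOP-by-GOP statistical multiplex above might be sketched as a proportional split of a fixed pipe; the complexity estimates are illustrative assumptions standing in for encoder feedback:

```python
# Sketch (assumption): a per-GOP statistical multiplex splitting a fixed pipe
# among channels in proportion to each GOP's encoding-complexity estimate.
def statmux(gop_complexities, pipe_kbps):
    """gop_complexities: {channel: complexity estimate for the current GOP}.
    Returns per-channel kbps for this GOP period."""
    total = sum(gop_complexities.values())
    return {ch: round(pipe_kbps * c / total) for ch, c in gop_complexities.items()}

# One GOP period: sports is hard to encode, talking heads are easy.
print(statmux({"sports": 6.0, "news": 2.0, "weather": 2.0}, pipe_kbps=20000))
```

Re-running this each GOP with fresh complexity feedback is what lifts utilization of the pipe toward the 95-to-99-percent range mentioned above; a machine-learned layer could then bias the complexity estimates or override allocations.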
- feed sources 202 may generate a new channel, including new service components.
- BTS 102 may increase (e.g., incrementally or abruptly) available capacity in a window of time for extra content in the new channel, e.g., as the extra content replaces, during the window, primary content contemporaneously and/or previously emitted.
- an L-shaped bar may display an emergency alert; and, by the user clicking or selecting that emergency alert, they may be taken into a micro webpage that has all the different artifacts about that new (flash) channel.
- temporary mechanisms for the flash channel, in one or more video or audio essence emissions may include: decreasing the average bitrate of the video encoding essence (e.g., reduce bitrate); decreasing the spatial or temporal resolution of the video encoding essence; removing of HDR metadata of the video encoding essence; increasing the GOP length for the video essence; application of a hard-cap (e.g., not to exceed N-kilobit/sec., N being any number) of the video output profile, resulting in extra encoder utilization to meet this variable Q target; reducing the bit-depth (audio) of the encoding essence; and/or removal of tertiary audio tracks.
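The temporary mechanisms listed above might be sketched as a transform on an encoder configuration; the field names and the 30-percent reduction factor are illustrative assumptions, not from the disclosure:

```python
# Sketch (assumption): apply temporary flash-channel mechanisms to an encoder
# config to free capacity: reduce bitrate with a hard cap, lengthen the GOP,
# drop HDR metadata, and remove tertiary audio tracks.
def degrade_for_flash(cfg: dict, hard_cap_kbps: int) -> dict:
    """Return a temporarily degraded copy of the encoder configuration."""
    out = dict(cfg)
    out["bitrate_kbps"] = min(cfg["bitrate_kbps"] * 0.7, hard_cap_kbps)  # reduce + hard cap
    out["gop_length"] = cfg["gop_length"] * 2       # longer GOP, fewer I-frames
    out["hdr_metadata"] = None                      # remove HDR metadata
    out["audio_tracks"] = cfg["audio_tracks"][:1]   # keep primary audio only
    return out

base = {"bitrate_kbps": 8000, "gop_length": 30, "hdr_metadata": "hdr10",
        "audio_tracks": ["en", "es", "commentary"]}
print(degrade_for_flash(base, hard_cap_kbps=5000)["bitrate_kbps"])
```

The original `base` config is untouched, matching the temporary nature of these mechanisms: reverting is simply a switch back to `base` once the flash channel ends.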
- a carousel schedule may comprise a set of files to be emitted via emissions 108.
- the emissions may, for example, have a bit rate that is reduced in the carousel.
- the bit rate may be slowed to support inclusion of a new channel.
- the bit rate may be reduced substantially more, e.g., with 50 percent or more of the items on that carousel not being determined to be of high importance, thus resulting in a further reduction of their bit rate to dynamically make room for that flash channel.
- Feed sources 202 of BTS 102 may, for example, determine how to fit a new channel such that flash content handler 241 of BAT 104 (or another downstream component such as TV 103) is operable to obtain it. For example, when the carousel includes a set of data files, only one data file may need to be dropped. In another example, five SD channels may be emitted but one of them is being watched substantially less than the others.
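The importance-driven carousel reduction above might be sketched as follows; the 0.5 importance threshold and halving step are illustrative assumptions:

```python
# Sketch (assumption): free carousel capacity for a flash channel by halving
# the bit rate of low-importance items, least important first.
def make_room(items, needed_kbps):
    """items: {name: (kbps, importance in [0, 1])}.
    Returns (new_rates, freed_kbps); stops once enough capacity is freed."""
    freed = 0
    rates = {name: kbps for name, (kbps, _) in items.items()}
    for name, (kbps, importance) in sorted(items.items(), key=lambda kv: kv[1][1]):
        if freed >= needed_kbps:
            break
        if importance < 0.5:           # only low-importance items are cut
            rates[name] = kbps // 2
            freed += kbps - rates[name]
    return rates, freed

items = {"guide": (1000, 0.2), "promo": (2000, 0.1), "news_nrt": (3000, 0.9)}
rates, freed = make_room(items, needed_kbps=1200)
print(freed)
```

High-importance items (here `news_nrt`) keep their rate; if halving every low-importance item still freed too little, a harsher step would be dropping items entirely, as the surrounding passages describe.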
- BTS 102 may have trajectory information about how BATs 104 are consuming content of emissions 108 to dynamically adjust MODCOD (e.g., for improving or decreasing penetrative reach of the respective emission) and/or other allocation configuration (e.g., preempt a live transmission, pause an NRT transmission, or adjust another parameter) changes.
- feed sources 202 may drop data channel(s) (or an ALP) to fit new channel(s), e.g., which may comprise a new HD channel.
- feed sources 202 may, for example, determine that a set of content items is of substantially more importance as data payload than one or more other content items. The bit rate for the other content may thus be reduced and another set of content of less importance may even be dropped altogether. For example, one or a couple of users may have their content stream interrupted while tornado evacuation information becomes available to everybody.
- feed sources 202 may subject the content of emissions 108 to one or more grading criteria.
- the modulation and/or encoding may be, for example, adjusted (e.g., to provide a more even share of the resources).
- feed sources 202 may make an adjustment to increase an amount of available capacity only temporarily, e.g., until the additional flash channel takes over as the primary channel. As such, certain content may temporarily be degraded while the available capacity is increased incrementally for a window of time, reverting to a base configuration thereafter.
- One or more aspects of the MODCOD may be (e.g., incrementally) increased in some instances, and decreased in others. For example, few bits may be sent to many receivers, or many bits may be sent to a few receivers, e.g., for the flash or existing programming. In another example, less bandwidth may be made available as a whole to the flash channel and/or the existing emissions, to provide a substantially higher degree of reach to the potential universe of receivers.
- the tradeoff-oriented adjustment(s) to preemptible data may be incremental, e.g., to provide enough sharing of bandwidth in an encoder configuration for the additional channel (feed).
- the flash channel may be emitted with less robust modulation; in another implementation, no change may be performed in the modulation but a relatively impactful FEC as a marginal change may open up enough capacity for that additional channel.
- a profile may be configured causing some other ALP transport(s) to be shut down while the flash channel is sent with a much more robust MODCOD (e.g., having a lower net bandwidth by lowering the resolution, spectral efficiency, and/or bit-rate).
- everything may be dropped except for the broadcast app that facilitates display of the flash channel.
- one or more types of NRT files on a data carousel may be kept emitting over network 108 while supporting emission of the flash channel.
- Adjustments to the modulation and/or coding may be based on real-time viewership information.
- a machine learning model may be used to learn how viewers are consuming services, such as the modes of the receivers (e.g., BATs 104, TVs 103, UE 170, etc.) and/or their reception quality (e.g., packet loss, jitter, transmission delay, etc.).
- this information may be developed at a server that forwards findings to BTS 102 (e.g., through DRC network 292 or other means such as a VAST channel).
- live linear transport distribution via the MMT protocol may include a series of measurement messages that may be signaled via network 108 to a set of receiving devices.
- a receiver or a plurality of receivers may be predicted by model 203-2 to be in a marginal reception area or some other scenario that is causing a degree of packet loss, e.g., to make a determination if additional FEC would be beneficial and/or if other parameters in the RF transmission may be beneficially adjusted (e.g., reducing the modulation from 256 QAM to 16 QAM to match available transmission capacity).
- model 203-2 may perform micro or incremental tests to determine whether adjustments (e.g., increasing robustness of the FEC or adjusting bit-depth) have a positive, neutral, or negative impact upon a downstream device’s ability to receive through network 108.
- feed sources 202 may perform such real-time reach management by sending a message to the universe of receiving devices for them to respond back with telemetry metrics (e.g., packet loss).
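A minimal sketch of that telemetry-driven loop follows: receivers respond with packet-loss metrics, and the broadcaster selects a more robust FEC when aggregate loss is high. The threshold is a hypothetical assumption; the 8/15 and 11/15 code rates are examples of more and less robust LDPC options in ATSC 3.0:

```python
def choose_fec(telemetry, loss_threshold=0.02):
    """Pick a more robust FEC code rate when mean packet loss is high.

    A lower code rate carries more redundancy and is more robust; the
    specific rates and the 2% threshold here are illustrative.
    """
    mean_loss = sum(t["packet_loss"] for t in telemetry) / len(telemetry)
    return ("8/15" if mean_loss > loss_threshold else "11/15"), mean_loss
```

The same pattern generalizes to the other RF parameters mentioned above (e.g., stepping modulation down from 256 QAM to 16 QAM) by swapping the returned configuration.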
- feed sources 202 may have access to models 203-2 implementing AI for the new-channel-handling determinations. For example, a prediction may be made that results in dropping the TV show Grey’s Anatomy from HD to SD. The prediction may be based on a learned pattern wherein dropping Grey’s Anatomy from HD to SD loses more viewers and more ad revenue than if this channel were dropped completely.
- Artificial neural networks (ANNs) are models used in machine learning and cognitive science and may include statistical learning algorithms conceived from biological neural networks (particularly of the brain in the central nervous system of an animal).
- ANNs may refer generally to models that have artificial neurons (nodes) forming a network through synaptic interconnections (weights), and that acquire problem-solving capability as the strengths of the interconnections are adjusted, e.g., at least throughout training.
- An ANN may be configured to determine a classification based on input data (e.g., from feed sources 202 or another component associated with BTS 102).
- An ANN is a network or circuit of artificial neurons or nodes. Such artificial networks may be used for predictive modeling.
- the prediction models may be and/or include one or more neural networks (e.g., deep neural networks, artificial neural networks, or other neural networks), other machine learning models, or other prediction models.
- the neural networks referred to variously herein may be based on a large collection of neural units (or artificial neurons).
- Neural networks may loosely mimic the manner in which a biological brain works (e.g., via large clusters of biological neurons connected by axons).
- Each neural unit of a neural network may be connected with many other neural units of the neural network. Such connections may be enforcing or inhibitory in their effect on the activation state of connected neural units.
- neural networks may include multiple layers (e.g., where a signal path traverses from input layers to output layers).
- back propagation techniques may be utilized to train the neural networks, where forward stimulation is used to reset weights on the front neural units.
- stimulation and inhibition for neural networks may be more free-flowing, with connections interacting in a more chaotic and complex fashion.
- Disclosed implementations of artificial neural networks may apply a weight and transform the input data by applying a function, this transformation being a neural layer.
- the function may be linear or, more preferably, a nonlinear activation function, such as a logistic sigmoid, Tanh, or rectified linear activation function (ReLU).
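The activation functions named above, written out in their standard forms (a generic reference, not specific to this disclosure):

```python
import math

def sigmoid(x):
    """Logistic sigmoid: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent: squashes any real input into (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Rectified linear unit: passes positive inputs, zeroes negatives."""
    return max(0.0, x)
```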
- Intermediate outputs of one layer may be used as the input into a next layer.
- the neural network, through repeated transformations, learns multiple layers that may be combined into a final layer that makes predictions. This learning (e.g., training) may be performed by varying weights or parameters to minimize the difference between the predictions and expected values.
- information may be fed forward from one layer to the next.
- the neural network may have memory or feedback loops that form, e.g., a recurrent neural network. Some embodiments may cause parameters to be adjusted, e.g., via back-propagation.
- An ANN is characterized by features of its model, the features including an activation function, a loss or cost function, a learning algorithm, an optimization algorithm, and so forth.
- the structure of an ANN may be determined by a number of factors, including the number of hidden layers, the number of hidden nodes included in each hidden layer, input feature vectors, target feature vectors, and so forth.
- Hyperparameters may include various parameters which need to be initially set for learning, much like the initial values of model parameters.
- the model parameters may include various parameters sought to be determined through learning. The hyperparameters are set before learning, and the model parameters are then set through learning to specify the architecture of the ANN.
- the hyperparameters may include initial values of weights and biases between nodes, mini-batch size, iteration number, learning rate, and so forth.
- the model parameters may include a weight between nodes, a bias between nodes, and so forth.
- the ANN is first trained by experimentally setting hyperparameters to various values, and based on the results of training, the hyperparameters can be set to optimal values that provide a stable learning rate and accuracy.
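The experimental hyperparameter setting described above can be sketched as a simple grid search; `train_and_validate` is a hypothetical stand-in for an actual training run that returns validation accuracy:

```python
def grid_search(learning_rates, batch_sizes, train_and_validate):
    """Return the (hyperparameters, accuracy) pair with the best accuracy."""
    best_params, best_acc = None, float("-inf")
    for lr in learning_rates:
        for bs in batch_sizes:
            acc = train_and_validate(lr, bs)   # one experimental setting
            if acc > best_acc:
                best_params = {"learning_rate": lr, "batch_size": bs}
                best_acc = acc
    return best_params, best_acc
```

In practice the search space would also cover the other hyperparameters listed above (initial weights, iteration number, and so forth).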
- the learning of models 203-2 may be of reinforcement, supervised, and/or unsupervised types. For example, there may be a model for certain predictions that is learned with one of these types but another model for other predictions that is learned with another of these types.
- Reinforcement learning is a technique in the field of artificial intelligence where a learning agent interacts with an environment and receives observations characterizing a current state of the environment. A deep reinforcement learning network may be trained in a deep learning process to improve its intelligence for effectively making predictions.
- the training of a deep learning network may be referred to as a deep learning method or process.
- the deep learning network may be a neural network, Q-learning network, dueling network, or any other applicable network.
- Reinforcement learning may be based on a theory that given the condition under which a reinforcement learning agent can determine what action to choose at each time instance, the agent can find an optimal path to a solution solely based on experience of its interaction with the environment.
- A Markov decision process (MDP) may comprise four stages: first, an agent is given a condition containing information required for performing a next action; second, how the agent behaves in the condition is defined; third, which actions the agent should choose to get rewards and which actions to choose to get penalties are defined; and fourth, the agent iterates until a future reward is maximized, thereby deriving an optimal policy.
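The four MDP stages can be illustrated with a minimal tabular Q-learning loop on a toy three-state chain; this is purely a sketch of the agent/reward iteration, not of the disclosed broadcast environment:

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    """Learn action values on a 3-state chain: action 1 advances, 0 stays."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(3)]   # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 2:                    # state 2 is terminal
            if rng.random() < 0.2:       # occasional exploration
                a = rng.choice([0, 1])
            else:                        # otherwise act greedily
                a = max((0, 1), key=lambda x: q[s][x])
            ns = min(s + a, 2)
            r = 1.0 if ns == 2 else 0.0  # reward on reaching the goal
            q[s][a] += alpha * (r + gamma * max(q[ns]) - q[s][a])
            s = ns
    return q

q = q_learning()
```

After training, the agent's learned values favor advancing from every state, i.e., the optimal policy of the fourth stage has been derived from experience alone.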
- Deep reinforcement learning (DRL) techniques capture the complexities of the RF environment in a model-free manner and learn about it from direct observation.
- DRL can be deployed in different ways, such as via a centralized controller, hierarchically, or in a fully distributed manner. There are many DRL algorithms and examples of their applications to various environments.
- deep learning techniques may be used to solve complicated decision-making problems in wireless network optimization.
- deep learning networks may be trained to adjust one or more parameters of a wireless network, or a plurality of cells in the wireless network so as to achieve optimization of the wireless network with respect to an optimization goal.
- Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It may infer a function from labeled training data comprising a set of training examples.
- each example is a pair consisting of an input object (typically a vector) and a desired output value (the supervisory signal).
- a supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. And the algorithm may correctly determine the class labels for unseen instances.
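As a toy illustration of inferring a function from labeled pairs and applying it to unseen inputs, the sketch below uses a one-nearest-neighbor rule, chosen only to make the concept concrete:

```python
def fit_1nn(pairs):
    """Infer a classifier from labeled (input, label) training pairs."""
    def predict(x):
        # The label of the nearest training input wins.
        return min(pairs, key=lambda p: abs(p[0] - x))[1]
    return predict
```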
- Unsupervised learning is a type of machine learning that looks for previously undetected patterns in a dataset with no pre-existing labels.
- unsupervised learning may proceed via principal component analysis (e.g., to preprocess and reduce the dimensionality of high-dimensional datasets while preserving the original structure and relationships inherent to the original dataset) and cluster analysis (e.g., which identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data).
- Semi-supervised learning is also contemplated, which makes use of supervised and unsupervised techniques.
- Feed sources 202 of FIG. 2 may prepare one or more prediction models to generate predictions.
- Models 203-2 may analyze predictions made against a reference set of data called the validation set.
- the reference outputs may be provided as input to the prediction models, which the prediction model may utilize to determine whether its predictions are accurate, to determine the level of accuracy or completeness with respect to the validation set data, or to make other determinations. Such determinations may be utilized by the prediction models to improve the accuracy or completeness of their predictions.
- accuracy or completeness indications with respect to the prediction models’ predictions may be provided to the prediction model, which, in turn, may utilize the accuracy or completeness indications to improve the accuracy or completeness of its predictions with respect to input data.
- a labeled training dataset may enable model improvement. That is, the training model may use a validation set of data to iterate over model parameters until the point where it arrives at a final set of parameters/weights to use in the model.
- feed sources 202 may implement an algorithm for building and training one or more deep neural networks. A model in use may follow this algorithm, having already been trained on data. In some embodiments, feed sources 202 may train a deep learning model on training data 203-1 to provide even more accuracy, after successful tests with these or other algorithms are performed and after the model is provided a large enough dataset.
- a model implementing a neural network may be trained using training data obtained by feed sources 202 from training data 203-1 of a storage/database.
- the training data may include many attributes of a plurality of content moving towards and through downstream receivers.
- this training data obtained from prediction database 203 of FIG. 2 may comprise hundreds, thousands, or even many millions of pieces of information (e.g., continually learning new patterns, second by second at the microsecond level) describing content consumption.
- the dataset may be split between training, validation, and test sets in any suitable fashion. For example, some embodiments may use about 60% to 80% of the information for training or validation, and the remaining about 20% to 40% for validation or testing.
- feed sources 202 may randomly split the labelled information, the exact ratio of training versus test data varying throughout. When a satisfactory model is found, feed sources 202 may train it on 95% of the training data and validate it further on the remaining 5%.
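The random split described above, as a short sketch; the 60/20/20 ratios are examples consistent with the text, not requirements:

```python
import random

def split_dataset(samples, train_frac=0.6, val_frac=0.2, seed=42):
    """Randomly split samples into training, validation, and test sets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],                 # training set
            shuffled[n_train:n_train + n_val],  # validation set
            shuffled[n_train + n_val:])         # test set
```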
- the validation set may be a subset of the training data, which is kept hidden from the model to test accuracy of the model.
- the test set may be a dataset, which is new to the model to test accuracy of the model.
- the training dataset used to train prediction models 203-2 may leverage an SQL server and a Pivotal Greenplum database for data storage and extraction purposes.
- feed sources 202 may be configured to obtain training data from any suitable source, via local electronic storage, external resources, network 70, and/or UI devices.
- the connection to network 70 may be wireless or wired.
- feed sources 202 may enable one or more prediction models to be trained.
- the training of the neural networks may be performed via several iterations. For each training iteration, a classification prediction (e.g., output of a layer) of the neural network(s) may be determined and compared to the corresponding, known classification. For example, information known to describe content consumption may be input, during the training or validation, into the neural network to determine whether the prediction model may dynamically predict its presence.
- the neural network is configured to receive at least a portion of the training data as an input feature space.
- the model(s) may be stored in database/storage 203-2 of prediction database 203, as shown in FIG. 2, and then used to classify samples of content consumption information based on observed attributes.
- Training data augmentation may be performed to improve the training process, e.g., by giving the model a greater diversity of consumption information. And the data augmentation may help teach the network desired invariance and robustness properties, e.g., when only few training samples are available.
- trained model 203-2 may be used to perform yield management (e.g., to maximize value of each bit emitted via emissions 108).
- a BBP may have a par value (e.g., a fraction of a penny) based on its reach (e.g., number of potential content consumers) and frequency (e.g., number of times that the customer may be exposed to the content).
- In ATSC 3.0, though, this value or model may be substantially changed (e.g., including arbitrage), being, for example, unpredictable.
- some implementations may include broadcast market exchange (BMX) policy running and cognitive spectrum resource management for cost basis management and control of data pieces.
- herein contemplated is revenue optimization and yield extraction, e.g., as an options contract or options fulfillment exercise, each bit having a value in the future. At some point, that option for that bit will expire and it may be up to the broadcast scheduler to make a best determination for optimum execution of those options in place, effectively setting up (e.g., not just a channel sharing ecosystem and arrangement) an ATSC 3.0 resource sharing and real-time vending ecosystem to determine what are the best distribution characteristics, what are the best reach characteristics, and what is the temporal priority (e.g., whether needing to come out now or waiting 30 seconds). These metrics may come into play in a cognitive ATSC 3.0 scheduler ecosystem, with the objective of yield management for the business.
- the flash channel itself may have intrinsic value, e.g., when alerting to a looming crisis or disaster (or airing a presidential debate) to serve the public trust, so the trained model may nevertheless help manage the PLPs by properly performing a tradeoff based on a goodwill attribute in emitting a non-revenue-generating alert.
- trained model 203-2 may be used to perform dynamic ad insertions (e.g., not by preempting a pre-purchased, upfront ad position in an ad break), e.g., to provide audience segmentation for a traditional media buyer in a digital marketplace.
- ad position inventory buyers may be provided an opportunity to carve up a same inventory ad break position for different demographics. That is, different people may be given different ads at a same time, e.g., based on a viewer’s demographic so that the media buyer is aligned with the position that they purchased, without selling out their inventory from underneath them for digital media.
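A sketch of resolving the same ad-break position to different creatives per viewer demographic; all field names and values are hypothetical:

```python
def select_creative(break_position, demographic, creatives):
    """Pick the ad-ID matching this viewer's demographic for the break."""
    for c in creatives:
        if c["position"] == break_position and demographic in c["demographics"]:
            return c["ad_id"]
    return None  # fall back to the originally purchased linear insertion
```

Returning `None` preserves the media buyer's purchased position when no demographic-matched derivative creative exists.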
- some implementations may correlate an initial linear insertion with a respective ad-ID and then determine a plurality of derivative ad-IDs for subsequent filter-code matching, or for profile or persona matching.
- a record may, for example, reflect a plurality of derivative insertions that may be allowed for preemption in a linear essence.
- feed sources 202 may, for example, take metadata (e.g., between ATSC 1 emissions) and supply it for this ad decisioning process, applying those persona profiles, behavior, etc., and then preempting what is only in that universe of available creatives to match that demographic for that audience.
- a correlation of that linear insertion may be made to that additional plurality of potentially preemptible ad creatives that are compatible with it for digital distribution.
- EIDR may be used for broadcast ad insertions, and some of those linear insertions may be barred from replacement, e.g., with a segmentation marker that would include an ad ID.
- Ad ID has traditionally been a digital attribute that allows for utilization of creatives across multiple different ad networks and exchanges.
- feed sources 202 may glue that ad ID that comes in through traditional linear insertion with what an opportunistic digital ecosystem and experience would be.
- Feed sources 202 may, for example, form part of an AI core that, for example, controls a scheduler (e.g., of FIG. 4A), packagers, and encoders.
- model 203-2 of the core may predict an optimal degradation (e.g., reduction in the resolution or quality) of one of the video services to create more bandwidth for other data and thus to effectively optimize the scarce capacity of emissions 108 for different services. And that may be based on the encoders and packagers, e.g., before the scheduler puts it back together.
- model 203-2 of the core may, rather than (or in addition to) adjusting resolution or quality, implement other mechanisms for creating additional capacity for the flash channel, including creating a base state and then pausing, discarding, or offloading (e.g., to an IP backchannel transmission) an existing OTA data distribution. Once the flash channel transient is complete, the base state may be resumed.
- models 203-2 may be trained for predicting a value of different types of services whether it be monetary or some other kind of value such that a better determination as to how to balance resources is made.
- flash content from feed sources 202 may be configured for delivery via a plurality of PLPs (e.g., one having high penetration into buildings and parking garages at 576p, one with video optimized for mobile receivers traveling at 40 miles per hour, another for stationary receivers or TVs at 720P, audio on a more robust PLP, a scalable rendition for fixed devices, and/or another configuration based on similar gradation). And some implementations may support devices that have multiple tuners by determining whether to split excess channel capacity across different RF frequency transmissions.
- a channel-bonded PLP may thus be used, which is a split between those two channels for any essence delivery.
- the audio may be on a more robust one channel that is on 587 MHz, and the video may be split between the 587 and 593 MHz channels in the video configuration. This may be a function of not just what resources are available in one single RF emission block, but what resources would be available across a whole market transmission capability in the ATSC 3.0 network space.
- a resulting, synthesized PLP may be created from 2 channels for flash data delivery.
- a more robust VHF band may be used for audio transmission, while UHF is used for higher capacity video transmission.
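The channel-bonded split above might be planned as follows; the frequencies reuse the 587/593 MHz example, and all structure names are illustrative:

```python
def bond_plps(essences, robust_mhz=587, capacity_mhz=593):
    """Place audio on the more robust channel; split video across both."""
    plan = {robust_mhz: [], capacity_mhz: []}
    for e in essences:
        plan[robust_mhz].append(e["name"])
        if e["kind"] == "video":         # video is channel-bonded
            plan[capacity_mhz].append(e["name"])
    return plan
```

The result is the synthesized two-channel PLP plan described above: audio only on the robust channel, video spanning both RF emissions.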
- FIG. 12 illustrates an example method 1200 for instantly adding a new channel (e.g., using dynamic ALP management).
- Method 1200 may be performed with a computer system comprising one or more computer processors and/or other components.
- the processors are configured by machine readable instructions to execute computer program components.
- the operations of method 1200 presented below are intended to be illustrative. Method 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1200 are illustrated in FIG. 12 and described below is not intended to be limiting.
- Method 1200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
- the processing devices may include one or more devices executing some or all of the operations of method 1200 in response to instructions stored electronically on an electronic storage medium.
- the processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1200. The following operations may each be performed using a component the same as or similar to feed sources 202 (shown in FIG. 2).
- available capacity information may be obtained from each of a plurality of differently-located transmitters (e.g., BTSs 102).
- a same-sized emission may be determined, for each to-be-added channel, e.g., based on the available capacity information of the transmitters.
- MODCOD may be adjusted and/or other, planned traffic may be preempted (e.g., eliminated from the mix).
- the same-sized emission may be determined, for example, at a virtualized environment of the overall network, to be less than the available capacity of each transmitter.
- a bandwidth portion may be dynamically allocated, to regionally emit in one or more PLPs a set of linear, AV content dynamically generated based on the size.
- the dynamic allocation may be based on a number of BATs 104 intended to receive the emission (e.g., a threshold number of BATs being affected by the flash channel). In this or another example, the dynamic allocation may be based on metrics aggregated in real-time from viewers’ receivers and/or from third parties.
- flash content handler 241 may obtain flash information that is emitted from feed sources 202 via broadcast network 108. In some embodiments, bandwidth adjustment (e.g., via changing encoding) may be performed to fit the flash information. In these or other embodiments, a canceling of other content stream(s) within a same PLP may be performed. In other embodiments, the flash content may otherwise be obtained at BAT 104 using an IP backchannel.
- available capacity at a transmitter may be incrementally increased, for a portion of the emission, as the linear, AV content replaces emission of existing services.
- the duration to generate and emit the at least one portion may satisfy a criterion.
- the criterion may be defined in terms of a number of bits, a number of packets, a time duration, or the like.
- a flash channel may be created within 1 second.
- feedback may be generated for certain decision making, e.g., as to how the flash content will be emitted using a certain bandwidth or space made available responsive to the decision.
- At operation 1210 of method 1200, at least one of: (i) adjusting resolution, (ii) adjusting bit rate, and (iii) removing an existing channel may be performed, via a prediction made by a machine learning model (e.g., model 203-2).
- coordination and collaboration may be performed between the transmitters and ISP(s) for an overall network (e.g., as discussed above), to deliver the existing services and the new linear, AV content.
- data emissions may be prioritized based on analysis of the content and/or a context derived from a live event, to replace low priority data with the linear, AV content.
- a set of primary audio data may be subsequently emitted using less resources than a set of secondary audio data based on the consumption manner indicating less consumption of the primary emission, wherein each of the sets is associated with a different language.
- Some embodiments may perform a method for dynamic allocation of bandwidth of a broadcast stream.
- This method may comprise: reallocating, among service components transmitted in one or more PLPs of the broadcast stream, bandwidth by: analyzing one or more metrics, including a viewing mode obtained from each of a plurality of receivers; and determining, based on the one or more analyzed metrics, the reallocation of the bandwidth among the service components of currently delivered services in the broadcast stream such that a new service component is added.
- This reallocation may further be performed by: obtaining, from each of a plurality of differently-located transmitters, available capacity information; and replacing, based on the information, at least one of the service components with more content of the new service component.
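The reallocation just described can be condensed into a sketch: receiver metrics gate whether the new service component is added, and transmitter capacity reports bound its uniform size. The threshold and structures are illustrative assumptions:

```python
def plan_flash_emission(capacities_kbps, receiver_metrics, min_viewers=100):
    """Size a same-sized new service component across all transmitters.

    Returns None when too few receivers report a viewing mode; otherwise
    returns a size that fits the most constrained transmitter.
    """
    viewing = sum(1 for m in receiver_metrics if m["mode"] == "viewing")
    if viewing < min_viewers:
        return None
    return min(capacities_kbps)
```

Taking the minimum capacity ensures the same-sized emission fits every differently-located transmitter, as the method requires.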
Abstract
A system may be configured to: provide just-in-time interoperability, via a broadcast access terminal to pre-existing user equipment; provide a set of over-the-air application programming interface services to the user equipment; provide progressive video enhancement via a base quality layer and one or more enhancement quality layers operable with the base layer; repurpose padding in baseband packets by dynamically injecting opportunistic data at a studio-to-transmitter feed of the broadcaster; augment data reception integrity via collaborative object delivery; support hybrid broadcast/broadband delivery of fragmented data; support progressive over-the-air download of application extensions at runtime; utilize content ubiquitously provided via broadcast at a content delivery network point-of-presence; and add one or more flash channels at one or more differently-located broadcasting transmitters.
Description
ADAPTIVE BROADCAST MEDIA AND DATA SERVICES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of the priority date of U.S. provisional application 62/936,828 filed on November 18, 2019 and entitled “Adaptive Broadcast Media and Data Services,” the content of which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates generally to systems and methods for broadcast media and data delivery, e.g., via television spectrum broadcast in compliance with published and candidate standards such as from the advanced television systems committee (ATSC) standard version 3.0 or higher.
BACKGROUND
[0003] Over-the-top (OTT) or point-to-point traffic requires a connection for each instance or copy of data delivery, which does not scale well. But, via broadcasting, many people (e.g., millions) may be reached in one connection with one copy. Without a transmission control protocol / Internet protocol (TCP/IP) basis, though, broadcasting requires overcoming a number of complex challenges when it comes to making sure the data is delivered in a way that is of value to the market (e.g., robust reception characteristics). IP multicast may help resolve this issue, e.g., by taking one packet of data and sending it to a plurality of recipients for more optimal scalability.
[0004] Legacy viewing devices are known to support such media formats as hypertext transfer protocol (HTTP) live streaming (HLS), but they do not support emissions as described in ATSC 3.0. With over-the-air (OTA) content delivery, a user turns on the TV, and their receiver either gets the signal or it does not. In analog OTA transmissions, the receiver at best obtains the signal (e.g., with a degree of snowiness, since typically some signal is lost due to distance or interference), which has the same amount of data. In ATSC 1.0, receivers obtain digital content in view of attenuation or other multipath effects on propagation, including absorption, scattering, diffraction, reflection, and refraction.
[0005] Imagery and video, including with compression, are wirelessly obtained at a device by progressively increasing a quality (e.g., by improving upon a fuzzy picture to a sharper picture over time). For example, HLS data of 576p resolution may be initially obtained. Then, if an internal buffer of this device catches up and is maintained at an adequate level, an instruction may cause subsequent obtainment of an independent HLS stream of data at 720p resolution. This device may keep working its way up until the buffer is not able to be maintained at an adequate level and may stay at that resolution (e.g., 1080p, 4K, or another resolution), depending on the connectivity of this device.
[0006] ATSC 3.0 schedulers, for example, are designed to use padding in baseband packets to prevent an overflow of the ATSC stack, including at the broadcast gateway and the studio-to-transmitter link tunneling protocol (STLTP) feed. Unidirectional (e.g., IP/UDP/Multicast) broadcast emissions to receiver devices may be lossy. Unidirectional broadcast emissions lack acknowledgments for knowing what and when to re-send missing portions. For example, in a particular designated market area (DMA) of a city at a certain frequency or configuration, a majority of emitted files may be unusable due to reception loss, sometimes with loss representing as little as 1/60th of a second of data resulting in the reception of every file being incomplete and therefore unusable. Known data recovery techniques, including those used in peer-to-peer communication, such as forward error correction (FEC), are insufficient by themselves to ensure complete file recovery with a high degree of confidence.
[0007] Broadcast emissions typically comprise only a collection of media essence streams, specifically content, such as video and audio, encoded and multiplexed into a single transmission. Known emissions, such as ATSC 1.0, are statically configured within rigid bandwidth parameters, e.g., at a time of install. There is no ability to adjust the transmission modulation and coding (MODCOD), encoding bitrate, or other control functions, such as reducing bandwidth utilization by performing programming lineup changes, for example. Heritage broadcasters are facing increasing challenges with the loss of retransmission revenue from changes in multichannel video programming distributor (MVPD) contractual relationships, limited solutions for addressable and interactive advertising in traditional ATSC 1.0 models, and the advertising industry shift from traditional “ratings” (statistically estimated viewership) to “actuals” (impression-based metrics) developed for digital monetization technology.
[0008] A content delivery network (CDN) typically comprises networks of servers geographically located at points-of-presence (PoPs), interconnecting with traditional IP last-mile networks, such as Cable Internet services, and more frequently with mobile carriers, to reduce latency to user devices that request online content. When data is provided in the context of one-way broadcasts, these continually improving receiving devices (e.g., in terms of storage and processing power) may not have access to a broadband connection and thus lack access to the CDN.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The details of particular implementations are set forth in the accompanying drawings and description below. Like reference numerals may refer to like elements throughout the specification. Other features will be apparent from the following description, including the drawings and claims. The drawings, though, are for the purposes of illustration and description only and are not intended as a definition of the limits of the disclosure. The drawings are not necessarily drawn to scale, may not precisely reflect structure or performance characteristics of any given embodiment, and should not be interpreted as defining or limiting the range of values or properties encompassed by example embodiments.
[0010] FIG. 1 is a block diagram of an example broadcast system including a set of broadcast television stations (BTSs) transmitting to a plurality of broadcast access terminals (BATs), with each different instance of “n” representing any natural number.
[0011] FIG. 2 is a block diagram of an example broadcast system illustrating options for manipulation of broadcast information prior to transmission, and the use and manipulation of information received in broadcast transmissions.
[0012] FIG. 3A is a block diagram of an example BAT connected to legacy viewing devices.
[0013] FIG. 3B is a block diagram of an example process for data conversion.
[0014] FIG. 3C is a flow diagram of an example data conversion process.
[0015] FIG. 3D is a diagram of an example protocol stack.
[0016] FIGs. 4A-4C are block diagrams showing example data paths for the processing of studio to transmitter link transport protocol (STLTP) broadcast information.
[0017] FIG. 4D is a block diagram of example equipment for padding space utilization.
[0018] FIG. 4E is a flow diagram of an example process for STLTP optimization.
[0019] FIG. 4F is a block diagram of example interfaces at or about an upstream
BTS.
[0020] FIG. 5A is a block diagram of an example BTS.
[0021] FIG. 5B is a block diagram of an example BAT.
[0022] FIG. 5C is a diagram of an example physical layer frame and bootstrap structure.
[0023] FIG. 6 is a flow diagram of an example collaborative object delivery and recovery process.
[0024] FIG. 7 is a flow diagram of an example hybrid data delivery process using fragmentation.
[0025] FIG. 8 is a flow diagram of an example progressive over-the-air (OTA) application download and runtime process.
[0026] FIG. 9 is a flow diagram of an example OTA application programming interface (API) services delivery process.
[0027] FIG. 10A is a block diagram illustrating simultaneous transmission of an item of content in multiple formats/renditions.
[0028] FIG. 10B is a block diagram of an example implementation of progressive video enhancement.
[0029] FIG. 11 is a flow diagram of an example implementation of a content delivery network (CDN) point of presence (PoP).
[0030] FIG. 12 is a flow diagram of an example implementation of flash channels using dynamic ATSC 3.0 link-layer protocol (ALP) management.
DESCRIPTION
[0031] There is a need for broadcasters to provide content via digital/OTT platforms, cable/MSO/MVPD transmission, and over the air using, e.g., Advanced Television Systems Committee (ATSC) emissions. New solutions are described herein for exploiting ATSC version 3.0 and higher transmission mechanisms, and extensions thereto, alone and/or in combination with other wired, wireless, and fiber-optic connection mechanisms. For simplicity, examples are described in terms of ATSC 3.0. However, it will be appreciated that the techniques described herein may be equally applied in compatible future ATSC standards and in other digital broadcast regimes.
[0032] Herein, we describe broadcast television stations (BTSs), which send ATSC emissions, and broadcast access terminals (BATs), which receive ATSC emissions. Traditionally, television broadcast transmitters and receivers were distinct sets of stationary equipment which operated on a unidirectional signal essentially in one mode. It will be appreciated that the broadcast techniques and equipment described herein may take a variety of forms in a variety of combinations. For example, a BTS may be, for present purposes, a mobile computing device which arranges content for high power broadcast. A BAT may be a mobile computing device which also acts as a BTS to rebroadcast received signals. A BAT may be a mobile device that reconstitutes content from portions received via various broadcast television signals, Internet communication, and/or peer devices, where the portions are obtained at different times and in different ways as the BAT travels. Hence, while a BTS and a BAT may be in fixed positions with associated antenna structures, the BTS and BAT functionality described herein may be employed in a variety of combinations in a variety of fixed and mobile computing devices.
[0033] One or more aspects of the present disclosure relate to a method for providing interoperability with existing user equipment (UE). The method may comprise: determining, based on a plurality of capabilities at least one of which is orthogonal to at least one input requirement, a set of mechanisms and corresponding processes configured to translate emissions received based on a substantially static plurality of input requirements that include the at least one requirement; obtaining, from user equipment (UE), a request for the translated emissions, the request including information about the capabilities; and responsive to the request, generating, by a broadcast access terminal (BAT), the set of mechanisms and corresponding processes just in time (JIT).
[0034] One or more other aspects of the present disclosure relate to a method for supporting OTA application programming interfaces (API) services, wherein a device providing this support is operable to: receive OTA broadcast data, the OTA broadcast data being from a broadcast television station (BTS); and provide, via communication circuitry to one or more local devices, the API services, which comprise (i) an OTA capabilities query API, (ii) a broadcast content directory API, (iii) a BAT tuner API, and (iv) a media-content delivery API.
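A minimal sketch of the four API services enumerated above follows; the endpoint names, payload shapes, and the local media URL are illustrative assumptions, as the disclosure does not prescribe a wire format.

```python
# Hypothetical local API surface a BAT might expose to devices on its LAN.
def make_bat_api(tuners: int, channels: list) -> dict:
    directory = list(channels)
    state = {"tuned": None}

    def capabilities_query():          # (i) OTA capabilities query API
        return {"atsc": "3.0", "tuners": tuners, "drm": False}

    def content_directory():           # (ii) broadcast content directory API
        return directory

    def tune(channel):                 # (iii) BAT tuner API
        if channel in directory:
            state["tuned"] = channel
            return True
        return False

    def media_url():                   # (iv) media-content delivery API
        # e.g., a local HLS playlist produced by JIT repackaging;
        # "bat.local" is a made-up hostname for illustration.
        if state["tuned"] is None:
            return None
        return f"http://bat.local/{state['tuned']}/index.m3u8"

    return {"capabilities": capabilities_query, "directory": content_directory,
            "tune": tune, "media": media_url}
```

A client on the LAN would first query capabilities and the content directory, tune a channel, and then fetch the media URL, mirroring APIs (i) through (iv).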
[0035] One or more other aspects of the present disclosure relate to a method for providing progressive video enhancement. The method may comprise: determining an attribute for each of a base layer and one or more enhancement layers to be combined with the base layer such that a stream quality metric satisfies a criterion, the one or more enhancement layers being determined such that the combination is needed for displaying a higher quality stream, wherein each of the enhancement layers is determined based on changes from the base layer.
[0036] One or more other aspects of the present disclosure relate to a method for repurposing unused capacity (e.g., padding) of baseband packets in the STLTP multiplex transport by performing opportunistic data injection (ODI). A device may be caused to: obtain a set of non-real-time (NRT) data; obtain a plurality of baseband packets from the STLTP feed; determine an amount of excess capacity (if any) within each of the baseband packets; incrementally extract, for each of the baseband packets, portions of the NRT data, each of the portions having a size determined based on the respectively determined amount of unused capacity in each baseband packet; and re-multiplex the extracted portions into the STLTP feed in real time.
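The extraction step above can be illustrated by treating baseband packets as fixed-capacity containers whose unused space carries slices of the NRT payload; the packet sizes and byte-slicing strategy below are illustrative assumptions, not the disclosure's method.

```python
# Sketch of opportunistic data injection (ODI): size each NRT portion to the
# padding available in the corresponding baseband packet.
def inject_nrt(packet_payload_lens, nrt_data: bytes, capacity: int):
    """packet_payload_lens: bytes already used in each baseband packet.
    capacity: fixed baseband packet size (illustrative).
    Returns (portions, remaining): portions[i] fills packet i's padding."""
    portions, offset = [], 0
    for used in packet_payload_lens:
        excess = max(capacity - used, 0)   # unused (padding) space, if any
        take = min(excess, len(nrt_data) - offset)
        portions.append(nrt_data[offset:offset + take])
        offset += take
    return portions, nrt_data[offset:]
```

Any NRT bytes left over would simply wait for padding in subsequent packets, which is what makes the injection opportunistic.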
[0037] One or more other aspects of the present disclosure relate to a method for augmenting data reception integrity via collaborative object delivery. The method may comprise coordinating a BTS and a scheduler such that at least one of a set of BATs is operable to emit recovery data via IP multicast to a subset of the BATs in a spatial region and in a licensed portion of a spectrum otherwise utilized by the BTS.
[0038] One or more other aspects of the present disclosure relate to a method for supporting hybrid delivery of fragmented data. The method may comprise: obtaining information indicating a number of devices, which satisfies a predetermined criterion, that received all emitted content; and determining whether to rebroadcast, in a carousel, one or more portions of the emitted content based on the information, wherein the emitted content is previously fragmented into a plurality of content portions based on a peer-to-peer file sharing protocol that is decentralized.
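One hedged reading of the rebroadcast determination above is that a scheduler places back into the carousel only those content portions still missing at more than a threshold share of reporting devices; the report format and threshold are assumptions for illustration.

```python
# Sketch of the carousel rebroadcast decision for fragmented content.
def fragments_to_rebroadcast(reports, total_fragments: int,
                             threshold: float = 0.05):
    """reports: iterable of sets, each the fragment indices one device holds.
    Returns fragment indices missing at more than `threshold` of devices."""
    reports = list(reports)
    missing_counts = [0] * total_fragments
    for have in reports:
        for frag in range(total_fragments):
            if frag not in have:
                missing_counts[frag] += 1
    n = max(len(reports), 1)
    return [f for f, c in enumerate(missing_counts) if c / n > threshold]
```

When the returned list is empty, the predetermined criterion is satisfied and no rebroadcast is needed.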
[0039] One or more other aspects of the present disclosure relate to a method for supporting progressive OTA application download and runtime. The method may comprise: responsive to reception of at least one module of a terminal application, installing the at least one module such that the terminal application begins to run; responsive to reception of at least one other module of the terminal application, integrating into the terminal application additional functionality based on information of the at least one other module, wherein the modules are obtained at the BAT via emissions comprising IP multicast traffic.
[0040] One or more other aspects of the present disclosure relate to a method for utilizing content ubiquitously provided via broadcast. The method may comprise implementing a CDN point of presence (PoP) for ad hoc delivery of OTA data, by providing, via a mobile device, a CDN PoP that comprises a software defined radio on an integrated circuit (IC) that obtains IP multicast traffic broadcasted from an antenna, wherein contents of the CDN are pre-loaded by the broadcast.
[0041] One or more other aspects of the present disclosure relate to a method for adding one or more channels, by: obtaining, from each of a plurality of differently-located transmitters, available capacity information; determining, for each of the channels based on the available capacity information of the transmitters, a same size of an emission; and dynamically allocating (or re-allocating) a bandwidth portion to regionally emit a set of linear, audio-video (AV) content dynamically generated based on the size.
[0042] These methods may be implemented by one or more systems comprising one or more hardware processors configured by machine-readable instructions and/or other components. The systems may comprise the one or more processors and other components or media, e.g., upon which machine-readable instructions may be executed. Implementations of any of the described techniques and architectures may include a method or process, an apparatus, a device, a machine, a system, or instructions stored on computer-readable storage device(s).
[0043] Adaptable multichannel television broadcast may support a wide variety of media and data services models via adaptation of streams and systems of streams for transmission, adaptation of receiving client devices, and optional connection of broadcast services and receiving client devices to other networks.
[0044] For example, ATSC 3.0 BATs may support legacy media viewing devices, e.g., via just-in-time (JIT) transcoding and repackaging of new media formats, such as moving picture experts group (MPEG) media transport (MMT) and real-time object delivery over unidirectional transport (ROUTE) / dynamic adaptive streaming over HTTP (DASH), into legacy media formats such as HTTP live streaming (HLS). In another example, some embodiments may transcode from MPEG-2 transport stream (TS) to HLS. A TS may comprise a sequence of packets via a multiplexing of streams for transmission in noisy environments, and MPEG-TS may implement a digital container format for transmission (e.g., encapsulating multiple MPEG streaming programs in a combination of an emission from a transmitting antenna such as broadcast television station (BTS) 102) and storage of audio, video, and data. Such an emission may, for example, comprise one or more different television channels and/or multiple angles of a movie.
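JIT repackaging into HLS might, for instance, be delegated to an external tool such as ffmpeg; the sketch below merely assembles a plausible command line (the input path, output playlist, target height, and segment length are hypothetical, and the disclosure does not mandate any particular tool).

```python
# Illustrative construction of a JIT transcode command for legacy HLS devices.
def hls_transcode_cmd(source: str, out_playlist: str, height: int = 720):
    return [
        "ffmpeg", "-i", source,            # e.g., a ROUTE/DASH or MPEG-TS input
        "-vf", f"scale=-2:{height}",       # rescale for the legacy device
        "-c:v", "libx264", "-c:a", "aac",  # codecs legacy HLS players accept
        "-f", "hls", "-hls_time", "6",     # emit 6-second HLS segments
        out_playlist,
    ]
```

A BAT could spawn this command on demand (just in time) when a legacy device first requests a channel, rather than transcoding every channel continuously.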
[0045] ATSC 3.0 BATs may support a variety of in-home content viewing and storage systems, for example, by providing APIs to a variety of application clients. The application clients, for example, may include apparatuses for consuming over-the-top (OTT) Internet content and cable network content, as well as over-the-air (OTA) broadcast content via an ATSC 3.0 BAT.
[0046] ATSC 3.0 BATs may achieve progressive video enhancement by combining a first content stream for a media asset/content with other streams, or portions of other streams, representing deltas in content from the base content and from each other for the media asset. For example, another stream may include information at a higher frame rate or resolution than the first content stream. The viewing quality ultimately delivered to a user may be a function of which streams, or portions thereof, are received intact or may be
corrected in time for viewing. Progressive video enhancement may be used for OTA streams, OTT streams, other content sources, or combinations thereof.
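The layered reconstruction described above can be sketched with toy per-pixel deltas; real enhancement layers would carry encoded residuals, so the arithmetic below is purely illustrative.

```python
# Sketch of progressive video enhancement: each enhancement layer is a delta
# from the base (and from its predecessors), so displayed quality depends on
# which layers arrived intact or were corrected in time.
def reconstruct(base, deltas_received):
    """Apply each intact delta layer in order; stop at the first missing one,
    since later deltas are defined relative to their predecessors."""
    frame = list(base)
    for delta in deltas_received:
        if delta is None:          # layer lost or uncorrectable in time
            break
        frame = [p + d for p, d in zip(frame, delta)]
    return frame
```

Losing an early layer thus costs more quality than losing a late one, which is why a broadcaster might carry the base layer in a more robust PLP.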
[0047] ATSC 3.0 transmissions may opportunistically incorporate data into padding baseband packets of studio to transmitter link (STL) transport protocol (STLTP) feeds, allowing real-time data insertions into media feeds, which may be tailored to transmission-tower-specific markets and applications. ATSC 3.0 BATs may utilize alternative content available via incorporation into padding baseband packets, and from other sources, for example, to tailor viewing experiences in accordance with observed viewing habits or viewer traits.
[0048] ATSC 3.0 BATs may acquire information to repair damaged transmission streams via collaboration with other ATSC 3.0 BATs or other receiving entities, or via communications on a dedicated return channel or other network connection.
[0049] ATSC transmissions may be used to distribute, e.g., large data files such as application and operating system (OS) upgrades, via rotating and/or scheduled broadcast of data channels, wherein data is coded in a BitTorrent pattern for reassembly of fragments by a client device, whereby, agnostic to content, clients may repair lost fragments via observation of rebroadcasts or IP connection by referring to specific fragmented elements.
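The content-agnostic repair step above might be sketched as follows, assuming a BitTorrent-style manifest of per-fragment SHA-256 hashes (the specific hash function is an assumption for illustration).

```python
import hashlib

# Sketch of fragment verification against a manifest: a client keeps only the
# indices whose pieces are absent or fail their hash, then repairs those via
# rebroadcast observation or an IP connection.
def missing_pieces(manifest_hashes, received):
    """manifest_hashes: ordered hex digests of each fragment.
    received: dict {index: bytes}. Returns indices still needed."""
    needed = []
    for i, digest in enumerate(manifest_hashes):
        piece = received.get(i)
        if piece is None or hashlib.sha256(piece).hexdigest() != digest:
            needed.append(i)
    return needed
```

Because verification depends only on the manifest, the same repair logic works for OS images, applications, or media files alike.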
[0050] ATSC 3.0 transmissions may be used to support progressive over-the-air application downloads and runtime data, whereby desired applications may be selected by a client on the basis of media consumption, for example, and built in a modular fashion depending upon the degree of involvement of the viewer and availability of data via broadcast on a rotating basis. Desired applications may also be selected by availability. For example, a particular broadcast application extension or addon may not be available upon first viewing of a channel. The broadcast application extension may become available at a later time (e.g., after 15 minutes), at which time it is installed and executed, and its capabilities made available to the audience with no interruption in video viewing.
[0051] An ATSC 3.0 client may serve as a content delivery network (CDN) point of presence (PoP) for a home or other facility, for example, by collecting data from a variety of sources, such as broadcast OTA, OTT, and cable distribution sources, and then storing and redelivering data as required via a variety of mechanisms and application extensions. Broadcast data may encompass not just broadcast media, but also public and private data casting, including such disparate information types as ad pre-positioning and emergency broadcast information. A CDN PoP may be context-aware. For example, the CDN PoP may recognize different kinds of content, e.g., by file type or subcategory within files, and take
action accordingly to obtain missing fragments, alert users to conditions, or otherwise act automatically on received data, or lacunae in received data, in accordance with significance or priority of the data and available options or corrective or preventative action.
[0052] ATSC 3.0 transmissions may encompass dynamically generated flash channels which are generated automatically based on observation of client viewing, unfulfilled viewing requests, environmental factors, emergency conditions, and the like. The needed bandwidth within an ATSC 3.0 physical layer pipe (PLP) for a new flash channel may be obtained by lowering the bandwidth of other content or application level protocols within the same PLP. For example, a channel with low viewership may be canceled, or the encoding of a media channel may be reduced in terms of resolution, frame rate, robustness, etc., to free up the requisite bandwidth. Flash channels may be generated automatically, e.g., for purposes of responding to urgent or emergency conditions, optimizing broadcast bandwidth yield management in terms of viewership or revenue, balancing viewer good will, and maintaining viewer brand awareness.
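The bandwidth-freeing step can be sketched as a greedy reduction that trims services with the lowest viewership first; the per-service floor (`min_mbps`) and the greedy ordering are illustrative assumptions rather than prescribed behavior.

```python
# Sketch of freeing PLP bandwidth for a new flash channel by reducing other
# services, lowest-viewership first. Figures are illustrative.
def free_bandwidth(services, needed_mbps: float):
    """services: list of dicts with name, mbps, min_mbps, viewers.
    Returns {name: mbps_to_cut}, or None if the need cannot be met."""
    plan, freed = {}, 0.0
    for svc in sorted(services, key=lambda s: s["viewers"]):
        if freed >= needed_mbps:
            break
        cut = min(svc["mbps"] - svc["min_mbps"], needed_mbps - freed)
        if cut > 0:
            plan[svc["name"]] = cut   # e.g., lower resolution or frame rate
            freed += cut
    return plan if freed >= needed_mbps else None
```

A `None` result would signal the scheduler that a flash channel cannot be added without canceling a service outright.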
[0053] Data may be carried in PLPs, which are data structures that may be configured for a wide range of trade-offs between signal robustness and channel capacity utilization for a given data payload. Multiple PLPs may be used to carry different streams of data, all of which may be used, for example, to assemble and deliver a complete service. In addition, data streams required to assemble multiple delivered services or products may share PLPs, if those data streams are to be carried with the same level(s) of robustness. A PLP may be a network layer protocol, e.g., for the X.25 protocol suite. The PLP may manage the packet exchanges between data terminal devices across virtual calls. Each PLP may have different bit rate and error protection parameters. The PLP may further provide a data and transmission structure of allocated capacity and robustness that may be adjusted to broadcaster needs.
[0054] In ATSC 3.0, the maximum number of PLPs in an RF channel may be 64, and each individual service may utilize, e.g., up to 4 PLPs. In some implementations, downstream broadcast access terminals (BATs) 104 may be able to simultaneously decode at least four PLPs. A PLP may contain the structure of a frame or a series of subframes. Example bit rates in an example 6 megahertz (MHz) channel may range from less than 1 megabit per second (Mbps) when using lowest-capacity parameters (e.g., which may be a most robust mode) up to over 57 Mbps when using highest-capacity parameters (e.g., which may be a least robust mode). A service may include a set of content elements which, when taken together, provide a complete listening, viewing, or other experience to a viewer. It may comprise audio, base-level video, enhancement video, captioning, graphic overlays, web pages, applications, and emergency alerts, as well as other signaling or required metadata.
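The PLP limits quoted above (at most 64 PLPs per RF channel, and, e.g., up to 4 PLPs per service) can be expressed as a simple validity check; the data model below is an illustrative assumption.

```python
# Sketch of the ATSC 3.0 PLP allocation constraints described above.
MAX_PLPS_PER_CHANNEL = 64
MAX_PLPS_PER_SERVICE = 4

def valid_allocation(service_plps):
    """service_plps: dict {service_name: set of PLP ids used by that service}.
    Services may share PLPs when robustness requirements match."""
    all_plps = set().union(*service_plps.values()) if service_plps else set()
    if len(all_plps) > MAX_PLPS_PER_CHANNEL:
        return False
    return all(len(p) <= MAX_PLPS_PER_SERVICE
               for p in service_plps.values())
```

Note that two services sharing a PLP count it once against the 64-PLP channel limit, consistent with the sharing described in paragraph [0053].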
[0055] It will be appreciated that the broadcast techniques described herein may be applied in other broadcast formats and scenarios. References to the ATSC 3.0 standard are for illustrative purposes. It will be further appreciated that many of the techniques described herein apply equally to broadcast, multicast, and unicast scenarios involving television, cable, network, local, and cellular networks, and combinations thereof.
[0056] It will also be appreciated that the techniques described herein may be combined in a wide variety of ways. For example, targeted ad insertion may be achieved by selecting between a live OTA broadcast advertisement and one or more alternative advertisements pre-placed in a client via non-real-time (NRT) methods, according to rules also delivered in an NRT fashion, and in accordance with observations of viewing habits on the client system. In ATSC 3.0, NRT may be file content (e.g., comprising continuous or discrete media and belonging to an app-based feature, comprising electronic service guide (ESG) or emergency alert (EA) information, or comprising opaque file data to be consumed by a digital rights management (DRM) system client) and/or applications delivered non-contemporaneously (generally ahead of time) with their intended use, e.g., where the delivery is performed via ROUTE. A CDN PoP function may be used to gather information necessary for the targeted ad insertion via a variety of pathways, such as by observation of OTA data channels on which materials are placed on a rotating basis, from data delivered in fragmented form within multiple padding packets of live transmissions, or from OTT, cable, local, or other networks. The techniques described herein may similarly support viewing text overlays or interactive applications, for example.
[0057] Broadcast may encompass a wide variety of media and data delivery functions. Evolving broadcast standards such as ATSC 3.0 allow for multiple channels to be delivered within a given portion of spectrum in a highly adaptable fashion. Through the use of service discovery bootstrap signaling and other data channels, BATs may be informed of new conditions and formats, and the means by which to demodulate, decode, and utilize the broadcast information. The format of transmissions, for example, may be tailored on a tower-by-tower and minute-by-minute basis, e.g., to accommodate current demands or compensate for reception conditions via changes to encoding, diversity, and error correction techniques.
[0058] Multi-channel digital broadcast may be adapted in a number of ways to provide experiences similar to those provided by terrestrial bi-directional wired, cable, and fiber data networks, for example, while also exploiting the bandwidth advantages
available in wireless broadcast spectrums. For example, legacy devices and viewing styles may be supported through conversion of new broadcast digital formats, and broadcast data channel information may be used to support over-the-air transmission API services to support contemporary viewing platforms. Progressive video enhancement may be achieved efficiently via multiple encodings of different aspects of media. Idle padding baseband packets in media broadcast streams may be utilized to provide timely, localized data services. Collaborative object delivery and recovery mechanisms may be implemented to ease receipt of large transmissions. Similarly, broadcast may be used in hybrid data delivery using fragmentation mechanisms to make it easy to detect and repair missing portions of transmissions in a way that is agnostic to the transmitted content. Run time environments, such as applications, API services, and user experience aspects may be provided via progressive broadcast modules which are adaptively accumulated and exploited. More generally, content delivery network point of presence data staging may be achieved in the home via broadcast transmissions. Channels may be generated automatically based on statistical observations of viewing or other consumption, requests for viewing or information, and environmental and emergency conditions, for example.
Broadcast Transmission and Access Terminal Capabilities
[0059] In some embodiments, system 100 implements Advanced Television Systems Committee version 3 (ATSC 3) television broadcasting, there being about 25 standards (several of which are directly referenced herein) that provide guidance. For example, the ATSC 3.0 standards offer a substantially static plurality of broadcast requirements for supporting next generation technologies, e.g., including high efficiency video coding (HEVC) for video channels of 4K resolution at 120 frames per second (FPS), wide color gamut (WCG), high dynamic range (HDR), Dolby AC-4 audio, MPEG-H 3D audio, datacasting capabilities, and/or more robust mobile television support. However, the ATSC 3.0 standards are limited in their implementation prescriptions due to an emphasis on voluntary implementation.
[0060] In some embodiments, system 100 implements a voluntary adoption marketplace model, e.g., without requiring a codified or referenced architecture implementation. Indeed, this requirement is absent in the following ATSC 3.0 standards, each of which forms a basis for this disclosure and is incorporated by reference in its entirety herein: “System Discovery and Signaling (A/321);” “Physical Layer Protocol (A/322);”
“Dedicated Return Channel (A/323);” “Scheduler / Studio to Transmitter Link (A/324);”
“Link-Layer Protocol (A/330);” “Signaling, Delivery, Synchronization, and Error Protection (A/331);” “Service Usage Reporting (A/333);” “Application Signaling (A/337);” “Video - HEVC (A/341);” and “ATSC 3.0 Interactive Content, with Corrigendum No. 1 (A/344).”
[0061] While defining the transmission streams is in many respects possible under the standards, ATSC 3.0 per se does not describe certain potential manipulations or uses of the transmissions. Herein, various adaptations of the facilities generating transmissions and the clients receiving transmissions are described.
[0062] FIG. 1 illustrates an example system 100 having at least one BTS 102 transmitting to a plurality of BATs 104. In practice, a broadcast system may have any number of BTSs and BATs. Using an adaptive protocol such as ATSC 3.0, the specific transmissions from each BTS may be tailored to accommodate a set of BATs that are in range for reception of the BTS.
[0063] In the example of FIG. 1, transmissions 108 from BTS 102 to BATs 104 may utilize the ATSC 3.0 standards, such as user datagram protocol (UDP) over IP (UDP/IP) multicast (or UDP/IP broadcast) over a broadcast physical layer and/or TCP/IP unicast over the broadband physical layer. In this regard, transmissions 108 from BTS 102 may use a layered architecture that may include at least three layers, such as a physical layer, a management and protocols layer, and an application and presentation layer. In some embodiments, emissions 108 may comprise error correction and synchronization pattern features for maintaining transmission integrity. UDP is a data delivery standard (RFC 768), which delivers its payload as datagrams (header and payload sections) to devices on an IP network. UDP provides checksums, for data integrity, and port numbers, for addressing different functions. There is no handshaking, so UDP may be used in single-direction communications.
[0064] Moreover, transmissions 108 from BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services and contents over broadcast, broadband, and hybrid broadcast/broadband networks, along with the mechanism to signal the language(s) of each provided service, including audio, captions, subtitles (if present), any emergency delivery service, and the like. The IP-based delivery of ATSC 3.0 services and contents may be broadcast over network 106, which may be the Internet, a hybrid broadcast network, a broadband network, or the like.
[0065] Additionally, transmissions 108 from BTS 102 may use the MMT protocol, ROUTE protocol, or the like. In this regard, MMT may be specified by ISO/IEC 23008-1 (MPEG-H Part 1), and MMT protocol may be used to deliver media processing units
(MPUs). Moreover, MMT may utilize a digital container standard developed by MPEG that supports HEVC video. The ROUTE protocol may be used to deliver content packaged in MPEG DASH Segments. In some embodiments, MMT may be used to transfer data using an all-Internet protocol (All-IP) network.
[0066] Transmissions 108 from BTS 102 may use Dolby AC-4 technology, MPEG-H 3D audio system technology, and the like. Dolby AC-4 may implement audio compression. Dolby AC-4 bitstreams may contain audio channels, audio objects, and/or the like. Dolby AC-4 may include a 5.1 core set of audio channels that Dolby AC-4 decoders may decode. MPEG-H 3D audio, specified by ISO/IEC 23008-3 (MPEG-H Part 3), may utilize an audio coding standard developed by the ISO/IEC MPEG to support coding audio as audio channels, audio objects, higher-order ambisonics (HOA), and/or the like. MPEG-H 3D audio may support up to 64 loudspeaker channels and 128 codec (coder-decoder) core channels.
[0067] One or more aspects of BATs 104 may include an ATSC 3.0 receiver, and each ATSC 3.0 receiver may include a dedicated return channel (DRC) terminal module or an equivalently DRC-enabled ATSC 3.0 receiver, or the like. System 100 supports TV content being directly received (e.g., and potentially viewed) at many different fixed and portable video devices, such as next generation TVs 103, BATs 104, and in certain instances UE 170. System 100 further enables enhanced advertising capabilities, including tailored messaging for specific audience segments in the form of ads, pop ups, or other messaging based on user preferences. These services could be a value-add for public broadcasters who want to point viewers to related content based on their viewing habits, provide member-only viewing options, or offer other ways for viewers to access support services. Example data service opportunities may include public safety, education, and member services arenas.
[0068] With respect to public safety, system 100 may facilitate delivery of media- rich public alerts and mission-critical video and images to local and regional first responders during emergencies. System 100 may, e.g., additionally or instead deliver localized AMBER Alerts with accurate, detailed information about victims and/or suspects leveraging real time images and video. With respect to education, system 100 may facilitate delivery of customized and targeted distance learning programs to rural and remote areas without access to the Internet. System 100 may, e.g., additionally or instead deliver localized training and workforce development programs. With respect to member services, system 100 may combine video on demand (VOD) with conditional access, and, for example, give viewers specialized access similar to membership video on demand from within an ATSC 3.0 receiver. System 100 may, e.g., additionally or instead use the ATSC 3.0 broadcast platform
with native HTML 5 such that users may enjoy interactive games and adventures along with a favorite show, with or without the use of broadband.
[0069] In some embodiments, BTS 102 carves traditional 30-second advertisement (ad) segments down to the microsecond level to determine the best revenue extraction model for a time-limited resource that depreciates to zero almost instantaneously (e.g., when the window for emission passes). For example, in a 30-second window, there may be 6.144 million windows of opportunity in this forward spectrum, and, in a 24-hour window, there may be 2 billion opportunities to monetize each one of those microsecond series of emissions. BTS 102 thus supports a technology model that may pare down the available inventory to sell time over spectrum. The model may support delivering a piece of data over an amount of leased or licensed spectrum that is available by balancing how densely populated a market is for the delivery based upon the tower footprint. Each BTS 102 may thus support a marketplace exchange model for data distribution services across many different demand regions.
[0070] In some embodiments, system 100 may support data capacities about or in excess of 25 or 30 Mbps, e.g., for live and/or non-real time (NRT) data. For example, system 100 may support more streams, including multiple high definition (HD) ones. The quality of the content may satisfy one or more criteria not achievable by otherwise known broadcasters. The spectrum may be flexibly used, including, e.g., use of orthogonal frequency-division multiplexing (OFDM) modulation and different coding choices. System 100 may further support multiple simultaneous operating points (e.g., physical layer pipes) and development of a single frequency network (SFN). The robustness of the physical layer of system 100 may allow stations the flexibility to target hard-to-reach areas (e.g., penetrating buildings, even down to the fourth floor of a parking garage) and/or moving vehicles (e.g., traveling 60 miles per hour). An SFN is a broadcast network where several transmitters simultaneously send substantially the same signal over the same frequency channel. A simplified form of SFN may be achieved by a low power co-channel repeater, booster, or broadcast translator, which is utilized as a gap filler transmitter. Embodiments of system 100 implementing one or more SFNs may efficiently utilize radio spectrum, allowing a higher number of radio and TV programs in comparison to traditional multi-frequency network (MFN) transmission. Such an SFN may also increase the coverage area and decrease the outage probability in comparison to an MFN, since the total received signal strength may increase to positions midway between the transmitters.
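The claim that received signal strength may increase midway between SFN transmitters can be sketched with a simple free-space model. This is an illustrative calculation only; the transmit power, frequency, and distances are hypothetical, and real coverage prediction would account for terrain, timing offsets, and guard intervals:

```python
import math

# Sketch (free-space model, hypothetical parameters): why total received
# signal strength in an SFN can rise midway between two co-channel
# transmitters emitting substantially the same signal.

def received_dbm(tx_dbm: float, distance_km: float, freq_mhz: float) -> float:
    """Received power from one transmitter under free-space path loss."""
    fspl_db = 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45
    return tx_dbm - fspl_db

def sfn_combined_dbm(tx_dbm: float, d1_km: float, d2_km: float,
                     freq_mhz: float = 600.0) -> float:
    """Sum the powers (in mW) from two SFN transmitters, return dBm."""
    p1 = 10 ** (received_dbm(tx_dbm, d1_km, freq_mhz) / 10)
    p2 = 10 ** (received_dbm(tx_dbm, d2_km, freq_mhz) / 10)
    return 10 * math.log10(p1 + p2)

# Midway between two transmitters 40 km apart, the two contributions are
# equal, so the combined level sits about 3 dB above either one alone.
single = received_dbm(80, 20, 600)
combined = sfn_combined_dbm(80, 20, 20)
```

The roughly 3 dB midpoint gain is what allows an SFN to shrink coverage gaps relative to an MFN serving the same area.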
[0071] In some embodiments, BATs 104 may implement gateway devices or tuners to extend broadcast viewing from traditional TV to mobile devices. In some embodiments, BATs 104 may implement second screen devices and synchronized content to allow individuals to access related content and services without disruption of, or display on, a large communal display.
[0072] In some embodiments, system 100 may support hybrid delivery, due to a basis in an IP transport. That is, system 100 may implement simultaneous broadcast and broadband delivery. For example, system 100 may support new types of hybrid services, such as alternate languages, camera angles, NRT content, and/or localized inserts. In some embodiments, system 100 may support advanced audio/visual (A/V) compression, including an advanced compression scheme, such as HEVC Main 10 profile specified as core, and performance gains over known systems. In some embodiments, system 100 may support immersive audio, ultra-high definition (UHD) / 4K video, expanded audio including immersive and personalized audio, alternate languages, and other personalized audio choices, up to 22.2 channels of rendered audio elements, high dynamic range (HDR) video, and wide color gamut (WCG) video. Example implementations of UHD formats may include 4K UHD (e.g., 4,096 by 2,160 pixels) and 8K UHD (e.g., 7,680 by 4,320 pixels). And implementations of WCG may have video with more color saturation and/or richer quality than traditional video.
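The need for an advanced codec such as HEVC Main 10 can be motivated with a back-of-envelope data-rate calculation. The frame rate and 4:2:0 chroma assumption below are hypothetical choices for illustration, not parameters taken from the specification:

```python
# Back-of-envelope sketch (hypothetical frame rate, 4:2:0 chroma):
# the uncompressed data rate of UHD video, which is far beyond broadcast
# capacity and so motivates an advanced codec such as HEVC Main 10.

def raw_mbps(width: int, height: int, bits_per_sample: int, fps: int,
             samples_per_pixel: float = 1.5) -> float:
    """Uncompressed rate in Mbit/s; 1.5 samples/pixel models 4:2:0 chroma."""
    return width * height * samples_per_pixel * bits_per_sample * fps / 1e6

uhd_4k = raw_mbps(4096, 2160, 10, 60)   # roughly 7,963 Mbit/s uncompressed
uhd_8k = raw_mbps(7680, 4320, 10, 60)   # roughly 29,860 Mbit/s uncompressed
```

Against channel capacities on the order of 25 to 30 Mbps (paragraph [0070]), such rates imply compression ratios of several hundred to one.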
[0073] In some embodiments, each BTS 102 comprises just a tower and a transmit antenna. In these or other embodiments, BTS 102 may comprise a TV station and a transmitter site. The TV station and the transmitter site may, for example, comprise a studio (e.g., of FIG. 5A) and/or an ATSC 3.0 downlink gateway (e.g., of FIG. 5A). More particularly, the TV station may comprise sources 202 and/or 206 that provide 4K/UHD video and next-generation audio (e.g., with captioning) and that provide HD video and audio (e.g., with captioning as well). A master controller may obtain the UHD and HD production and then this obtained data may be encoded and multiplexed such that ESG IP packets are emitted via the STL to the transmitter site. The transmitter site may comprise an ATSC 3.0 exciter, which generates the ATSC 3.0 waveform such that a transmitter and a mask filter may operate on the emission. The ATSC 3.0 downlink gateway or another component of BTS 102 may perform the BitTorrent fragmentation.
[0074] In some embodiments, system 100 may support interactivity, personalization, including use of known tools to create interactive experiences (HTML 5), advanced emergency alerting (AWARN), enhanced alerting capabilities (e.g., rich media including evacuation routes and radar images) for first responders and consumers, and
receiver wake-up-on-alert to a far larger audience than is currently possible, e.g., in times of a crisis. And the use of bootstrap signaling may allow system 100 to support new services without obsolescence, e.g., by making possible a future version ATSC 3.1 or higher that provides new services or transmission schemes without interfering with 3.0 users. EA wake-up bits of the bootstrap may further be used by BTS 102 for emergency alerting such that BAT 104 transitions from a quiescent or low power mode into a high power mode where it then performs L1 basic (L1B) and L1 detail (L1D) processing, which may provide information to process the actual PLPs. L1B may be part of the preamble following the bootstrap, and it may carry the more fundamental signaling information as well as data necessary to decode L1D. L1D may be part of the preamble following the L1B, and it may have the information necessary to decode subframes including their MODCOD, number of PLPs, pilot pattern, FEC, etc. Whereas signaling may be delivered over MMT and/or ROUTE, the bootstrap information may be provided by means of the service list table (SLT).
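The wake-up sequence above can be sketched as a receiver-side state transition. The field and class names below are hypothetical, and the L1 processing result is a placeholder; a real receiver would decode the actual L1-Basic and L1-Detail preamble fields:

```python
from dataclasses import dataclass

# Sketch (hypothetical names, placeholder result): a receiver moving from
# low power to high power when the bootstrap's EA wake-up bits change,
# then performing L1-Basic followed by L1-Detail processing.

@dataclass
class Bootstrap:
    ea_wake_up: int  # wake-up field carried in the bootstrap

class Receiver:
    def __init__(self):
        self.mode = "low_power"
        self.last_wake = 0
        self.plp_params = None

    def on_bootstrap(self, bs: Bootstrap) -> None:
        # A change in the wake-up bits signals a (new) emergency alert.
        if bs.ea_wake_up != self.last_wake:
            self.last_wake = bs.ea_wake_up
            self.mode = "high_power"
            self.process_l1b_then_l1d()

    def process_l1b_then_l1d(self) -> None:
        # L1-Basic carries fundamental signaling plus what is needed to
        # decode L1-Detail; L1-Detail then describes the subframes
        # (MODCOD, number of PLPs, pilot pattern, FEC, ...).
        self.plp_params = {"num_plps": 1}  # placeholder decoded result

rx = Receiver()
rx.on_bootstrap(Bootstrap(ea_wake_up=1))
```

The point of the sketch is the ordering: the fixed-format bootstrap is decodable even by a quiescent device, and only after waking does the device spend power on the heavier L1 and PLP processing.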
[0075] In some embodiments, feed sources 202 and/or supplemental sources 206 of FIGs. 1-2 may operate as the data sources of FIG. 4A such that data is then forwarded (e.g., via progressive video enhancer 205) to MODCOD unit 204. MODCOD unit 204 of FIG. 2 may implement: (i) the ALP transport protocol (ALPTP) formatting, the STLTP formatting, and the error correction coding (ECC) encoding of FIG. 4A; (ii) the ECC decoding and STLTP demultiplexing of FIG. 4B; and/or (iii) the coded modulation of FIG. 4C.
[0076] FIG. 2 illustrates how equipment may be arranged before and after the broadcast pipeline to achieve a variety of functions. In the example of FIG. 2, system 200 includes feed sources 202, which provide media and data to MODCOD unit 204. MODCOD unit 204 may convert different feeds into different broadcast channels, and may further encode a single feed into multiple encodings on multiple channels. The multiple encodings may correspond to media channels at different resolutions for the same content, or portions thereof. The feed sources 202, for example, may be national, while the supplemental sources 206 may be more regional or local in nature. MODCOD unit 204 may process both sources 202 and 206 and provide encoded information for transmission by BTS 102. In practice, a BTS may provide identical transmissions from multiple towers, or provide unique transmissions from each tower.
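The mapping performed by MODCOD unit 204, encoding a single feed into multiple encodings on multiple channels, can be sketched as an encoding ladder. The rung labels, resolutions, and bit rates below are illustrative values only, not parameters drawn from the specification:

```python
# Sketch (hypothetical ladder): one input feed mapped to several
# encodings of the same content at different resolutions, each assigned
# to its own broadcast channel, as MODCOD unit 204 may do.

LADDER = [  # (label, width, height, video_kbps) -- illustrative values
    ("UHD", 3840, 2160, 15000),
    ("HD",  1920, 1080,  5000),
    ("SD",   960,  540,  1500),
]

def plan_channels(feed_id: str) -> list:
    """Map one input feed onto one output channel per ladder rung."""
    return [
        {"feed": feed_id, "channel": f"{feed_id}-{label}",
         "resolution": (w, h), "kbps": kbps}
        for label, w, h, kbps in LADDER
    ]

channels = plan_channels("national-1")
```

A national feed source 202 might be carried at every rung, while a regional supplemental source 206 might occupy only the lower rungs, depending on available channel capacity.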
[0077] In some embodiments, feed sources 202 and/or supplemental sources 206 provide ads. In some embodiments, supplemental sources 206 may provide a collection and data analysis service.
[0078] As depicted in FIGs. 4A and 4F, data to be transmitted may enter a broadcast gateway using either the ALPTP or data source transport protocol (DSTP). Other inputs to the broadcast gateway may be instructions from a system manager. A scheduler, which may be internal to the broadcast gateway, may control the pre-processing functions that occur before delivery of the data and various control information to the transmitter(s) via the STLTP (e.g., with optional ECC applied). ECC may, for example, be applied to the STLTP outer layer, which improves reliability of delivery of a complete package of STL data to each transmitter. The data and instructions from the studio may be separated, buffered, and used to control the transmitter(s) and to construct the waveform to be emitted to downstream receivers. There may be a one-to-one correspondence between individual streams of ALP packets and individual PLPs. To prepare ALP packets for transmission, in the broadcast gateway the ALP packets may be encapsulated in baseband packets (BBPs), which may have defined sizes. ALP packets may either be segmented or concatenated so that they fill the allocated space in the BBPs carrying them as completely as possible without overflowing the available space.
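The encapsulation step described above, in which ALP packets are segmented or concatenated to fill fixed-size BBPs as completely as possible, can be sketched as follows. This is a simplified illustration: real baseband packets also carry header fields that signal segmentation boundaries and padding, which are omitted here:

```python
# Simplified sketch: packing a stream of ALP packets into fixed-size
# baseband packets (BBPs), concatenating small packets and segmenting
# large ones so each BBP is filled as completely as possible.

def pack_into_bbps(alp_packets: list, bbp_size: int) -> list:
    bbps, current = [], b""
    for pkt in alp_packets:
        current += pkt                     # concatenate small packets...
        while len(current) >= bbp_size:    # ...and segment large ones
            bbps.append(current[:bbp_size])
            current = current[bbp_size:]
    if current:                            # final, partially filled BBP
        bbps.append(current.ljust(bbp_size, b"\x00"))  # zero padding
    return bbps

bbps = pack_into_bbps([b"a" * 100, b"b" * 300, b"c" * 50], bbp_size=128)
```

With 450 payload bytes and 128-byte BBPs, the packer emits three full BBPs and one padded BBP, wasting no space except the unavoidable tail padding.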
[0079] In some embodiments, BTS 102 may comprise an ATSC 3.0 packager, encoder, signaler, and scheduler. At least some of this functionality may similarly be implemented at BAT 104 (e.g., for the DRC). The packager, encoder, signaler, and scheduler of BTS 102 may deliver fragments (e.g., to differentiate core transport of MMT versus ROUTE).
[0080] The channels of emissions 108 may be numerous, and each channel may be encoded in any number of ways. How much of transmission 108 is received at antenna 231 of BAT 104 depends on a variety of factors, such as noise, path occlusions, environmental factors, range, and implementation technology.
[0081] The detection, demodulation, and decoding of a complex transmission, such as transmission 108, may require complex foreknowledge of the broadcast formats. Such foreknowledge may be provided, e.g., by a bootstrap signal for an ATSC 3.0 transmission (see, e.g., FIG. 5C). For example, different services may be time-multiplexed together within a single RF channel, which is indicated, at a low level, by the type or form of a signal transmitted during a particular time period so that a receiver discovers and identifies the signal (e.g., which in turn indicates how to receive the services that are available via that signal). The bootstrap signal may precede, in time, a longer transmitted signal that carries some form of data. The bootstrap employs a fixed configuration (e.g., sampling rate, signal bandwidth, subcarrier spacing, time-domain structure) known to all receiver devices and
carries information to enable processing and decoding of the (e.g., potentially new type) signal associated with a detected bootstrap. The bootstrap may provide a universal entry point and, as such, may be more robustly encoded, but only a minimum amount of information may be transmitted for system discovery (e.g., identification of the associated signal) and for initial decoding of the following signal. FIG. 5C shows an overview of the general structure of a physical layer frame, the bootstrap signal, and the bootstrap position relative to the post bootstrap waveform (e.g., the remainder of the frame). The bootstrap comprises a number of symbols, beginning with a synchronization symbol positioned at the start of each frame period to enable signal discovery, coarse synchronization, frequency offset estimation, and initial channel estimation.
[0082] The functionality for receiving, deciphering, and utilizing the broadcast transmission may be arranged in any number of ways in a broadcast access terminal or system encompassing broadcast access terminal functionality. For simplicity, in the example of FIG. 2, such functionality is depicted as residing in a single apparatus, BAT 104. Although fifteen components are shown to be part of BAT 104 in FIG. 2, each is optional, and thus any suitable combination of them is contemplated herein.
[0083] Transmission 108 is received at one or more antennas 231, and amplification and filtering of the transmission 108 signals may be performed by circuitry at component 232. A decoding unit, such as component 233, may then further process transmission 108 signals and perform correction by detecting and filling gaps in channel information, e.g., via error correction or diversity techniques. Extraction component 234 may extract individual media and data channels from an output of component 232. Fragmentation component 237 may reach out to the Internet (e.g., network 106) or to another network (e.g., 292, 294, or 296) to locate or request missing pieces of information. These other networks may be a local area network, for example, or an ad-hoc network of devices arranged to share broadcast information, such as a mesh network of BAT devices. For ease of illustration, data and media are shown as being shared among internal components of BAT 104 via a data bus, to which are connected a storage function 236 and a network (N/W) interface (I/F) function implemented via collaborative object delivery service 245.
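The gap-filling behavior of fragmentation component 237 can be sketched as a source-preference lookup: local off-air storage first, then peer BATs on an ad-hoc or mesh network, then the Internet. The function and parameter names are hypothetical:

```python
# Sketch (hypothetical interfaces): locating a missing fragment of a
# broadcast object, preferring already-captured local data, then peer
# BAT devices (e.g., a mesh network), then an Internet backhaul.

def locate_fragment(frag_id, local_store, peer_fetch, internet_fetch):
    """Return fragment bytes from the cheapest available source."""
    if frag_id in local_store:            # already captured off-air
        return local_store[frag_id]
    data = peer_fetch(frag_id)            # e.g., mesh of BAT devices
    if data is None:
        data = internet_fetch(frag_id)    # e.g., network 106 backhaul
    if data is not None:
        local_store[frag_id] = data       # cache for other consumers
    return data

store = {"f1": b"off-air"}
frag = locate_fragment("f2", store,
                       peer_fetch=lambda f: None,
                       internet_fetch=lambda f: b"from-internet")
```

Because fetched fragments are cached locally, a single backhaul retrieval can serve every later consumer on the BAT's local networks, consistent with paragraph [0084]'s observation that the backhaul is optional.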
[0084] Notably, BAT 104 may function fully without any Internet or other backhaul network connection. Via a bootstrap or other information channel in the transmission, BAT 104 may receive all the information necessary to decode and interpret emissions 108, as well as to update its own operations and applications. Missing portions of channel information, for example, may be filled by monitoring transmission 108 for retransmissions.
[0085] Corrected channel information may be used in a variety of ways. It may be stored in the storage function 236, which is, e.g., a digital media storage array. Corrected media information may be immediately shared via one or more local user networks 294 and 296, which may be, for example, Wi-Fi, Bluetooth, NFC, universal serial bus (USB), infrared, or another link to viewing devices, for example. Wire or fiber connections, not shown, may also be used.
[0086] Streaming media from component 233 or NRT media drawn from storage unit 236 may be processed by re-encoding component 238 and repackaging component 239. For example, media feeds may be translated from one format to another or resized, e.g., for display on handheld devices. In an implementation, storage 236 (e.g., for supporting BAT 104 as a whole and/or for implementing CDN PoP 242) may comprise at least 1 terabyte of space.
[0087] BAT 104 may support, via component 243, the download and installation of applications, e.g., programs received via broadcast or other means, and related data that are of use for operation of BAT 104 or users of BAT 104. Applications 246 may, for example, augment media viewing, e.g., via video display overlays, or provide access to associated content, information, or other entertainment or useful functions.
General Capabilities and Specification
[0088] Upstream electronic storage 422 of FIG. 4D may be located at or near BTS 102, and downstream electronic storage 236 of FIG. 3 A may be located at or near BAT 104. Upstream electronic storage 422 and downstream electronic storage 236 may each comprise electronic storage media that electronically stores information. The electronic storage media of storage 422 and/or storage 236 may comprise system storage that is provided integrally (e.g., substantially non-removable) with system 100 and/or removable storage that is removably connectable to system 100 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 422 may be (in whole or in part) a separate component within system 100, or such storage may be provided (in whole or in part) integrally with one or more other components of system 100 (e.g., a user interface (UI) device 418, processors 420, etc.). Similarly, electronic storage 236 may be (in whole or in part) a separate component within system 100, or such storage may be provided (in whole or in part) integrally with one or more other components of system 100 (e.g., a UI device 118, processors 230, etc.). In some embodiments, storage 422 may be located in a computer together with processors 420, in a computer that is part of external resources 424, in UI
devices 418, and/or in other locations. In some embodiments, storage 236 may similarly be located in a computer together with processors 230, in a computer that is part of external resources 124, in UI devices 118, and/or in other locations. Each of storage 422 and storage 236 may comprise a memory controller and one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Storage 422 may store software algorithms, information obtained and/or determined by processors 420, information received via UI devices 418 and/or other external computing systems, information received from external resources 424, and/or other information that enables system 100 to function as described herein. Similarly, storage 236 may store software algorithms, information obtained and/or determined by processors 230, information received via UI devices 118 and/or other external computing systems, information received from external resources 124, and/or other information that enables system 100 to function as described herein.
[0089] Each of upstream external resources 424 and downstream external resources 124 may include sources of information (e.g., databases, websites, etc.), external entities participating with system 100, one or more servers outside of system 100, a network, electronic storage, equipment related to wireless fidelity (Wi-Fi) technology, equipment related to Bluetooth technology, data entry devices, a power supply (e.g., battery powered or line-power connected, such as directly to 110 volts AC or indirectly via AC/DC conversion), a transmit/receive element (e.g., an antenna configured to transmit and/or receive wireless signals), a network interface controller (NIC), a display controller, a graphics processing unit (GPU), and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 424 and/or external resources 124 may be provided by other components or resources included in system 100. One or more of processors 230, external resources 124, UI device 118, electronic storage 236, and/or other components of system 100 may be configured to communicate with each other via wired and/or wireless connections, such as a network (e.g., a local area network (LAN), the Internet, a wide area network (WAN), a personal area network (PAN), a radio access network (RAN), a home area network (HAN), a campus network, a metropolitan network, an enterprise private network, a virtual private network (VPN), an internetwork, a backbone network (BBN), a global area network (GAN), an intranet, an extranet, an overlay network, etc.), near field communication (NFC), cellular telephony (e.g., global system for mobile communications (GSM),
UMTS/HSPA, long term evolution (LTE), fifth and/or fourth generation (5G/4G), code division multiple access (CDMA), etc.), Wi-Fi, WiMAX, a public switched telephone network (PSTN), another wireless communications link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, cm wave, mm wave, etc.), a base station, another suitable resource, and/or a combination of two or more thereof. Similarly, one or more of processors 420, external resources 424, UI device 418, electronic storage 422, and/or other components of system 100 may be configured to communicate with each other via wired and/or wireless connections, such as a network.
[0090] Each of upstream UI device(s) 418 and downstream UI device(s) 118 may be configured, in system 100, to provide an interface between one or more users and system 100. Each of upstream UI devices 418 and downstream UI devices 118 may be configured to provide information to and/or receive information from the respective one or more users.
Each of UI devices 418 and UI devices 118 may include a UI and/or other components. Each of these UIs may be and/or include a graphical UI (GUI) configured to present views and/or fields configured to receive entry and/or selection with respect to particular functionality of system 100, and/or provide and/or receive other information. In some embodiments, the UI of UI devices 418 may include a plurality of separate interfaces associated with processors 420 and/or other components of system 100. Similarly, in some embodiments, the UI of UI devices 118 may include a plurality of separate interfaces associated with processors 230 and/or other components of system 100. Examples of interface devices suitable for inclusion in UI device 418 and/or UI device 118 may include a touch screen, a keypad, touch sensitive and/or physical buttons, switches, a keyboard, knobs, levers, a display, speakers, a microphone, an indicator light, an audible alarm, a printer, and/or other interface devices. The present disclosure also contemplates that each of UI device 418 and UI devices 118 may include a removable storage interface. In this example, information may be loaded into UI device 418 and/or UI devices 118 from removable storage (e.g., a smart card, a flash drive, a removable disk) that enables users to customize the implementation.
[0091] In some embodiments, each of UI devices 418 and UI devices 118 may be configured to provide a UI, processing capabilities, databases, and/or electronic storage to system 100. As such, UI devices 418 may include processors 420, electronic storage 422, external resources 424, and/or other components of system 100, and UI devices 118 may similarly include processors 230, electronic storage 236, external resources 124, and/or other components of system 100. In some embodiments, each of UI devices 418 and UI devices 118 may be connected to a network (e.g., the Internet). In some embodiments, UI devices 418
do not include processors 420, electronic storage 422, external resources 424, and/or other components of system 100, but instead communicate with these components via dedicated lines, a bus, a switch, a network, or other communication means. Similarly, in some embodiments, UI devices 118 do not include processors 230, electronic storage 236, external resources 124, and/or other components of system 100, but instead communicate with these components via dedicated lines, a bus, a switch, a network, or other communication means. The communication may be wireless or wired. In some embodiments, each UI device 418 and/or UI device 118 may be a laptop, desktop computer, smartphone, tablet computer, and/or another UI device.
[0092] Data and content may be exchanged between the various components of the system 100 through a communication interface and communication paths using any one of a number of communications protocols. In one example, data may be exchanged employing a protocol used for communicating data across a packet-switched internetwork using, for example, the Internet protocol suite, also referred to as TCP/IP. The data and content may be delivered using datagrams (or packets) from the source host to the destination host solely based on their addresses. For this purpose, the Internet protocol (IP) defines addressing methods and structures for datagram encapsulation. Of course, other protocols also may be used. Examples of IP include IP version 4 (IPv4) and IP version 6 (IPv6).
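The addressing methods and structures that IP defines for datagram delivery, as referenced above, can be illustrated with Python's standard `ipaddress` module; the addresses used are documentation-reserved examples:

```python
import ipaddress

# Illustration: the addressing structures IP defines for delivering
# datagrams solely based on source and destination addresses, shown for
# both IPv4 and IPv6 using documentation-reserved example addresses.

v4 = ipaddress.ip_address("192.0.2.1")        # an IPv4 host address
v6 = ipaddress.ip_address("2001:db8::1")      # an IPv6 host address
net = ipaddress.ip_network("192.0.2.0/24")    # a routable IPv4 prefix

same_subnet = v4 in net       # a destination-based forwarding decision
versions = (v4.version, v6.version)
```

Prefix membership tests of this kind underlie the destination-based forwarding that delivers each datagram from source host to destination host.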
[0093] In some embodiments, each of processor(s) 420 and processor(s) 230 may form part of one or more user devices (e.g., in a same or separate housing), including a consumer electronics device, desktop computer, mobile phone, smartphone, personal data assistant (PDA), digital tablet/pad computer, cloud computing device, wearable device (e.g., watch), augmented reality (AR) goggles, virtual reality (VR) goggles, reflective display, personal computer, laptop computer, notebook computer, work station, server, high performance computer (HPC), vehicle (e.g., an embedded computer, such as in a dashboard or in front of a seated occupant of a car or plane), game or entertainment system, set top box, monitor, television (TV), smart TV, panel, space craft, digital video recorder (DVR), digital media receiver (DMR), internal tuner, satellite receiver, router, hub, switch, or any other device. Each UE 170 may, e.g., be one of these devices and/or communicate with BAT 104 via network 294 and/or 296. For example, UE 170 and/or TV 103 may be a Roku device, GoogleTV, Apple TV, Fire TV, gaming system (e.g., Xbox 360, Xbox One, PS3, PS4, WiiU, etc.), another IP-based system, or another legacy device. Next generation TVs 103 may be purchased off the shelf or may be pre-modified to incorporate one or more components of BATs 104.
[0094] In some embodiments, each of processors 420 and processors 230 may be configured to provide information processing capabilities in system 100. Each of processors 420 and processors 230 may comprise one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although each of processors 420 and processors 230 is shown in FIGs. 4D and 3 A, respectively, as a single entity, this is for illustrative purposes only. In some embodiments, processors 420 may comprise a plurality of processing units; these processing units may be physically located within the same device, or processors 420 may represent processing functionality of a plurality of devices operating in coordination (e.g., one or more computers, UI devices 418, devices that are part of external resources 424, electronic storage 422, and/or other devices). In some embodiments, processors 230 may comprise a plurality of processing units; these processing units may be physically located within the same device, or processors 230 may represent processing functionality of a plurality of devices operating in coordination (e.g., one or more computers, UI devices 118, devices that are part of external resources 124, electronic storage 236, and/or other devices).
[0095] As shown in FIGs. 4D and 3A, respectively, each of processors 420 and 230 may be configured via machine-readable instructions to execute one or more computer program components. The computer program components of processor 420 may comprise one or more of STLTP extraction component 430, NRT extraction component 432, real-time (RT) yield evaluation component 434, RT injection component 436, MODCOD control component 438, and/or other components. The computer program components of processor 230 may comprise one or more of front-end/baseband component 233, extraction component 234, request handling component 235, fragmentation component 237, JIT repackaging component 238, JIT transcoding component 239, data services component 240 (e.g., including flash content handler 241, CDN PoP 242, application download and runtime 243, etc.), API services component 244, applications 246, and/or other components. Each of processors 420 and 230 may be configured to execute their respective components through: software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on the respective processor.
[0096] It should be appreciated that although components 430, 432, 434, 436, and 438 are illustrated in FIG. 4D as being co-located within a single processing unit, in embodiments in which processors 420 comprises multiple processing units, one or more of components 430, 432, 434, 436, and/or 438 may be located remotely from the other
components. It should similarly be appreciated that although components 233, 234, 235, 237, 238, 239, 240, 244, and 246 are illustrated in FIG. 3A as being co-located within a single processing unit, in embodiments in which processors 230 comprises multiple processing units, one or more of components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246 may be located remotely from the other components. For example, in some embodiments, each of processor components 233, 234, 235, 237, 238, 239, 240, 244, and 246 may comprise a separate and distinct set of processors.
[0097] The description of the functionality provided by the different components 430, 432, 434, 436, and/or 438 described below is for illustrative purposes, and is not intended to be limiting, as any of components 430, 432, 434, 436, and/or 438 may provide more or less functionality than is described. The description of the functionality provided by the different components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246 described below is similarly for illustrative purposes, and is not intended to be limiting, as any of components
233, 234, 235, 237, 238, 239, 240, 244, and/or 246 may provide more or less functionality than is described. For example, one or more of components 233, 234, 235, 237, 238, 239,
240, 244, and/or 246 may be eliminated, and some or all of its functionality may be provided by other components 233, 234, 235, 237, 238, 239, 240, 244, and/or 246. As another example, processors 230 may be configured to execute one or more additional components that may perform some or all of the functionality attributed below to one of components 233,
234, 235, 237, 238, 239, 240, 244, and/or 246.
[0098] It is understood that any or all of the systems, methods and processes described herein may be embodied in the form of computer executable instructions (e.g., program code) stored on a computer-readable storage medium which instructions, when executed by a machine, such as a BTS apparatus, BAT apparatus, or equipment associated with a BTS or BAT, perform and/or implement the systems, methods and processes described herein.
[0099] Aspects of the systems described herein may be web-based. For example, a device may operate a web application in conjunction with a database. The web application may be hosted in a browser-controlled environment (e.g., a Java applet and/or the like), coded in a browser-supported language (e.g., JavaScript combined with a browser-rendered markup language (e.g., Hyper Text Markup Language (HTML) and/or the like)) and/or the like such that any computer running a common web browser (e.g., Internet Explorer™, Firefox™, Chrome™, Safari™ or the like) may render the application executable. A web-based service
may be more beneficial due to the ubiquity of web browsers and the convenience of using a web browser as a client (e.g., thin client).
[00100] Aspects of the disclosure may be implemented in any type of mobile smartphones that are operated by any type of advanced mobile data processing and communication operating system, such as, e.g., an Apple™ iOS™ operating system, a Google™ Android™ operating system, a RIM™ Blackberry™ operating system, a Huawei™ Harmony OS, a Nokia™ Symbian™ operating system, a Microsoft™ Windows Mobile™ operating system, a Microsoft™ Windows Phone™ operating system, a Linux operating system or the like.
[00101] Techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The techniques may be implemented as a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, in a machine-readable storage medium, in a computer-readable storage device, or in a computer- readable storage medium for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
[00102] Method steps of the herein disclosed techniques may be performed by one or more programmable processors executing a computer program to perform functions of the techniques by operating on input data and generating output. Method steps may also be performed by, and apparatus of the techniques may be implemented as, special purpose logic circuitry, e.g., an FPGA or an application-specific integrated circuit (ASIC).
[00103] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory, a random-access memory, or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from and/or transfer data to, one or more tangible mass storage
devices for storing data, such as solid-state media, magnetic disks, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of volatile memory and/or non-volatile memory, including by way of example semiconductor memory devices, such as, EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and memory may be supplemented by or incorporated in special purpose logic circuitry.
[00104] As used throughout this application, the word “may” is used in a permissive sense, meaning “having the potential to,” rather than in the mandatory sense, meaning “must.” The words “include,” “including,” and “includes” and the like mean “including, but not limited to.” As used herein, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. As employed herein, the term “number” shall mean one or an integer greater than one.
[00105] As used herein, the statement that two or more parts or components are “coupled” shall mean that the parts are joined or operate together either directly or indirectly through one or more intermediate parts or components, so long as a link occurs. As used herein, “directly coupled” means that two elements are directly in contact with each other. As used herein, “fixedly coupled” or “fixed” means that two components are coupled so as to move as one while maintaining a constant orientation relative to each other. Directional phrases used herein, such as, for example and without limitation, “top”, “bottom”, “left”, “right”, “upper”, “lower”, “front”, “back”, and derivatives thereof, relate to the orientation of the elements shown in the drawings and are not limiting upon the claims unless expressly recited therein.
[00106] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device.
[00107] Several embodiments of the disclosure are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations are contemplated and within the purview of the spirit and scope of the appended claims. As such, an exhaustive list of all possible designs, aspects, applications, or modifications of the disclosure is not intended herein.
Legacy Device Support
[00108] There is a need for legacy devices to interoperate with ATSC 3.0 emissions. HLS is a legacy media streaming protocol, delivered over networks such as the Internet, that supports OTT applications. Existing devices that utilize HLS cannot receive and process media broadcast via ATSC 1.0 standards, which include MPEG-TS, or via ATSC 3.0 standards, which include ROUTE/DASH and MMT. Some embodiments may resolve this issue by repackaging and optionally transcoding one or more of the broadcast protocols (e.g., from MMT video and AC-4 audio) to HLS. For example, client devices including ATSC 3.0 receivers (e.g., BATs 104) may provide support to such devices (e.g., legacy devices 170 of FIGs. 1 and 3A). For example, BAT 104 may support live television viewing via JIT repackaging of new media formats, such as MMT and ROUTE/DASH, into legacy media formats such as HLS. HLS is designed to withstand unreliable network conditions without causing user-visible playback stalling.
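By way of a non-limiting illustration, the repackage-and-optionally-transcode decision described above may be sketched as follows. The function name and step labels are hypothetical assumptions for illustration, not part of any ATSC, HLS, or BAT 104 interface:

```python
# Illustrative sketch of a BAT's just-in-time (JIT) delivery planning for an
# HLS-only legacy device. Protocol and codec names mirror the disclosure.

# Broadcast containers a BAT might receive, per the disclosure.
BROADCAST_FORMATS = {"MPEG-TS", "ROUTE/DASH", "MMT"}

def plan_legacy_delivery(broadcast_format: str, legacy_codecs: set) -> list:
    """Return the ordered processing steps needed to serve an HLS client."""
    if broadcast_format not in BROADCAST_FORMATS:
        raise ValueError("unsupported broadcast format: " + broadcast_format)
    steps = ["repackage:" + broadcast_format + "->HLS"]
    # Transcode only when the legacy device lacks a matching decoder, e.g.
    # MMT video (HEVC) and AC-4 audio on an H.264/AAC-only device.
    if broadcast_format == "MMT" and "HEVC" not in legacy_codecs:
        steps.insert(0, "transcode:HEVC->H.264")
    if broadcast_format == "MMT" and "AC-4" not in legacy_codecs:
        steps.insert(0, "transcode:AC-4->AAC")
    return steps
```

An ATSC 1.0 MPEG-TS emission to an H.264-capable device would, under this sketch, require repackaging only, with no transcode step.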
[00109] MPEG-TS is a standard digital container format (e.g., encapsulating packetized elementary streams) for broadcast transmission systems, such as DVB, ATSC 1, and Internet protocol television (IPTV). MPEG-TS includes error correction and synchronization patterns for unreliable network conditions. MPEG-TS supports multiple programs in a stream, e.g., including transmission and storage of audio, video, program information, and system information data. Transport streams differ from the similarly-named MPEG program stream in several important ways: program streams are designed for reasonably reliable media, such as discs (like DVDs), while transport streams are designed for less reliable transmission, namely terrestrial or satellite broadcast. Further, a transport stream may carry multiple programs.
[00110] MMT, specified as ISO/IEC 23008-1 (MPEG-H part 1), is a digital container standard that supports high efficiency video coding (HEVC) video (MPEG-H part 2) and IP transmission. MMT supports UHD TV, HTML5, multiplexing of various streaming components from different sources, simplified conversion between storage and streaming formats, multiple devices, hybrid delivery, and one or more quality of experience (QoE) and/or quality of service (QoS) levels. QoE may be a measure of the overall level of customer satisfaction with a vendor, and QoS may embody the notion that hardware and software characteristics may be measured, improved, and perhaps guaranteed.
[00111] ROUTE is a protocol for transferring media over IP. ROUTE packets are sent via UDP for real-time or NRT transmission of packetized DASH segments. ROUTE defines a packet format, a source protocol (sending/receiving), a repair protocol, and sessions
for metadata and object transmission. ROUTE/DASH includes an AV interface unit for receiving an HEVC/MPEG-H 3DA encoded stream, a DASH packetizer for generating DASH segments by analyzing input media, and a ROUTE packetizer for converting the generated DASH segments into ROUTE packets. Media files in the DASH-IF profile may be based on the ISO base media file format (ISO BMFF) and used as the delivery, media encapsulation, and synchronization format for both broadcast and broadband delivery.
[00112] MPEG-DASH enables delivery of media content from conventional HTTP web servers to HTTP clients, including content caching in view of changing network conditions and without causing stalling or re-buffering. DASH is audio/video codec agnostic. DASH implements adaptive bitrate (ABR) streaming, enabling high quality streaming of media content delivery, e.g., from conventional HTTP web servers. Similar to HLS, MPEG-DASH works by breaking the content into a sequence of small HTTP-based file segments, each segment containing a short interval of playback time of content (e.g., audio, video, and data) that is potentially many hours in duration, such as a movie or the live broadcast of a sports or news event. In some embodiments, implementations of DASH may comprise multimedia files being partitioned into segments and delivered to UE 170 using HTTP, with API support, DRM, captioning, experience criteria, and/or other complementary features.
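The segmentation behavior described above may be illustrated with a minimal sketch that splits a playback duration into short segments and lists them in an HLS-style media playlist. The playlist tags follow the HLS specification (RFC 8216); the segment URIs and helper name are hypothetical:

```python
# Illustrative HTTP-based segmentation: long content is split into short
# segments and listed in a media playlist that a client fetches sequentially.
import math

def media_playlist(total_seconds: float, segment_seconds: float = 6.0) -> str:
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             "#EXT-X-TARGETDURATION:%d" % math.ceil(segment_seconds),
             "#EXT-X-MEDIA-SEQUENCE:0"]
    n_full, remainder = divmod(total_seconds, segment_seconds)
    for i in range(int(n_full)):
        lines += ["#EXTINF:%.3f," % segment_seconds, "seg%d.ts" % i]
    if remainder > 0:  # trailing partial segment, if any
        lines += ["#EXTINF:%.3f," % remainder, "seg%d.ts" % int(n_full)]
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)
```

For example, 13 seconds of content at a 6-second target duration yields two full segments plus a 1-second trailing segment.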
[00113] Herein-disclosed protocols may be, for example, used for: real-time streaming of broadcast media; efficient and robust delivery of file-based objects; support for fast service acquisition by receivers (fast channel change); support for hybrid (broadcast/broadband) services; highly efficient FEC; compatibility within the broadcast infrastructure using formats and delivery methods developed for (and in common use within) the Internet; support for DRM, content encryption, and security; support for service definitions in which all components of the service are delivered via the broadband path (e.g., where acquisition of such services may require access to the signaling delivered in the broadcast); signaling to support state-of-the-art audio and video codecs; NRT delivery of media content; non-multiplexed delivery of service components (e.g., video and audio in separate streams); support for adaptive streaming on broadband-delivered streaming content; and/or appropriate linkage to application-layer features such as ESG and interactive content.
[00114] As depicted in FIG. 3A, BAT 104 may support one or more legacy devices 170 (e.g., television, smartphone, laptop, etc.) by wirelessly obtaining ATSC 3.0 emissions and then repackaging (and optionally transcoding) this data for HLS playback of audio and video or for another protocol, such as the H.264/MPEG-4 advanced video coding (AVC) standard. This re-translatable ATSC 3.0 data may comprise MMT, ROUTE/DASH, reliable
internet streaming transport (RIST), and/or secure reliable transport (SRT) data. ROUTE/DASH and MMT are implemented in an IP multicast distribution layer, whereas SRT and RIST are transport protocols designed more along the lines of a unicast model.
[00115] BAT 104 may perform demodulation, error correction, decompression, AV synchronization, and media reformatting to match display parameters (e.g., interlacing, aspect ratio converting, frame rate converting, image scaling, etc.). BAT 104 may further perform MPEG-TS demultiplexing, e.g., when obtaining legacy ATSC 1 emissions. The repackaging and transcoding activities of BAT 104 may be based on a set of attributes of legacy device 170, including color gamut, audio codec, video codec, high dynamic range (HDR) video, standard dynamic range (SDR) video, interlacing, segmentation, compression, and/or jitter. More particularly, request handling component 235 may obtain a request from each legacy device 170. This request may be generated based on this set of attributes of the legacy device. Use of HDR may describe an ability to display a wider and richer range of colors (e.g., much brighter whites, and much deeper, darker blacks) and to give the TV picture a more dynamic look. For example, such use may provide greater bit depth, luminance, and color volume than SDR video, which uses a conventional gamma curve and lower resolutions. With SDR video, a transmitter may encode that service at a lower frame-rate, reserving encoding resources (bits) for the encoding of another service in the broadcast stream (or another component of the same service, such as audio).
[00116] BAT 104 may implement a set top box (STB), a networking gateway, a dongle, or another device. In dongle implementations, BAT 104 may not operate some software functionality disclosed herein, since this functionality may be instead embedded as a software application in legacy device 170 to which the dongle is coupled or in a hardware device of BAT 104. For example, the demodulation of ATSC emissions may be performed by software (e.g., front end and baseband component 233) via a software-defined radio implementation or by a hardware demodulator. This radio may be implemented (e.g., on an IC that consumes 0.1 Watts) in a same or different enclosure as processors 230, and it may implement both a receiver operable to receive traffic from BTS 102 and a transmitter operable to transmit supplemental traffic to a set of peers in a regional subset of the broadcast (292). In another example, any functionality of software components 233, 234, 235, 238, and/or 239 may be implemented in legacy device 170.
[00117] In some embodiments, all signal processing may be performed in the software domain, e.g., with only I- and Q-sampled data being passed into the hardware domain. For example, radio frequency (RF) and intermediate frequency (IF) component 232,
which is, for example, depicted in FIG. 2, may be implemented in hardware and/or with programmable logic. In some embodiments, RF and IF component 232 may perform passband processing such that an RF signal is mixed to an IF and then to baseband. This component may comprise radio frequency (RF) amplifier and filter 140, digital to analog (D/A) converter 150, and analog to digital (A/D) converter 150, which are, for example, depicted in FIG. 3A. A/D converter 150 may be used for the forward link (e.g., from BTS 102 or from BAT 104-2 to BAT 104-1), whereas D/A converter 150 may be used for the return link (e.g., from BAT 104-1 to BTS 102 or to BAT 104-2).
[00118] In some embodiments, BAT 104 may implement an environment comprised of a standard world wide web consortium (W3C) user agent with certain characteristics, a WebSocket interface for obtaining information from the receiver and controlling various receiver functionality, and an HTTP interface for accessing files or interactive content delivered over emissions 108.
[00119] In some embodiments, a substantial majority of emissions 108 may comprise forward link data, as, for example, depicted by the solid arrows of FIGs. 1-2 having differing widths, but emissions 108 may additionally comprise a relatively small bandwidth of return link data, as, for example, depicted by the dotted arrow of the same FIGs. For example, front-end / baseband component 233 may perform MODCOD activity, e.g., for return-link emission 108 of data into local network 292 (e.g., via the DRC). For example, RF and IF component 232 may, for example, perform baseband processing such that a digital signal converted to analog by converter 150 is mixed to an IF and then to RF, while being amplified by amplifier 140.
[00120] Upon reception of forward-link emission 108, the digital baseband signals may then, for example, be provided to front-end / baseband component 233 for further processing. For example, front end and baseband component 233 may demodulate content received OTA, which may comprise audio, video, and/or data. Front end and baseband component 233 may thus obtain, via a plurality of different physical layer pipes (PLPs) of ATSC emissions 108, a plurality of different sets of content. For example, each PLP may obtain one or more distinct sets of content. Legacy devices 170 may each request (e.g., via the legacy request signaling depicted in FIG. 3B) at least some of this content. These requests brought down to baseband may be digitized by A/D converter 152. However, in another example the request may be passed to processor 230 via a direct, wired connection. That is, although antenna 160 is, for example, depicted in FIG. 3A interconnecting processor 230 with UE 170, this is not intended to be limiting, as this interconnection is further
contemplated as being a wired connection. For example, output data from the disclosed repackaging and/or transcoding may be emitted in a wired protocol (e.g., Ethernet, USB, etc.) to legacy device 170.
[00121] In some embodiments, front-end / baseband component 233 of BAT 104 may perform a demodulation and a decoding of modulated and encoded data (e.g., emitted from BTS 102 or another set of antennas). In these or other embodiments, front-end / baseband component 233 may perform digital channelization and sample rate conversion. In some embodiments, front-end / baseband component 233 may perform functionality that includes data control, ECC, and/or cryptography (e.g., encryption, decryption, key handling, etc.). In these embodiments, this component may further perform modulation and demodulation. In these or other embodiments, at least some of this functionality is performed using an embedded field programmable gate array (FPGA) or via digital signal processing (DSP).
[00122] In some embodiments, extraction component 234 may select or extract at least one set of received forward-link content based on a request obtained from legacy UE 170. Such requests and/or other information from UE 170 may be obtained from network 294 using antenna 160. In some embodiments, each of RF amplifier and filter 140 and RF amplifier and filter 142 may comprise an amplifier that differently amplifies low-power signals (e.g., without significantly degrading a signal-to-noise ratio (SNR)). The amplifiers may each increase the power of both the signal and the noise present at its input, but such a low-noise amplifier (LNA) may be designed to minimize additional noise (e.g., including trade-offs based on impedance matching, low-noise components, biasing conditions, etc.). In these or other embodiments, each of RF amplifier and filter 140 and RF amplifier and filter 142 may comprise a bandpass filter (BPF) and one or more low-pass filters (LPFs) for signals being input at BAT 104. Each of these components may further comprise a mixer, e.g., one which operates from an output of a phase-locked loop (PLL) / voltage-controlled oscillator (VCO).
[00123] In some embodiments, antennas 231 and 160 may each be configured to transmit and/or receive radio waves in all horizontal directions (e.g., as an omnidirectional antenna) or in a particular direction (e.g., as a directional, beam antenna). As such, antennas 231 and 160 may each, for example, include one or more components, which serve to direct the radio waves into a beam or other desired radiation pattern.
[00124] Although FIG. 2 depicts antenna 231 being within BAT 104, this is not intended to be limiting, as this or another antenna used by this BAT may be installed elsewhere (e.g., on a roof of a house).
[00125] In implementations of mobile UE 170 with a corresponding mobile application connected to BAT 104, the mobile application may be the source (e.g., via the request) of capabilities for a subsequent media essence conversion. For example, the mobile device may not support the AC-4 audio codec, but BAT 104 may be able to provide a real time media essence conversion (e.g., real-time transcoding and multiplexing) to a compatible audio format that is specified by the mobile application, such as AAC audio. Similarly, mobile UE 170 may not support HEVC Main-10 video essences (10-bit video samples), and it may only support H.264 hardware-based video decoding. Accordingly, BAT 104 may perform an essence conversion (e.g., real-time transcoding, tone mapping from HDR-to-SDR, and multiplexing) that would allow for optimal media playback. The local network bandwidth (e.g., Wi-Fi) between UE 170 and BAT 104 and the resultant media essence conversion may be adjusted to maximize the visual bitrate (or other variables, such as screen resolution/orientation) to accommodate a less efficient media codec needed for device rendition.
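The capability-driven essence conversion described above may be sketched as follows, assuming hypothetical capability field names. The fallbacks (AC-4 to AAC, HEVC Main-10 to H.264, and HDR-to-SDR tone mapping) mirror the example in this paragraph:

```python
# Illustrative media-essence conversion planner for a mobile UE that lacks
# AC-4 and HEVC Main-10 decoders. Capability keys are assumptions; codec
# names follow the disclosure.
def essence_conversion(ue_caps: dict) -> dict:
    plan = {}
    audio = ue_caps["audio_codecs"]  # UE's decoders, most preferred first
    if "AC-4" not in audio:
        plan["audio"] = "AC-4->" + audio[0]
    video = ue_caps["video_codecs"]
    if "HEVC Main-10" not in video:
        plan["video"] = "HEVC Main-10->" + video[0]
        # A UE without HDR support also receives an HDR-to-SDR tone map.
        if not ue_caps.get("hdr", False):
            plan["tone_map"] = "HDR->SDR"
    return plan
```

A UE declaring only AAC audio and H.264 video would thus receive both transcodes and a tone map.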
[00126] BAT 104 may optionally perform JIT transcoding via component 239, which may include formatting changes of content, such as a down-stepping to a mobile environment that is operably coupled to the device. Transcoding is the direct digital-to-digital conversion of one encoding to another, such as for multimedia or other data. BAT 104 may thus implement JIT transcoding such that a legacy display device may display broadcast or multicast data in a seamless fashion for on demand or real-time needs.
[00127] In some embodiments, JIT repackaging component 238 and/or JIT transcoding component 239 may adapt playback characteristics such that UE 170 is ensured the ability to properly unpackage and/or decode emissions. For example, these component(s) may adjust a color gamut, whether HDR or SDR is used, an audio codec format, a video codec format, etc., such that at least an HD rendition is met or exceeded. In this or another example wherein UHD quality is in emissions 108, a resolution may be adjusted. As such,
JIT repackaging component 238 and/or JIT transcoding component 239 may be configured to JIT-generate, from a single master essence, one of a plurality of different renditions or permutations, the generation of each of which is selectable for potential compatibility with an actively requesting one of a widely varying set of UE 170.
[00128] In implementations of a transport mechanism that does not use a (e.g., master) manifest, there may only be a media essence or flow. Request handling component 235 may thus potentially generate such a manifest (e.g., for HLS) that identifies and/or describes attributes of the plurality of different renditions or permutations for a subsequent resolving into an encoding parameter set. When UE 170 requests such resources, JIT transcoding component 239 may perform a JIT encode to prepare the derivatives as needed for that specific device’s proper playback. JIT repackaging component 238 and/or JIT transcoding component 239 may thus configure a set of mechanisms and corresponding processes for a device’s needs by performing a selection based on the device’s characteristics for playback. As such, fewer or no renditions are generated until UE 170 requests one or more of them, which reduces processor and storage resource load.
[00129] In some embodiments, request handling component 235 may make the selection with previously obtained information that describes UE 170. In other embodiments, request handling component 235 may obtain (e.g., from UE 170) the selection as part of the request. The generated master manifest may provide a possible rendition or permutation set (e.g., which may include attributes or parameters) such that UE 170 is operable to pick one or more that is, among the options, most relevant for its capability. For example, the master manifest may provide a dozen different variations (e.g., with a dozen different unique resource identifiers). In this or another example, UE 170 may specify which parameters or attributes it supports such that the selection is made for those resources. That activity of requesting those resources may then trigger BAT 104, operating as a home gateway, to prepare one variant to be consumed by UE 170. The set of mechanisms and corresponding processes may accordingly be determined without having to generate other sets of mechanisms and processes for another set of UE different from UE 170. BAT 104 thus makes malleable the ATSC 3.0 broadcast by allowing for matching of the UE’s technological capability into a JIT-prepared essence.
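The generated master manifest described above may be sketched as follows. Tag syntax follows the HLS specification (RFC 8216); the rendition parameters and URIs are illustrative only, and in this scheme only the variant a UE actually requests would then be JIT-prepared:

```python
# Illustrative HLS master manifest listing several renditions so that a
# requesting UE can pick the variant matching its capabilities.
def master_manifest(renditions) -> str:
    lines = ["#EXTM3U"]
    for r in renditions:
        lines.append('#EXT-X-STREAM-INF:BANDWIDTH=%d,RESOLUTION=%s,CODECS="%s"'
                     % (r["bandwidth"], r["resolution"], r["codecs"]))
        lines.append(r["uri"])  # fetching this URI triggers JIT preparation
    return "\n".join(lines)

# Hypothetical variant set a BAT might advertise.
variants = [
    {"bandwidth": 800_000, "resolution": "640x360",
     "codecs": "avc1.4d401e,mp4a.40.2", "uri": "sd.m3u8"},
    {"bandwidth": 5_000_000, "resolution": "1920x1080",
     "codecs": "avc1.640028,mp4a.40.2", "uri": "hd.m3u8"},
]
```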
[00130] In some embodiments, JIT transcoding component 239 may determine the set of mechanisms and corresponding processes by giving precedence to hardware-accelerated decoding support. By supplying a preferentially ordered list to BAT 104, this gateway may then, e.g., determine which relevant encoder codecs it may produce that would best align with UE 170. As such, from a fixed set of input requirements and a derived set of receiver capabilities, processor 230 may determine the optimal adaptation and transposition function for that media essence conversion. In some embodiments, the JIT generation of the set of mechanisms and corresponding processes may comprise a determination of a suitable
(e.g., optimal) package classifier, label, profile template, or set of configurations (e.g., involving transcoding, repackaging, and/or trans-multiplexing) such that UE 170 is JIT-provided a unique essence or payload. Such an identifying fingerprint or representation of capabilities of each local UE 170 may be used, e.g., in the JIT provisioning, without needing to generate all possible capabilities’ permutations of all possible UE 170 and/or recreate a set of capabilities for other devices requiring a same set of capabilities. For example, components 238 and/or 239 may generate, store, search, and/or provide a one-to-many (e.g., which scales with growth in the number of UEs) capabilities’ assignment for outputted JIT service(s). Knowledge of the different sets or lists of configurations (e.g., different audio and video codecs) may be obtained at BAT 104 via discovery calls to the one or more UEs that may be requesting a linear, AV service. In an implementation, the request or a response to a discovery call may include a hint or list of supported capabilities or an adaptation set, e.g., sorted by receiver preference. For example, components 238 and/or 239 may compute a permutation and fulfill service delivery of that permutation based on the UE (e.g., of network 294 or 296).
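The preferentially ordered negotiation described above may be sketched as a simple intersection of the UE's preference-sorted decoder list (e.g., from a discovery response) with the encoders BAT 104 can produce. Names here are hypothetical:

```python
# Illustrative codec negotiation: the UE supplies its decoders sorted by
# receiver preference (e.g., hardware-accelerated first), and the gateway
# picks the first one it can actually encode.
def pick_codec(ue_preference: list, bat_encoders: set) -> str:
    for codec in ue_preference:  # already sorted by UE preference
        if codec in bat_encoders:
            return codec
    raise ValueError("no common codec between UE and BAT")
```

For example, a UE preferring HEVC then H.264, served by a BAT that can only encode H.264 and MPEG-2, would be assigned H.264.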
[00131] In some embodiments, JIT repackaging component 238 and/or JIT transcoding component 239 may determine the set of mechanisms and corresponding processes based on a computation cost (e.g., based on hardware-accelerated encoding being supported), a current encoding utilization load level, target frame criteria dimensions (e.g., vertical video would most likely be down-sampled to SD, while horizontal video would be an HD or Full HD rendition), licensing cost (e.g., for a specific codec or decoder), and/or another such cost for performing the translation. For example, with respect to the transport capabilities between BAT 104 and UE 170, component 238 may perform the determination based on a type of local network 294 (e.g., Wi-Fi or another suitable network). Components 238 and/or 239 may similarly determine these translation mechanisms and corresponding processes based on (i) an amount of available bandwidth between BAT 104 and UE 170, (ii) whether the outputting is performed over a specific type of Wi-Fi connection (e.g., 802.11n versus 802.11ac), (iii) another attribute of network 294, or (iv) the existence of a specific UE capability (e.g., HDR).
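The cost-based determination described above may be illustrated with a toy cost model. The weights and field names are assumptions for illustration, not values taken from this disclosure:

```python
# Illustrative cost model for choosing among candidate conversions, weighing
# the factors named above: compute cost, current encoder load, licensing
# cost, and available local-link bandwidth.
def conversion_cost(c: dict, load: float, bandwidth_bps: int) -> float:
    cost = c["compute_cost"] * (1.0 + load)  # busier encoder -> costlier
    cost += c.get("licensing_cost", 0.0)
    if c["bitrate_bps"] > bandwidth_bps:     # won't fit the local network
        cost = float("inf")
    return cost

def cheapest(candidates, load, bandwidth_bps):
    return min(candidates, key=lambda c: conversion_cost(c, load, bandwidth_bps))
```

Under this sketch, a cheap conversion whose output bitrate exceeds the Wi-Fi link is still rejected in favor of a costlier one that fits.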
[00132] JIT transcoding component 239 may obtain an output of JIT component 238 and perform transcoding of that data. This transcoding may include obtaining the manifest file that specifies the location of the media and its format. The manifest may be a text file that serves as a directory for the content renditions and includes parameters of the
repackaging and/or re-encoding for each file. The manifest may further include references to other associated, complementary files.
[00133] Request handling component 235 may select, based on the set of attributes of the legacy request, playback characteristics for legacy device 170 from among such master manifest optionally received in the ATSC emissions. Request handling component 235 may determine an encoding parameter set based on these playback characteristics.
[00134] System 100 may implement media presentation description (MPD) and/or media processing unit (MPU) payload delivery. An MPD or manifest may be a playlist of video and audio segments comprising control mechanisms of DASH-encoded content. The MPD may contain timing information and have one or more periods, each period describing a time span and related media files for that time span. Alternative content may, e.g., be selected by replacing a period with an alternative period referencing different content segments. An MPU may be an MMT construct that encapsulates a single ISO BMFF file, e.g., containing timed content media such as audio or video. Emissions 108 may comprise a continuous stream of MPUs delivered to receivers (e.g., one or more BATs 104) for rendering.
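An MPD of the kind described above may be sketched as follows. Element and attribute names follow MPEG-DASH (ISO/IEC 23009-1); the values, field names, and helper are illustrative only:

```python
# Illustrative MPD structure: a presentation made of one or more periods,
# each spanning a time range and referencing media via an adaptation set
# and representation. Swapping a Period element swaps the referenced content.
import xml.etree.ElementTree as ET

def build_mpd(periods) -> str:
    mpd = ET.Element("MPD", {"type": "static",
                             "mediaPresentationDuration": "PT60S"})
    for p in periods:
        period = ET.SubElement(mpd, "Period",
                               {"start": p["start"], "duration": p["duration"]})
        aset = ET.SubElement(period, "AdaptationSet", {"mimeType": p["mime"]})
        ET.SubElement(aset, "Representation",
                      {"id": p["id"], "bandwidth": str(p["bandwidth"])})
    return ET.tostring(mpd, encoding="unicode")
```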
[00135] In some embodiments, encoder constitution of an ISO BMFF fragment may be the correct model for ROUTE delivery and may restrict the design objective of MMT with regard to optimal media fragment unit (MFU) transmission latency in non-uniform GOP scenarios, including SHVC, where base and enhancement layers may be delivered at spatial, temporal, and closed or open long-GOP for optimal codec efficiency. The utilization of a complete MPU (e.g., an ISO BMFF box including trun data) from an encoder to a packager, rather than fragmented MFU emission for MMT delivery into the signaler and scheduler, may prevent adoption of two more critical ATSC 3.0 use cases. Multiple SCTE-35 signal messages may be present over the logical MPU, but may be close to (or preceding) the MFU emission from the packager. This enables compatibility of both SCTE-35 messages in which the presentation time stamp (PTS) execution timestamp is not provided, and where the PTS execution is provided and multiplexed-in as a pre-roll or multiple times in a single GOP. A splice may then be performed. In some embodiments, there may be two examples of MMT encapsulation, one for timed and the other for non-timed media. For packetized delivery of MPU, an MMT hint track may provide the information to convert encapsulated MPU to MMTP payloads and MMTP packets. The depacketization procedure may be performed at the MMT receiving entity to rebuild the transmitted MPU. The depacketization procedure may operate, depending on the application needs, in an MPU mode, movie fragment mode, or MFU mode.
[00136] JIT component 238 may repackage for HLS or another protocol (e.g., from MMT, ROUTE/DASH, or MPEG-TS), the HLS or other protocol being used for legacy support. FIG. 3B illustrates an aspect of this repackaging, since ATSC emissions may be obtained and forwarded to a particular pipeline before final emission to legacy device 170. The JIT repackaging of component 238 may include trans-multiplexing, trans-muxing, or re-encoding.
[00137] In some embodiments, JIT component 238 may implement an MPEG-TS to HLS JIT converter, an MMT to HLS JIT converter, a ROUTE/DASH to HLS JIT converter, or another converter from emissions 108 to UE 170. JIT repackaging component 238 may, for example, implement a separate pipeline for each of the conversions.
[00138] JIT repackaging component 238 may include stream segmentation into files such that BAT 104 serves as a CDN for delivering the segments to a legacy device. JIT repackaging component 238 may obtain encrypted data, decrypt it, repackage it, and then re-encrypt it before emitting to legacy device 170 via antenna 160 or via a wired interface.
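The decrypt, repackage, and re-encrypt path described above may be sketched conceptually as follows. A deployment would use real broadcast DRM and HLS segment encryption (e.g., AES-128); the XOR "cipher" and byte-prefix "repackaging" below are stand-ins merely so the flow is runnable:

```python
# Conceptual sketch of BAT segment handling: decrypt the broadcast payload,
# repackage it for the legacy protocol, then re-encrypt for the local link.
def xor(data: bytes, key: int) -> bytes:
    # Placeholder cipher for illustration only; NOT real cryptography.
    return bytes(b ^ key for b in data)

def serve_segment(encrypted: bytes, broadcast_key: int, segment_key: int) -> bytes:
    clear = xor(encrypted, broadcast_key)  # 1) decrypt broadcast payload
    repackaged = b"HLS:" + clear           # 2) placeholder for TS/fMP4 repack
    return xor(repackaged, segment_key)    # 3) re-encrypt for local delivery
```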
[00139] JIT repackaging of component 238 may operate based on another request obtained from another legacy device (e.g., 170-n, n being any natural number) of a type different from a type of legacy device 170 (e.g., 170-1) that sent a previous request.
[00140] JIT component 239 may optionally transcode for HLS based on the legacy device having a compatible or matching decoder set for the ATSC emissions (e.g., from MMT, ROUTE/DASH, RIST, SRT, or MPEG-TS). In implementations where JIT transcoding is performed, the JIT transcoding may be performed by component 239 before or after component 238 performs JIT repackaging. That is, in implementations where both JIT transcoding and JIT repackaging are required, an order of performing these functionalities may depend on which device issued the request. As such, request handling component 235 may implement a decisioning touchpoint on the request to make a determination as to what the appropriate output delivery characteristics would be for that device.
[00141] JIT component 239 may perform the herein disclosed functionality, while supporting processing power and a link speed achieved by legacy device 170. JIT component 239 may perform transcoding based on HEVC, scalable video coding extensions of HEVC (SHVC), or another suitable standard. The JIT transcoding of component 239 may provide an additional layer of compatibility for legacy device 170 (e.g., including resource-poor devices).
[00142] The disclosed transcoding may optionally be performed via hardware and/or software to adjust formats, codecs, resolutions, and/or video parameters. The optional
transcoding may be performed from a native format to an intermediate codec without losing quality. JIT component 239 may transcode to proxy files. In some embodiments, the re-encoding performed by component 239 may be based on processing power and a link speed achieved by the legacy device.
[00143] JIT transcoding component 239 may obtain audio data encoded using AC-4 or MPEG-H 3D audio. The JIT transcoding may thus re-encode this audio to AC-3, EC-3, AAC, MP3, or another suitable codec for the legacy device supporting a specific delivery protocol, such as HLS. Similarly, JIT transcoding component 239 may obtain video encoded using HEVC. The JIT transcoding may thus re-encode this video to AVC (H.264) or another suitable codec for the legacy device supporting HLS. Other video formats supported by components 238 and 239 of processor 230 may include Adobe HTTP dynamic streaming (HDS), Microsoft Silverlight smooth streaming (MSS), common media application format (CMAF), 3GPP’s adaptive HTTP streaming (AHS), HTTP adaptive streaming (HAS), or another standard. BAT 104 may perform ABR via JIT transcoding component 239 when streaming HLS to more computationally capable legacy devices 170.
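The audio and video transcode targets named above may be expressed as a fallback lookup. The table contents and function are illustrative assumptions; a deployment could extend them per device profile:

```python
# Transcode targets from the passage above, as preference-ordered fallbacks
# from a broadcast codec to codecs a legacy device may decode.
AUDIO_FALLBACKS = {"AC-4": ["AAC", "AC-3", "EC-3", "MP3"],
                   "MPEG-H 3D": ["AAC", "AC-3"]}
VIDEO_FALLBACKS = {"HEVC": ["H.264"]}

def transcode_target(source_codec: str, device_codecs: set, table: dict) -> str:
    if source_codec in device_codecs:
        return source_codec  # device decodes it natively; no transcode needed
    for candidate in table.get(source_codec, []):
        if candidate in device_codecs:
            return candidate
    raise ValueError("no transcode path from " + source_codec)
```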
[00144] In some embodiments, JIT repackaging component 238 and JIT transcoding component 239 may perform translation, e.g., by configuring the aforementioned set of mechanisms and corresponding processes to adapt when features are not properly implemented or actually specified in underlying documentation that governs functionality of BAT 104 and/or UE 170. For example, components or requirements may be anonymous or lacking in definition, such that most needs of any UE 170 may still be met in the translation between systems. Accordingly, a transport and/or encoding mechanism may be generated JIT to provide a rendition for UE 170 that does not meet the same universe of requirements that BTS 102 provides. For example, this provision may include use of HEVC Main 10 for 10-bit resolution support and may require a hardware-based decoder to be able to process additional bits of data that may be rendered by the device.
[00145] BAT 104 may thus mediate or adapt technology requirements, when interpretation or expectation of standards or specifications fails, by determining an interoperability matrix and then providing as needed (e.g., JIT) the ability to map the requirements from an input (e.g., forward emission of BTS 102) to the opportunistic capabilities of an orthogonal output (e.g., local emission to UE 170). This capability orthogonality may be due, e.g., to UE 170 implementing a set of decoding capabilities that are restricted usually by the hardware or by the manufacturer in their ability to decode broadcast (e.g., emissions 108) media essences. For example, UE 170 may not be able to
support one or more attributes of HEVC 8 Bit, HEVC 10 Bit, AV1, VP9, H.264, VC-5, MPEG-5, or another codec, e.g., due to a relatively recent emergence as a standard. As such, JIT transcoding component 239 may perform a service that eliminates the need to re-encode all existing content of a publisher from origination and through distribution. In doing so, component 239 may cause a reduction in bandwidth consumption and/or video decoding computation resources (e.g., using a different matrix of potential output formats), while monolithically creating support for content being emitted in forward transmission 108 that is in a newer transport medium (e.g., MMT) and encoding format (e.g., HEVC Main 10 for the video and MPEG-H or AC-4 for the audio).
[00146] The JIT transcoding and/or JIT repackaging of components 238 and/or 239, respectively, may include supporting different presentation periods and ad insertions, including when each is being streamed at different bit rates. After repackaging and/or transcoding, D/A converter 152 may obtain and forward this data to RF amplifier 142 before antenna 160 finally emits the data to legacy device 170 such that it suitably presents multimedia (e.g., live, streaming content) and suitably receives other types of data (e.g., software application update downloads). Component 142 may further include a filter, when receiving requests and other signals from legacy devices 170.
[00147] In some embodiments, front-end / baseband component 233 may perform MODCOD activity, e.g., for local emission of data into networks 294, 296 (e.g., via known networking techniques). For example, RF and IF component 232 may perform baseband processing such that a digital signal converted to analog by converter 152 is mixed to an IF and then to RF, while being amplified by amplifier 142.
[00148] In some embodiments, BAT 104 may implement a home gateway device that is interoperable with a traditional OTT device such as Roku, Apple TV, or Fire TV. For example, ATSC 3.0 encoding characteristics like AVC and AAC audio may be repackaged JIT. For more complex characteristics like HEVC and MPEG-H audio, BAT 104 may still allow playback of ATSC 3.0 content for legacy devices. That is, JIT repackaging may be sufficient for devices that have a compatible decoder set, and a JIT re-transcode may be needed for compatibility for devices that do not have matching capabilities of what ATSC 3.0 requires.
[00149] FIG. 3C illustrates method 180 for providing legacy data translation support, in accordance with one or more embodiments. Method 180 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program
components. The operations of method 180 presented below are intended to be illustrative. In some embodiments, method 180 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 180 are illustrated in FIG. 3C and described below is not intended to be limiting. In some embodiments, method 180 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 180 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 180.
[00150] At operation 182 of method 180, content may be extracted from ATSC 1 and/or ATSC 3.0 emissions. As an example, headers and other encapsulation may be stripped to identify and obtain content of a payload. In some embodiments, operation 182 is performed by a processor component the same as or similar to extraction component 234 (shown in FIG. 3A and described herein).
[00151] At operation 184 of method 180, a content request generated based on a set of attributes of a legacy device may be obtained from the legacy device. As an example, a request from each of one or more UEs 170 may be obtained such that a set of mechanisms and corresponding processes may be determined for translating broadcasted emissions. This determination may be based on computation power necessary for performing the translation. The request may include information about the technological capabilities of the one or more requesting UEs, e.g., including (i) transport attributes of a network connecting BAT 104 and UE 170 and/or (ii) characteristics of the UE. In some embodiments, operation 184 is performed by a processor component the same as or similar to request handling component 235 (shown in FIG. 3A and described herein).
[00152] At operation 186 of method 180, the set of mechanisms and processes may be JIT-generated by selecting (e.g., from among a manifest) playback characteristics based on the request. As an example, BAT 104 may perform this generation responsive to the request. In this or another example, BAT 104 may perform this generation based on a determination that the request indicates that UE 170 does not have a compatible or matching decoder set for emissions 108. BAT 104 may, for example, select, from among a master manifest, playback
characteristics for UE 170 based on the information of the request. In implementations where the manifest is not received in emissions 108, BAT 104 may generate the manifest. In some embodiments, operation 186 is performed by a processor component the same as or similar to request handling component 235 in combination with components 238 and/or 239 (shown in FIG. 3A and described herein).
[00153] At operation 188 of method 180, JIT-repackaging (e.g., to HLS) of the content may be performed based on the selection. As an example, BAT 104 may perform a transport translation and/or a packaging translation of emissions 108 such that UE 170 is operable to play back at least a portion of the emitted content. In some embodiments, operation 188 is performed by a processor component the same as or similar to JIT repackaging component 238 (shown in FIG. 3A and described herein).
[00154] At operation 190 of method 180, JIT-transcoding may be performed, e.g., when a decoder of the requesting legacy device is incompatible with the ATSC emissions. As an example, BAT 104 may perform an encoding translation of emissions 108. In some embodiments, operation 190 is performed by a processor component the same as or similar to JIT transcoding component 239 (shown in FIG. 3A and described herein).
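The flow of operations 182 through 190 can be sketched end to end as follows; every helper here is a hypothetical stand-in for the components of FIG. 3A, not an actual implementation:

```python
# Illustrative sketch of method 180. Field names, codecs, and the container
# target are assumptions chosen to mirror the examples in the text.

def extract(emission):
    # Operation 182: strip encapsulation and keep the payload.
    return {"payload": emission["payload"], "video_codec": emission["codec"]}

def select_playback(content, request):
    # Operation 186: select playback characteristics based on the request.
    return {"content": content, "resolution": request["max_resolution"]}

def jit_repackage(selection, container):
    # Operation 188: transport/packaging translation (e.g., ROUTE/DASH -> HLS).
    return {"container": container, **selection}

def jit_transcode(package, target_codec):
    # Operation 190: encoding translation for incompatible decoders.
    package["content"]["video_codec"] = target_codec
    return package

def handle_request(emission, request):
    content = extract(emission)                     # operation 182
    selection = select_playback(content, request)   # operations 184/186
    package = jit_repackage(selection, "HLS")       # operation 188
    if content["video_codec"] not in request["decoders"]:
        package = jit_transcode(package, "H.264")   # operation 190
    return package

# A legacy device that decodes only H.264 requesting an HEVC emission:
out = handle_request(
    {"payload": b"...", "codec": "HEVC"},
    {"decoders": {"H.264"}, "max_resolution": "720p"},
)
```

Note that transcoding is skipped entirely when the requesting device's decoder set already covers the emitted codec, matching the repackage-only path the text describes.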
[00155] One or more of the foregoing translations of BAT 104 may be performed in a manner that remedies implemented divergence from a standard or protocol such that the determined set of mechanisms and corresponding processes adapts to the divergence. In one example, BAT 104 may support revisions to the substantially static plurality of input ATSC 3.0 requirements, which may occur about once per year.
OTA API Services Support
[00156] There is a need for devices operable only to obtain OTT content to be provided application programming interface (API) services for obtaining OTA data delivery, including scenarios where the content is not accessible or available over a network, such as the Internet. An OTA receiving apparatus may support a variety of in-home content viewing and storage systems, for example, via APIs connected to the apparatus via Wi-Fi or other network connections. Referring again to FIG. 2, BAT 104 may host API services 244 to support other devices in the same structure (e.g., home) in which BAT 104 is located, for example. APIs may be provided for devices such as other broadcast reception devices, televisions, computers, and mobile devices to access resources available to BAT 104, such as broadcast streams, stored media, data, and applications.
[00157] In some embodiments, API services 244 may be hosted and configured to include functionality, for example, that enables devices connected to BAT 104 to access files on BAT 104, e.g., by: obtaining a list of files with file identifications, e.g., in timestamp order; subscribing to file changes; and/or receiving notifications for file changes. API services 244 may be configured to include functionality such as implementing: file cache management; integrated testing; OTA API integration; channel polling; error handling; and/or conditional business logic.
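A minimal sketch of the file-access functionality described above — listing files in timestamp order, subscribing to file changes, and receiving change notifications — assuming an in-memory service; the class and method names are illustrative, not from the application:

```python
# Hypothetical sketch of the file-access portion of API services 244.

class FileService:
    def __init__(self):
        self._files = {}        # file_id -> timestamp
        self._subscribers = []  # callbacks for file-change notifications

    def list_files(self):
        """Return file identifications in timestamp order."""
        return sorted(self._files, key=self._files.get)

    def subscribe(self, callback):
        """Register a connected device for file-change notifications."""
        self._subscribers.append(callback)

    def add_file(self, file_id, timestamp):
        self._files[file_id] = timestamp
        for callback in self._subscribers:  # notify on file changes
            callback(file_id)

svc = FileService()
seen = []
svc.subscribe(seen.append)
svc.add_file("b.mp4", 20)
svc.add_file("a.mp4", 10)
```

A real implementation would expose these operations over HTTP endpoints rather than direct method calls, but the contract — ordered listing plus push notifications — is the same.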
[00158] The term “integrated testing” may relate to opportunities for equipping APIs 244 with capabilities to help BAT 104 (and UE 170 in the proximity of the BAT) perform a variety of autonomous operations, e.g., without having to back connect to a central station to achieve the functionality. Hence, the term “integrated testing” may refer to test procedures within the API or associated with the API to verify the correct operation of the API and the equipment at either end, e.g., without simply assuming that the equipment is working and messages are getting through. While IP backchannels are possible, use of broadcast receiver equipment may minimize reliance on those back connections. The mentioned term “error handling” may relate to arming BAT 104 with everything possible to help it anticipate and resolve problems. For devices connected to the BAT, that BAT may be the only available connection for Internet-like content. In an implementation, BATs 104 may, e.g., gracefully degrade to a base state and may, e.g., obtain a firmware update over emissions 108 or via an IP-based backchannel.
[00159] For live video content viewing, the APIs may include means for devices connected to BAT 104 to determine the OTA capabilities of BAT 104, obtain a list of channels available for live viewing, adjust a tuner operation of BAT 104, receive content metadata, and receive video data in a viewable form. Similarly, APIs may be provided to support viewing of stored content, selectively incorporating advertisements or viewing enhancements, or accessing applications or data hosted on BAT 104.
[00160] In some embodiments, API services may be provided OTA, e.g., in a home Wi-Fi environment. For example, they may be provided as a device casting service at home or elsewhere. Implementations including home-casting may be performed via a client (e.g., of the home Wi-Fi environment). For example, BAT 104 and/or UE 170 may provide OTT services via Wi-Fi. In this or another example, upon requesting or being informed, OTA services may be provided via API services components 244 (e.g., of FIG. 2) to a user at BAT 104 and/or to a user at UE 170. That is, a user may be suddenly provided a set of broadcast
channels (e.g., 2, 5, and 6), each being selectable for consumption. As such, OTA services may be, for example, extended to OTT platforms.
[00161] In some embodiments, BAT 104 may operate as a switch, hub, or router (e.g., DHCP server), when operating as a home gateway for network 294 or 296. In some embodiments, an application running in UE 170 (e.g., a Roku box, Apple TV device, iPhone, MacBook, or another legacy device) may call the API server by calling an OTA service that would be on a domain hosting digital content, e.g., with domain platform.sinclairstoryline.com being a service exposed by a discovery server. That is, UE 170 may perform (e.g., upon starting the application) the call to a representational state transfer (REST) service implemented on BAT 104 operating as a local home gateway. In response, a list of all the home gateways that are registered may be automatically accessed such that the RESTful service sends a return message indicating false when there are no OTA services available, and a return message indicating true otherwise.
[00162] In some embodiments, when BATs 104 boot or start up, they may communicate (e.g., via the Internet or another network) with a central discovery server for becoming registered. An application on a legacy device may utilize OTA APIs 244 by initially calling the central discovery server to determine (e.g., with its local network ID) (i) whether a registered BAT 104 exists on the same local (e.g., Wi-Fi) network and (ii) the local IP address of this BAT. If the central discovery server determines that there is a registered home gateway 104 on network 294 (e.g., Wi-Fi), this server may pass back a local IP address of BAT 104; then, UE 170 may call that address to start making the calls to get to APIs 244. In another implementation, the UE may be able to interact with the BAT’s APIs without need of the remote server. As such, gateway 104 may be discovered by an application running on UE 170, and available OTA channels may be displayed by the application (e.g., in a live section of a home page as individual cards), e.g., including logos of the channels and any available information about an event currently being broadcast.
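The boot-time registration and first-call lookup described above can be sketched as follows; the class, field names, and the shape of the RESTful response are illustrative assumptions, not part of the application:

```python
# Hypothetical sketch of the central discovery exchange: BATs register on
# boot, and a UE's first call resolves whether a registered gateway exists
# on its local network. Network IDs and IPs are illustrative.

class DiscoveryServer:
    def __init__(self):
        self._registered = {}  # local network ID -> BAT local IP address

    def register(self, network_id, bat_local_ip):
        # Called by a BAT when it boots or starts up.
        self._registered[network_id] = bat_local_ip

    def lookup(self, network_id):
        # RESTful-style response: false when no OTA services are available
        # on this network, otherwise true plus the BAT's local IP.
        ip = self._registered.get(network_id)
        return {"ota_available": ip is not None, "bat_ip": ip}

server = DiscoveryServer()
server.register("home-wifi-1234", "192.168.1.50")
hit = server.lookup("home-wifi-1234")   # UE on the same local network
miss = server.lookup("cafe-wifi-9")     # no registered gateway here
```

After a positive lookup, the UE would call the returned local IP directly to reach APIs 244, as the paragraph describes.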
[00163] Upon being informed of gateway 104 being present on the same local network, an application running on UE 170 may call the local IP of this BAT. More particularly, API 244-1 may inform this UE that home gateway 104 is powered-on, operable, and present. Knowing the availability of OTA services, the UE may then call API 244-2, which provides the list of services available in each of the 6 MHz bands, including the corresponding metadata of each service. That is, because there is a tuner there and because it has these streams coming into repackaging and transcoding components 238 and 239, respectively, the data may be converted to a legacy format (e.g., HLS). As such, an API client
of UE 170 may be returned a list of channels (e.g., ATSC 1 and ATSC 3.0 emissions) and their corresponding metadata. The application of UE 170 operating such API client may control the tuner of BAT 104, e.g., by setting the tuner to channel 6 such that the HLS stream arrives with the content of this channel. More specifically, API 244-3 may be called for adjusting the tuner from one band or frequency to another; and API 244-4 may be called to provide an IP stream of data and/or video (e.g., from data obtained OTA at BAT 104 to a Unicode transformation format (UTF)-8-encoded MPEG audio layer 3 URL (M3U8) video fed locally to UE 170). The list of channels provided from BAT 104 to UE 170 may include the M3U8 URL for each channel, but the M3U8s may not work until API 244-3 tunes into a channel; and then, that M3U8 may return with a manifest for the UE to play HLS video.
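The four-API client flow above — 244-1 capabilities, 244-2 directory, 244-3 tuner, 244-4 delivery — can be sketched with a hypothetical in-process stub standing in for BAT 104; names, the URL, and the channel number are illustrative. Per the text, the M3U8 only plays after the tuner API has been called:

```python
# Hypothetical stub of the four OTA APIs as seen by a UE-side client.

class BatApiStub:
    def __init__(self):
        self._tuned = None
        self._channels = {6: "http://bat.local/ch6.m3u8"}  # illustrative URL

    def capabilities(self):     # API 244-1: OTA capabilities query
        return {"ota": True, "tuners": 1}

    def directory(self):        # API 244-2: broadcast content directory
        return [{"channel": ch, "m3u8": url}
                for ch, url in self._channels.items()]

    def tune(self, channel):    # API 244-3: BAT tuner control
        self._tuned = channel

    def stream(self, channel):  # API 244-4: media-content delivery
        if self._tuned != channel:
            raise RuntimeError("M3U8 not live until API 244-3 tunes in")
        return self._channels[channel]

bat = BatApiStub()
assert bat.capabilities()["ota"]   # step 1: OTA services available
listing = bat.directory()          # step 2: channel list with metadata
bat.tune(6)                        # step 3: set tuner to channel 6
url = bat.stream(6)                # step 4: HLS stream now playable
```

The ordering constraint (tune before stream) is the stub's way of modeling the text's note that the M3U8s "may not work until API 244-3 tunes into a channel."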
[00164] In some embodiments, BAT 104 may implement two additional APIs for NRT retrieval, including one for retrieving a list of NRT files (including their IDs, in timestamp order) and another for retrieving an NRT file (file ID).
[00165] BAT 104 may, for example, implement an API provider (e.g., operating as a server), whereas UE 170 may, for example, implement an API consumer (e.g., operating as a client). APIs simplify ways for developers to interact with other types of software, e.g., by abstracting the underlying implementation and only exposing objects or actions the developer needs. This abstraction may be performed through information hiding, whereby APIs 244, for example, enable modular programming that allows interface use independently of the implementation. In a sense, each of APIs 244 contractually (e.g., with providers and consumers conforming to a standardized interface or format) operates as a software layer or computing interface that defines possible interactions for client applications to interface with other functionality, such as services. For example, a client program may make an API request to a data source or server, which responds to the request.
[00166] In some embodiments, BAT 104 may implement REST, e.g., an architecture style based on a stateless client-server communications protocol (e.g., HTTP).
For example, a client may make a call to a server via HTTP (e.g., using an API with a get, put, or post for consuming, writing, or overwriting information, respectively, and/or another request, such as head, options, connect, delete, and patch). The REST style may define a set of constraints for creating web services that provide interoperability between online systems. RESTful web services allow the requesting systems to access and manipulate textual representations of web resources by using a uniform and predefined set of stateless operations. For example, requests made to a resource's URI will elicit a response with hypertext links and/or a payload formatted in HTML, XML, JavaScript object notation
(JSON), e.g., structured data organized according to key-value pairs, simple object access protocol (SOAP), or via another protocol for exchanging structured information in the implementation of RESTful web or online services. Accordingly, APIs 244 may operate as a messenger and REST allows HTTP requests to format those messages.
[00167] In some embodiments, RESTful web services may have modifiable components to meet changing needs (e.g., even while the application is running). Other non-functional properties of such implementation may include performance gains, scalability, simplicity, visibility, portability, and reliability. REST constraints may include a client-server architecture, statelessness, cache-ability, layered system, code on demand, and a uniform interface. For example, UE 170 may implement a service usage data client, including service consumption data collection, storage, and transmission to a server over a broadband channel; and BAT 104 or another system may implement a service usage data server, including service provision either individually or in groups and client consumption data accessing.
[00168] APIs 244 may, for example, provide a reliable and consistent experience for UEs 170 to use (e.g., without making developers of such devices re-implement functionality in UIs). These APIs may be implemented in BAT 104, or next-generation TVs 103 may be modified to support the functionality of these 4 APIs. Accordingly, TV content may be instantaneously delivered over an IP network. Although API services 244 are depicted as residing within BAT 104, they may alternatively be installed in next generation TVs 103. That is, gateway devices may be, for example, used to integrate OTA ATSC 3.0 broadcasts with the traditional broadband capability of the Internet. For example, viewers may extend their broadcast viewing from next generation TVs 103 to UEs 170 (e.g., mobile phones, tablets, PCs, etc.).
[00169] In some embodiments, APIs 244 may extend functionality of an application and provide security features (e.g., blocking access).
[00170] In some embodiments, data may be targeted to users by BAT 104 using different, conditional dimensions, such as time-frequency dimensions (e.g., being able to consume content at certain times and at certain channels), geographic dimensions (e.g., being only able to consume content if in a certain region or zip code), and/or entitlement dimensions (e.g., by having a particular channel subscription or by having a certain profession or job title for consuming confidential or encrypted content). For example, such conditional logic may be implemented at UEs 170, e.g., with APIs 244 merely passing through encrypted data. In another example, the conditional logic may be implemented at the gateway (e.g., BAT 104).
[00171] In some embodiments, BAT 104 may implement a home-caster having a web service, and navigation to that service via a browser or application 246 may facilitate watching a video. However, in either example, an integration may be performed with API services 244. Applications or services on UE 170 may similarly integrate, via BAT 104, for accessing emissions 108.
[00172] In some embodiments, VOD may be implemented as a function of broadcast application 246, e.g., with links to particular VODs. In implementations of BAT 104 having an IP backchannel (e.g., via network 106 or network 294, 296), such link may obtain the VOD asset thereof. In other implementations of BAT 104 not having the IP backchannel, such link may obtain the VOD asset via cache management of CDN 242. For example, if there is a uniform resource identifier (URI), BAT 104 may first attempt to obtain that particular content via CDN 242; otherwise, if the content is not locally stored at the CDN, BAT 104 may use the Internet.
[00173] In some embodiments, BAT 104 may implement the Wi-Fi protocols (e.g., IEEE 802.11 a, b, g, n, ac, ax, and/or another standard) for local area networking with UE 170. For example, devices of network 294 or 296 may network through a wireless access point to each other as well as to wired devices and the Internet. In some embodiments, a home-caster (e.g., BAT 104 implementing the 4 APIs 244) may emit data using an HDMI cable to a home TV or may emit data via Wi-Fi or Ethernet to a standard IP router, which forwards locally via Wi-Fi to UE 170.
[00174] In some embodiments, due to the possibility of flash channels, BAT 104 may continually (e.g., every 10, 30, or 60 seconds) be performing a scan in the background. Accordingly, application(s) on UE 170 may use APIs 244, e.g., for (e.g., regularly) checking OTA services’ availability. Such application may be headless, with APIs 244, for example, being the heads. As such, a user of UE 170 may think they are only using an application on UE 170, but the application may be using APIs 244 to get the different data it needs, including the video stream itself, from various sources within the UE 170 and/or from other devices which the UE 170 can access directly or via network connection.
[00175] For example, UE 170 may collect important information, such as cached copies of content and/or related metadata, forward error correcting symbols, etc., from various other devices to supplement what UE 170 receives from a first BAT. The various other devices may be other BATs, mobile devices, personal computers, gaming consoles, and/or consumer video content systems. Further, APIs may be arranged for configured or ad hoc networks of devices to exchange requests and responses for content items, portions of
content items, missing symbols, enhancement layers, and/or associated metadata. In this way, the UE 170 may receive data from the BAT in a home with which it is affiliated, from other devices in the home, from nearby devices, and/or, in the case of a mobile UE 170, from other devices and/or networks which the mobile UE 170 encounters as it travels.
[00176] FIG. 9 illustrates an example method 900 for OTA API services delivery. Method 900 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 900 presented below are intended to be illustrative. Method 900 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 900 are illustrated in FIG. 9 and described below is not intended to be limiting. Method 900 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 900 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 900. The following operations may each be performed using a processor component the same as or similar to APIs 244 (shown in FIG. 2).
[00177] At operation 902 of method 900, a broadcast may be received. As an example, BAT 104 may obtain at least a portion of emissions 108. In some embodiments, metadata may be part of emissions 108. In these or other embodiments, metadata may be packaged in the channel information supplied by directory API 244-2.
[00178] At operation 904 of method 900, packets (e.g., forming a set of real-time or non-real-time emissions) may be reconstructed, with missing data pieces recovered via another network's rebroadcast or a local peer's DRC. As an example, CDN PoP 242 may temporarily (e.g., in cache) and/or semi-permanently (e.g., in flash or other non-volatile random-access memory (NVRAM)) store the obtained packets. In some implementations, a payment for the content therein may be determined based on the storage duration.
[00179] At operation 906 of method 900, the OTA capabilities of the BAT may be determined, via an OTA capabilities query API. As an example, an application of UE 170
may call BAT 104 and be informed in a response by API 244-1. An application running on one of the one or more UEs may coordinate data reception via a set of the APIs.
[00180] At operation 908 of method 900, a list of channels available for live viewing may be emitted, via a broadcast content directory API. As an example, the application of UE 170 may call BAT 104 and be informed in a response by API 244-2. API service 244-2 may, for example, provide to UE 170 metadata about all the services (e.g., within a signal among a plurality of signals), e.g., by initially scanning such that the UE knows to which channel it may tune.
[00181] At operation 910 of method 900, a tuner operation may be adjusted, via a BAT tuner API. As an example, the application of UE 170 may control API 244-3 of BAT 104. In some embodiments, tuner API 244-3 may facilitate a tuning to a federal communications commission (FCC) channel, e.g., a block of spectrum (e.g., at 578 MHz) rather than a single video or TV channel. For example, metadata of emissions 108 may be obtained for accessing all the virtual channels (e.g., TV channel 4) of a 6 MHz band.
[00182] At operation 912 of method 900, video data may be filtered based on a region of intended recipient UE. For example, rather than a service on the Internet that anyone could hit, a user of UE 170 may only be able to obtain Internet-like content OTA, when this device is bound to the same subnet or LAN, and thus to the same DMA or region that would otherwise receive the TV broadcast. In this example, BAT 104 may, for example, stream, via local Wi-Fi, a Chicago Cubs game to the user who is in Chicago. When the received OTA broadcast data comprises an emergency alert, BAT 104 may discard it based on the BAT being outside of a relevant region.
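A sketch of the region gating of operation 912, under the assumption that both the content item and the UE carry a region label; the labels and the exact policy are illustrative:

```python
# Hypothetical region filter for operation 912. Region codes are illustrative.

BAT_REGION = "chicago"  # the DMA/region the BAT's OTA broadcast serves

def filter_for_ue(item, ue_region):
    """Return the item if it should be delivered, else None."""
    if item.get("alert") and item["region"] != BAT_REGION:
        return None   # discard an emergency alert for another region
    if ue_region != BAT_REGION:
        return None   # UE not bound to the BAT's subnet/LAN and region
    return item

game = {"title": "Cubs game", "region": "chicago"}
assert filter_for_ue(game, "chicago") is game  # in-region viewer gets the stream
```

This mirrors the two checks in the paragraph: content reaches only devices bound to the BAT's region, and out-of-region alerts are dropped at the BAT.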
[00183] At operation 914 of method 900, an output of the filter and corresponding metadata may be emitted, via a media-content delivery API, to the UE. As an example, the application of UE 170 may obtain and display streaming content obtained OTA via API 244-4.
[00184] At operation 916 of method 900, an accessibility conflict between tuners and channels requested by the UE may be resolved. For example, the number of tuners needed for a requested channel may be greater than the number of existing tuners (e.g., which are presently supporting other, various channels). A receiver (e.g., TV 103, BAT 104) with one tuner may be operable to only watch, at a time, the services available from one of the 6 MHz ATSC 3.0 channels. In an example, this channel may provide 10 standard definition (SD) video channels, and the receiver’s user(s) (e.g., of a household) may watch all 10. With two tuners, the receiver may allow the user(s) to tune into two 6 MHz channels and thus to
any services of those two channels. If another person attempts to watch a third channel, the receiver may be configured to terminate some channel consumption for the people currently watching or may inform the other user that the channel cannot be changed.
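The tuner/channel conflict resolution of operation 916 can be sketched as follows; the reject-rather-than-preempt policy shown is just one of the two options the paragraph mentions (the other being terminating an existing viewer's channel), and all names are illustrative:

```python
# Hypothetical tuner-allocation sketch for operation 916. Each tuner serves
# one 6 MHz RF channel; all virtual channels within that band share it.

class TunerPool:
    def __init__(self, tuners):
        self.tuners = tuners
        self.active = set()  # 6 MHz RF channels currently tuned

    def request(self, rf_channel):
        """Grant a viewing request, or refuse when all tuners are busy."""
        if rf_channel in self.active:
            return True                  # share the already-tuned band
        if len(self.active) < self.tuners:
            self.active.add(rf_channel)  # claim a free tuner
            return True
        return False                     # inform user: cannot change channel

pool = TunerPool(tuners=2)
```

With two tuners, any number of viewers can watch services within two tuned 6 MHz bands, but a request for a third band is refused, matching the example in the text.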
Progressive Video Enhancement
[00185] There is a need to emit a plurality of content items, each at a different quality level, at the same time, because broadcast and/or multicast emissions provide no feedback for allowing selection of an optimal one; in the context of a very limited allocated spectrum, simulcasting of different quality levels is known to be infeasible. The quality at which media is viewed may be adjusted based upon several factors, including the capabilities of the player, the robustness of the transmission, the speed to first viewing, and the availability of the content at particular encodings. Several options are available both for how to determine what feed is presented to viewers, and for how to efficiently encode multiple streams of varying viewing quality and transmission robustness.
[00186] For example, consider two users selecting the same program to view via transmissions from a nearby BTS. The first user is in a vehicle moving at 80 miles per hour (MPH) and using a BAT with a small antenna. The second user is at home with a BAT that has a larger antenna and a 4k HD display. At first, to establish a feed quickly for both users, a 480p stream received from the BTS may be provided to both the mobile device display and the HD display of the home user. This may be due to a relatively small group of pictures (GOP) length of the 480p stream, and therefore lower time to establish a first frame or viewing.
[00187] After viewing is initially established, each BAT may seek to optimize the viewing experience of each user by seeking to establish feeds of higher viewing quality, e.g., at higher resolution or frame rate. A multichannel transmission, such as an ATSC 3.0 transmission, may include multiple streams at different resolutions or frame rates, etc., for the same content. The robustness of the streams may vary widely, e.g., based on the modulation, encoding, diversity, error correction, and other schemes applied to each stream. Further, transmission penetration qualities may differ for different users, e.g., for antennas located outdoors versus indoor locations.
[00188] In this example, the 480p stream may be the best that is available for use by the mobile device. For the HD display, the home BAT checks whether a higher quality stream is available for the same content. The home system, being stationary and having a larger antenna, may be able to take advantage of less robust transmissions from the BTS that provide information for higher resolutions or frame rates. It may be that both 720p and 1080p information streams are available, but that too much of the 1080p stream is missing and unrepairable for the 1080p information to be usable; the broadcast access terminal therefore reverts to showing the next highest quality stream available, e.g., the 720p stream.
[00189] A BTS may provide separate streams at different viewing qualities, such as a 480p stream, a 720p stream, and a 1080p stream. Further, exploiting the myriad options for encoding streams on a multichannel broadcast system and for communicating details of the necessary demodulation and decoding of each stream, streams may be divided into resolution components. For example, rather than sending separate streams at 480p, 720p, and 1080p resolution for given content, the broadcast transmission may include a first layer providing a 480p base stream, a second layer containing the differences between the base and 720p streams, and a third layer with the additive information needed to constitute a 1080p stream by combining its differences with the information in the first and second layers. This division of resolution information leads to more efficient use of the broadcast spectrum, inasmuch as each lower resolution information set need not be repeated in the higher resolution additive information feeds. The stream that is ultimately displayed may be the highest quality, e.g., at the highest resolution, highest frame rate, etc., that may be achieved by combining arriving streams of sufficient quality for a given media asset.
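The additive layering described above can be sketched as follows. This is a hypothetical illustration using flat lists of sample values already aligned to a common grid (real SHVC residuals operate on upsampled reference pictures); all function names are invented for illustration.

```python
def encode_additive_layers(base, mid, top):
    """Split three co-sited renditions into a base layer plus two delta
    layers, so lower-layer samples never need to be re-sent.
    `base`, `mid`, `top` are equal-length lists of sample values assumed
    to be upsampled onto a common grid (a simplifying assumption)."""
    delta_mid = [m - b for m, b in zip(mid, base)]  # e.g., 480p -> 720p differences
    delta_top = [t - m for t, m in zip(top, mid)]   # e.g., 720p -> 1080p differences
    return base, delta_mid, delta_top

def reconstruct(base, *deltas):
    """Add any prefix of the delta layers onto the base to recover the
    highest rendition for which all layers arrived intact."""
    out = list(base)
    for delta in deltas:
        out = [o + d for o, d in zip(out, delta)]
    return out
```

A receiver that obtained only the base and the first delta reconstructs the middle rendition; one that also obtained the second delta reconstructs the top rendition, without any lower-layer sample ever being transmitted twice.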
[00190] Similarly, progressive video enhancement may include improving aspects of a stream by incorporating elements of another stream. It is not necessary to combine the entirety of two streams. A first stream may be enhanced, for example, by incorporating information in, e.g., any coding tree unit (CTU) of a second stream.
[00191] In some embodiments, progressive video enhancer 205 may be able to validate, by the structure of the CTU, that the correct data field has been received for that CTU. That one CTU (e.g., an 8x8 pixel block or a 64x64 pixel block) may then be rendered. In an example, a 1080p stream may be missing pieces or be unrepairable such that the stream as a whole is unavailable. But even if BAT 104 obtains just one CTU, which may be very small, the BAT may still render that CTU as part of the enhancement layer, e.g., upon validating by the structure of the CTU that the correct payload was received for that CTU. As such, the objective of the enhancement layer may be to provide a high frequency rendition of detail and artifacts that would otherwise have been lost in the base layer of coding. Accordingly, any degree of enhancement layer data may be opportunistically extracted to provide a usable essence for decoding, effectively improving upon all-or-nothing approaches (e.g., of ABR, where either the whole GOP is obtained or nothing at all). The CTU is thus the unit into which an image is broken down, e.g., for motion estimation and prediction in HEVC.
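The opportunistic, per-CTU extraction described above can be sketched as follows. The CRC-per-CTU validation, byte layout, and names are illustrative assumptions, not the actual ATSC 3.0/HEVC bitstream structure.

```python
import zlib

CTU_SIZE = 4  # bytes per CTU in this toy model (real CTUs are pixel blocks)

def make_ctu(index, payload):
    """Package an enhancement-layer CTU with a checksum for validation."""
    return {"index": index, "payload": payload, "crc": zlib.crc32(payload)}

def apply_valid_ctus(base_frame, ctus):
    """Overlay every CTU that validates; keep base-layer bytes elsewhere.
    Unlike an all-or-nothing GOP, partial enhancement data is still used."""
    frame = bytearray(base_frame)
    applied = 0
    for ctu in ctus:
        if zlib.crc32(ctu["payload"]) != ctu["crc"]:
            continue  # corrupted in transit: skip it, do not discard the frame
        off = ctu["index"] * CTU_SIZE
        frame[off:off + CTU_SIZE] = ctu["payload"]
        applied += 1
    return bytes(frame), applied
```

Even when only one of many CTUs survives, the receiver keeps the base-layer pixels everywhere else and still applies that single valid block.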
[00192] In some embodiments, progressive video enhancer 205 (e.g., of FIG. 2) may support different resolutions in the base and enhancement layers, e.g., ranging from 8K UHD (7,680 x 4,320 pixels), 4K (4,096 or 3,840 x 2,160 pixels), 2K, WUXGA, 1080p (1,920 x 1,080 pixels), 720p (1,280 x 720 pixels), or 480p (640 x 480 pixels), to lower known resolutions. In these resolution designations, the p signifies progressive scan, e.g., non-interlaced, and the pixel counts are listed as oriented on a display, horizontally by vertically.
[00193] In some implementations, progressive video enhancer 205 may individually set the picture rate for each of the base and enhancement layers, e.g., to 25 FPS, 50 FPS, 60 FPS, 120 FPS, 250 FPS, or another suitable rate.
[00194] In an implementation, the bit-stream for an enhancement layer may be modulated at 256-QAM, e.g., in a first PLP, and the bit-stream for the base layer may be modulated at QPSK or 16-QAM, e.g., in another PLP. But these configurations are not intended to be limiting.
[00195] In some embodiments, BTS 102 may implement modulation such that multiple PLPs are emitted in a simulcast (e.g., in one TV spectrum), including use of the MMT protocol and SHVC video layering. For example, BTS 102 may emit to receivers in hard-to-reach areas (e.g., lower levels of a building garage), e.g., by sending many fewer bits than when targeting LOS receivers. Similarly, BTS 102 may, for example, emit to receivers traveling at 250 MPH, e.g., by sending many fewer bits to deal with the Doppler effect. In one or more of these examples, one PLP may be set with properties for reception of 480p resolution video inside buildings and/or when moving at speed, e.g., with a mobile device's relatively small antenna. In these example(s), a different PLP may be set with properties for reception of 720p resolution video at stationary TVs 103. As such, all receivers in a region of BTS 102 may obtain the robust emissions of 480p video, but only those receivers with more optimal reception characteristics may obtain the less robust emissions of 720p video.
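The per-PLP robustness trade-off above can be sketched as follows. The SNR thresholds and PLP names are hypothetical placeholders for illustration, not values from any ATSC physical-layer specification.

```python
# Hypothetical PLP table: more aggressive MODCODs need a cleaner signal.
PLPS = [
    {"name": "base-480p", "modcod": "QPSK",    "min_snr_db": 2.0},
    {"name": "enh-720p",  "modcod": "16-QAM",  "min_snr_db": 11.0},
    {"name": "enh-1080p", "modcod": "256-QAM", "min_snr_db": 22.0},
]

def receivable_layers(snr_db, plps=PLPS):
    """Return the names of the PLPs this receiver can demodulate at its
    current SNR; everyone gets the robust base, fewer get the rest."""
    return [p["name"] for p in plps if snr_db >= p["min_snr_db"]]
```

Under these assumed thresholds, a mobile receiver in a garage at 5 dB obtains only the robust base PLP, while a stationary rooftop antenna at 25 dB obtains all three.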
[00196] In some embodiments, one SHVC layer may comprise 720p, and another SHVC layer may provide 1080p more efficiently (e.g., by 20-30%) than a regular 1080p stream coming over the Internet. If emissions 108 were instead to comprise standard, independent 720p and 1080p renditions, substantially more spectrum would be needed. Bandwidth constraints are thus present, and unicast emissions under the 3rd generation partnership project (3GPP), such as GSM, UMTS, HSPA, LTE, 4G, and 5G, may further constrain distribution by requiring users to pay for their data. Accordingly, multiple renditions of content over multiple ATSC 3.0 PLPs of emissions 108 and/or IP distributions may be provided such that receiving devices are able to consume the content. Progressive video enhancer 205 may, for example, manage a distribution profile, e.g., by combining the base and enhancement layers, such that the bitrate is less than the peak utilization of the representative enhancement layers.
[00197] Example implementations using SHVC may provide support for spatial, SNR, bit depth, and color gamut scalability. Such use may provide a high-level syntax only extension to allow reuse of existing decoder components. To support SHVC, a receiver (e.g., BAT 104) may demultiplex an SHVC bitstream received from emissions 108 such that the base and enhancement layers are individually provided to decoders (e.g., HEVC). The receiver may collate the results, e.g., by providing a virtual surface or rendering surface on which to composite the base and enhancement layers. For example, with two separate flows or enhancement layers of the media essence, the BAT may combine the two separate flows into one based upon the capabilities of the receiving device, producing a derivative transcoding of that media essence for that device as needed.
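The demultiplex-then-composite flow above can be sketched as follows. The packet fields, layer ids, and the compositing step are schematic assumptions standing in for real HEVC/SHVC decoders and rendering surfaces.

```python
def demultiplex(packets):
    """Split a received bitstream into per-layer packet lists by layer id
    (layer 0 = base, 1+ = enhancement), preserving arrival order."""
    layers = {}
    for pkt in packets:
        layers.setdefault(pkt["layer_id"], []).append(pkt["payload"])
    return layers

def composite(layers, device_max_layer):
    """'Render' onto a virtual surface by collecting the payloads of all
    layers the device can use, lowest layer first (a stand-in for real
    per-layer decoding and composition)."""
    surface = []
    for layer_id in sorted(layers):
        if layer_id > device_max_layer:
            break  # device capabilities cap how many layers are combined
        surface.extend(layers[layer_id])
    return surface
```

A base-only device passes `device_max_layer=0`; a more capable device raises the cap and composites the enhancement layers on top.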
[00198] In some embodiments, progressive video enhancer 205 may combine the base and enhancement layers such that the combined emission is no larger than the 720p layer emission by itself, or exceeds it only within some degree of tolerance (e.g., 10%). Likewise, a 4K emission obtained by combining the base and enhancement layers may be smaller than a stand-alone 4K emission. This may be because the stand-alone 4K emission uses a small GOP window; by adjusting the GOP distribution interval on the enhancement layers, a high degree of codec efficacy may be leveraged. Another objective is to have a combination of media essences representing a distribution profile that is smaller than the net output would be as a single distribution profile. Due to a hard limit on bandwidth, emissions 108 may not be able to fit multiple copies of the same content instance at different quality (e.g., resolution) configurations, as is known to be performed OTT.
[00199] In some embodiments, content (e.g., video or other data) may be compressed using different algorithms, the amount of compression depending on the algorithm (e.g., if the compression is too extreme, blocky or blurry images will result). These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I frames (e.g., which are intra coded and least compressible, not requiring other video frames to decode because they are independently coded), P frames (e.g., which are predictive and use data from previous frames to decompress, being more compressible than I frames by containing motion-compensated difference information relative to previously decoded pictures), and B frames (e.g., which are bi-predictive or bidirectional and use both previous and subsequent frames for data reference to achieve the highest amount of data compression). If a transmission error occurs, the type of frame lost may determine the propagation time of the error.
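The dependency of error propagation on the lost frame's type can be sketched with a simplified model: each P frame depends on the previous anchor, each B frame only on its anchors, and one I frame starts each GOP. This is a deliberate simplification of real reference structures, for illustration only.

```python
def frames_affected_by_loss(stream, lost):
    """Indices rendered incorrectly when frame `lost` of `stream` (a string
    of 'I'/'P'/'B' frame types) is dropped. In this simplified model,
    losing an I or P frame corrupts every following frame until the next
    I frame resets the decoder; losing a B frame affects only itself."""
    affected = {lost}
    if stream[lost] in "IP":
        for i in range(lost + 1, len(stream)):
            if stream[i] == "I":
                break  # a fresh I frame stops the propagation
            affected.add(i)
    return sorted(affected)
```

This illustrates why the frequency of I frames bounds how long a transmission error can persist on screen.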
[00200] The emission frequency of the I frame may be determined differently for each of the base layer and the one or more enhancement layers.
[00201] In some embodiments, content of 480p, 720p, and 1080p may be obtained OTA via emissions 108. In these embodiments, the deltas for an enhancement layer (e.g., 4K or another resolution) may similarly be obtained OTA or via an IP-based backchannel, the latter alternative being a hybrid OTA/OTT configuration such that at least one of the enhancement layers is obtained via unicast. Such hybrid mixing may allow connected receivers to obtain a 4K SHVC layer over the Internet that builds upon or complements content being received OTA. For example, an NBC broadcast available only in Las Vegas may be performed via a base layer sent OTA (or over network 106) that is complemented with a 4K layer sent via network 106 (or OTA) such that digital rights are managed (e.g., by the 4K version only being obtainable or renderable on devices in Las Vegas).
[00202] Accordingly, rather than producing N different, independent variants of content, progressive video enhancer 205 may produce only one variant, which is the base layer, but that base layer may then be complemented by enhancement layers, which are dependent emissions. Such receiver-driven resolution and quality of experience may be based on the receiver's wireless reception capabilities, rather than on what combinations of streams are available from the distribution perspective or the encoding perspective. To be clear, OTT platforms necessarily have a different base stream for every resolution they desire to provide, because their bandwidth is not limited in the way a licensed spectrum, e.g., of 6-7 MHz, is.
[00203] As such, the higher quality stream may not be stand-alone, thus requiring the extra, delta elements that progressive video enhancer 205 adds to the lower quality stream(s) for creating the higher quality stream. For example, progressive video enhancer 205 may combine two or more enhancement layers, prior to emission, for downstream reconstitution of the 4K content. One or more of these deltas or differential data may be obtained over IP broadband at BAT 104. In some embodiments, the 4K or another enhancement layer cannot be played alone, e.g., because it needs at least the base layer underneath it.
[00204] In video coding, GOP structure specifies the order in which intra- and inter frames are arranged. The GOP is a collection of successive pictures within a coded video stream. Each coded video stream may comprise successive GOPs from which the visible frames are generated. Encountering a new GOP in a compressed video stream signifies that the decoder does not need any previous frames in order to decode the next ones, and allows fast seeking through the video. Each GOP of emissions 108 may begin (in decoding order) with an I frame, e.g., which may represent a full, compressed video frame. Afterwards, several P and B frames follow. Generally, the more I frames the video stream has, the more editable it is. However, having more I frames substantially increases the bit rate needed to code the video. The GOP structure may be referred to by two numbers, for example, M (e.g., telling the distance between two anchor frames, I or P) and N (e.g., telling the distance between two full images, I-frames).
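The M/N convention above can be sketched as follows. The construction (an I frame at position 0, a P anchor at every M-th position, B frames elsewhere) is the common reading of these parameters, used here as an illustration.

```python
def gop_pattern(m, n):
    """Build one GOP's frame-type string from M (distance between anchor
    frames, I or P) and N (distance between I frames, i.e., the GOP
    length in frames)."""
    return "".join(
        "I" if i == 0 else ("P" if i % m == 0 else "B")
        for i in range(n)
    )
```

For example, M=3 and N=12 yields `IBBPBBPBBPBB`: an anchor every 3 frames and a new full image every 12 frames.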
[00205] In some embodiments, progressive video enhancer 205 may combine frames into GOPs (e.g., during the encoding process of MODCOD component 204) to remove redundant frames and output a highly compressed data object. For example, a compression function may assign properties (e.g., the frame rate, which determines the GOP structure and GOP size) based on the properties of the source object.
[00206] In some embodiments, progressive video enhancer 205 may use multiple GOP sizes, additional frame rates, different dynamic ranges (e.g., HDR versus SDR), a different color gamut (e.g., WCG instead of international telecommunication union radiocommunication sector (ITU-R) recommendation BT.709), and/or other configurations for the different SHVC layers. In some embodiments, the enhancement layer may provide spatial (e.g., from a resolution of 480p to a rendition of 1080p resolution) and/or temporal (e.g., from a 30 FPS source to a 120 FPS source) enhancement. Rather than being a function of interpolation, they may be functions of decomposition. For example, the highest enhancement layer may be the highest resolution and input for the video decoding process, and from there it may down-sample to make the base layer output.
[00207] FIG. 10A is a block diagram illustrating a system 1000 for simultaneous transmission of an item of content in multiple formats. As in conventional OTT platforms, a broadcast system may simulcast a plurality of renditions of an item of content at different resolutions. In the example of FIG. 10A, the content is sent by a broadcast television station 1002 separately in 480p, 720p, 1080p, and 4K/UHD formats 1004, 1006, 1008, and 1010
respectively. Each of the broadcast television receivers 1020, 1022, 1024, and 1026 may select any of the renditions, and/or switch between them depending on the intended use, current reception, or other considerations.
[00208] An alternative approach is shown in the example of FIG. 10B. Rather than sending four different formats, BTS 1032 sends four different layers of information including a 480p base layer 1034, a 720p enhancement layer 1036, a 1080p enhancement layer 1038, and a 4K/UHD enhancement layer 1040. The 480p base layer 1034 of FIG. 10B may be the same as the 480p content version 1004 of FIG. 10A, or it may be constructed differently, e.g., for easier use with the other layers in constructing higher resolution renditions. In this example, BAT 1050 simply makes use of the 480p base layer to obtain a 480p rendition.
[00209] In FIG. 10B, BAT 1052 uses progressive video enhancement to derive a 1080p rendition by combining information provided in each of the 480p base layer 1034, the 720p enhancement layer 1036, and the 1080p enhancement layer 1038. That is, each layer may consist of different video compression information based, for example, on analysis of foreground vs. background, motion, color, textures, etc., such that higher resolution and/or higher quality renditions may be obtained by combining information at different levels of detail, with one or more enhancement layers providing higher levels of detail. In this example, BAT 1052 is using a base layer and two enhancement layers to achieve the desired rendition. Alternatively, although not shown in FIG. 10B, each enhancement layer may be used independently, e.g., in combination with information in the base layer, but not information in other enhancement layers.
[00210] This reliance of the higher enhancement layers on the lower enhancement layer information allows more compact transmission to achieve many of the renditions. For example, a 1080p or 4K enhancement layer may require 20 to 30 percent less bandwidth than a 1080p or 4K rendition would require on its own. Thus, a higher broadcast efficiency is achieved by having the BAT 1052 reconstruct content for display at a high resolution and/or quality by combining video enhancement layers with a base layer at a lower resolution and/or quality, where each enhancement layer provides a delta in video information between layers, while, e.g., varying the GOP size.
[00211] As compared to current OTT platforms that simulcast a plurality of renditions at different resolutions, the approach used herein may have a top layer (e.g., 1080p or 4K) that is lighter (e.g., with the 4K layer being 20 to 30 percent lighter than it would be on its own) because of the inner layers. Known approaches have receivers switching altogether from one feed to another, but the herein-disclosed approach reconstructs, when displaying, content of the higher layers by using the lower layers. For example, a base layer may be 480p, an upper layer may comprise deltas between 480p and 720p, a layer above that may comprise deltas from 720p to 1080p, and a layer above that may comprise deltas from 1080p to 4K. Deltas may thus be provided between the layers, and a combination of the lower layers may be needed for the upper layers, while varying the GOP size. In another example, the different layers may be separately provided such that a geographic fence is formed, the upper layer being, e.g., provided via an IP-based backchannel while the base and potentially other lower layer(s) are, e.g., provided via emissions 108; but the combination of layers is needed before the upper layer is operable for viewing.
[00212] GOP sizes are known to be predetermined, the GOP size being the number of frames from one I frame to the next. The GOP may, in other words, be the number of deltas emitted before setting a new baseline. An implementation of emissions 108 comprising 480p resolution content may be considered a mobile-oriented rendition, e.g., the base layer may be optimized for mobile reception. For example, there may be a PLP wherein all mobile channels are placed and that is emitted with the highest level of durability for reception at the receiver (e.g., due to a particular codec permutation set, power management decoder efficacy, and/or another attribute). In this 480p example, there may be a 15 frame GOP. That is, every quarter of a second, there would be a new baseline frame.
[00213] In some embodiments, 480p or 576p with a short (e.g., 30 frame) GOP may allow a user to switch the channel and consume video near-instantaneously (e.g., without the user observing a latency). Thus, a channel may be changed downstream, and content may be quickly (e.g., within a second) consumed. By comparison, on OTT devices (e.g., Apple TV), a switch from Hulu to Netflix may take anywhere from 4 to 8 seconds because HLS chunks are typically 2 to 4 seconds, this latency being independent of Internet speed.
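The channel-change latencies above follow directly from GOP duration. A minimal sketch, assuming exactly one I frame at the start of each GOP and a uniformly random tune-in instant:

```python
def tune_in_delay_s(gop_size_frames, fps):
    """Worst-case and average wait (in seconds) for the next I frame after
    a channel change, given the GOP size and the frame rate."""
    worst = gop_size_frames / fps
    return worst, worst / 2
```

Under these assumptions, a 30 frame GOP at 30 FPS gives a worst case of one second (average half a second), while a 15 frame GOP at 60 FPS gives a worst case of a quarter second; by contrast, OTT players buffering multiple 2 to 4 second HLS chunks plausibly land in the 4 to 8 second range cited above.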
[00214] In some embodiments, the GOP size for a base layer rendition may be 1 second, 3 seconds, 15 seconds, 30 seconds, or another suitable value. In these or other embodiments, the GOP size of enhancement layer renditions (e.g., 720p) may be based on a scene of the content therein (e.g., 30 seconds, 60 seconds, or another duration larger than that of the base layer), this upper layer comprising the baseline and deltas. The GOP size may thus be adjusted for different reasons, such as for accessing the content quickly when changing the channel. For example, an anchor at a desk may not require many deltas, e.g., only for their lips moving and eyes blinking. As such, with the base layer, changing the channel is tolerable, and the user does not otherwise have to wait up to a minute when consuming content at 720p. That is, the base layer facilitates a quick channel change; the resolution may then improve, e.g., 30 seconds later, when a full GOP at 720p arrives. GOP sizes substantially affect efficiency, which is significant in view of a limited (and potentially shrinking) 6-7 MHz TV spectrum.
[00215] Traditionally, when OTT receivers drop to a lower quality, it is because bandwidth contention or network transport contention is causing the received aggregate bandwidth to decrease. Here, though, that contention calculus may have been solved at the scheduler touchpoint of BTS 102, e.g., before emissions 108. As such, any problems of data loss or fidelity may be a function of the receiver (e.g., SNR, receiver aggregate power, receiver velocity, or another characteristic), e.g., in moving from point A to point B. There may then potentially be some data loss, e.g., due to Doppler shift. Progressive video enhancer 205 may, for example, provide the base layer emission via transport as robust as economically viable; ultimately, use of an IP backchannel may augment a receiver's reception of the ATSC 3.0 emission. Because the different PLPs may have different MODCODs, a downstream device may have no way to receive the enhancement layer for an essence if it is not able to actually receive it from the ATSC 3.0 transport. If the MODCOD on which the enhancement layer is carried is not robust enough for the device to receive it, there may be no easy way to get it back over the air; but where the device has an IP backchannel, that enhancement layer may be delivered through another transport medium available to the receiving device. Accordingly, different device-specific mechanisms for recovery, remediation, or best-effort consumption of that content are contemplated.
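The OTA-first, IP-backchannel-fallback recovery path above can be sketched as follows. The decision inputs (an SNR threshold standing in for MODCOD receivability) and all function names are hypothetical.

```python
def acquire_enhancement_layer(receiver_snr_db, layer_min_snr_db,
                              ip_backchannel, fetch_ota, fetch_ip):
    """Take the enhancement layer OTA when the receiver can demodulate the
    PLP's MODCOD; otherwise fall back to an IP backchannel if one exists;
    otherwise degrade gracefully to the base layer (return None)."""
    if receiver_snr_db >= layer_min_snr_db:
        return fetch_ota()        # normal case: enhancement received OTA
    if ip_backchannel:
        return fetch_ip()         # recovery via another transport medium
    return None                   # best-effort: present base layer only
```

The two fetch callables stand in for the actual ATSC 3.0 and broadband delivery paths, which allows the same decision logic to be reused per device.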
[00216] There may be a use case where the downstream device provides one of those measurement messages back into network 106, e.g., indicating that one or more devices has experienced some data loss, which is a representation of loss of receiver fidelity. Such reports usually would not correlate with some other, larger transmission issue or transient in the RF chain, so it may be helpful to understand where outliers may be and whether the ecosystem itself should attempt to remediate, address, or at least identify the issue. As such, model 203 of system 100 may be trained to learn ways of achieving optimal reception on all receivers.
[00217] A current problem with many network affiliation agreements for ATSC 1 is that the visual quality metrics of network originated programming are based upon a target value bitrate when using the MPEG-2 video codec, which is currently pegged at about 8 Mbit/sec for HD encoding. Over the past 20 years, codec performance for video quality encoding from MPEG-2 to H.264 to HEVC has roughly doubled at each generation, e.g., the same visual quality requires about half the previous generation's bitrate. A simple calculation would then take the original 8 Mbit/sec in MPEG-2 to 4 Mbit/sec in H.264, and finally to approximately 2 Mbit/sec for HEVC. However, this calculation does not take into account the flexibility of SHVC and variable length GOPs for the enhancement layer, which may reduce the overall consumed bandwidth for fixed (e.g., high gain) receiving devices and still provide a mobile-first presentation experience. Using a scoring mechanism or visual quality metric, such as a difference mean opinion score (DMOS), would allow for a non-bitrate metric and objective for agreement on visual carriage quality between networks, distributors, and broadcasters, without tying the agreement to a fixed bitrate and codec profile, which may soon again become irrelevant as new codecs are developed and released (e.g., VVC/H.266).
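The halving-per-generation estimate above amounts to the following arithmetic; the generation count is the only input, and the function name is illustrative.

```python
def equivalent_bitrate_mbps(mpeg2_mbps, generations_after_mpeg2):
    """Estimate the bitrate needed for the same visual quality, assuming
    each codec generation (MPEG-2 -> H.264 -> HEVC) halves the bitrate."""
    return mpeg2_mbps / (2 ** generations_after_mpeg2)
```

So the 8 Mbit/sec MPEG-2 reference maps to 4 Mbit/sec in H.264 (one generation) and roughly 2 Mbit/sec in HEVC (two generations), which is exactly why a bitrate-pegged affiliation agreement ages badly.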
[00218] In some embodiments, use of HEVC-coded video may allow such emission formats and features as spatial scalable coding (e.g., emission of a base layer with one resolution and a separate emission of an enhancement layer that, together with the base layer, provides a higher resolution result), HDR, wide color gamut, 3D, temporal sub-layering, legacy SD video, interlaced HD video, and/or progressive video (e.g., progressive scan, picture rate, or pixel aspect ratio, as defined in the A/341 standard).
[00219] In some embodiments, the base stream may be 576p at 30 FPS, e.g., for a sporting event. In these embodiments, the enhancement stream may be 720p, e.g., at 120 FPS, with HDR, wide color gamut, a larger GOP size, and/or another attribute. As such, there may be several different enhancements or dimensions that would require more bandwidth. For example, the lower resolution may have a 15 or 30 frame GOP. Such enhancements or a target bitrate may be defined or required by a third party. Alternatively or additionally, video quality may be prescribed in terms of fidelity (e.g., how closely a processed or delivered signal noticeably matches the original source or reference signal). In DMOS, the score may be on a scale (e.g., ranging from 0, very satisfied, to 4, most users dissatisfied), and the video quality may be defined subjectively, e.g., by picture quality (e.g., an index of the eyes' ability to understand the picture), audio quality (e.g., an index of the ears' ability to discern the audio), and/or lip sync (e.g., a measurement of the audio-to-video synchronization), rather than via objective metrics based on noise, such as peak SNR (PSNR) or mean squared error (MSE). The perceptual scores used in DMOS implementations may be averaged, e.g., from an audience being delivered best and worst cases of test video.
[00220] With a GOP size of 30 or 60 frames, there may be a one second GOP, e.g., a one second window of video that may be independently decoded. Thus, when, for example, changing from channel 2 to channel 4, the decoder is reset on its expectations when it goes from 2 to 4, but the decoder needs to start out somewhere. The decoder may, for example, be rudimentary (e.g., without a look-back buffer), so the decoder may only start processing the media essence when it finds an I frame. An I frame is what represents a full frame of video. It may be a compressed frame of video, but it is the reference frame that all subsequent P frames and B frames use as their anchor. B frames may be a little different because the anchor could be before or after them. For a decoder to switch from one media essence to another, the decoder may have to have an I frame at that switch.
[00221] There may be other ways of looking back so as to reconstitute that I frame, but those are far more invasive and complex, especially across different RF frequencies. For example, a secondary tuner may be needed for listening to the other channel, and a user may not always change channels to the next sequential one. For example, the user may go from channel 2 to channel 7. In that case, there may be nothing the decoder could do except wait for the next I frame emission.
[00222] In ATSC 1, the tight cadence of an I frame every 60 frames means that it will take approximately one second before the user can consume that next frame of video. A receiver may compensate by playing the audio essence early, because the audio essence is not dependent upon an I frame as a reference frame, effectively starting decoding relatively soon. However, the efficacy in video codec and compression lies somewhat in the I frame. More important for newer codecs is that the size of the GOP may allow the codec to be far more efficient. Accordingly, progressive video enhancer 205 may have to balance the trade-off, e.g., by being able to emit a video essence that has 25 to 75 percent less data utilization if the GOP is longer. For example, a GOP may be in the five second range, which may pose a challenge for receiving devices because they now potentially have to wait up to five seconds for that GOP. In this model, the intent may be to adjust not just the spatial or temporal resolution of the media essence, but also the frequency of the I frame used as an anchor on the lower emission, since the trade-off this allows is that the lower emissions may inherently have a lower bandwidth utilization.
[00223] In some embodiments, rather than a 720p emission that has an I frame every 60 frames, progressive video enhancer 205 may apply a similar cadence at a smaller frame size, e.g., a 480p emission carrying those I frames, which may save about 50% of bandwidth. The enhancement layer then knows where those I frame bases are; but because the enhancement layer may gain its efficacy of bandwidth utilization from the scene change component of the rendition, progressive video enhancer 205 may extend out the GOP for the enhancement layers so that they are not co-dependent in the way the lower, base layer is for channel change. The enhancement layers may nevertheless provide incremental enhancement of the video rendition, as data becomes available in B frames of the enhancement layer, so as to provide areas of high spatial resolution over time. Eventually, the layers will synchronize after a period of time (e.g., about 5 or 10 seconds), but this may not be significant, because the bitrate savings of extending out the GOP size in the enhancement layer is far more valuable than having a shorter GOP; the shorter GOP would constitute excess data that does not need to be re-emitted in most circumstances. What this allows is to build a grid of sorts: in the base layer, how frequently the I frames will be emitted; and then, in the series of enhancement layers, how frequently, on a fixed or an intelligent interval, the enhancement layers should emit an I frame, which essentially delineates the start of a new GOP.
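The eventual re-synchronization of the base- and enhancement-layer I frame grids can be sketched as follows, for the fixed-interval (grid-like) case; the frame counts used are illustrative.

```python
from math import lcm

def layer_sync_interval_s(base_gop_frames, enh_gop_frames, fps):
    """Seconds until a base-layer I frame and an enhancement-layer I frame
    land on the same frame again, assuming both layers emit I frames on
    fixed frame intervals starting from frame 0."""
    return lcm(base_gop_frames, enh_gop_frames) / fps
```

For example, a 15 frame base GOP combined with a 300 frame enhancement GOP at 60 FPS realigns every 5 seconds, which is consistent with the roughly 5 to 10 second convergence noted above.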
[00224] In one use case and model, consider a weather segment. For example, BTS 102 may sit on a shot of the five-day weather forecast for 15 seconds while the on-air talent is moving from the news desk to the weather map. There is thus no value in re-emitting that same image every second out of those 15 seconds if it has not materially changed. Progressive video enhancer 205 may, for example, continue emitting it on the base layer only to support a fast channel change use case. The decoder may then render the image relatively quickly, but, for an enhancement layer, utilization may be almost close to zero over time. It may take a few seconds for devices to upscale their resolution, but this is typical (e.g., with ABR content on OTT devices): the first five to ten seconds after changing to a new VOD may pass before that video increases in sharpness or other quality. A downstream receiver may mitigate this by rendering, as needed, GOPs whose enhancement layers have additional B frame data that provides relevant picture enhancement over time, eventually converging to fully-combined base and enhancement layers.
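The static-scene behavior above (near-zero enhancement-layer utilization while the picture is unchanged) can be sketched as follows. The change metric, a simple count of differing samples, is a deliberately crude stand-in for real motion-compensated residuals.

```python
def frames_needing_emission(frames, threshold=0):
    """Return the indices of frames whose enhancement data must be emitted,
    i.e., frames differing from the previously emitted frame by more than
    `threshold` samples. A held weather-map shot costs almost nothing
    after its first frame."""
    emitted, last = [], None
    for i, frame in enumerate(frames):
        changed = last is None or sum(
            1 for a, b in zip(frame, last) if a != b
        ) > threshold
        if changed:
            emitted.append(i)
        last = frame
    return emitted
```

Fifteen seconds of an unchanged forecast graphic collapses to a single enhancement emission, while the base layer keeps its short GOP purely for fast channel changes.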
[00225] In some embodiments, progressive video enhancer 205 may provide an MMT video service that allows a channel change to be performed very quickly because it has a very small GOP size (e.g., 10 or 15 frames); it may then progressively upscale to 720p with very large GOP sizes, which substantially reduces the amount of bandwidth used in the overall spectrum, making it much more efficient.
[00226] In some embodiments, emissions 108 may be performed using the MMT protocol, which may comprise timing information. For example, when emitting live content, received emissions at different devices may all be displayed at the same time. That is, MMT
may have timing information encoded in the actual encoded video. Thus, when a user is displaying two or three streams, progressive video enhancer 205 may have put them together in a layering. When, for example, watching a baseball game, the user may be obtaining a main channel OTA at BAT 104 (or TV 103) and several auxiliary channels over the Internet (e.g., using a backchannel) such that the content displayed from different camera angles is all synchronized time-wise.
[00227] In some embodiments, the disclosed progressive video enhancement may be performed for OTA content, such as emissions 108. In these or other embodiments, this enhancement may be performed for OTT content.
[00228] In some embodiments, progressive video enhancer 205 may perform interlayer prediction (ILP) for improving the coding efficiency of enhancement layers by predicting an enhancement layer picture using a base layer (or another reference layer picture).
[00229] BAT 104 or another downstream receiver may make a determination as to which matching set of capabilities it has on the receiving device. In example scenarios where a player natively may not have this capability, progressive video enhancer 205 may be able to provide a corresponding OTA NRT payload (or an IP-based delivery of a payload may be provided) to provide additional codecs or additional capabilities on that player to be able to support functionality that it does not natively support. One example of this would be hardware support on a mobile device for the specific encoding of audio. A mobile device’s application (or provided as an additional download for OTT) may be configured with the ability to add in this codec for the decoder as needed, so that way the player may expand its capabilities as needed for different media essences, which would be part of any ATSC 3.0 transmission which it might not normally or natively support.
[00230] In some embodiments, progressive video enhancer 205 may make decisions based on availability of content at the different encodings. A corresponding device’s capability may be either potentially a part of service usage reporting (e.g., per the A/333 standard) or a measurement message. Some of that metadata may be application specific. In order to facilitate that, though, a downstream receiver may need that metadata via an IP-based backchannel delivery of what the universe of receiving devices is, what the audience of actively consuming devices is, and whether the material would warrant the additional cost basis for capacity utilization. For example, an enhancement layer distribution over ATSC 3.0 forward transmission 108 may not be warranted, but making the enhancement layer available over an IP-based transport may be beneficial for the audience and viewing experience. As
such, the return-link telemetry of network 108, not just from transmission but from reception, and the capabilities of the receivers may influence how the network responds to opportunities and the flexibility with which it delivers content.
Utilization of Baseband Packets' Padding
[00231] With the existence of padding in emissions, there are opportunities to insert fragmented data into the padding of the studio-to-transmitter link (STL) transport protocol (STLTP) feed to fulfill a need for greater throughput utilization.
[00232] ATSC 3.0 transmissions may opportunistically incorporate data into padding of baseband packets (BBPs) of STLTP feeds, allowing real-time data insertions into media feeds, which may be tailored to transmission-tower specific markets and applications. The padding may be replaced with such opportunistic data and/or the padding may be used to adjust MODCOD for a scheduler of BTS 102 and for a more robust receiver profile (e.g., with the MODCOD being more robust by utilizing the available bits of the padding, effectively replacing them). Accordingly, a full potential or yield of data utilization may be achieved or managed using STLTP device 405, and device 405 may extract revenue from a data distribution service in a remnant capacity without having to be part of the broadcaster’s origination by working as a data overlay in this ecosystem.
[00233] In some embodiments, the STL may comprise a transmission link between a broadcaster’s studio location and a transmitter which carries the station’s content to be transmitted. The STL may, for example, comprise radio means (microwave) or direct digital connection, such as fiber. In ATSC 3.0, the STLTP provides an STL transmission interface between the broadcast gateway (which is, for example, depicted in FIGs. 4A, 4F, and 5A and which may be communicably coupled to the studio) and an exciter/modulator of the transmitter(s). The STLTP may, for example, encapsulate payload data using user datagram protocol (UDP), provide synchronization time data and control, and perform FEC. Some embodiments of the upstream portion of system 100 (which may include subsystem 405) may virtually implement one or more components of FIGs. 4A, 4C, 4F, and 5A (e.g., the broadcast gateway), by not needing such components to be implemented via hardware techniques.
[00234] ATSC 3.0 link-layer protocol (ALP) packets are encapsulated for transmission at a broadcast gateway in BBPs, with BBP data being sent across the STL in a real-time protocol (RTP) / UDP / Internet protocol (IP) multicast stream for each PLP. These streams may be multiplexed into a single RTP/UDP/IP multicast stream, e.g., with the same IPv4 multicast destination address (e.g., in a range from 224.0.0.0 to 239.255.255.255) and
port. Each individual PLP may convey a series of BBPs. The broadcast gateway may, for example, encapsulate ALP packets from studios into BBPs and send them to the ATSC 3.0 transmitter. Each PLP configured at such gateway may be mapped to a different port of an IP connection.
[00235] RTP/UDP/IP multicast stacks may be used in both of the ALPTP and STLTP structures, e.g., with specific UDP port numbers being assigned to particular PLP identifiers and used in both protocols. Thus, for example, an ALP packet stream designated to be carried in PLP 07 may be carried in an ALPTP stream with a UDP port value ending in 07, and the baseband packet stream derived from that ALP stream and to be carried in PLP 07 may be carried within an STLTP stream with a UDP port value also ending in 07. When multiple PLP streams are used, each may have a different tradeoff of data rate versus robustness, and data streams may be assigned to appropriate combinations (e.g., by the system manager of FIG. 4F). The inner STLTP stream may provide addressing of BBP streams to their respective PLPs through use of UDP port numbers. The outer protocol, STLTP, may provide maintenance of packet order through use of RTP header packet sequence numbering. The STLTP may also enable use of ECC to maintain reliability of stream delivery under conditions of imperfectly reliable STL networks. At the transmitter(s), there may be an input buffer for each PLP to hold BBP data until it is needed for transmission. There may also be FIFO buffers for the preamble stream and the timing and management stream.
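The PLP-to-port convention described above can be sketched as follows; the base port values here are illustrative assumptions chosen so the trailing digits encode the PLP identifier, not figures taken from the A/324 standard:

```python
# Hypothetical base ports for the two tunneling protocols; values are
# assumptions for illustration, not standardized port assignments.
ALPTP_BASE_PORT = 30000
STLTP_BASE_PORT = 31000

def port_for_plp(base_port: int, plp_id: int) -> int:
    """Derive a UDP port whose trailing digits encode the PLP identifier."""
    if not 0 <= plp_id <= 63:  # ATSC 3.0 PLP identifiers range from 0 to 63
        raise ValueError("PLP id out of range")
    return base_port + plp_id

# An ALP stream destined for PLP 07 and the BBP stream derived from it
# share the same trailing digits across both tunneling protocols:
assert port_for_plp(ALPTP_BASE_PORT, 7) % 100 == 7
assert port_for_plp(STLTP_BASE_PORT, 7) % 100 == 7
```

With this scheme, a demultiplexer can route each inner stream to its PLP buffer purely from the destination port, without inspecting packet payloads.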
[00236] The A/330 standard may define aspects of the ALP implemented by the disclosed approach. For example, the ALP of emissions 108 may deliver IP packets, link layer signaling packets, and MPEG-2 TS packets down to the RF layer and, e.g., back after reception. In this or another example, the ALP of emissions 108 may optimize a proportion of useful data in the ATSC 3.0 physical layer by means of efficient encapsulation and overhead reduction mechanisms, e.g., for IP or MPEG-2 TS transport. Such ALP may define a packet format, support header compression and link layer signaling, and provide extensible headroom for future use. The link layer may be the layer supporting traffic between the physical layer and a network layer (e.g., OSI layer 3) such that input packet types are abstracted into a single format for processing by the RF physical layer, ensuring flexibility, efficiency (e.g., via compression of redundancies in input packet headers), and future extensibility of input types.
[00237] Services provided by the ALP of system 100, for example, include packet encapsulation (e.g., of IP packets and MPEG-2 TS packets), concatenation, and segmentation
(e.g., which may be performed to use the physical layer resources efficiently when input packet sizes are particularly small or large). Using ALP, the physical layer need only process one single packet format, independent of the network layer protocol type. For example, MPEG-2 TS packets may be transformed into the payload of a generic ALP packet. An ALP packet comprises a header followed by a data payload. The header of an ALP packet may have a base header, an additional header depending on the control fields of the base header, and an optional extension header indicated by flag fields within the additional header.
[00238] In some embodiments, a BBP may comprise a header and payload. This header may comprise a base field, an optional field, and an extension field, and this payload may comprise a set of ALP packets. For example, a payload of a BBP may encapsulate one or more ALP packets and/or a portion of another ALP packet. In some implementations, a base field may have a pointer to where the next ALP packet starts. In some embodiments, the excess capacity may be determined based on (i) a previous BBP, (ii) the next BBP header, and/or (iii) a pointer value attribute of a current BBP header. In the pointer of the base field, there may also be the ability to provide a padding attribute, and the BBPs may be rolled into an inner RTP payload, which then gets multiplexed into an outer RTP payload for the STLTP. In some embodiments, BTS 102 may emit content from multiple ALP creators (e.g., which may lack visibility of the final multiplex). The creation of the STLTP and those ALP emission flows may be on a per tenant basis and that per tenant basis may not have the visibility to what the full utilization is of a limited RF spectrum. Device 405, which is discussed further herein, may thus be operated by a third party.
[00239] In some embodiments, there may be multiple opportunities for injection replacement or MODCOD reconfiguration, from a plurality of BBPs (having padding) that traverse the STLTP. As mentioned, the BBP may provide encapsulation of the ATSC 3.0 ALP, but without enough ALP packets in place to fill this BBP, padding is needed in the BBP. The BBP may have a fixed payload length, resulting in a use-it or lose-it utilization, and the way the framing from the ALP packet into the BBP works is that an initial pointer may indicate where the beginning is, and if there is no value for that pointer, such an implementation may indicate the whole BBP is padding (or conversely that there is no excess capacity). However, if the pointer is, for example, 1,024, then RT yield evaluation component 434 of processor 420 may determine an array index having a value from 0 to 1,023 such that an opportunity is fulfilled for injecting data therein. For a trailing portion of the BBP, there may be an extension field that indicates how much padding there shall be as the excess capacity. That is, the optional fields in the extension may indicate how long the actual used
capacity of that baseband packet is, thus defining what the excess capacity would be for padding.
[00240] Some implementations may have a plurality of mechanisms for extracting the excess capacity, one being (e.g., if the pointer at the BBP header is a fixed value or if there is no pointer) that the full BBP is padding, and another being that there is an optional field that indicates how much at the trailing end of the BBP would be used as padding. Through such mechanism(s), RT yield evaluation component 434 may determine the size of the payload inside of the BBP and, from that, this component may derive what the excess capacity is for opportunistic utilization.
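The two padding-detection mechanisms described above can be illustrated with a minimal sketch. The field names, the sentinel for an absent pointer, and the 8,192-byte payload size are assumptions for illustration, not the exact A/322 bit layout:

```python
from dataclasses import dataclass
from typing import Optional

# Sentinel standing in for "no pointer value present in the BBP header",
# which per the text above signals that the entire packet is padding.
ALL_PADDING = None

@dataclass
class BasebandPacketHeader:
    payload_length: int                    # fixed BBP payload size, e.g. 8192
    pointer: Optional[int]                 # offset where the next ALP packet starts
    ext_padding_len: Optional[int] = None  # optional extension field: trailing padding

def excess_capacity(hdr: BasebandPacketHeader) -> int:
    """Bytes available in one BBP for opportunistic data injection."""
    if hdr.pointer is ALL_PADDING:
        return hdr.payload_length     # mechanism 1: the whole BBP is padding
    if hdr.ext_padding_len is not None:
        return hdr.ext_padding_len    # mechanism 2: trailing bytes flagged as padding
    return 0                          # fully utilized: nothing to harvest

# A BBP whose header carries no pointer is entirely padding:
assert excess_capacity(BasebandPacketHeader(8192, ALL_PADDING)) == 8192
# A partially filled BBP exposes only its flagged trailing padding:
assert excess_capacity(BasebandPacketHeader(8192, 0, ext_padding_len=1024)) == 1024
```

A component in the role of RT yield evaluation component 434 could run such a check per BBP as it traverses the STLTP to derive the opportunistic capacity in real time.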
[00241] As depicted in FIG. 4A, the configuration manager / scheduler may be a component of the broadcast gateway, which determines which ALP emissions may be multiplexed into an STLTP feed. This FIG., combined with FIGs. 4B-4C, illustrates an example process for producing an STLTP feed. The scheduler, though, may be designed to prevent an overflow of the ATSC 3.0 stack, including the broadcast gateway and the STLTP feed. Such an overflow scenario would be catastrophic for data delivery needs.
[00242] Some embodiments of system 100 may incorporate a trans-multiplex (trans-mux) for converting the HEVC and AC4/MPEG-H video and audio flows into a QAM/IP compatible MPEG-TS for direct multiplexing of essence content into the MVPD's network. An example flow may include, in order, a TS, an encoder, a WEBDAV push (chunked) to packager/signaler, a scheduler, the STLTP, an ATSC 3.0 exciter, a tuner, a demodulator, and an A/331 receiver, at least some of which are, for example, depicted in FIGs. 4A-4C, 4F, and 5A-5B.
[00243] If the ALP stream(s) is (are) created within the same equipment that provides the broadcast gateway functionality, use of ALPTP may not be necessary. FIGs. 4A-4C show a single path carrying the ALP packet stream(s) and then the PLP packet stream(s) on its (their) way(s) from the ALP generator(s) to the transmitter(s). In reality, if there are multiple streams at any point in the system, the processing for each of the ALP packet streams and/or BBP streams may be applied separately to each of the streams destined for a different PLP. To manage characteristics of the emission and to coordinate elements of the physical layer subsystem with respect to their parameter settings and times of operation, a scheduler function may be included in the broadcast gateway. The scheduler may manage the operation of a buffer for each ALP stream, control the generation of BBPs destined for each PLP, and create the signaling data transmitted in the preamble as well as signaling data that controls creation of bootstrap signals by the transmitter(s) and the timing of their emission.
The scheduler may thus communicate with a system manager to receive instructions and with the source(s) of the ALP packets both to receive necessary information and to control the rate(s) of their data delivery.
[00244] In some embodiments, the RTP/UDP/IP traffic, for example, depicted as entering FIG. 4B (e.g., from the STLTP formatting and ECC encoding block of FIG. 4A) may comprise an outer ALP frame with inner encapsulations, a base mainframe, actual ALP DU, and the IP UDP emission. As such, STLTP extraction component 430 and/or RT yield evaluation component 434 may need to navigate several moving pieces to go five layers deep. And to arrive at all of this content, one or more of these components may flatten them down to one flow. By doing so, the one or more components may address the broadcast gateway, the modulator, the transmission, and the receiver device.
[00245] As part of this example STLTP flow, there may be potentially thousands of packets coming through per second, with all having the same destination multicast IP address and port for a touchpoint in the vicinity of the blocks of FIG. 4B. Whether there are 1, 10, 100, etc. video streams, it all may look very similar (e.g., going through these 5 levels of encapsulation). In some embodiments, the BBP may be at a third or fourth layer of encapsulation. Each microsecond of emission from BTS 102 may be represented by one BBP that is then multiplexed in conjunction with other BBPs, all coming together and going through a touchpoint depicted in FIG. 4B.
[00246] In ATSC 3.0, the network may be transmitting carriers (e.g., a source of where those microsecond emissions would be modulated on) all the time such that there are continual oscillations to which a receiver may tune. When not enough useable or actual data, or no data at all, is being emitted, padding may be sent as part of these continual oscillations. However, these continual emissions may not need to be emitted from the transmitter/antenna-tower of FIGs. 4F and 5A; rather, they may be emitted through the STL. As such, unutilized capacity in the system between touchpoints of FIG. 4B may only be necessary for facilitating the transmission of the broadcast. RT yield evaluation component 434 may analyze this in real time and identify where unutilized capacity is and then dynamically inject, via component 436, opportunistic data effectively replacing a BBP. For example, 3,752 bytes of data payload may be flattened from the STL because there may be no need to send it, sending instead how long a padding frame should be, but the exciter may then have to honor that padding frame and emit 3,752 empty bytes in that transmission to make sure that the carrier oscillations are still there. Without padding frames, BATs 104 and next generation TVs 103 would lose the signal, taking them two or three seconds to come back online.
[00247] Once the opportunity has passed, those 3,752 bytes are forever lost as unsold. As such, if not replaced or transposed, null data would otherwise be emitted by an upstream tower, e.g., BTS 102. RT yield evaluation component 434 may specifically identify signatures that are padding. There may be padding at the end of the frame or in the encapsulation of another frame behind it to keep it aligned. System 405 may understand on the microsecond level what is potentially available there and then fulfill that opportunity with NRT data delivery and/or MODCOD adjustment. For example, one week before the Super Bowl, the opportunistic data emitted into the STLTP feed may include all potential advertisers that have bought OTA time for digital preemption use spaces.
[00248] In some embodiments, RT yield evaluation component 434 may feed information back into the broadcast network (which is, for example, depicted in FIGs. 4A and 4F) such that, upon understanding the tolerances therein, MODCOD control component 438 may direct parameter adjustment (e.g., in conjunction with MODCOD unit s204 of FIG. 2) for improved configuration and provisioning decisions. As such, RT yield evaluation component 434 may perform data collection and/or aggregation, e.g., as a decisioning engine that operates in real time after the dataset was computed and obtained to understand the characteristics of emission durability, resiliency, and robustness. Yield evaluation component 434 may, for example, analyze the ALC flow in RT to see where the excess capacity is such that RT injection component 436 and/or MODCOD control component 438 can better determine its optimization, e.g., respectively via data injection and/or PLP configuration parameter adjustment. Yield evaluation component 434 may, for example, perform yield management using RT injection component 436.
[00249] In some embodiments, system 405 may thus fulfill another class of media buying where, e.g., advertisers 206 may be put in a pool with remnant inventory such that they cannot be informed if they are actually going to get a slot filled until RT yield evaluation component 434 has seen the as-runs that have been fulfilled. In these or other embodiments, content of advertisers 206 may not be preempted with another supplemental source (e.g., paying a higher price point), effectively stratifying out their audience impression base. For example, there may be a secondary or tertiary demographic inside of a national ad spot that Ford bought that would be more relevant to an 18-34-year-old demographic than the 35-54 or the 55-year-old and up demographics. As such, BAT 104 may employ demographic data to provide different targeted ads (e.g., via different PLPs having different versions for the different demographics). BAT 104 may know what the demographic is of its user(s) to pick the respective version for an individual user based on included metadata attributes (e.g., the
filter code). In this example, the younger 18-34 user may be provided the Ford Focus ad, the 35-54 divorced male may be provided the Ford Mustang, and the 55 and up demographic may be provided the crossover ad. A media buyer and an advertiser may still get the position paid for and the brand recognition, except that its impact is potentially ten times more relevant by reaching the right audience, without having the risk of being preempted by not paying what the market would pay.
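The receiver-side selection of an ad version by filter code, as in the Ford example above, might be sketched as follows; the demographic keys and creative names are hypothetical, not metadata attributes defined by the source:

```python
# Hypothetical mapping from a demographic filter code to the ad version
# carried on a corresponding PLP (labels illustrative only).
AD_VERSIONS = {
    "18-34": "ford_focus_spot",
    "35-54": "ford_mustang_spot",
    "55+":   "ford_crossover_spot",
}

def select_ad_version(household_demographic: str,
                      default: str = "ford_focus_spot") -> str:
    """Pick the ad variant whose filter code matches the viewing household,
    falling back to a default creative when no code matches."""
    return AD_VERSIONS.get(household_demographic, default)

assert select_ad_version("35-54") == "ford_mustang_spot"
assert select_ad_version("55+") == "ford_crossover_spot"
```

Because all versions arrive in the same emission, the choice happens locally at BAT 104: the media buyer pays for one position while each household renders the variant most relevant to it.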
[00250] Due to ATSC 3.0 supporting a more national perspective, some implementations of a broadcaster determining its content to be emitted may identify an excess capacity in their emission that they want to optimize. At an output of their national network touch point, they may then use the same models to provide opportunistic data injection, the forward link being then re-multiplexed into a localized transmission STLTP.
[00251] Due to BTSs 102 being implemented at different cities, regions, and/or nations, that network may be orchestrating together architecturally at different capacity availabilities of different layers. At each such location, STLTP extraction component 430 and RT yield evaluation component 434 may identify for extraction a theoretical maximum throughput value such that there are substantially no underutilized resources. The excess capacity may be utilized by a third party in the marketplace exchange process. For example, Microsoft and/or Google may deliver at least portions of an application update (e.g., for Office 365 and/or Chrome, respectively) upon resolving one or more new bugs previously identified for correction. In this example, such delivery may reach substantially all operating BATs 104 (e.g., in a market or region) in one series of emissions 108, where the update portions are collected at all of these receivers over time. Although this may take an hour or more, this is preferable to having to wait a few weeks for each of the users to undergo the update process individually.
[00252] In some embodiments, RT yield evaluation component 434 and RT injection component 436 may be able to support at least a guaranteed proof of performance by studying the heuristics of ongoing emissions to approximately identify when and/or how much excess capacity may be utilized. This approach may result in software-driven data deliveries for little or no cost. In these or other embodiments, RT yield evaluation component 434 may capture the flow of emissions at the STL, not just from an instantaneous perspective; rather, this component may respond to historical analyses from every data point in a market. For example, there may be 10,000 towers across a country, providing 10,000 exchange markets in which RT yield evaluation components 434 would have an opportunistic position to determine an extent of value that may be extracted at a point in time.
[00253] In some embodiments, component 434 may perform yield evaluation in real-time such that RT injection component 436 fulfills a commercial commitment or contract, e.g., by reemitting a set of opportunistic messages with at least a predetermined frequency. For example, with additional input from upstream systems, a more deterministic delivery may be performed.
[00254] Traditionally, 1-5% of ATSC 3.0 data emissions are made up of empty padding (e.g., overhead) packets due to temporal non-linearity of live ATSC 3.0 data emissions. In other words, the conversion of television video feed into the ATSC 3.0 format often leaves small gaps in the actual STLTP feed. An example cause for such waste may be video over- and undershoots. BTS 102 may not otherwise be robust enough to manage the temporal non-linearity of the live data emission. Therefore, there is the opportunity for opportunistic data insertion or injection, e.g., to add fragmented video or other data information into the padding packet spaces of the STLTP feed. How much content may be added depends on specific encoding of the primary content, which may be difficult to anticipate, but may be observed by probing an STLTP feed. Probe 440 (for example, depicted in FIG. 4D) may thereby detect an over-capacity in real-time and dynamically inject certain data via stratified replacement transmissions. Some implementations may include changing the MODCOD, such as an amount of forward error correction (FEC) via ECC.
[00255] Device 405 may opportunistically inject data via the STLTP by obtaining a multiplexed, IP-based stream of primary content, analyzing use of baseband padding packets with respect to transmission of this content, and replacing one or more in-transit baseband padding packets with opportunistic secondary content. Each unit of this padding data may be a baseband ATSC 3.0 packet (e.g., comprising about 1 μs of duration). In some implementations, the secondary content may be non-real-time (NRT) and thus may be of a different type from the primary content, which may be RT emissions. One example of primary data may be time-sensitive (e.g., live) content, and one example of secondary data may be non-time-sensitive (e.g., application updates). The primary and/or secondary data may be transmitted in cyclic or carousel fashion, and this data may support one or more QoS levels. The secondary and/or primary data may be transmitted in a multicast. Injector 440 may perform at scale based on use of multicast networking comprising tens, hundreds, thousands, or even millions of ATSC 3.0 receivers.
[00256] The baseband packets may comprise inner and outer ALP encapsulations. That is, an outer ALP encapsulation may encapsulate an inner ALP encapsulation, which comprises a baseband frame to be emitted via UDP/IP. The inner packet payload may
represent either a timing and management (T&M) packet, a preamble packet, or a BBP. Each of these types of packets are encompassed in the STLTP multiplex, for example, depicted in FIGs. 4A-4B. In some implementations, only the BBPs may be utilized, these types of emitted packets having actual data (and thus may have some replaceable padding therein).
The T&M and preamble packets may be metadata that define example aspects of the emission parameters for the exciter and the system timing and synchronization.
[00257] All inner layer and outer layer packets delivered over the STL using a tunneling process may depend upon an RTP/UDP/IP multicast IPv4 protocol stack. All tunneled packets may, for example, have defined port numbers within a single IP address.
The RTP protocol may be used with redefined headers. Segmentation and reassembly of large tunneled payload packets and concatenation of small tunneled payload packets within the tunnel packets may be performed using RTP features. A segment sequence number within an outer RTP header may indicate a position of a segment within a larger source packet that supports segmentation and reassembly, and a value in an outer RTP header may indicate the offset of the first inner packet segment within the payload of the associated outer packet that supports concatenation.
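The segmentation-and-reassembly behavior described above can be sketched with a toy reassembler keyed on the segment sequence number carried in the outer RTP header. The function shape and its error handling are illustrative assumptions, not the header processing defined by the standard:

```python
def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    """Rebuild a large tunneled source packet from (segment_seq_num, payload)
    pairs, which may arrive out of order over the STL."""
    ordered = sorted(segments, key=lambda s: s[0])
    # A gap in the sequence numbering means a segment was lost in transit;
    # in practice the STLTP's ECC layer would be relied on to repair it.
    first = ordered[0][0]
    expected = list(range(first, first + len(ordered)))
    if [seq for seq, _ in ordered] != expected:
        raise ValueError("missing segment; await ECC repair")
    return b"".join(payload for _, payload in ordered)

# Out-of-order arrival is corrected by the sequence numbers:
parts = [(2, b"CD"), (0, b"AB"), (1, b"EF")]
assert reassemble(parts) == b"ABEFCD"
```

Concatenation works in the opposite direction: an offset value in the outer RTP header locates the first inner packet within a payload that carries several small tunneled packets back to back.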
[00258] In some embodiments, STLTP extraction component 430 and/or RT yield evaluation component 434 may remove encapsulations of both the inner and outer encapsulations of the RTP to decode the BBP. For example, some embodiments may determine the presence of a complete padding frame or what portion of the frame is padding. In this or another example, STLTP extraction component 430 may determine where a series of padding bits terminates, and actual data payload begins. This information may, for example, indicate one or more opportunistic data injection locations. In another example, such determination may indicate excess capacity available for subsequent network optimization.
[00259] In some embodiments, system 405 may accomplish this padding repurposing improvement without having to make any changes within a broadcast network, e.g., in-flight at the STL between the broadcast gateway and the modulator or exciter of BTS 102. Device 405 may thus be installed at the STL between the broadcast gateway and the modulator/exciter (e.g., without having to touch anything internally in the network operations or configuration) to provide supplementary content (e.g., ad creatives). More particularly, device 405 may, for example, be located anywhere at or between the broadcast gateway’s STLTP formatting and ECC encoding and the transmitter’s ECC decoding and STLTP demultiplexing, as illustrated in the examples of FIGs. 4A-4C. The ECC (e.g., per the
SMPTE 2022-1 standard) may be applied to maintain reliable delivery of the STL data between sites.
[00260] In some embodiments, NRT extraction component 432 may utilize a buffer with a pointer to an NRT object (e.g., of a size on the order of 1 megabyte (MB)). For example, NRT extraction component 432 may enable delivery of such content over an extended period of time (e.g., a 24-hour period for 128 BBPs, each providing up to 8,192 bytes (B) of availability such that the ~1 MB is delivered) via network 108. In some implementations, there may be a full BBP of padding. RT injection component 436 may accordingly inject that amount of NRT into the BBP and then increment the pointer to the NRT object. The NRT object may comprise a content data delivery type that is not a live video and/or audio emission. Owners of the NRT object may thus tolerate delay (e.g., minutes, hours, etc.) in the data delivery through system 100.
[00261] As such, delivery of the NRT data may be performed at intermittent intervals depending on network configuration at BTS 102. For example, returning to the 1 MB example of NRT data, if full BBPs were utilizable then this NRT traffic may be quickly emitted, but if STLTP extraction component 430 identifies only smaller blocks of padding (e.g., a few hundred bytes) then this 1 MB of data may be emitted spread-out in padding replacement over about 8,000 BBPs. The padding may be replaced by multiplexed portions of data.
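The pacing arithmetic in the 1 MB example above can be made concrete. The 131-byte padding sliver below is a hypothetical value chosen to reproduce the roughly 8,000-BBP figure from the text:

```python
import math

# ~1 MB NRT object, as in the example above.
NRT_OBJECT_SIZE = 1_048_576

def bbps_needed(padding_per_bbp: int) -> int:
    """Number of BBPs whose padding must be harvested to carry the object."""
    return math.ceil(NRT_OBJECT_SIZE / padding_per_bbp)

# Full 8,192-byte BBPs of padding: the 128-BBP case from the text.
assert bbps_needed(8192) == 128
# Small ~131-byte slivers of trailing padding: spread over about 8,000 BBPs.
assert bbps_needed(131) == 8005
```

How long delivery actually takes then depends on how often such padding opportunities arise in the live STLTP feed, which is why the delivery is intermittent rather than scheduled.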
[00262] In some embodiments, RT injection component 436 may multiplex any number of NRT objects (e.g., video, ads, and/or other content). For example, these objects may be emitted several times repeatedly over an extended period of time (e.g., overnight) and available for a time segment (e.g., for dynamic advertising preemption and insertion using prepositioned content). And some implementations may, for example, deliver NRT objects through network 108 without impacting local configurations from every single facility or region in operation.
[00263] In some implementations, the STLTP may not have zeros for padding but rather a marker that indicates that there will be in the output of the RF emission a certain amount of flagged data transmitted from the exciter out in the RF emission. NRT data extracted by NRT extraction component 432 may be, for example, re-multiplexed into the STLTP emission by injecting the exact length of such opportunistic data, which would otherwise be null emissions from the exciter.
[00264] In some embodiments, device 405 may cause an STLTP flow coming from the broadcast gateway to be received and reflected out as a duplicate copy of the STLTP flow
in real-time. The exciter/modulator of BTS 102 may receive an output of probe 440, as, for example, depicted in FIGs. 4B and 4D, such that it receives the original, unmodified STLTP flow only in a failure mode (e.g., of STLTP extraction component 430). That is, the exciter or modulator may receive, rather than receiving both copies, only one STLTP flow. When the opportunistic data engine is engaged, the original STLTP emission may be squelched by an output of the engine, which may be the STLTP flow with any padding packets transposed into data emission packets. This transposition may be performed, e.g., on the order of microseconds. In some implementations, a buffer may be used such that a few seconds is buffered for this activity.
[00265] Referring to FIG. 4D, device or system 405 may comprise probe 440 and/or another software-based touchpoint. Probe 440 may comprise an injector and receiver. In some embodiments, device 405 may comprise a plurality of differently housed/enclosed portions (e.g., which separate receive, processing, storage, interfacing, and/or transmit portions).
[00266] Probe 440 may, for example, assist in performing padding replacement at the STLTP outer streamed ALP encapsulation/layer and/or at the inner streamed ALP encapsulation/layer. In some implementations, RT injection component 436 may inject opportunistic data at one or more STL transmitters. For example, these transmitters may be regional, e.g., with different transmitters in different regions receiving (e.g., via a backhaul link) and transmitting (e.g., via another tower) the same original content or different versions of the same content. In other implementations, the transmitted content at different regions may be different. The opportunistic data may be selected based on the region or demographics of intended recipients.
[00267] RT injection component 436 may stratify audience impression data by sending different versions of an ad for different demographics at the same time, whereupon a receiver, knowing the demographic of its user, selects one of those ads to display. For example, each of these baseband packets may carry header information (e.g., RTP header packet sequence numbering or another identifier (ID)) such that the receiver may reliably reassemble a full 30 second ad using all of the 1 µs segments corresponding to the selected ad.
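The sequence-numbered reassembly described above can be sketched as follows; the (ad ID, sequence number, payload) tuple shape is a hypothetical stand-in for the actual RTP/baseband header fields:

```python
def reassemble_ad(segments, ad_id):
    """Reassemble one ad's payload from interleaved, sequence-numbered segments.

    Each segment is a (ad_id, sequence_number, payload_bytes) tuple, a
    hypothetical stand-in for RTP-style header numbering.
    """
    selected = [s for s in segments if s[0] == ad_id]
    selected.sort(key=lambda s: s[1])  # order by sequence number
    return b"".join(payload for _, _, payload in selected)

# Segments for two demographically targeted ads arrive interleaved; the
# receiver keeps only the ad matching its user's demographic.
segments = [
    ("ad-A", 2, b"world"), ("ad-B", 1, b"spam"),
    ("ad-A", 1, b"hello "), ("ad-B", 2, b"eggs"),
]
assert reassemble_ad(segments, "ad-A") == b"hello world"
```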
[00268] A selective advertisement insertion may be based on any other content transport mechanism. For dynamic ad insertion, the objective is not to preempt a forward linear insertion (e.g., a first position in a series) per se, but to provide audience segmentation for a traditional media buyer in a digital marketplace. The correlating attribute may be from
an original ad ID. A device (e.g., probe 440 or any other device associable with the ATSC emissions) may thus correlate the initial, linear insertion with a respective ad ID and then determine a plurality of respective, derivative ad IDs for profile/demographic matching. Rather than a receiver deciding which ad to display to a particular user device of a certain profile/demographic, a traditional ad insertion may have a record that reflects a plurality of derivative insertions that may be allowed for linear preemption. For example, metadata supplied via ATSC 1 may be obtained and used for an ad decisioning process that applies the profile/demographic and that preempts only those in a certain set of available creatives, to match the suitable audience. In other words, the device may correlate that linear insertion to the additional plurality of preemptible creatives that are compatible with it for digital distributions.
[00269] The baseband padding has its genesis in the need to modulate data onto a carrier wave continuously so that the receivers may stay synchronized with the tower’s emissions, e.g., to prevent an out-of-sync state of 2-3 seconds, which would otherwise occur if the carriers were not used. The ATSC 3.0 standard does not require these padding emissions through the STL. Yield evaluation component 434 may analyze baseband packets in real-time and identify unutilized capacity (e.g., where the packet header has a padding signature) to dynamically replace padding with real data. For example, RT yield evaluation component 434 may obtain flattened BBPs of the STLTP feed to determine a flag that indicates how long the padding will be, without transmitting the padding.
[00270] RT yield evaluation component 434 may further determine a size of the padding or whether a given packet is a complete padding segment. Each null packet replacement with opportunistic data may have a dynamic size but, in some implementations, at most 8,192 bytes. In this fashion, baseband packets may each be analyzed for their availability to replace null data with NRT content. In some implementations, the to-be-replaced padding may be at an end of a packet or in the encapsulation of another packet behind it, to keep alignment. The opportunistic NRT data may be sourced differently from the main broadcast data, thereby comprising a form of data overlay. For example, the NRT data may be obtained via network 106 and/or from electronic storage 422.
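A minimal sketch of the yield evaluation, assuming a hypothetical flattened-BBP shape of (padding flag, padding length) rather than real A/322 header fields; the 8,192-byte cap mirrors the per-replacement limit mentioned above:

```python
def evaluate_rt_yield(bbps, max_replace=8192):
    """Total opportunistic bytes available across a window of baseband packets.

    Each hypothetical flattened BBP is a (has_padding, padding_length) pair;
    each null-packet replacement is capped at 8,192 bytes as described above.
    """
    return sum(min(pad_len, max_replace) for has_pad, pad_len in bbps if has_pad)

window = [(True, 1200), (False, 0), (True, 9000), (True, 300)]
assert evaluate_rt_yield(window) == 1200 + 8192 + 300  # 9,692 bytes available
```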
[00271] Evaluation component 434 may evaluate RT yield of opportunistic data via non-invasive probe 440, which may be any tool or device suitable to obtain STLTP formatted and ECC encoded data. Using analytical output from RT yield evaluation component 434, MODCOD control component 438 may determine optimal MODCOD characteristics. The adjusted MODCOD may cause improved emission durability and reception robustness of
emissions 108. The MODCOD configuration changes may be sent to an upstream device and/or a downstream device via external resources 424 or via another suitable means. In some embodiments, MODCOD control component 438 may provide a self-optimizing network configuration parameter set based upon external demand/feed and yield sources (e.g., 202, 206, and/or another source).
[00272] RT yield evaluation component 434 may determine an amount of time needed to opportunistically transmit a certain amount of NRT content. The NRT data may comprise a plurality of different versions or types of content. RT injection component 436 may deliver the determined amount of content based on its type. A number of baseband packets needed to transmit the NRT content may be dynamically determined based on an amount of each baseband packet utilizable for padding replacement. RT injection component 436 may multiplex into the STLTP feed a plurality of different versions of NRT data within a predetermined time period.
[00273] RT yield evaluation component 434 may analyze STLTP feed emissions to such a high level of quality that the NRT data may be sent not just opportunistically but also deterministically, at a quality-of-experience level that satisfies a criterion. For example, RT injection component 436 may rely on historical analysis and other heuristics of RT yield evaluation component 434. In another example, carousel transmissions may be analyzed such that a subsequent iteration emits less padding overhead and/or has improved MODCOD characteristics.
[00274] RT injection component 436 may incrementally extract, e.g., for each of the baseband packets, portions of the NRT data obtained by NRT extraction component 432.
Each of these portions may have a size that is equal to an amount of excess capacity within each baseband packet. The excess capacity may be represented by a corresponding amount of padding when the disclosed approach is not implemented. RT injection component 436 may perform this incremental extraction by incrementing a pointer into the NRT data as each portion of the NRT content is transmitted out from probe 440. More particularly, STLTP extraction component 430 may decode each of the BBPs to determine injection locations. RT injection component 436 may obtain portions of the NRT data extracted from a content source, and then multiplex the portions into the STLTP feed at the respective, determined locations, including with proper formatting and protocol characteristics. For example, probe 440 may implement STLTP electrical (e.g., for microwave or satellite transmissions) or optical (e.g., for fiber transmissions, including two connectors: one for an ingress from the
STLTP feed that includes the flag and the other for an egress back into the STLTP feed) characteristics.
[00275] To inform the receiving devices that there is an opportunistic data payload emission comprising opportunistic NRT, BTS 102 may adjust low level signaling or emit a service list table (SLT) that identifies this NRT data payload in transmission 108. As such, device 405 may emit an initial portion of the excess data capacity to provide the service level signaling identifying that there is an asynchronous layered coding (ALC) transmission that would then contain the data for opportunistic delivery. Emitting this service level signal along with its corresponding MDMS representation or MDMS metadata would then convey the ALC in the file delivery over unidirectional transport (FLUTE) representation of the MDMS and the EFDT, which would then contain what the representative objects are for reception (e.g., at BAT 104).
[00276] In some embodiments, RT injection component 436 may implement, for opportunistic delivery, an underlying transport mechanism that uses, e.g., the FLUTE standard. Inside a FLUTE session there may be a series of objects that will be delivered to an IP multicast receiver. To ensure a high confidence of delivery (e.g., including a full and accurate payload representation), FLUTE may interoperate, e.g., with a series of carousel object deliveries. For example, FLUTE may support a recovery model by providing carousel delivery in which the receiver keeps a set of the objects being recovered along with a map of the bytes that have been recovered. When a carousel retransmission occurs that has the missing bytes of a previous transmission, FLUTE may then (e.g., compliant with the A/331 standard) apply them to those missing byte ranges, if it receives them, to reconstitute the fully recovered object over a series of carousels. Under a unidirectional model, ATSC 3.0 receivers (e.g., BAT 104) may be configured well enough to provide the highest fidelity for object reception.
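The byte-range recovery map described above can be illustrated as follows; the (offset, payload) block interface is a hypothetical stand-in for real ALC/LCT packet fields:

```python
class CarouselObject:
    """Track recovery of one FLUTE object across carousel retransmissions.

    Blocks are (offset, payload) pairs against a known object length, a
    hypothetical simplification of real ALC/LCT packet fields.
    """

    def __init__(self, length):
        self.buf = bytearray(length)
        self.have = [False] * length  # per-byte recovery map

    def apply_block(self, offset, payload):
        for i, byte in enumerate(payload):
            self.buf[offset + i] = byte
            self.have[offset + i] = True

    def complete(self):
        return all(self.have)

obj = CarouselObject(10)
obj.apply_block(0, b"hello")   # first carousel pass arrives partially
assert not obj.complete()
obj.apply_block(5, b"world")   # retransmission fills the missing byte range
assert obj.complete() and bytes(obj.buf) == b"helloworld"
```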
[00277] Because the ALP transmission may be in the FLUTE, there may be a layered coding transport (LCT). Then, on top of the LCT, there may be individual blocks that are added into the IP multicast emission. The LCT may comprise ALC. ALC then may have other components, e.g., a style descriptor, which is the file transport information (FTI), additional layers of the FEC, and/or the FLUTE layer that contains what the object identifier is. Inside of it may be what the offset of the object is (e.g., the data may start at byte 128), and it also may contain what the length of the data payload is, which is a function of the opportunistic data padding from the STLTP.
[00278] By representing this NRT object in a FLUTE-comprised LCT transmission, disclosed embodiments may inform the receiver device where in that object this block of data should be placed and the length of the data that this block may contain. Even with only one data emission payload for the opportunistic insertion (e.g., with the SLX emission that occurs in a carousel model), as long as at least 49 B of padding are obtained (e.g., due to a 40 B header for the IP/UDP and about an 8 B FLUTE header), disclosed embodiments may deliver an extra byte of NRT opportunistic data, because there may be an indication in the FLUTE header in that emission of where that 1 B should be placed and how long that 1 B is, for the receiver to add it in. Thus, even if that is only one byte per second of data transmission in a day, this results in 86 kilobytes (KB) of data, which, multiplied by an example universe of ATSC 3.0 receivers (e.g., 300 million in a country), results in, for a 24 hour period, 25.9 terabytes (TB). That is, there may be about 26 TB of transport capacity in one day by extracting one byte of data delivery per second per receiver. This same capacity may be used for MODCOD adjustment.
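A quick sanity check of the aggregate-capacity arithmetic (one opportunistic byte per second, per receiver, over 24 hours, across an assumed universe of 300 million receivers):

```python
SECONDS_PER_DAY = 24 * 60 * 60             # 86,400 s
RECEIVERS = 300_000_000                    # assumed national receiver universe
bytes_per_receiver = SECONDS_PER_DAY * 1   # one opportunistic byte per second
total_tb = bytes_per_receiver * RECEIVERS / 1e12

print(f"{bytes_per_receiver / 1e3:.1f} KB per receiver per day")  # 86.4 KB
print(f"{total_tb:.1f} TB aggregate per day")                     # 25.9 TB
```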
[00279] MODCOD control component 438 may cooperate with MODCOD unit 204, MODCOD being the modulation and coding characteristics (e.g., which may be quadrature phase shift keying (QPSK), quadrature amplitude modulation (QAM), or another suitable modulation scheme). For example, a signal in which two carriers are shifted in phase by 90 degrees may be modulated and summed, and the resultant output may comprise both amplitude and phase variations. In the ATSC 3.0 physical layer, constellations resulting from QAM modulation range, by broadcaster choice, from QPSK to 4096 QAM. High spectral efficiencies may be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the channel. QPSK may, for example, comprise a digitally modulated signal comprising a two bit (4 point, or quadrature) QAM constellation, which is usually used for low bit rate, highly robust transmission. QPSK may thus be the more reliable modulation format, and QAM may have different derivatives of 16, 256, 1024 non-uniform, and 4K non-uniform. A problem may be, when changing from the more reliable QPSK to 4096 non-uniform QAM, that the bandwidth increases but the ability to receive decreases. In a forward emission 108 using QPSK, a lower bandwidth may be met but with the highest reception potential for that modulation.
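The capacity side of the MODCOD tradeoff follows directly from the constellation size: an M-point constellation carries log2(M) bits per symbol. This sketch ignores the FEC code rate, which further scales the usable fraction:

```python
import math

def bits_per_symbol(constellation_size):
    """Spectral efficiency of an M-point constellation: log2(M) bits/symbol."""
    return int(math.log2(constellation_size))

# QPSK (a 4-point QAM constellation) trades capacity for robustness;
# 4096-QAM does the reverse.
for m in (4, 16, 256, 1024, 4096):
    print(f"{m}-point constellation: {bits_per_symbol(m)} bits/symbol")

assert bits_per_symbol(4) == 2      # QPSK: the "two bit" constellation above
assert bits_per_symbol(4096) == 12  # six times the bits per symbol of QPSK
```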
[00280] The coding of RF emissions 108 may imply a level of FEC to ensure robust reception, but there may also be a loss of how much of the IQ vector may be properly inferred by the receiving device based on the SNR and thus the received power of BAT 104. The modulation necessarily reduces the bit rate or the channel capacity. Component 438 may
adjust the MODCOD (e.g., to adjust the capacity of network 108), which may result in an adjustment of the number of ATSC 3.0 receivers able to properly receive the payload of emissions 108. For example, the broadcast gateway of BTS 102 may provide small incremental channel capacity modifications by adjusting the FEC ratio. However, after the STLTP is created, the channel capacity of the configured MODCOD may become fixed, affecting real-time delivery requirements of sources 202 and/or 206.
[00281] FIG. 4E illustrates an example method 480 for injecting NRT data. Method 480 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 480 presented below are intended to be illustrative. Method 480 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 480 are illustrated in FIG. 4E and described below is not intended to be limiting. Method 480 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 480 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 480.
[00282] At operation 482 of method 480, a set of opportunistic NRT data may be obtained. Operation 482 may be performed using network 106, external resources 424, user interface device 418, and/or a processor component the same as or similar to NRT extraction component 432 (shown in FIG. 4D and described herein).
[00283] At operation 484 of method 480, a plurality of BBPs from the STLTP feed may be obtained. Operation 484 may be performed by a processor component the same as or similar to STLTP extraction component 430 (shown in FIG. 4D and described herein).
[00284] At operation 486 of method 480, an amount of excess capacity within each of the obtained BBPs may be determined. Operation 486 may be performed by a processor component the same as or similar to RT yield evaluation component 434 (shown in FIG. 4D and described herein).
[00285] At operation 488 of method 480, portions of the NRT data, each having a size equal to the respective determined amount, may be incrementally extracted for each BBP. As an example, NRT extraction component 432 may incrementally extract, for each of the BBPs, portions of the NRT data, each of the portions having a size determined based on excess capacity. Operation 488 may be performed by a processor component the same as or similar to NRT extraction component 432 (shown in FIG. 4D and described herein).
[00286] At operation 490 of method 480, null data may be replaced by multiplexing the extracted portions into the STLTP feed. This padding data would otherwise be emitted by BTS 102. Operation 490 may be performed by a processor component the same as or similar to RT injection component 436 (shown in FIG. 4D and described herein). Alternatively, MODCOD control component 438 may improve the MODCOD when replacing the null data.
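Operations 482-490 can be sketched end to end; the dict-based BBP shape (a payload plus an excess padding-byte count) and the byte-string NRT source are hypothetical simplifications, not the real STLTP structures:

```python
def method_480(nrt_data, bbps):
    """Illustrative sketch of operations 482-490; the dict BBP shape
    ('payload' bytes plus an 'excess' padding-byte count) is assumed."""
    injected = []
    pos = 0  # pointer into the NRT data, incremented per portion (op. 488)
    for bbp in bbps:                          # op. 484: obtain BBPs
        excess = bbp["excess"]                # op. 486: determine excess capacity
        portion = nrt_data[pos:pos + excess]  # op. 488: extract a sized portion
        pos += len(portion)
        # op. 490: multiplex the portion in place of trailing null data
        payload = bbp["payload"][:len(bbp["payload"]) - len(portion)] + portion
        injected.append({"payload": payload, "excess": 0})
    return injected, nrt_data[pos:]

bbps = [{"payload": b"VID\x00\x00", "excess": 2},
        {"payload": b"AUD\x00", "excess": 1}]
out, leftover = method_480(b"abc", bbps)
assert out[0]["payload"] == b"VIDab" and out[1]["payload"] == b"AUDc"
assert leftover == b""
```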
[00287] In implementations having wide swings in the required video encode bit rate for a GOP, a highly complex video essence emission may require more transport utilization, which may ultimately be mapped into the ALP packet. To accommodate a higher bit rate utilization, some embodiments may require a re-encoding of the video to meet the target objective. This is what a statistical multiplex (stat-mux) may do, e.g., by compressing the GOP. A stat-mux (e.g., which may implement a constant bit rate) may thus overcome such a bit rate overshoot and require the video encoder to go back and recompute that GOP to meet the target requirements of the multiplexer. If the multiplexer cannot meet the target requirements for the bit rate emission, it may drop the packet, because there may be a time component to the packets. For example, if a packet cannot be shifted within the time window required by the receiver, the receiver device may underflow on this packet, resulting in a discard; but an upstream scheduler indiscriminately dropping data (e.g., data that is part of an I-frame rendition) may be very negatively impactful for a downstream video decoder, because the decoder may then lack the reference frame needed for decoding.
[00288] As mentioned, the GOP size may be based on a scene. For example, if the scene has low motion, then it will usually have a much lower utilization of the channel capacity for that GOP. If the scene has high motion, then there may be compression artifacts, and it may meet or exceed the configured channel capacity. This is thus a function of time of the video encoding input: the encoder may try its best, but it may not be guaranteed to produce exactly n bits per second or n bits per frame. There may thus always be a degree of overshoot or undershoot somewhere in the transport of the media essence into the ALP and BBP. This allows scenarios where either the PLP may be overprovisioned by some percentage to accommodate these overshoots of the input, or alternatively, where the channel capacity is not fully utilized, the padding representation of the baseband packet may be extracted and a data transmission injected to provide additional utilization of that configured resource throughout the system.
[00289] Adjustment of the MODCOD may affect both the capacity and the ATSC receiver reach of emissions 108. MODCOD unit 204 may thus ultimately be controlled to provide incremental or quantized additional capacity, or to adjust incremental or quantitative reach of that forward transmission. Increasing channel capacity may reduce the universe of receiving devices because the inputs may be uncontrollable, and reducing the channel capacity may increase the universe reach, because the inputs to control the transmission characteristics may have a large capital cost associated with them.
[00290] In some embodiments, the physical layer allows BTS 102 to choose from among a wide variety of physical layer parameters for personalized broadcaster performance that may satisfy many different needs. For example, some configurations may cause a high capacity but low robustness, and others may cause a low capacity but high robustness. Some selections may support SFNs, multiple input multiple output (MIMO) channel operation, channel bonding, robustness (e.g., guard interval lengths, FEC code lengths, code rates, etc.), and other suitable configuration decisions. With such flexibility, the signaling structure used may allow the physical layer to change technologies and evolve over time, while maintaining support of other ATSC systems operating in different modes. BTS 102 may provide flexibility to choose among many different operating modes depending on desired robustness/efficiency tradeoffs. Emissions 108 may comprise OFDM modulation with a suite of LDPC FEC codes, of which there may be 2 code lengths and 12 code rates defined. There may be three basic modes of multiplexing: time, layered, and frequency, along with three subframe types: single input single output (SISO), multiple input single output (MISO), and MIMO.
[00291] The combined ALC NRT emission rate (e.g., the sum of maximum NRT payload bitrate) may never overflow the PLP channel capacity after all ROUTE emissions are framed. Assuming a non-channel bonded scenario (e.g., with a baseband header optional field indicator (OFI) of 0, rather than 5 bytes in channel bonded with OFI equal to 1 (short extension mode) and extension type being 0 for a 2 byte counter), a committed ROUTE payload length calculation may be the total summation of all non-ROUTE emissions in the PLP for the current baseband frame. The excess PLP baseband frame payload size may be used to assign NRT data transmissions across one or more NRT sessions by computing the remaining
FEC frame size (see A/322 Kpayload (Bose, Ray-Chaudhuri and Hocquenghem (BCH)) for the applicable FEC code rate), subtracting the committed ROUTE payload length, and subtracting either: (i) 1, resulting in an A/322 baseband frame with a one byte header of base field mode equals 0 and least significant byte (LSB) pointer equals 0, when no preceding fragmented ALC frame is present; or (ii) the preceding fragmented ALC packet length used at the start of the baseband frame, plus 1 byte (if the preceding ALC fragment is less than or equal to 127 bytes) or 2 bytes (if the preceding ALC fragment is greater than or equal to 128 bytes) for the proper pointer length of the start of the first ALP frame, when a preceding fragmented ALC frame is present.
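The excess-capacity computation above can be sketched as follows; the function name and flat accounting are assumptions that simplify the A/322 header rules (one base header byte when no preceding fragment, otherwise the fragment length plus a 1- or 2-byte pointer):

```python
def excess_nrt_capacity(kpayload_bch, committed_route_len,
                        preceding_alc_fragment_len=0):
    """Remaining bytes in a baseband frame available for NRT injection.

    Assumed simplification of the A/322 accounting described above: a 1-byte
    base header when no preceding fragmented ALC frame exists; otherwise the
    fragment length plus a 1-byte (fragment <= 127 B) or 2-byte (>= 128 B)
    pointer to the start of the first ALP frame.
    """
    remaining = kpayload_bch - committed_route_len
    if preceding_alc_fragment_len == 0:
        return remaining - 1                   # base field mode 0, LSB pointer 0
    pointer = 1 if preceding_alc_fragment_len <= 127 else 2
    return remaining - preceding_alc_fragment_len - pointer

assert excess_nrt_capacity(1000, 400) == 599
assert excess_nrt_capacity(1000, 400, preceding_alc_fragment_len=100) == 499
assert excess_nrt_capacity(1000, 400, preceding_alc_fragment_len=200) == 398
```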
[00292] The NRT payload(s) may, for example, be allocated along the following conditions: condition A, condition B, condition C, and/or condition D. Condition A may be allocated if the excess capacity is less than the ALP ALC header length (e.g., 7+20+8+16+sum(HET/HEL): ALP header, IP header, UDP header, ALC block header, and additional LCT extension headers), the baseband frame being only padding (as no un-fragmented ALC frame with ALP header may be injected in the remaining payload). Condition B may be allocated if the excess capacity is less than 1500 bytes but more than the lower bound of condition A; a single ALC packet, selected by a weighted round-robin strategy over the set of ALC transmission object identifiers (TOIs), may comprise an ALC data unit (DU) whose length runs to the final byte available in Kpayload(BCH) for the current baseband frame, ensuring all bytes are consumed for the baseband frame length of Kpayload(BCH) and resulting in a baseband frame header of 0, which may indicate A/322 base field mode 0 and LSB pointer 0. Condition C may be allocated if the excess capacity is greater than 1500 bytes but less than 1500 plus the ALP ALC header length plus 1; the condition B ALC frame may then be emitted, followed by an A/322 baseband frame with a one byte header of base field mode equals 0 and an LSB pointer to the start of the first ALP frame to complete the baseband frame. Here, the excess baseband frame payload capacity may not be long enough to carry at least one byte of the next ALC DU without fragmenting it; avoiding fragmentation avoids increasing the risk of consuming an allocation in the next baseband frame that may be needed for ROUTE delivery.
Condition D may be allocated when any excess baseband frame capacity is allocated across a weighted round-robin strategy over the ALC TOIs, wherein condition C is repeated as needed to ensure full utilization of excess frame capacity without causing the ALP packet to be fragmented over to the next baseband frame, avoiding increasing the risk of consuming an allocation in the next baseband frame that may be needed for ROUTE delivery.
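Conditions A-D can be summarized as a threshold classifier; the header-length constant assumes no additional LCT extension headers, and the thresholds paraphrase (rather than quote) the allocation rules above:

```python
ALP_ALC_HEADER = 7 + 20 + 8 + 16   # ALP + IP + UDP + ALC block headers (no LCT ext.)
MTU = 1500

def allocation_condition(excess):
    """Classify a baseband frame's excess capacity per conditions A-D above
    (simplified: additional LCT extension headers assumed absent)."""
    if excess < ALP_ALC_HEADER:
        return "A"   # padding only; no un-fragmented ALC frame fits
    if excess < MTU:
        return "B"   # single ALC packet sized to consume the frame
    if excess < MTU + ALP_ALC_HEADER + 1:
        return "C"   # condition-B frame plus base header and LSB pointer
    return "D"       # weighted round-robin over TOIs, repeating condition C

assert allocation_condition(30) == "A"
assert allocation_condition(800) == "B"
assert allocation_condition(1520) == "C"
assert allocation_condition(4000) == "D"
```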
[00293] In some embodiments, BTS 102 may support a mapping of ANSI/SCTE-35 signaling into the MMT signaling information, SCTE-35 being used in the cable ecosystem and MSL world to provide triggering for ad break information across an IP network. For a targeted linear ad insertion, e.g., originating from a network with a national, regional, or zonal audience, embodiments may use the SCTE-35 markers as a signaling mechanism in the transport layer (e.g., MMT) to identify when an ad break occurs and when an ad break terminates. Indeed, an ad break may be signaled using ROUTE/DASH, e.g., using an e-message box that contains a SCTE-35 message or some other signaling. But the problem is that, in that model, ROUTE/DASH using an e-message moves what is, in SCTE-35, a transport-level signal into the application layer, which is in theory a layer violation. The framework that may be provided is the initial signaling in MMT or ROUTE/DASH to utilize common SCTE-35 mechanisms of identifying where the ad break is. A placement opportunity inventory system may be used to define which portions of that ad break, e.g., the inventory, are preemptable or replaceable for a targeted digital ad insertion (DAI) impression opportunity.
[00294] An ad decisioning server may obtain from a client a video ad serving template (VAST) request and then supply back a VAST for ad decisioning based on demographics and other characteristics of the requestor to compute what the highest yield is of the ad impression. VAST is a specification released by the interactive advertising bureau (IAB), which sets a standard for communication requirements between ad servers and video players, including a data structure declared using XML.
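As noted, VAST is an XML data structure; a minimal illustrative response (with hypothetical id and title values, not a complete VAST document) can be parsed with the standard library:

```python
import xml.etree.ElementTree as ET

# Minimal illustrative VAST response; the element names follow the IAB VAST
# schema, but the id and title values here are hypothetical.
vast_xml = """<VAST version="4.0">
  <Ad id="ad-12345">
    <InLine>
      <AdTitle>Example creative</AdTitle>
    </InLine>
  </Ad>
</VAST>"""

root = ET.fromstring(vast_xml)
ad = root.find("Ad")
assert ad.get("id") == "ad-12345"
assert ad.find("InLine/AdTitle").text == "Example creative"
```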
[00295] The placement of the ad break may be a function of the decisioning, but to initiate the decisioning process, BTS 102 may be required to know where the ad breaks are and where the initial switch from linear programming to the linear insertion would occur, thus going back to the SCTE-35 model.
[00296] Some implementations may support at least five different DAI use cases (e.g., compliant with the A/337 (application event delivery) and A/334 (interactive content) standards), along with opportunistic data delivery for positioning of linear creatives across an ATSC 3.0 network without impacting the ATSC 3.0 encoding, packaging, and/or scheduling system touchpoints.
[00297] To support end-to-end SCTE-35 triggering in both ROUTE and MMT models, device 405 or another component of BTS 102 may perform an analysis of encoder output to determine correct payload placement. Some embodiments may perform multiplexing in the MPD handoff for ROUTE, which may limit an ability to emit an MMT
signaling message that represents a transport-specific signal in SCTE-35 without requiring an application-aware signaling message.
[00298] MMT protocol (MMTP) with MPU re-assembly may be a function of the decoder, not the encoder to scheduler interface.
[00299] Herein contemplated is a signaling format that may provide a common format and adoption of multiple fulfillment use cases aligning with each distribution mechanism, to reach the right audience (e.g., with the right content on any device). For example, embodiments of BTS 102 may rely on in-band signaling and ISOBMFF box definitions, MMT signaling information, and alignment into a baseline A/344 runtime application such that a common message identifier is adopted. That is, one message may, for example, be used to support multiple technology, connectivity, and interactivity use cases.
[00300] Some implementations may have a DASH-specific mechanism based on XML (e.g., XLink). This format for ad break triggering may support segmented content transmission, e.g., VOD content of DASH. In the XLink solution, the ad break signaling indicator may be moved from the traditional transport layer into a more complex and constrained application layer. For example, live linear programming delivered via MMT and a corresponding A/344 runtime app may switch video playback to a locally-decisioned and pre-positioned creative from the SCTE-35 trigger. That is, multiple descriptors, along with multiple triggers in the same ad break (using tiers), may be used to trigger selection of the proper ad copy and ad replacement opportunities.
[00301] For fully addressable VAST digital ad insertion and for devices that are Internet connected, the SCTE-35 trigger may be multiplexed early as a pre-roll using a multiplex and a PTS execution time, as needed, and may contain a URL resource to a VAST ad decision server for decisioning and creative download. To provide a linear-like experience, the multiplex time to prepare for the ad break may be long enough for ad decisioning and download/streaming of the replaced content. By increasing the pre-roll length, a broadcast- quality experience for addressable digital ad insertion may be provided without negatively impacting broadcast latency. Feedback information to the ad network over the Internet backchannel may give broadcasters insight into any ads that were not run because of network buffering or delays.
[00302] Additionally, impression beacons for non-replaced (e.g., rolled-thru) network or barter insertions may be reported by Internet-connected devices with proper metadata provided via the SCTE-35 message. By injecting an SCTE-35 time-signal command at the linear time, the ad may begin and finish with ad beacons (e.g., containing ad-ID), and a
subset of audience impression data for traditional eGRP/share-of-voice linear buyers may be developed for media buyers, enabling demographic data enrichment.
[00303] The IAB standard supports companion ads, overlays, and snipes/bugs/L- bar/J-bar with linear content. By providing a VAST resource with a non-linear creative overlay (no video emission), a low overhead solution may be delivered to provide enhanced graphic or textual information along with a traditional linear creative spot placement.
[00304] Some embodiments of BTS 102 may support fully addressable VAST digital ad insertion with a pre-positioned creative copy in partnership with an ad network. In this model, only the ad decisioning call and response may be required to be fulfilled by the Internet backchannel, and any other linear content would be pre-positioned via NRT/carousel. By providing a unified identifier of the creative copy in the VAST payload response (e.g., ad-ID), along with a traditional URL, devices may use a non-Internet delivered creative, reducing the time and latency for effective digital ad insertion at scale. Some embodiments of BTS 102 may reduce the per-impression transmission cost of DAI, increasing value with increased insight for advertisers, and enhancing the advertising experience by delivering the right content, on any device, to the right audience.
[00305] With that SCTE-35 message available for, e.g., linear MMT, disclosed embodiments may be able to map the signaling information message, which is at the transport level, into an application layer event, while maintaining the original spirit and integrity of the transport layer signal of the SCTE-35 message to define when an ad break starts.
[00306] An application (e.g., compliant with the A/344 standard) running at an ATSC 3.0 receiver may be like an HTML5 web app, such that the media essence plays from the broadcast via a receiver media player (RMP), which renders the MMT or ROUTE/DASH emission on the receiver device. Then, upon receiving an event message (e.g., compliant with the A/337 standard) that comprises the SCTE-35 payload, the application may process that SCTE-35 message. This SCTE-35 message may, for example, be obtained at the exact frame at which the ad break starts, such that the decisioning process has to occur instantaneously, because otherwise it would miss the first opportunity window for an impression.
[00307] There may also be provisos in the SCTE-35 message so that the message has a PTS execute time, which tells the receiver of the SCTE-35 message when the ad break will start, and it may occur via a pre-roll of the SCTE-35 message (e.g., three or four seconds beforehand), to allow BAT 104 (e.g., a splicer) to prepare for an ad break. This example
preparation process may allow for an Internet-connected device to initiate a VAST ad decisioning request, which would then execute an HTTP call over such back channel for a connected device. This back channel request may be the VAST payload request to an ad decisioning server, and the ad decisioning server may then correlate a number of different variables, e.g., what channel the receiver is watching, potentially if it has programmatic metadata of what the programming is, and/or what the ad break number is (e.g., for news programming there is a sequence of blocks), the first ad break having a different value to advertisers.
[00308] Additionally, in a connected model that maps into a traditional traffic and automation system, some embodiments may know from the inventory how that ad break is segmented out (e.g., four 30-second spots for a two-minute break, eight 15-second spots, or some combination). Subsequent decisioning may be based on what the traffic manager has made available for inventory and what the sellers are able to sell into a market. There may be demand for a 60-second spot for longer insertions (e.g., an advertisement for health-related items, such as during COVID-19), or for a smaller insertion that is maybe just a five-second tag (e.g., a local automobile dealer having a limited budget but still wanting to get their name out for impression and audience recognition of their brand). Tradeoffs may include brand reinforcement versus messaging.
[00309] The problem is that in the digital ecosystem, especially with tools like VFP or Spot-X, those sources of programmatic data, such as the context of what a viewer is watching, which ad break they are in, what the content is, what the audience demographics are, what the content genre is, and what the contextual relevancy is, may not be components of the digital linear ad ecosystem for video inventory. For disconnected systems, some embodiments may, for a digital ad insertion, make better informed decisions about the context of the content that the user is watching for contextual relevancy, especially in live systems.
[00310] For VOD content, metadata may be extracted and then used in a subsequent ad decisioning process, but for live content (e.g., where the ad sellers have some context of what programming to buy into, and potentially in priority placements), the ad sellers may know that a certain newscast (e.g., the B-block of a 5 PM newscast) is going to open up with a healthcare segment. It may thus be beneficial, when that block is over, to provide a complementary advertisement opportunity to a local seller of healthcare related services to dovetail on the messaging experience.
[00311] There may be a risk of contextual relevancy being lost when combining live linear programming with an ad decisioning process that is usually a disconnected or digital system. Some disclosed embodiments may thus need to facilitate additional sources of contextual metadata for the digital ad decisioning process, which starts when the application (e.g., compliant with the A/344 standard) receives an application event delivery (e.g., compliant with the A/337 standard) as a pre-roll SCTE-35 message. This application may, for example, provide additional metadata in the ad decisioning calls to the ad decisioning server to provide context as to what content the viewer is watching and usually to glue it to the entertainment identifier registry (EIDR), which is programming information, or to other broadcaster specific metadata.
[00312] That broadcaster specific metadata in this case may be injected into the SCTE-35 payload that originates from the broadcaster. As such, information may be added via a series of descriptors that are allowed to be added to the SCTE-35 message.
So, as a base use case, the only object that may be in the SCTE-35 message is a flag called out of network, the out of network value being either true or false. When out of network is true and the SCTE-35 message is presented to a splicing device or to an ATSC 3.0 interactive application, the application may then be aware that it has switched from linear programming into an ad break, and from there, the interactive content application may make a decision of how it should handle that message.
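A minimal dispatch for this base use case might look as follows (Python sketch; the event fields are assumed to have already been parsed out of the SCTE-35 payload, and the names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SpliceEvent:
    out_of_network: bool  # True: entering an ad break; False: returning

class InteractiveApp:
    """Stand-in for an A/344-style interactive content application."""
    def __init__(self):
        self.state = "linear"

    def handle_splice(self, event: SpliceEvent):
        if event.out_of_network:
            # Switch away from linear programming into the ad break,
            # e.g., to local or tertiary replacement content.
            self.state = "ad_break"
        else:
            # Splice back to the primary programming.
            self.state = "linear"
```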
[00313] In some embodiments, there may be additional attributes inside of the SCTE-35 message. One is the PTX execute time, which may tell the interactive application how much time it has to be able to make that decision. In the out of network equals true model, the time may be zero; the application may have to act essentially from the next video frame. If the interactive content application is going to insert a linear insertion for content replacement, it may have to do it with a content or resource that is either local to the device or in a tertiary ATSC 3.0 media essence flow.
[00314] While the forward transmission may have, for live linear programming, one MMT emission, in the ad break there may be a plurality of linear insertions. Where the primary MMT emission is, for instance, KOMO 4's newscast or WJLA's channel 7 newscast, it may be MMT emission 7, and in the ad break the broadcaster may provide a plurality of additional media flows that contain what the targeted ad replacement payload should be as a live linear story. So the application, in one use case, could switch from the WJLA channel 7 content to behind-the-scenes content when the splice occurs (e.g., when an SCTE-35 splice message is present with out of network equals true), making a decision from context information that is local to the device, e.g., a user's zip code or some other demographic attribute that the broadcaster wants to reach.
[00315] In that SCTE-35 message, by providing additional segmentation descriptors, the message may inform the ATSC 3.0 receiver device in a non-connected (e.g., non-Internet connected) model of what channel it should switch to for the linear replacement. This may thus be a model for a disconnected device that may support what would be traditional zonal targeting, or possibly demographic targeting, where a broadcaster provides a plurality of alternative renditions of that linear ad insertion break. The whole linear ad break may be provided by a second, third, or fourth stream, with the decisioning characteristics provided in the content of the SCTE-35 splice command by additional descriptors, in which case the application may then apply those descriptors to see if there is a matching set of characteristics. If those characteristics match, then the device may make a switch from the receiver media player, which is playing the base, traditional linear insertion provided by the broadcaster, to a tertiary alternative content replacement, or secondary content, for that audience viewer.
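One way to picture the disconnected matching step (a Python sketch; the descriptor and profile schemas are hypothetical simplifications, not taken from the SCTE-35 descriptor syntax):

```python
def select_alternative_stream(descriptors, local_profile):
    """Return the stream id of the first alternative ad-break rendition
    whose targeting criteria all match attributes stored locally on the
    device (e.g., zip code). None means: stay on the base linear
    insertion played by the receiver media player."""
    for d in descriptors:
        criteria = d.get("criteria", {})
        if all(local_profile.get(k) == v for k, v in criteria.items()):
            return d["stream_id"]
    return None
```

For example, descriptors such as `{"stream_id": "alt-2", "criteria": {"zip": "98101"}}` carried in the splice command could steer zip-code-targeted viewers to an alternative emission.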
[00316] This may be a mechanism that is incremental in its capabilities to provide an alternative linear insertion, but it may be something that would most likely span the entirety of the SCTE-35 out of network equals true ad break. When the ad break completes, there may be another SCTE-35 message, with out of network equals false, which would then signal the receiving device and application to switch back to the original primary content emission. This may thus be one model where the device may be disconnected but still has the ability to run the interactive content application, because the interactive content application may be delivered as an NRT payload in the ATSC 3.0 forward transmission, and all of these pieces may connect together. There may also be additional models, which may provide one or two different use cases for an IP-connected device.
[00317] One such additional model may be that the SCTE-35 message is delivered with the PTX execute in a pre-roll window. In that pre-roll window the device may make a call out to an ad decisioning server which, with supplemental data from the SCTE-35 trigger providing contextually relevant metadata and programming information, may return back one or more linear creatives to fill one or more impression opportunities in that ad break. It may selectively determine the VAST response using a video ad placement map (e.g., video multiple ads playlist (VMAP)), or there may be another mechanism in VAST 3.0 or VAST 4.0 (e.g., ad podding). Ad podding may define, for the receiver, a plurality of ads that may be placed in this impression avail window. The problem is that, in traditional digital distribution, that VAST ad decisioning request and response call flow may have some implicit latency in it. It may have to make a network request through the IP back channel. There may be some time that the ad decisioning server will require to compute what, out of the universe of demand, will meet the available supply and any other categorical restrictions, competitive exclusions, frequency capping, or whatever else, and fulfillment to any third party demand sources to maximize the yield opportunity of that inventory. It may take a degree of latency to be able to fulfill that.
[00318] A challenge in implementing the A/344 standard for interactive content may be to fulfill a dynamic ad insertion. Another challenge may be, for that ad decisioning response, the inclusion of resources for the linear creatives that need to be played out, and this may commonly be why, when watching ad supported content, there is usually a three to four second buffering window while the ad decisioning is occurring, after which the receiving device may buffer the ad creative content. So this model may work well for asynchronous content or VOD consumption experiences, but it may be problematic for live linear experiences because once that SCTE-35 PTX execute or out of network equals true flag is perceived by the device, the next frame of video may be the linear insertion.
[00319] Embodiments that pre-roll the SCTE-35 trigger (e.g., with a four second pre-roll, where the combination of the time for the ad decisioning call plus the time for the ad creative buffering takes more than that four seconds) may not be able to perform that insertion, because the response may come back late on the splice, resulting in the underlying linear insertion playing instead. Alternatively, the creative play out may start late, which may then potentially return late to the programming. That latency window may be problematic because the receiver may have to make a series of network calls: one for ad decisioning, which has one set of latency characteristics, and then a second call, which may be for the linear creatives, which has a different and usually higher degree of latency for delivery of those creative resources.
[00320] The ad decisioning payload may usually be a few hundred KB. The ad creatives are usually on the order of MB of data, and so that time has to map into the window of time for the pre-roll message before the splice execute occurs; if an embodiment does not make it, the embodiment cannot fulfill that impression for the receiver.
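The temporal constraint can be stated as simple arithmetic (a Python sketch; the payload sizes and bandwidth below are round illustrative numbers, not measured figures):

```python
def download_time_s(size_bytes, bandwidth_bps):
    """Time to fetch a payload over the IP back channel."""
    return size_bytes * 8 / bandwidth_bps

def can_fulfill_impression(pre_roll_s, decision_latency_s,
                           creative_bytes, bandwidth_bps):
    """Both the ad decisioning round trip and the creative delivery must
    fit inside the SCTE-35 pre-roll window, or the splice is missed."""
    total = decision_latency_s + download_time_s(creative_bytes, bandwidth_bps)
    return total <= pre_roll_s

# e.g., a 5 MB creative on a 10 Mbit/s back channel needs 4 s by itself,
# so with any decisioning latency at all, a 4 s pre-roll is already missed.
```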
[00321] Regarding NRT and opportunistic data delivery, there may be a model here where those functional components may be combined to solve that second latency attribute.
The ad decisioning characteristic may be solved locally by disclosed embodiments, but there may be a hybrid model which allows for the IP back channel to execute the ad decisioning request in combination with NRT or opportunistic data delivery, or even a provisioned network NRT delivery, where an ad network could pre-position its linear creatives throughout the NRT emission for caching on the local device. Fulfillment of the linear ad insertion may thus be only a function of the ad decisioning latency, rather than a function of the ad decisioning latency plus the ad creative delivery to the receiving device. This may require an integration with a first party ad network where those linear creatives are distributed out through the ATSC 3.0 network in an NRT emission, but that NRT emission, when cached, may then provide the creative resources that would be part of the broadcaster's flight, or what their potential universe of advertisements is, such that, when the splice then occurs, the ad decisioning request may be made.
[00322] The ad decisioning response may include a series of linear creatives, with the first linear creative being a representation of, or reference to, the locally cached NRT emission of that linear creative, thus allowing for the device to, in the interactive content model, use the application media player (AMP) to play back that preset locally cached linear creative, fulfill the ad impression request, and/or optionally provide tracking and metadata back to the ad decisioning server. This may thus provide a model for a preemptible linear insertion that may map into a hybrid ATSC 3.0 data delivery model with a true one-to-one digital ad decisioning process and ecosystem that may map into a live linear transport level set of temporal requirements to fulfill that opportunistic ad impression opportunity.
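The resolution step against the local NRT cache might be sketched as follows (Python; the cache entry schema, field names, and validity checks are illustrative assumptions):

```python
import time

def resolve_playable_creatives(vast_creatives, nrt_cache, now=None):
    """Map an ad decisioning response onto creatives already
    pre-positioned by the NRT emission. Entries that are missing,
    expired, or failed an integrity check are skipped, so those
    individual impressions would simply go unfulfilled."""
    now = time.time() if now is None else now
    playable = []
    for creative in vast_creatives:
        entry = nrt_cache.get(creative["creative_id"])
        if entry and entry["expires_at"] > now and entry["checksum_ok"]:
            playable.append(entry["path"])  # AMP can play this, no download
    return playable
```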
[00323] Known approaches may include addressable content replacement (ACR), by fingerprinting the broadcast emission and then adding into the broadcast transmission a 10-second delay. This 10-second delay may allow a long enough temporal window for a system at the broadcaster's side, having identified from the traffic systems that the next ad break was coming, to send the fingerprint of what that video frame looks like ahead to the device, so that the device may make the ad decisioning request and response, download the creative, and at the proper time use the fingerprint of that video image to determine where the splice should occur. But, as mentioned, this may add a degree of latency and a degree of system complexity that pushes the receiver into not just processing messages but moving into, e.g., a layer violation of having to process the media essence (e.g., to execute that impression opportunity). Such a system may require a lot of computation power. In the disclosed model, by contrast, embodiments may merely process a series of data payloads that provide the ad break splice information and any additional segmentation descriptors or broadcaster originated metadata needed for the ad decisioning request. That ad decisioning request may either be executed locally on the device, to match a set of demographics or profiles that are stored locally on the device and execute the content replacement locally; or it may provide a hybrid model of a traditional digital ad insertion with local decisioning, where the linear creatives are served from the IP back channel network; or a hybrid model which allows for the ad decisioning request to occur over the network and the fulfillment of that ad placement opportunity to use locally cached NRT linear creatives for content replacement.
[00324] If the linear creative is not available on the local device, if it is corrupt, or if it is expired, then the interactive content application may not fulfill that individual ad impression but may fulfill other ad impressions that were returned back from the ad decisioning response in an ad hoc model. Ultimately, the disclosed approach may have to be able to solve this problem with multiple different capabilities, not just for the immediate contextual relevancy of the ad placement opportunity, but also for different systems of record that would have the ad splice or the ad demand sources to match up with the corresponding supply opportunity for an ad insertion placement.
Collaborative Object Delivery and Recovery
[00325] For dense populations of receivers, one use case may require a wireless base station every 20 to 30 feet to support the demand in traditional point to point communication systems and models. But with ATSC 3.0, a laptop with a transmitter could utilize a software defined radio and have the ability to deliver that content to mostly everyone in a venue. In this example, whichever entity is transmitting may only have to pay for one opportunistic position rather than thousands or millions.
[00326] Because emissions may be lossy, there is a need for downstream devices to recover lacunae (gaps in the received data). And because devices may lack a line of sight to a BTS or better reception characteristics (e.g., sufficient power), there is a further need for the devices to cooperate with respect to complete data delivery.
[00327] ATSC 3.0 BATs or receivers (BATs) may acquire missing data objects to compensate for failures in reception of data transmitted in broadcast streams via collaboration with other ATSC 3.0 BATs or other receiving entities. Such collaboration may be directed by a transmitter (BTS), signaling the re-emission of the missing data, where re-emission may be carried out via communications on a dedicated return channel (DRC) or any other network connection.
[00328] ATSC 3.0 standard A/323 provides for a DRC through which a receiver may be able to communicate back to a transmitter or forward to other receivers. The DRC may enable technologies that rely on interactivity among receivers and between receivers and transmitters. In an aspect, DRC-enabled receivers may collaborate with each other through opportunistic communications when they are in proximity to each other.
[00329] To facilitate interactivity, the A/323 standard, in addition to a downlink broadcast channel, provides for a DRC that operates in a dedicated frequency, utilizing a frequency division duplexing modulation mode. The system architectures of a DRC-enabled transmitter (e.g., a BTS) and a DRC-enabled receiver (e.g., a BAT) are shown in FIGS. 5A and 5B, respectively.
[00330] As shown in FIG. 5A, BTS system 102 may transmit downlink data (broadcast service data) in a first frequency, f0, and may receive uplink data, via the DRC, in a second frequency, f1. On the other end, as shown in FIG. 5B, BAT system 104 may receive the downlink data in frequency f0 and may transmit the uplink data in frequency f1. Thus, the broadcast gateway (the ATSC 3.0 downlink gateway of FIG. 5A) may encapsulate ALP packets into BBPs and may then send them to the transmitter to be buffered in a PLP associated with a certain IP port connection, while the DRC uplink data may be received (at the DRC uplink receiver of FIG. 5A) in ALP packets through a PLP-R that is associated with another IP port connection. Upon reception of the transmitted downlink data from the BTS system, the BAT system may process a received PLP (at the PLP processing unit of FIG. 5B) to extract from the broadcast service data DRC-related controls, such as synchronization and signaling data. These controls are sent to the DRC uplink gateway to regulate the transmission of uplink data (e.g., re-emissions) out of the BAT.
[00331] A grid of DRC-enabled receivers may operate in a collaborative and opportunistic manner to assist each other in recovery and relay of data. For example, the transmission range of downlink and uplink communications between BTS 102 and BAT 104 may be significantly reduced when a line of sight (LOS) is obstructed by urban structures. In such a case, a BAT lacking an LOS to a BTS may transmit data to another BAT having an LOS, for the latter to relay that data to the BTS. Conversely, a BTS lacking an LOS to a BAT may transmit data to that BAT via another BAT with which there is an LOS. To facilitate such collaborative data distributions, transmitters and receivers may be equipped with a real time localization system (RTLS) that may allow measurements of relative locations or proximity among the transmitters and receivers on the grid.
[00332] Emissions 108 may comprise a plurality of content portions determined based on a peer-to-peer file sharing protocol that is decentralized. When a transmitter is broadcasting such a service (e.g., TV programs, NRT data, or other data), a first receiver in the grid may request missing packets from a second receiver in its proximity. Alternatively, the first receiver may request the second receiver in its proximity to send a message on its behalf to the transmitter, requesting a repeat of transmission to the first receiver. Such a procedure may be useful when the first receiver lacks the power to communicate directly with the transmitter.
[00333] A first receiver that is not DRC-enabled may communicate and may transmit data to a second receiver that is DRC-enabled, using any available local communication links. The second DRC-enabled receiver may then send the received data to the transmitter on behalf of the first receiver.
[00334] BTS 102 may, for example, send out, in its broadcast stream's PLPs, signals directing the re-emission of certain data objects of a service component. For example, a BTS may request BATs within range to re-emit critical data objects that may require transmission in high fidelity. In response, these BATs may re-emit such critical data objects to other BATs in their locality that may not properly receive these data objects. For example, a BTS may identify specific data objects (e.g., audio segments or video key frames) that may require transmission in high resiliency, as opposed to other video segments where a less reliable transmission may yield unnoticeable degradation in perceived quality. Furthermore, a BTS may use modulation modes that allow for a robust reception only by a certain type of BAT (e.g., stationary or mobile device) and/or only by BATs at a certain locality (e.g., outdoor or indoor). In such a case, BATs within the robust reception range, in response to a re-emission signal from the BTS, may re-emit the required data objects to other BATs. The other BATs, in turn, may listen for re-emissions and may be able to acquire and recover damaged or missing data objects from the re-emitted data, instead of having to acquire such data directly from the BTS.
[00335] In an aspect, BTS 102 may signal to listening BATs 104 about the opportunity of receiving missing data from BATs within its robust reception range and may send signals that facilitate synchronization of the listening and the re-emission among BATs. Furthermore, a BTS may initiate a signaling in response to a message received from a BAT that is in need of data recovery. In another aspect, a BTS may prioritize the re-emission of data objects, for example, with the effect that data objects with higher priority will be re-emitted using transmission parameters that allow for a more reliable transmission.
[00336] In some embodiments, BTS 102 (e.g., its scheduler) may coordinate with a plurality of BATs 104 for one or more of such BATs to transmit recovery data to its peers, in a spatial proximity and in a licensed portion of a spectrum otherwise utilized by the BTS.
This BTS may be coordinating not just the coded OFDM (COFDM) channel capacity but also the reservation or availability in the ATSC 3.0 forward spectrum. That is, BATs 104 may be configured by a scheduler of the BTS 102 to not interfere with or prevent licensees from utilizing their spectrum. For example, BTS 102 may open up portions of its licensed spectrum for BATs 104 to then re-emit back into it, e.g., in a scheduled window using a software defined radio chip, without out-of-band ad hoc networking or re-encapsulation of the IP multicast emission.
[00337] OFDM is a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation method that copes well with severe channel conditions, including attenuation, interference, and multipath fading. A large number of closely spaced orthogonal sub-carrier signals are used to carry data on several parallel data streams or channels. Each sub-carrier is modulated with a conventional modulation scheme (such as QAM or PSK) at a low symbol rate, maintaining total data rates similar to conventional single-carrier modulation schemes in the same bandwidth. Disclosed embodiments may use OFDM to facilitate SFNs, e.g., where several adjacent transmitters send the same signal simultaneously at the same frequency for constructive combination of signals from multiple, distant transmitters.
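The aggregate-rate property described above can be illustrated with back-of-the-envelope arithmetic (Python; the figures are round illustrative numbers, not ATSC 3.0 MODCOD parameters):

```python
def ofdm_data_rate_bps(num_subcarriers, bits_per_symbol,
                       symbol_rate_hz, code_rate=1.0):
    """Total rate of many parallel, low-symbol-rate sub-carriers,
    optionally reduced by channel coding overhead."""
    return num_subcarriers * bits_per_symbol * symbol_rate_hz * code_rate

# 1000 sub-carriers of 64-QAM (6 bits/symbol) at a 1 kHz symbol rate
# carry 6 Mbit/s before coding, comparable to a single carrier at a
# much higher (and less multipath-tolerant) symbol rate.
```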
[00338] In some embodiments, there may be provisions for BATs 104 to be aware of recovery data (e.g., in the middleware stack of the service discovery and signaling) to determine whether or not it should actually facilitate processing of that data. It would be an opportunistic determination on the BAT as to whether it needs a recovery opportunity, or whether it may provide a recovery opportunity.
[00339] For supporting the DRC window of retransmission opportunities, a forward 6 MHz band may be shrunk down to 5 or even 4 MHz, as a function of the scheduler. This RF transmission bandwidth may be one adjustable parameter. Other adjustable characteristics may include the MODCOD, capacity, and durability. For example, if a receiving device has a high enough SNR resolution, the receiving device may potentially receive data processed via layer division multiplexing (LDM). LDM may be a modulation on top of modulation, e.g., a superposition capability of the constellation which allows a superposition of one reference emission (e.g., the forward ATSC 3.0 modulation at BTS 102) with a tertiary constellation to provide supplemental data if there is enough SNR between a more robust core layer and a
less robust enhanced layer. This may be performed when in proximity to the receiver or when there is a higher forward transmission power to the receiver. As such, a spatially located transmitter may, for example, modulate on top of or superimpose ATSC 3.0 features in a forward modulation and not use any additional channel capacity or bandwidth of the forward spectrum emission by being managed by the BTS. In implementing LDM, multiple physical data streams (e.g., with different power levels, channel coding, and modulation schemes) may be layered on top of one another and may be used for different services and reception environments.
[00341] Example embodiments using LDM are practical because the COFDM used may have transmission carriers that support, in an SFN of multiple transmitters, a transmit (TX) carrier offset. That is, within the COFDM, the TX carrier offset may shift at the microsecond emission level across multiple SFN transmitters to avoid any artifact of echo in the SFN network, because multiple transmitters will be in the same RF spectrum block. These transmitters may work collaboratively in the transmission, and the TX carrier offset may move the bootstrap earlier and change the frequency offset of the carrier emission. A topology and deployment of networks 108-1 through 108-n may, for example, have one with a highest SNR for a receiving device, and that will most likely be the one that it may pick up. But the TX carrier offset may allow a shifting as to where those carriers are used inside of that channel bandwidth. Such embodiments may thus transmit in the same forward spectrum (e.g., channel) without utilizing or impacting the ATSC 3.0 forward transmission channel capacity. That is, some implementations may be able to shift where those carriers are for the BAT retransmission.
[00342] In some embodiments, an opportunity to use LDM may be identified, e.g., with respect to implementation of a set of collaborating APIs 244 or for a configuration in which LDM is dynamically added for certain data frames among a plurality. For example, stationary BATs 104 having a higher gain antenna and not suffering from the effect of a doppler shift, transient impulse noise, or deep fade may operate as collaborative anchors or collectors of the data transport, comprising the LDM emission of recovery symbols, making it available to spatially local devices (e.g., other BATs 104 and/or UE 170) that did not previously receive certain payload portions. By comparison, emissions 108 not having LDM may reach a wider set of receivers than emissions 108 having the LDM, but the latter emissions may be of a higher capacity or bandwidth useful for recovery purposes at different time sensitivities. For example, optimal determination of a configuration of emissions 108 may be based, for BTS 102, on an excess capacity arbitrage with respect to the value of delivering data to different segments of potential receivers by a certain amount of time (e.g., by evening today, by tomorrow morning, or in two days, etc.); proper reception of emissions 108 may be based on reception characteristics (e.g., SNR, proximity to the BTS, multipath effects, clutter, etc.) of each receiver in each moment.
[00342] In some embodiments, BAT 104's return network 292, by implementing substantially similar or the same type of emissions as network 108 and by adjusting its scheduler and gateway functionalities, may implement a scale and footprint size reduction (micro), e.g., with respect to network BTS 102, and may be under the direction and guidance of a macro scheduler: the BTS. For example, BAT 104 may use windows of opportunity identified by BTS 102 as precision time protocol (PTP) timing references to facilitate its initial set of transmission requests. As such, BAT 104 may be configured to receive a transmission in which it knows what the carrier offset is from the ATSC 3.0 transmission (e.g., 0 Hz). If a receiving BAT, for example, identifies presence on a TX carrier zero, it may know that it has -1 and +1 potentially to use under the guidance of the window of the BTS. That is, BTS 102 may, for example, inform other BATs 104 when to listen so that the originating BAT may then use the opportunity of what to transmit; if there is another BAT nearby, it may call the same procedure except in reverse, knowing via the guidance of the BTS when to listen, and the BAT may know which one is primarily receiving (e.g., as a function of the SNR and received signal strength indication (RSSI)). A BAT may, for example, become a receiving BAT of these messages, e.g., only being responsive to ALP application messaging rather than network topology, to be able to handle this fulfillment exercise.
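As a sketch of this macro-scheduler role (Python; the assignment policy, BAT identifiers, and offset values are hypothetical, not taken from A/323), the BTS might hand out re-emission slots on the TX carrier offsets left free around the primary carrier (offset 0) within a scheduled window:

```python
def assign_carrier_offsets(bat_ids, available_offsets=(-1, 1)):
    """BTS-side sketch: cycle BATs requesting a re-emission window
    through the carrier offsets unused by the primary emission, so peer
    re-emissions do not collide with the forward transmission."""
    return {bat: available_offsets[i % len(available_offsets)]
            for i, bat in enumerate(bat_ids)}
```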
[00343] In some embodiments, each BBP may comprise an encapsulated payload, e.g., the ALP or the IP data. So, by standardizing on that portion of the re-emission, BATs may use the BTS as the source of truth for timing and RF parameters, and the obligation of the BAT may be just to prepare a relevant BBP payload in preparation for emission.
[00344] BAT 104 may, for example, under the direction of BTS 102 for transmission, operate as any other BTS transmitting ATSC 3.0 signals in an SFN. The BTS may thus facilitate collaborative listening, e.g., by applying ATSC 3.0 ALP signaling, to initiate an IP multicast emission so that other BATs 104 may know what is needed from their distributed and spatially-located autonomous peers. The physical layer protocol may be used to identify an ability to use LDM on top, or an ability in the RF emission to use the carrier offset, for whatever opportunity a BAT has to transmit under the direction of the BTS. For example, the BAT may be informed of when it should transmit, and therefore when it should listen for other complementary devices; when it does find one, it may be able to identify it based on the ATSC 3.0 ALP, which could then be emitted by a BAT device to facilitate again a unidirectional transport mechanism for this use case under the guidance of BTS 102.
[00345] In an example, collaboration for distributed object recovery may be based only on guidance from the BTS as to which ROUTE object shall be re-emitted. In this or another example, RaptorQ may be used to enable the collaborative object recovery to emit only FEC recovery blocks, which may be substantially more efficient than a re-carousel of the original transmission data of the source block, as a receiver that supports the fountain-code RaptorQ model would only need to receive N+1 combined blocks of any combination to successfully recover the object, N being a natural number. The RaptorQ FEC model, when applied to implementations of emissions having missing pieces of content via hints from BATs in a region, may allow for the emission and generation of FEC recovery blocks from other BATs in the region, regardless of the number of source blocks received; it may perform a full object recovery with N+1 packets and then generate additional RaptorQ FEC recovery blocks for local re-emission without a substantive coordination model.
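The N+1 property can be modeled schematically (Python; this is a counting sketch of fountain-code behavior under the stated assumption, not a real RaptorQ codec, and RaptorQ's actual reception overhead is probabilistic):

```python
def can_recover_object(num_source_blocks, received_block_ids):
    """With a fountain code, any N+1 distinct encoded blocks (source or
    repair, in any combination) are assumed sufficient to recover an
    object segmented into N source blocks; duplicates do not help."""
    return len(set(received_block_ids)) >= num_source_blocks + 1
```

This is why re-emitting freshly generated repair blocks is attractive: a peer BAT need not know which source blocks its neighbors are missing, since any new distinct block moves every listener toward the N+1 threshold.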
[00346] BTS 102 may, for example, operate as the glue between a potentially autonomous BAT network (e.g., where the BATs need guidance as to when they should listen for other BATs or when they should identify themselves to other BATs). In some implementations, the TX and RX portions of BAT 104 may be configured to not receive its own emission.
[00347] Due to use of IP multicast, BAT 104 may, for example, not need to know which other BAT 104 is emitting to it and may instead just need to emit the message back out to network 292. Then, a receiving BAT that is able to listen based on the guidance of the BTS, if it finds a matching emission, may make a determination as to whether it is actionable within the scope of the ALP and performs some other activity in system 100.
[00348] In some embodiments, BAT 104 may be its own arbiter of what data it needs, what data it is missing, and how to recover the missing data.
[00349] In some embodiments, BTS 102 may be a facilitator for frequencies used in the DRC. For example, the BTS may hint this information to BATs 104 in a sender-originated model. By, for example, being under control of BTS 102, other BATs 104 may know collaboratively to which flows they should potentially be listening, for any of those recovery services. This may be encapsulated by the re-emission of low level signaling (LLS), which may comprise the SLT that would have that information present for receivers. That way, it would be received by BATs like any other emission and flow, being instead just provided by a peer BAT under the guidance of a BTS.
[00350] For example, in a zero-knowledge model of object recovery reemissions,
BTS 102 may not know what the distribution density is of that object. The BTS would have to rely on other higher order parameters, like carousel cadence and frequency of delivery across the network, to provide a projection of what the NRT reception is across the network. As such, the BTS may provide portions or hints of what data is most important. For example, in a video emission where the media fragmentation is identified, the BTS may recommend what ranges of the data unit payload for BATs 104 to reemit to provide a higher degree of confidence of reception. There may be other data that is lossy and not as important; but, for instance, there may be implementations favoring a high propensity of I-frame retransmission for reception, or there may be other use cases where the first one or two minutes of content could be reemitted at a BAT carousel to provide a higher distribution density.
[00351] So, in a zero-knowledge model, there may be different approaches as to what data potentially could be valuable to receivers even if there is no feedback mechanism into the network. The contra case is if there are feedback mechanisms in the network, such that the BAT does have a backchannel or there is a store and forward mechanism where the other collaborative BATs may articulate back which plurality of pieces are missing; then, under that guidance, the facilitator BAT would be able to make an informed decision as to what its peers need for fulfillment. Another store and forward example may include a non-Internet-connected BAT 104 obtaining missing content from UE 170 (e.g., which does have an Internet connection) via network 294 (e.g., Wi-Fi) for then emitting to peers via network 292.
[00352] In the BTS-originated model, there may be some origination requirements to ensure a high distribution or population reception for important segments of that content.
In the peer-to-peer model, hints would be provided to whichever arbiter BAT would have the requested data for fulfillment, and those reemissions may be brought under the re-transmission windows that the BTS manages. It would, however, be up to the discretion of the BAT that was providing the arbitration responsibility as to which actual data units were reemitted to its peers.
[00353] The herein-disclosed DRC use cases may be based on BTS 102 being a spectrum resource coordinator, e.g., which opens up windows of opportunity in the ATSC 3.0 licensed spectrum. The herein-disclosed DRC use cases may alternatively be based on BTS 102 being a spectrum resource coordinator and facilitator, the facilitation implying a level of
obligation and responsibility. For example, a BTS may weigh an importance of a retransmission from BATs 104 to other BATs 104. Resource coordination may, for example, include knowledge of what is missing (e.g., in anticipation of a spatially located BAT device providing that information again). And BAT 104, under direction of BTS 102 performing spectrum resource coordination and facilitation, may know as an IP multicast recipient the RF interval in the OFDM modulation into which it may reemit. Upon reception of such a message, BAT 104 could then potentially use that next spectrum coordinated window for missing data reemission or for emitting a repair message, without knowing other BATs 104 in its proximity. In some embodiments, a BTS-scheduled reemission comprises emitting, by the at least one BAT, content portions based on an emission time interval determined by the BTS.
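The BTS-scheduled reemission interval described above can be sketched as follows; the periodic-window schedule and all parameter values are hypothetical, standing in for whatever interval the BTS actually signals.

```python
def next_emission_window(now: float, first_start: float, period: float,
                         duration: float) -> tuple:
    """Return (start, end) of the current-or-next BTS-coordinated emission
    window at time `now`. Windows begin at `first_start`, recur every
    `period` seconds, and stay open `duration` seconds; all of these
    would be signaled by the BTS and are illustrative here."""
    if now <= first_start:
        return (first_start, first_start + duration)
    cycles = int((now - first_start) // period)
    start = first_start + cycles * period
    if now < start + duration:               # still inside the current window
        return (start, start + duration)
    return (start + period, start + period + duration)

# BTS signals: first window at t=100 s, recurring every 60 s, open for 5 s
assert next_emission_window(0, 100, 60, 5) == (100, 105)
assert next_emission_window(162, 100, 60, 5) == (160, 165)   # mid-window
assert next_emission_window(170, 100, 60, 5) == (220, 225)   # window just closed
```

A BAT receiving such a schedule would hold its reemission or repair message until the computed window opens, without needing knowledge of its peers.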
[00354] Accordingly, when BTS 102 is operating as a resource coordinator and manager, BATs 104 may be informed in messaging which objects to reemit for recovery or higher network durability. And, when BTS 102 is operating only as a resource coordinator, BATs 104 may operate more autonomously in determining missing portions and in emitting repair message requests to other BATs 104. The management may be based on an identification of at least one data portion, among data emitted by BTS 102, that requires a data reception integrity satisfying a criterion.
[00355] In some embodiments, an integrity of data to be received is based on a function of at least one reemission from the BTS, a retransmission from a BAT on demand, and a retransmission via a BAT as scheduled by the BTS. For example, the relationship may be proportional such that a higher level of reception integrity may be achieved as the rate of recovery reemission from BTS 102 and/or BAT 104 increases.
[00356] The herein-disclosed DRC use cases may be based on BTS 102 being a transport coordinator, e.g., without needing a BTS-managed transport mechanism. For example, the recovery request that is emitted back into peer BAT network 292 (which may comprise a spatially proximate set of BATs 104) may be segments, and then the BAT fulfilling the recovery may initiate parameters for an out of spectrum (e.g., Wi-Fi direct) transport of the data unit. This may provide a higher throughput point of data recovery through the network.
[00357] In an implementation, a BTS-managed transport may comprise TDM and FDM of capacity for BATs with a guided reemission and selection of object(s) to be reemitted in an ecosystem. In another implementation, a BTS-orchestrated transport may comprise
facilitating the TDM and FDM but instead letting the BATs determine what object(s) should be reemitted based on the spatially located peers.
[00358] In some embodiments, the emission of recovery data by BAT 104 (i) may be performed via Wi-Fi direct such that use of a wireless access point (WAP) is rendered unnecessary and (ii) may comprise NRT VOD that was stored from previous emissions of the BTS.
[00359] In some embodiments, BATs 104 may form DRC network 292, which may be an ad-hoc, distributive mesh or configured into another topology. For example, BATs 104 may operate as transmission devices under the command and control of the broadcast gateway (which are, for example, shown in FIGs. 4A and 5A). BTS 102 may thus, for example, allocate a time slot for BATs 104 to communicate out to network 108 and/or network 292. For example, BATs 104 may each be configured to emit to BTS 102 in a proximity; and, in this or another example, BATs 104 may each be configured to emit to other BATs 104 in its proximity. Such proximity may be based on SNR, a distance (e.g., between 0.25 and 0.5 miles, when clutter and/or unfavorable terrain are present, or up to about 62 miles when an LOS path exists), or another suitable criterion. In an implementation, excess channel capacity may be used for such BAT-driven (i.e., BTS orchestration as opposed to BTS management) data delivery. And, by being under the control of the broadcast gateway, there may be obtained additional information (e.g., what data was transmitted, what data is higher priority, what data is of higher business value, etc.), e.g., which may provide collaboration in networks 108 and 292. Networks 108 and 292 may be substantially the same, except that reference herein to emissions 108 may involve BTS 102 whereas emissions 292 may only involve BATs 104.
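A proximity criterion of the kind described above might be sketched as follows; the distance bounds come from the text, while the SNR floor is an assumed placeholder, since the text names SNR as a criterion without giving a value.

```python
def in_proximity(snr_db: float, distance_mi: float, line_of_sight: bool,
                 min_snr_db: float = 15.0) -> bool:
    """Illustrative peer-proximity test: roughly 0.5 mi when clutter or
    unfavorable terrain is present, up to about 62 mi on an LOS path.
    The 15 dB SNR floor is an assumption, not a value from the text."""
    max_range_mi = 62.0 if line_of_sight else 0.5
    return snr_db >= min_snr_db and distance_mi <= max_range_mi

assert in_proximity(20.0, 0.3, line_of_sight=False)          # in cluttered range
assert not in_proximity(20.0, 1.0, line_of_sight=False)      # beyond cluttered range
assert in_proximity(20.0, 40.0, line_of_sight=True)          # within LOS range
assert not in_proximity(5.0, 0.3, line_of_sight=False)       # below SNR floor
```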
[00360] In some embodiments, BATs 104 may emit data (e.g., for providing recovery and/or interactivity services), via DRC network 292 and/or another connection (e.g., via a server, spun-up peer-to-peer Wi-Fi connection, 4G or 5G broadband backchannel, or another suitable means). Forward emissions 108 and DRC emissions 108, 292 may, for example, be implemented using different RF frequencies (e.g., frequency division duplexing). In some implementations, at least some of these emissions may be supported via use of relays. For example, relay stations may transparently forward a received signal to BTS 102 through high-speed wired or wireless networks. And such forwarded signal from the relay to BTS 102 may comprise raw data from A/D conversion or the decoded MAC PDU.
[00361] In embodiments wherein information from BATs 104 is returned to BTS 102, more particularly the broadcast gateway (or DRC downlink gateway of FIG. 5A), this
information may create a greater understanding of what data is being reflected through the network, for example. In another example, the broadcast gateway may become aware of how to better reconfigure forward emissions 108 to ensure a more effective distribution configuration (e.g., MODCOD or other settings inside of the broadcast gateway from the physical layer side) to ensure either the highest durability reception, the lowest latency, or the highest bandwidth.
[00362] In some embodiments, DRC emissions 292 may comprise data of higher value, e.g., an initial set (e.g., the first 30 seconds or 1 minute) of prepositioned content. This may, for example, be preoptimized, as a first touch point in receiving content for other devices. For example, a low durability forward transmission 108 may be delivered, in an initial period (e.g., the first 30 seconds of a long form VOD asset). As such, the peer network 292 may complementarily provide a higher degree of durability for that object emission, without requiring a fuller degree of forward spectrum emission and utilization. In an example, BAT 104’s DRC emissions 292 may be performed before or at the same time as BTS 102’s emissions 108 comprising a same content. In another example, DRC emissions 292 may be performed soon after the original emission of the same content by BTS 102, e.g., upon one or more BATs 104 being triggered to emit, in a spatial region, after being identified as having received needed portions. And this need may be, for example, due to there not being an IP backchannel for the needy device.
[00363] The emissions in these examples may be any media segment, such as a newscast. But rather than sending a whole recording or interview, which may, for example, be 5 minutes long, the emission may comprise an edited and pared down version, which may, for example, be only 15 seconds long. But, to provide a larger relevance of that content experience, another emission of the full VOD segment may be performed in NRT beforehand. Because this content is to have high temporal value, such that it does not have a high value before the newscast is available, this longer content may be made available for downstream devices that would be receiving the shorter newscast in conjunction with it. That is, as the newscast is playing out the shorter interview and the longer length object is being transparently delivered (e.g., in another PLP), there may be downstream devices that receive an incomplete set of portions of that object. For them to be able to play it back, some embodiments of BTS 102 may implement a cyclical cadence of an associated procedure description (APD) per the A/331 standard. That APD may inform devices how to receive back any missing portions of said object. But one or more of the downstream devices may not have an IP backchannel to make corresponding data requests.
Requests may instead be sent from each such BAT 104 to its peer BATs 104, which may have received a complete set of portions. The missing portions may be aggregated into subsequent NRT forward emission 108. Or the missing portions may be obtained by needy BATs 104 from another BAT 104 in its vicinity that did receive the missing portions, via DRC network 292 or another suitable network such as the ad-hoc peer-to-peer Wi-Fi connection, without overutilization of the forward spectrum. As such, peer BATs 104 may forward a missing data request to other BATs (or to BTS 102), or they may directly fulfill the request by emitting the missing data.
[00364] Collaborative delivery may thus be initiated with a request (e.g., with aggregable metadata) for missing fragments. The request may be delivered to BTS 102 via DRC network 292 or an IP backchannel such as network 106, and emissions of DRC network 292 may be in a PLP, for example, such as a PLP for return channel (PLP-R). For example, DRC downlink signaling and data may be transferred to the broadcast gateway in ALP packets with a specific IP port and mapped to the designated PLP-R for return channel application.
[00365] In some embodiments, a device utilizing DRC network 292 may itself store and then forward when it has a window of opportunity to do so under the guidance of BTS 102. Turning up an opportunity for devices to collaborate may not imply that the devices eventually have to transmit all the way back to the BTS using the DRC. Some implementations may thus store and forward a message or request such that once a peer BAT 104 is reached (e.g., as part of consecutive storing and forwarding) that has an IP backchannel, then this peer may facilitate a delivery of the message or request to the BTS via the backchannel.
[00366] In some embodiments, BATs 104 may receive PLPs and separate out DRC synchronization, DRC signaling, and related data from traditional broadcast service data.
Such synchronization and signaling data may be sent to the DRC uplink gateway of FIG. 5A, e.g., for processing and maintaining system operation. The DRC uplink gateways may regulate how the uplink data is transmitted. Collaborative object recovery in the BAT-centric model may operate with one BAT 104 being the arbitrator of the re-emission into network 292, and thus it may operate as an IP uplink gateway, including implementations having an IP backchannel such that an unconnected device can be served by a connected device (having the IP backchannel) fulfilling a store and forward data payload emission.
[00367] In some implementations, the physical and medium access control (MAC) layers of DRC network 292 may be based on the A/323 standard, which includes
specification of the (i) uplink framing, baseband signal generation, random access, and downlink synchronization scheme and (ii) MAC procedures, MAC PDU formats, and signaling schemes between ATSC 3.0 forward and return emissions 108.
[00368] Controls of the DRC may come from the BTS and may include, for example, the synchronization data and signaling data. These controls may be extracted from the broadcast service data. For example, a video to transmitter protocol may include a re-encapsulation. Example characteristics include timing and management (T&M) and the preamble packet. The preamble packet defines what the waveform emission should look like when converted from the physical layer values into the RF domain, which may be a function of the modulator. As such, when the BTS is transmitting an allocation for other BTS devices or BATs, it could potentially re-encapsulate what the recommended RF and other ATSC 3.0 waveform parameters are, in those same T&M and preamble packets as part of the forward emission for those receiving devices. The same configuration parameters used by a BTS may be allowed for use at a BAT, applying those into the BAT for localized data recovery.
[00369] The controls may comprise a new component in the system to handle the actual return data payload and a slightly different component to handle the object reconstitution and control parameters from there. Another way is for the BTS to signal directly to the listening BATs when each may expect to receive the missing data. For example, the BTS may specify to the BATs how each may receive this missing data from other BATs that are in its range. The BATs may synchronize how to listen and re-emit with each other via the BTS’ control and management based on already defined components of the specification.
[00370] In some embodiments, BATs 104 may be communicative peers, e.g., for storing, processing, and/or forwarding opportunistic data for optimal object availability, QoS, durability, and completeness. As such, BATs 104 may be able to self-recover data in a collaborative fashion. BATs 104 that have larger storage capacity, increased RF reception characteristics, or more ideal spatial locality attributes (e.g., while not temporarily reachable in a subway, elevator, tunnel, etc.) may be used to support data delivery needs of other peer BATs 104. Indeed, each receiving device may have different RF characteristics and not just because of its location but also because of how it is designed (e.g., from an antenna gain perspective).
[00371] In some embodiments, BTS 102 may be directed to alter its carousel reemissions. For example, if there is a threshold number of devices (e.g., in one or more regions) that did not receive a full (or substantially full) set of portions, then the BTS may improve its intended durability based on spectrum utilization over time by applying more
robust transmission characteristics (e.g., modulation and/or coding). That is, by varying configuration of network 108 using an aggregation of the DRC telemetry (e.g., including store and forward metrics from peer BATs 104), BTS 102 may ensure a greater reception probability of the missing portions via a reemission. Alternatively, as mentioned, a BAT 104 that did receive all portions may be triggered to so reemit in its proximity. These forms of re-delivery may be opportunistic, e.g., since carousel reemission models are not known to factor in any receiving characteristics, being rather just a schedule or a cadence for reemission (e.g., every 10 minutes, every 4 hours, etc.). Information via DRC network 292 or an IP backchannel may be used to influence both the schedule/cadence and the network configuration to ensure that a higher set of receivers actually receive the NRT content as intended from the broadcast gateway.
[00372] In some embodiments, when multiple data objects are identified by the BTS as requiring more robust transmission, the multiple data objects may be prioritized, and at least one BAT may be directed to reemit a high-priority data object from among the multiple data objects using transmission parameters that correspond to higher reliability relative to the transmission parameters used for the transmission of a lower priority data object.
[00373] In implementations of FLUTE, there may be provided an order of magnitude of reception durability for high value data objects throughout the network, since the coordinated BTS emissions may then potentially advise BATs 104 what object, and potentially what portions of that object, are relevant inside a peer network for high data reception. Objects delivered NRT via ALC and via the FLUTE mechanism may have a carousel over time that provides a higher confidence of recovered reception. Such objects may be part of a service component or a service identified in the A/331 standard, e.g., in the low level signaling of the SLT. ALC or FLUTE may have a representative service location, for its service discovery. The APD may define a series of post file repair elements, which may comprise a resource server that may be accessed via an HTTP request to fulfill a byte range object request for the data units that are missing from the FLUTE transmission.
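The byte-range repair request mentioned above might be constructed as follows; this sketch only builds the multi-range HTTP `Range` header (standard RFC 7233 syntax) that a file-repair client could send to the APD-advertised repair server, with all byte offsets hypothetical.

```python
def repair_range_header(missing_spans):
    """Build the multi-range HTTP `Range` header a file-repair client
    might send to the APD-advertised repair server, given the inclusive
    (first_byte, last_byte) spans still missing from the FLUTE object."""
    spec = ",".join(f"{first}-{last}" for first, last in missing_spans)
    return {"Range": f"bytes={spec}"}

# e.g., two 1 KiB data units of a FLUTE object were never received
hdr = repair_range_header([(103424, 104447), (209920, 211967)])
assert hdr == {"Range": "bytes=103424-104447,209920-211967"}
```

The repair server would answer such a request with a 206 Partial Content response carrying only the missing spans, rather than the entire object.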
[00374] BAT 104 may thus emit a repair request, which an opportunistic BTS could then fulfill in a next transmission window opportunity. Alternatively, BTS 102 may enhance its emissions network (e.g., via application level FEC), when it wants to ensure a higher durability.
[00375] In some embodiments, BTS 102 may implement a machine learning model that determines (i) the largest reach of the universe of devices (e.g., BATs 104, next
generation TVs 103, etc.) that may receive a delivered object and (ii) the highest integrity of the object that has been received in its constitution based on the function of a retransmission from the BTS or retransmission from a BAT on demand. For example, with return channel information, the AI-enabled BTS 102 may adjust a retransmission schedule, the FEC, and/or another parameter for more efficient delivery. And such BTS-driven determination may be autonomously based on a criticality of the data being emitted.
[00376] FIG. 6 illustrates an example method 600 for collaborative object delivery. Method 600 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 600 presented below are intended to be illustrative. Method 600 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 600 are illustrated in FIG. 6 and described below is not intended to be limiting. Method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600. The following operations may each be performed using a processor component the same as or similar to collaborative object delivery component 245 (shown in FIG. 2).
[00377] At operation 602 of method 600, a plurality of fragments, which are fragmented from data in a way compliant with the decentralized BitTorrent protocol, may be broadcasted by a BTS. As an example, at or near BTS 102, data may be fragmented into identically sized portions (e.g., with byte sizes of a power of 2 and/or between 32 kB and 16 MB each). And a hash may be created for each piece.
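Operation 602 can be sketched as follows; the piece size and hash algorithm are illustrative choices within the ranges the text gives (the text only requires identically sized, power-of-two pieces with a hash per piece).

```python
import hashlib

PIECE_SIZE = 64 * 1024        # a power of two between 32 kB and 16 MB, per the text

def fragment(data: bytes, piece_size: int = PIECE_SIZE):
    """Split `data` into identically sized pieces (the last may be short)
    and pair each piece with its digest. SHA-256 is an illustrative
    choice; any per-piece cryptographic hash fits the description."""
    pieces = [data[i:i + piece_size] for i in range(0, len(data), piece_size)]
    return [(p, hashlib.sha256(p).hexdigest()) for p in pieces]

payload = bytes(256) * 1024                   # a 256 KiB test object
pieces = fragment(payload)
assert len(pieces) == 4                       # 256 KiB / 64 KiB pieces
assert all(hashlib.sha256(p).hexdigest() == h for p, h in pieces)
```

A receiving BAT can then verify each arriving piece against its hash and reorder pieces regardless of the sequence in which they were broadcast.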
[00378] At operation 604 of method 600, a request (e.g., with aggregable metadata) for one or more fragments missing from the broadcast may be obtained (e.g., from one or more BATs). As an example, either a BAT or the BTS may obtain this request.
[00379] At operation 606 of method 600, available presence of an IP backchannel may be determined. If the answer is yes, then operation 614 may be executed; if not, then operation 608 may be executed.
[00380] At operation 608 of method 600, a preference for using DRC may be determined. If the answer is yes, then operation 618 may be executed; if not, then operation 610 may be executed.
[00381] At operation 610 of method 600, whether a rebroadcast will occur (e.g., in a certain timeframe) may be determined. If the answer is no, then operation 618 may be executed; if yes, then operation 612 may be executed.
[00382] At operation 612 of method 600, the one or more requested fragments may be obtained via a rebroadcast of the BTS. As an example, a carousel may be used.
[00383] At operation 614 of method 600, the one or more requested fragments may be obtained. As an example, this obtainment may be performed via a spun-up Wi-Fi connection.
[00384] At operation 616 of method 600, the obtained data may be consumed or stored, at the requesting BAT(s).
[00385] At operation 618 of method 600, at least one other BAT 104, near or in a same region as requesting BAT(s) 104, that acknowledges storage of the requested data may be determined. As an example, the spatial region may be determined based on a transmit power of the at least one other BAT.
[00386] At operation 620 of method 600, the one or more requested fragments may be obtained (e.g., from the determined BAT’s storage).
[00387] At operation 622 of method 600, a portion of spectrum utilized by BTS 102, which is to be made available for recovering the one or more requested fragments, may be determined.
[00388] At operation 624 of method 600, the one or more requested fragments may be spatially emitted into the determined spectrum portion, from the determined BAT towards the requesting BAT(s).
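The branching of operations 606 through 618 above can be summarized in a small dispatch sketch; the path names are illustrative labels, not terms from the disclosure.

```python
def recovery_path(has_ip_backchannel: bool, prefers_drc: bool,
                  rebroadcast_expected: bool) -> str:
    """Mirror the branching of operations 606, 608, and 610: an available
    IP backchannel wins, then a DRC preference routes to peer recovery,
    then an expected carousel rebroadcast; with no rebroadcast expected,
    the flow falls back to DRC peer recovery."""
    if has_ip_backchannel:
        return "ip_backchannel"    # operation 614: obtain over IP
    if prefers_drc:
        return "drc_peer"          # operation 618: peer BAT recovery
    if rebroadcast_expected:
        return "carousel"          # operation 612: wait for rebroadcast
    return "drc_peer"              # operation 610 "no" branch, back to 618

assert recovery_path(True, False, False) == "ip_backchannel"
assert recovery_path(False, True, True) == "drc_peer"
assert recovery_path(False, False, True) == "carousel"
assert recovery_path(False, False, False) == "drc_peer"
```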
[00389] In some embodiments, BTS 102 may operate as a core to determine exactly what is being transmitted, which may include the retransmission of failed packets. The BTS may be configured to utilize some type of yield management or monetary maximization algorithm to determine what the monetary impact on the company would be if it does not retransmit, or if it does retransmit, by weighing that against other uses of the spectrum at that time. For example, the loss of a thousand people watching a certain TV channel may be balanced against sending, e.g., an app’s one or more update files, such as for Microsoft Office 365.
[00390] With respect to privacy, there may be messaging reputability and message confidentiality. Some implementations may ensure that peer BATs are identifiable in the ecosystem only for the context of providing a chain of custody when it comes to reputability of the message and providing the best effort for integrity. Message confidentiality is additionally a privacy concern, especially if there is store and forward data that has any degree of personally identifiable information (PII), where message confidentiality would be important.
Hybrid Data Delivery Using Fragmentation
[00391] ATSC 3.0 may be used to distribute large video or data files via rotating and/or scheduled broadcast of data channels, wherein data is coded in a BitTorrent pattern for reassembly of pieces by a client device. Agnostic to content, clients may repair lost pieces via observation of rebroadcasts or IP connection by referring to specific fragmented elements.
[00392] BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services, contents, and the like over broadcast networks, broadband networks, hybrid broadcast networks, and the like to one or more of BATs 104 that may each be implemented as and/or with an ATSC 3.0 receiver.
[00393] In this regard, the contents provided from BTS 102 to one or more of BATs 104 may include data. The data may include software, applications, application updates, operating system updates, information, documents, and the like. The data may include a collection of interrelated documents intended to run in an application environment and perform one or more functions, such as providing interactivity, targeted ad insertion, software upgrades, executable files, or the like. The documents of an application may include HTML, JavaScript, CSS, XML, multimedia files, programs, and the like. An application may be configured to access other data that are not part of the application itself.
[00394] The data provided from BTS 102 to one or more of BATs 104 may fail to fully deliver the data. In this regard, the broadcast transmissions from BTS 102 to one or more of BATs 104 may be lost during transmission due to issues with location, certain frequencies used, congestion, radio interference, electromagnetic interference, antenna problems, and the like. For example, data may be broadcast with a plurality of packets, frames, pieces, parts, segments, or the like. One or more of the packets, frames, pieces, parts,
segments, or the like may be lost during broadcast transmission. The data may be broadcast as ROUTE/DASH-based services, MMT-based services, and/or the like.
[00395] Moreover, as broadcast transmissions from BTS 102 to one or more of BATs 104 may be broadcast unidirectionally to one or more BATs 104, failed delivery of the data to the one or more BATs 104 may present a problem due to the one-way nature of the broadcast.
[00396] In some embodiments, BTS 102 may determine whether to re-broadcast any portion of emitted content based on a set of receiver devices having received a threshold percentage (e.g., 90%) of the content. In one aspect, to address the failed delivery of data from BTS 102 to one or more of BATs 104, BTS 102 may utilize a carousel rebroadcast process or the like to provide missing packets, frames, pieces, parts, segments, or the like as further described below.
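The threshold test in [00396] might look like the following sketch; the text leaves the aggregation rule open, so treating "any reporting receiver below the threshold" as the rebroadcast trigger is an assumption.

```python
def should_rebroadcast(received_fractions, threshold=0.90):
    """Rebroadcast unless every reporting receiver already holds at least
    `threshold` (e.g., 90%) of the emitted content. Aggregating
    per-receiver completion fractions this way is an assumption; the
    text does not fix the rule."""
    return any(fraction < threshold for fraction in received_fractions)

assert should_rebroadcast([0.99, 0.85, 1.0])        # one device below 90%
assert not should_rebroadcast([0.95, 0.92, 1.0])    # all at or above 90%
```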
[00397] To address the failed delivery of data from BTS 102 to one or more of BATs 104, one or more of BATs 104 may utilize a hydration process to obtain missing packets, frames, pieces, parts, segments, or the like as further described below.
[00398] The data may be transmitted from BTS 102 to one or more of BATs 104 and may utilize a BitTorrent pattern, BitTorrent protocol, or similar data distribution protocol. As such, the data being transmitted may be divided into packets, frames, pieces, parts, segments, or the like called pieces. Each piece may be protected by a cryptographic hash contained in a descriptor. Pieces may be broadcast sequentially or non-sequentially and may be rearranged into a correct order by a client implemented by BATs 104.
[00399] In some embodiments, BATs 104 may monitor which pieces it needs and which pieces it has based on data in the received pieces such as metadata, cryptographic hashes, and the like. In some implementations, each fragment may have a hash descriptor, which may be a header or preamble that indicates which fragment this is among the plurality (e.g., no. 42 of 6 million). As such, these descriptors may inform exactly how many fragments there are, and which ones have been obtained and which ones have not yet been obtained.
[00400] In some embodiments, contents of an ALP header may comprise a length and a source block number or start offset, which may be used to determine what data is missing. That is, headers for the transfer object information may comprise a length field that informs what the complete length is (e.g., 1 GB). One approach may include creating a sparse array and, for every byte in the transmission, marking its presence with a flag. Another approach may comprise creating a series of block ranges that would inform what has been
stored or received. Any of those block ranges that are not defined may inform that a fragment is missing. For example, a gap between pieces 100 and 102 may inform that piece 101 is missing. As such, block recovery may be performed.
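The block-range approach above can be sketched as follows, reproducing the worked example from the text (a gap between pieces 100 and 102 reveals that piece 101 is missing); the range representation is illustrative.

```python
def missing_pieces(received_ranges, total):
    """List piece indexes not covered by the stored inclusive
    (first, last) ranges — e.g., a range ending at 100 and one resuming
    at 102 reveal that piece 101 is missing."""
    have = set()
    for first, last in received_ranges:
        have.update(range(first, last + 1))
    return [i for i in range(total) if i not in have]

assert missing_pieces([(0, 100), (102, 199)], total=200) == [101]
assert missing_pieces([(0, 199)], total=200) == []
```

A production receiver would likely keep the ranges sorted and merged rather than expanding them into a set, but the recovery decision is the same.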
[00401] Due to the nature of the herein-disclosed approach, the broadcast of any data may be halted at any time, resumed at a later date, missing pieces obtained, missing pieces determined, and/or the like without the loss of previously downloaded data. This also may enable BATs 104 to determine missing pieces, obtain missing pieces, and/or to seek out missing pieces and download them, which may reduce the overall time of the download.
[00402] In one aspect, BTS 102 may utilize a carousel rebroadcast process so that downstream receivers obtain missing packets, frames, pieces, parts, segments, or the like. In this regard, the carousel rebroadcast process may implement a data and object carousel that may be used for repeatedly delivering data in a continuous cycle. The carousel rebroadcast process may allow data to be pushed from BTS 102 to one or more BATs 104 by transmitting a data set repeatedly in a standard format. For example, BTS 102 may periodically rebroadcast data to BATs 104. BATs 104 may monitor which pieces it needs, and which pieces it has, and obtain the pieces needed during the periodic rebroadcast of the data by BTS 102.
[00403] In another aspect, one or more BATs 104 may utilize a hydration process to obtain missing packets, frames, pieces, parts, segments, or the like. BATs 104 may monitor which pieces they need and which pieces they have. Thereafter, BATs 104 may connect to network 106 to obtain the pieces needed from BTS 102. Network 106 may be the Internet, a hybrid broadcast network, a broadband network, or the like.
[00404] In another aspect, BATs 104 may utilize hybrid hydration process 700 to obtain missing packets, frames, pieces, parts, segments, or the like. In this regard, BAT 104 may monitor which pieces are needed and which pieces it has, and from this determine whether to utilize the carousel rebroadcast process or the hydration process to obtain missing packets, frames, pieces, parts, segments, or the like.
[00405] In some embodiments, each data object or file to be transmitted may be fragmented compliant with the BitTorrent protocol. For example, a division made upstream (e.g., at BTS 102) may result in 10,000 pieces each sized about 100 kilobits. And their emission may result in 9,998 being received. BAT 104, for example, operating as a CDN, may implement the herein-disclosed hybrid data delivery. This CDN may be implemented on a desktop, laptop, or tablet computer, on a user’s phone, or wherever an ATSC 3.0 receiver is. Continuing with the example, this CDN may be configured to obtain the 2 missing fragments
over the Internet, when connected to an IP backchannel. Or, in the connectionless version, the originally emitted data may be reemitted via a carousel. The probability of losing the same 2 fragments in two different emissions is very low, so in the next emission the CDN may obtain those missing fragments. As a result, the original, emitted data is entirely received. In that next emission, the CDN may miss different fragments, but this is insignificant since those fragments had already been obtained.
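A quick back-of-the-envelope calculation supports the claim that losing the same 2 fragments twice is very unlikely (assuming, for illustration, that losses are independent and uniformly distributed over the pieces):

```python
def p_same_losses(total_pieces, lost_per_emission):
    """Probability that a second, independent emission loses
    exactly the same set of pieces as the first."""
    p = 1.0
    remaining = total_pieces
    for k in range(lost_per_emission):
        # chance the next loss falls on one of the not-yet-matched pieces
        p *= (lost_per_emission - k) / remaining
        remaining -= 1
    return p

# 10,000 pieces, 2 lost per emission: roughly 1 in 50 million.
p = p_same_losses(10_000, 2)
print(f"{p:.2e}")  # -> 2.00e-08
```

Under these assumptions the chance is (2/10000) x (1/9999), i.e., about 2e-8, so the next carousel pass almost certainly fills the gaps.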
[00406] The received pieces may include data, such as metadata, indicating if and when there will be a carousel rebroadcast process to obtain missing packets, frames, pieces, parts, segments, or the like. Accordingly, BATs 104 may determine the process to obtain missing packets, frames, pieces, parts, segments, or the like of broadcast data. For example, BATs 104 may determine to wait for the carousel rebroadcast process based on (i) a predetermined time threshold until the carousel rebroadcast, (ii) an urgency for the data, (iii) a user set time threshold until the carousel rebroadcast, (iv) a cost of obtaining the data, (v) user preferences set via UI devices 118, and/or another factor. On the other hand, BATs 104 may determine to connect to network 106 to obtain the pieces needed from BTS 102 based on a predetermined time threshold until the carousel rebroadcast, an urgency for the data, a user set time threshold until the carousel rebroadcast, a cost of obtaining the data, user preferences, and the like.
[00407] In some embodiments, fragmentation component 237 of FIG. 2 may determine whether to obtain a set of missing fragments via an IP backchannel 106, carousel reemissions 108, or peers of DRC network 292. This determination may be based on (i) a percentage of the fragments previously obtained via emissions 108, (ii) a known time when the carousel (e.g., at the BTS or via a BAT’s DRC) reemission will take place, (iii) a corresponding cost of using the IP backchannel when available, (iv) a QoS level for a user of BAT 104, (v) urgency or importance of the data (e.g., with the data being a map of ongoing fires in California), and/or (vi) whether a user or application has more recently queried presence of all fragments of the data object.
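A decision of this kind can be sketched as a simple policy function (the thresholds, argument names, and priority order below are assumptions for illustration, not taken from the specification):

```python
def choose_recovery_path(pct_received, minutes_to_reemission,
                         backchannel_cost, urgent, peers_have_data):
    """Pick a source for missing fragments: 'backchannel' (IP),
    'carousel' (reemission), or 'peers' (DRC network).
    backchannel_cost is None when no IP backchannel is available."""
    if urgent and backchannel_cost is not None:
        return "backchannel"      # e.g., a map of ongoing fires
    if peers_have_data:
        return "peers"            # DRC peers already hold the pieces
    if pct_received > 0.5 and backchannel_cost == 0:
        return "backchannel"      # little left to fetch, and it is free
    if minutes_to_reemission is not None and minutes_to_reemission <= 60:
        return "carousel"         # the reemission is soon enough to wait
    return "backchannel" if backchannel_cost is not None else "carousel"

# 98% already received, reemission in 5 minutes, backchannel is metered:
print(choose_recovery_path(0.98, 5, 0.10, False, False))  # -> carousel
```

QoS level and recent application queries (factors (iv) and (vi)) could be folded in as additional arguments in the same style.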
[00408] The known time of carousel reemission may be obtained via metadata. For example, if the receiver obtained more than 50% of the fragments, it may determine to get the rest over the Internet, if connected; a vice versa determination is contemplated such that the receiver determines that a rebroadcast is instead preferred. In some implementations, the determination may be different per data object or file that is emitted. For example, cache header rules implementing parameters (e.g., a time to live (TTL) mechanism or metadata) may be used to indicate lifetime of data at the CDN on a file-by-file basis. These parameters
may be broadcast centric such that the CDN manages itself. For example, the CDN may determine how to obtain missing fragments based on the parameter(s).
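A per-file TTL rule of this sort might look like the following sketch (the field names and values are hypothetical; the point is that each emitted file carries its own lifetime, which the self-managing CDN checks locally):

```python
def is_expired(entry, now):
    """True when a cached file has outlived its per-file TTL."""
    return now - entry["received_at"] > entry["ttl_seconds"]

# Cache entries keyed by file name; TTLs arrive as emitted metadata.
cache = {
    "app_base.pkg": {"received_at": 0, "ttl_seconds": 31_536_000},  # ~1 year
    "weather.json": {"received_at": 0, "ttl_seconds": 600},         # 10 min
}

# One hour after reception: the weather file is stale, the app is not.
print(is_expired(cache["weather.json"], now=3600))  # -> True
print(is_expired(cache["app_base.pkg"], now=3600))  # -> False
```

An expired entry would then trigger the same recovery decision as a missing fragment: wait for the carousel or fetch over the backchannel.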
[00409] In some embodiments, fragmentation component 237 may reassemble the obtained fragments (e.g., based on individual tags pre-added before the emission). And these fragments may, for example, be obtained unicast (e.g., OTA via emissions 108) and/or bidirectionally (e.g., OTT via a TCP/IP backchannel).
[00410] In some embodiments, a network mesh of BATs 104 may be accessed, which ties several gateways together to transmit the missing pieces.
[00411] In some embodiments, CDN PoP component 242 of FIG. 2 may alert a user upon receiving the missing fragments that complete reception of the original, emitted data (which may be NRT data).
[00412] In some embodiments, BAT 104 may know how to recover lacunae (e.g., whether to wait for a main rebroadcast or DRC rebroadcast or to immediately fetch via backchannel) based on metadata obtained in a corresponding, original emission 108 of the data. As such, orchestration of the data at BTS 102 may determine how downstream BATs 104 are to perform missing data recovery.
[00413] The upstream data orchestration may further determine how soon retransmissions of a carousel will occur based on the type of data. For example, an important cryptographic object may be reemitted every few minutes versus less important metadata being reemitted far less often. In another example, a broadcast app for interfacing or accessing broadcast data may be sent as NRT every five minutes. Other variables that determine a periodicity of the carousel and a duration of the carousel may be based on options selected by the content owner (e.g., by paying more for a more frequent and longer lasting carousel).
[00414] For emergency alerting or another content type, parameters of the carousel may be overridden or superseded. For example, a machine learning model of BTS 102 may stop sending the Microsoft Office 365 update when there is emergency alerting to send out to the universe of receivers. And, when the duration of that alerting has passed, the BTS may resume sending the app update file(s).
[00415] In some embodiments, an artificial intelligence (AI) module of BTS 102 may determine an optimal size of the fragments based on a tradeoff balanced using impacts learned over a wide variety of past permutations. That is, the overhead involved in having too many fragments (e.g., because each fragment is very small) may be unacceptable for bandwidth efficiency, thus resulting in larger fragments.
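One way to see the tradeoff the AI module balances is a toy cost model (the formula, the 40-byte header, and the byte-loss rate are assumptions for illustration): many small fragments inflate per-fragment header overhead, while a large fragment must be refetched in full if any part of it is lost.

```python
def expected_cost(object_bytes, fragment_bytes, header_bytes, byte_loss_rate):
    """Expected extra bytes = header overhead + expected refetch bytes,
    where a fragment with any lost byte is refetched whole."""
    n = -(-object_bytes // fragment_bytes)          # ceiling division
    overhead = n * header_bytes
    p_frag_lost = 1 - (1 - byte_loss_rate) ** fragment_bytes
    retransmit = n * p_frag_lost * fragment_bytes
    return overhead + retransmit

# Candidate fragment sizes for a 1 GB object, 40-byte headers,
# 1e-7 per-byte loss; the best size minimizes total expected cost.
sizes = [1_000, 12_500, 100_000, 1_000_000]
costs = {s: expected_cost(10**9, s, 40, 1e-7) for s in sizes}
best = min(costs, key=costs.get)
print(best)  # -> 12500
```

Under these made-up parameters the optimum lands near 12.5 KB (about 100 kilobits), but a learned model would tune the size per channel conditions rather than use a fixed formula.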
[00416] Some embodiments may comprise an event-driven process and subsystem, for the BAT to fulfil, and a mechanism for proof of work and verification that certain activities are completed in a distributed ledger of proof of work for validation. For example, the event-driven process and subsystem may facilitate the automatic hydration of a dictionary or cache, with a facilitating distributed ledger to ensure transactional completion rather than a commit log per se, the objective being to provide traceability and auditability that those activities occurred on those devices in a non-centralized and distributed ecosystem. The Kafka event cycle may be the originator of those activities and the mechanism as to how they are synchronized; the transactional commit log may be an as-run or an audit log. In this implementation or another, an appropriate distributed ledger may provide proof of work that these activities were completed and fulfilled on behalf of a decentralized architecture.
[00417] FIG. 7 illustrates method 700 for hybrid data delivery using fragmentation, in accordance with one or more embodiments. Method 700 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 700 presented below are intended to be illustrative. In some embodiments, method 700 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 700 are illustrated in FIG. 7 and described below is not intended to be limiting. In some embodiments, method 700 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 700 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 700. The following operations may each be performed using a processor component the same as or similar to fragmentation component 237 (shown in FIG. 2).
[00418] At operation 702 of method 700, data that is broadcasted may be received from at least one BTS. As an example, these pieces of data may be obtained (e.g., from BTS 102) non-sequentially and rearranged into a correct order by BitTorrent client 104, which monitors which pieces it needs and already has. Such BAT may then upload pieces it has to
those peer devices that need them (e.g., using DRC network 292 or any other suitable technology such as Wi-Fi or broadband cellular).
[00419] At operation 704 of method 700, the receiver device may analyze and determine missing packets, frames, pieces, parts, segments or the like of broadcasted data. In some embodiments, the process for subsequent provision of this missing data may be based on a BAT connecting to a network to request missing data. For example, BAT 104 may perform the request to BTS 102 or to other peers using DRC network 292 or any other suitable networking technology. In some embodiments, BAT 104, which may miss pieces, may implement the BitTorrent protocol by making many small data requests (e.g., over different network connections to different machines). And a cryptographic hash contained in a descriptor in the packets, frames, pieces, parts, segments, or the like received may be utilized to determine any missing packets, frames, pieces, parts, segments, or the like of broadcasted data. The received pieces may include data, such as metadata, indicating a total number, a sequential numbering of, or the like of packets, frames, pieces, parts, segments, or the like to determine any missing packets, frames, pieces, parts, segments, or the like of broadcasted data.
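The descriptor-based check in operation 704 can be sketched as follows (a minimal illustration; SHA-256 and the descriptor fields are assumptions, not a normative format). A piece whose payload fails its hash is treated exactly like a piece that never arrived:

```python
import hashlib

def verify_piece(payload, descriptor):
    """Check a piece's payload against the hash in its descriptor."""
    return hashlib.sha256(payload).hexdigest() == descriptor["sha256"]

def missing_indices(pieces, total):
    """Indices not yet received, or received but failing verification."""
    good = {d["index"] for payload, d in pieces if verify_piece(payload, d)}
    return sorted(set(range(total)) - good)

def desc(i, payload):
    """Build an illustrative descriptor for piece i."""
    return {"index": i, "sha256": hashlib.sha256(payload).hexdigest()}

# Two of three pieces arrive; one payload was corrupted in transit.
received = [(b"piece-0", desc(0, b"piece-0")),
            (b"CORRUPT", desc(2, b"piece-2"))]
print(missing_indices(received, 3))  # -> [1, 2]
```

The `total` count would come from the emitted metadata indicating the total number of pieces, as described above.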
[00420] At operation 706 of method 700, the rebroadcast periodicity may be determined based on a rebroadcast schedule cost, urgency of the missing pieces, a number of missing pieces, cost of an IP backchannel, user preferences, and/or other information. As an example, BTS 102 may determine to reemit missing pieces once per month rather than once per week when an IP backchannel is prohibitively expensive for reemitting too many pieces. In an example, if 50% of the receiving devices (e.g., BATs 104, next generation TVs 103, etc.) receive first emission 108 and 80% of these devices receive second emission 108, the cadence of the carousel delivery may be deemed to be working properly since at some point that percentage may converge (e.g., to 97%). As such, an analysis of a number of devices that received all emitted content may affect whether to rebroadcast and how aggressive the periodicity is. And this number may be calculated using information from BATs 104 indicating an amount of the received content pieces.
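The convergence behind this cadence check can be illustrated with simple arithmetic (assuming, for the sketch, that each emission independently reaches a fixed fraction of the not-yet-complete receivers):

```python
def coverage_after(k_emissions, reach_per_emission):
    """Fraction of receivers with complete data after k emissions,
    assuming independent per-emission reach."""
    return 1 - (1 - reach_per_emission) ** k_emissions

# With ~50% reach per emission, cumulative coverage crosses 97%
# by the sixth emission, after which the cadence could be relaxed.
for k in range(1, 7):
    print(k, round(coverage_after(k, 0.5), 3))
```

BTS 102 could compare this projection against the reported reception counts from BATs 104 to decide how aggressive the periodicity should remain.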
[00421] At operation 707 of method 700, a determination may be performed as to whether to obtain the missing pieces by a rebroadcast over network 108. As an example, if BAT 104 determines that it is acceptable to wait (e.g., a whole day or another predetermined or user-configured time period) before informing the user of full reception of certain NRT data, then operation 710 may be performed; otherwise, if this BAT determines that a level of urgency of this NRT data is too high, then operation 708 may be performed. In making the
determination of operation 707, a rarest-first approach may be used to ensure a high availability. And, in some implementations, operation 708 may be performed for a certain subset of the missing pieces whereas operation 710 may be performed for the remaining subset.
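The rarest-first split between operations 708 and 710 can be sketched as follows (the heuristic of fetching a fixed count immediately is an assumption for illustration): rank missing pieces by how few peers hold each, fetch the rarest over the IP backchannel right away to keep availability high, and leave the common remainder for the carousel rebroadcast.

```python
def partition_missing(availability, fetch_now_count):
    """availability: {piece_index: number of peers holding it}.
    Returns (pieces to fetch via backchannel, pieces to await
    in the rebroadcast), rarest first."""
    rarest_first = sorted(availability, key=lambda i: availability[i])
    return (rarest_first[:fetch_now_count],   # operation 708
            rarest_first[fetch_now_count:])   # operation 710

# Piece 7 is held by 1 peer, piece 9 by 2, pieces 3 and 5 are common.
avail = {7: 1, 3: 12, 9: 2, 5: 40}
backchannel, carousel = partition_missing(avail, 2)
print(backchannel, carousel)  # -> [7, 9] [3, 5]
```
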
[00422] At operation 708 of method 700, packets, frames, pieces, parts, segments, or the like that are missing from broadcasted data may be obtained by the BAT connecting to an IP backchannel.
[00423] At operation 710 of method 700, packets, frames, pieces, parts, segments, or the like that are missing from broadcasted data may be obtained by the BAT awaiting a rebroadcast from the at least one BTS.
Progressive OTA Application Download and Runtime
[00424] There is a need to deliver application functionality via broadcast means (e.g., via emissions 108), even while a downstream application is running (e.g., at BAT 104). ATSC 3.0 may be used to support progressive over-the-air (OTA) application download and runtime data, whereby application feature(s) may be (i) built in a modular fashion depending upon the degree of involvement of the viewer and availability of data via broadcast on a rotating basis and (ii) selected by a receiver’s user (e.g., based on desired media consumption).
[00425] In some embodiments, application 246 of FIG. 2 may comprise basic or rudimentary multichannel video programming (MVP) functionality. As such, application 246 may comprise a terminal or main application, for content distribution opportunities via a set of channels and/or via application extensions or as otherwise discussed herein. But a user, for example, watching content thereat for a time (e.g., ten minutes) may be informed by an indication or notification of a newly available, selectable set of menu items. Then (e.g., once a minute, hour, or day), in some implementations, other features or menu items may arrive at a display of BAT 104 (e.g., as content is being consumed, the receiver receives the application extension for the menu in the background and determines how to handle it). In an example, these other features may not be as critical as the basic functionality; in this or another example, their respective periodicity may be determined based on their relative importance and/or on other characteristic(s). For example, BTS 102 may determine a respective periodicity for emitting, in a carousel, the base application and its modular extensions, the periodicity being, for example, greater than a periodicity for emitting, in the carousel, any other type of data.
[00426] In some embodiments, at least one module of base or main application 246 may be pre-installed at the downstream receiver before obtaining, or at least before installing, further modules. In other embodiments, the at least one module and extension modules may be obtained via emissions 108. As such, throughout the day or longer the user may be continually provided additional functionality. For example, modules of application 246 may arrive in emissions 108 at different times, intervals, and/or rotations such that a user of BAT 104 is continually provided more menu items (e.g., starting with 0 or 1, then 2, then 10, then 20, and then 50).
[00427] In some embodiments, application 246 may be a base HTML5 application with a number of modular extensions. In these or other embodiments, these applications may all be broken up into different NRT files. For example, the base application may be transmitted on the data carousel once a minute, and an extension (e.g., the weather or sports component) of the application may come across emissions 108 as its own NRT file. The extension(s) may, for example, add functionality to the application and be emitted at a longer interval (e.g., once every ten minutes or once an hour, based on importance or urgency of that module). A user of BAT 104 may thus turn on the device and within a minute get the base application. As the user is watching, one or more of the extensions may pop up on the menu or in a displayed overlay. Alternatively, running the newly downloaded extension may cause a picture on the display to shrink and be squeezed. For example, an L-shaped bar may display an alert; and, by the user clicking or selecting that alert, they may be taken into a micro webpage that has all the different artifacts about that alert (e.g., different videos, information, pictures, linear channels, flash channels, etc.). As such, some implementations of the application extensions may comprise display of weather forecasts, sports statistics, emergency alerting, or other news content that are at least temporarily co-displayed with main video content.
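A carousel schedule along these lines can be sketched as follows (the per-minute and per-ten-minutes timings come from the example above; the assignment of periods to particular extensions, and the scheduling code itself, are illustrative assumptions):

```python
SCHEDULE = {                 # NRT file -> carousel period in seconds
    "base_app": 60,          # base HTML5 application, once a minute
    "ext_weather": 600,      # weather extension, every ten minutes
    "ext_sports": 3600,      # sports extension, once an hour
}

def due_at(t_seconds):
    """NRT files to place on the carousel at elapsed time t."""
    return [name for name, period in SCHEDULE.items()
            if t_seconds % period == 0]

print(due_at(600))   # -> ['base_app', 'ext_weather']
print(due_at(3600))  # -> ['base_app', 'ext_weather', 'ext_sports']
```

A receiver tuning in at any point thus obtains the base application within a minute, with extensions accumulating over the following minutes and hours.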
[00428] In some embodiments, application 246 may comprise a plurality of applications executable at each of BATs 104. For example, one such application may be the broadcast application; and the broadcast application may immediately or, upon being loaded with complementary modules, monitor content consumption behavior. In these or other embodiments, the application extensions may be installed at BAT 104 to update (e.g., functionally extend or replace) the broadcast application.
[00429] In some embodiments, the base application and the modular extensions may each be a different file or type that informs, via metadata, the OS (or the basic or rudimentary MVP app functionality) of the BAT when and/or how to run. And the metadata may, for example, inform what type of module it is. Base or main application 246 may be
very small (e.g., on the order of a few megabytes) such that BAT 104 may run it very quickly once received over emissions 108. The extensions may be obtained later over a period of time. Alternatively, the whole broadcast application may be much larger and also sent in a carousel, but far less frequently.
[00430] In some implementations, a user may tune to a second channel to obtain at least a portion of application 246 quicker than by tuning to a first channel within which a carousel rebroadcast of this application and/or its extension modules is performed less frequently. In these or other implementations, separate modules may be provided via different channels to which a downstream BAT is operable to tune.
[00431] When first purchasing BAT 104 or TV 103, there may be nothing on it except the base runtime (e.g., which is provided by the industry and preinstalled by the manufacturer). This base runtime may run the MMT signal, but there may be no Chrome, DAI, or other special capabilities, this device operating just like a legacy TV does today. The application extensions to this base runtime may thus arrive over time such that those capabilities appear (or become available via an indication) before a user’s eyes as content is being consumed.
[00432] In some embodiments, broadcast application 246 may be stored via CDN PoP 242. And this storage may be for a much longer TTL than regular content (e.g., up to a year and thus of a different scale or order of magnitude for the main application with respect to its extensions or other content downloaded via emissions 108), for example, because a user of the receiver previously consumed such content such that there may be a predicted demand for that application tomorrow or next week, when the user switches back to this same channel.
[00433] In some embodiments, a broadcast application may be stored in cache of next generation TV 103 or BAT 104 such that, when a user tunes away from a channel, the cache may be flushed, and the app may be lost. So, when the user tunes back to this channel, the application and its modules may need to be received anew. In other embodiments, the broadcast app may be semi-permanently or permanently stored in another type of memory of the receiver.
[00434] In some embodiments, files or modules associated with application 246 may be delivered in ROUTE packages. Such broadcaster application may execute inside a worldwide web consortium (W3C)-compliant user agent accessing some of the graphical elements of the receiver to render the user interface or accessing some of the resources or information provided by the receiver (e.g., BAT 104). If this application requires access to
resources such as information known to the receiver, or if the broadcaster application requires the receiver to perform a specific action that is not defined by standard W3C user agent APIs (implemented by browsers), then the broadcaster application may send a request to a WebSocket utilizing a set of JSON-remote procedure call (RPC) messages to provide the APIs that are required by the broadcaster application to access the resources that are otherwise not reachable. In an implementation, the receiver may use its user agent to launch or terminate the broadcaster application referenced by a URL provided in broadcast signaling. In some embodiments, the broadcaster application package may be downloaded, signaled, launched, and managed.
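The shape of such a request can be sketched as follows (the method name and parameters are hypothetical and not taken from any ATSC standard; a common JSON-RPC 2.0 envelope is assumed for the WebSocket messages):

```python
import json

def make_rpc_request(method, params, request_id):
    """Build a JSON-RPC 2.0 request for the receiver's WebSocket API."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

# e.g., the broadcaster application asking the receiver for
# information that standard W3C user-agent APIs do not expose
# (method name below is purely illustrative):
msg = make_rpc_request("receiver.queryServiceInfo",
                       {"fields": ["channel", "signalQuality"]}, 1)
print(msg)
```

The receiver-side endpoint would dispatch on `method` and return a matching JSON-RPC response (or error) over the same WebSocket.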
[00435] In some embodiments, BAT 104 may be configured to obtain the modular extensions via an IP backchannel, if such Internet or other networked access is available and if an amount of time needed to wait for the next carousel window breaches a threshold.
[00436] In some embodiments, one or more of application 246 and its extensions may be associated with an expiration date (e.g., for subsequent overwriting or immediate purge).
[00437] In some embodiments, BAT 104 may obtain another application 246 that is co-branded content with different components and/or skin formatting. For example, for a broadcaster emitting at two different stations (e.g., NBC and CW), one common application may be used by the receiver for the DMA. But, in this example, if the user is tuned into the NBC service, they may see the KSNB logo and the KSNB menu items with corresponding branding and colors. Continuing with this example, if the user instead tunes into the CW station, they may see the KVCW menu items, branding, and colors.
[00438] In some embodiments, BAT 104 may obtain one or more other applications and modules, and the other applications and modules may be sourced differently from the broadcaster application. For example, supplemental sources 206 may obtain content from the Internet and thus third-party users may pay for its broadcasting. As such, BTS 102 may implement an ecosystem like the Apple iPhone or Google Android app stores where entities may be innovative and build their own apps for their own broadcasted services. As such, the terminal/ main/base application or modules thereof may comprise third party applications, which may for example implement functionality emulating an application download store or platform for digital distribution on behalf of different third parties. This service may constitute a broadcast service made available for third parties and may include periodic rebroadcasts.
[00439] FIG. 8 illustrates an example process 800 for progressive OTA terminal application download and runtime. For convenience, process 800 of FIG. 8 is described with reference to BTS 102 and BATs 104 of FIGs. 1-2. Method 800 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 800 presented below are intended to be illustrative. In some embodiments, method 800 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 800 are illustrated in FIG. 8 and described herein is not intended to be limiting. In some embodiments, method 800 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 800 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 800. The following operations may each be performed using a processor component the same as or similar to application download and runtime component 243 (shown in FIG. 2).
[00440] In the example of FIG. 8, BTS 102 may use various technical mechanisms and procedures for service signaling and IP-based delivery of ATSC 3.0 services, contents, and the like over broadcast network 108, broadband networks 106,294,296, hybrid networks, and/or the like to one or more of BATs 104 that may each be implemented as and/or with an ATSC 3.0 receiver.
[00441] In this regard, the contents provided from BTS 102 to one or more of BATs 104 may include data. The data may include software, applications, information, documents, a terminal application, modules for the terminal application (e.g., to implement additional functionality, as described herein), and the like. The data may include a collection of interrelated documents intended to run in an application environment and perform one or more functions, such as providing interactivity, targeted ad insertion, software upgrades, executable files, or the like. The documents of an application may include HTML, JavaScript, CSS, XML, multimedia files, programs, and the like. An application may be configured to access other data that are not part of the application itself.
[00442] The data provided from BTS 102 to one or more of BATs 104 may include the terminal application, modules for the terminal application, and the like.
[00443] One or more of BATs 104 may include the terminal application. The one or more BATs may include the terminal application that initially only has basic or rudimentary functionality. The terminal application or modules thereof may facilitate content consumption by the BAT and/or facilitate provisioning of information about the content being consumed, for example, as described in further detail below, or otherwise. Thereafter, BTS 102 may provide to the one or more BATs modules for the terminal application. BTS 102 may provide to the one or more BATs modules for the terminal application to add functionality, features, and the like.
[00444] The data provided from BTS 102 to one or more of BATs 104 may include a new module for implementation in the terminal application. The data may be broadcast as ROUTE/DASH-based services, MMT-based services, and/or the like.
[00445] At operation 801, BAT 104 may, for example, receive, in a carousel rebroadcast having a first periodicity, a terminal application from at least one BTS 102.
[00446] At operation 802, one or more of BATs 104 may execute the terminal application. The terminal application may be implemented in one or more components of one of BATs 104. The terminal application may be implemented with modules, as described herein. The terminal application implementing the modules may include one or more features or functionalities, as described herein.
[00447] At operation 804, one or more of BATs 104 may collect and transmit characteristics of use of BAT 104 by a user to at least one BTS 102. In some embodiments, BAT 104 running the broadcast application implementing its modular extensions may be configured to collect user-interaction data for transmission to at least one of BTS 102 and associates thereof. That is, one or more of BATs 104 may, for example, collect use data. The use data may include viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
[00448] In this regard, the use data may be utilized for selecting content, such as by filtering an advertisement (ad). And the ad filtering may be, for example, implemented to provide pre-positioning of various ads. More specifically, the ad filtering and/or the pre-positioning of various ads may include ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like. In this regard, ad reception
functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like may be based on use data. In one aspect, ad selection functionality and ad insertion functionality may be based on one or more of viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
[00449] In an implementation, BAT 104 may lack connection to an IP backchannel. As such, emissions 108 may comprise pre-positioned ads. That is, rather than a vehicle vendor like Ford Motor Company merely providing a same F150 ad for everyone everywhere, they may provide many (e.g., ten) different ads (e.g., based on a same general theme). For example, there may be a default ad for the F150 whereas another ad may be for the Mustang or GT. But all the different Ford products may be pre-positioned at the receiver. When this receiver is connected, Internet decisioning may be used, but when not so connected the decisioning may be performed at the receiver with little or no information other than previous content consumption from emissions 108.
[00450] In some embodiments, a filter may be emitted as part of OTA emissions 108. And the filter may comprise a set of rules (e.g., like a JSON file) or logic used by the ad to play a segmented ad (e.g., rather than a targeted ad). That is, based on information of the user of the receiver, a segment to which the user belongs may be identified and used to play an appropriate ad.
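A filter of this kind might take the following shape (the rule format, field names, and ad file names are hypothetical, sketched in the spirit of the JSON-file example above): the receiver matches the local viewer profile against segment rules and falls back to the default creative when no segment applies.

```python
def select_ad(rules, viewer):
    """Pick a pre-positioned ad for the viewer's segment,
    falling back to the default when no rule matches."""
    for rule in rules["segments"]:
        if all(viewer.get(k) == v for k, v in rule["match"].items()):
            return rule["ad"]
    return rules["default_ad"]

# Illustrative emitted filter: segment rules plus a default creative.
RULES = {
    "default_ad": "f150_default.mp4",
    "segments": [
        {"match": {"interest": "performance"}, "ad": "mustang_gt.mp4"},
        {"match": {"interest": "trucks"}, "ad": "f150_offroad.mp4"},
    ],
}
print(select_ad(RULES, {"interest": "performance"}))  # -> mustang_gt.mp4
print(select_ad(RULES, {"interest": "unknown"}))      # -> f150_default.mp4
```

Because both the rules and the creatives are pre-positioned via emissions 108, this decisioning works with no IP backchannel at all.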
[00451] Certain content items may be flagged for preferential transmission in particular time periods or under particular conditions. For example, content requiring a high bandwidth or low latency may be flagged for transmission at a time when network conditions permit. On this basis, in some embodiments, content may be slated to play (e.g., an hour later) via a marker in the real-time stream. For example, when content is about to be played, the marker may cause characteristics associated with the client terminal to be evaluated such that, if applicable characteristics are identified, the corresponding content is selected and played; otherwise, the viewer may be shown the default content.
[00452] In some embodiments, a set of default ads may be embedded into a live stream whereas alternative ads may be transparently provided beforehand via carousel.
[00453] In some embodiments, a linear service, via emissions 108, may comprise (i) an app-based enhancement that runs in the background and manages the insertion of targeted ads and (ii) another app-based enhancement that contains a collection of apps that provide an interactive viewing experience to enhance the audio/video program. Each app-based
enhancement may be separately signaled so that the creators of diverse apps do not need to coordinate their signaling.
[00454] One or more of BATs 104 may, for example, transmit consumption data to at least one BTS 102. This use data may be used for implementation of real-time analytics.
The real-time analytics may track revenue, profit, subscribers, growth, a success of any individual piece of content based on viewership of video content, hits for a given piece of video content, revenue data for a given video content, breakdowns by geographic region of viewers, average length of viewership, and the like.
[00455] BTS 102 may transmit to one or more of BATs 104 a new module for implementation in the terminal application, for example, via emissions 108 comprising IP multicast traffic. The one or more BATs may receive and store in memory the new module. Thereafter, the one or more BATs may determine, as illustrated by operation 806, that the BAT system has a new module for the terminal application.
[00456] If one or more of BATs 104 determine it has received a new module for the terminal application, the process may advance to operation 808. On the other hand, if the one or more BATs determine it has not received a new module for the terminal application, the process may return to operation 802 and continue to execute the terminal application.
[00457] With reference to operation 808, when one or more of BATs 104 has received one or more new modules for the terminal application, the one or more BATs may (i) install the new module in, for, and/or with the terminal application and (ii) advantageously integrate into the terminal application additional functionality based on information of the at least one other module.
[00458] In some embodiments, the one or more new modules may be compiled with the terminal application. In other embodiments, the new module may be compiled separately, with and/or for the terminal application. In some embodiments, the new module may be pre-compiled before transmission to the BAT. In other embodiments, the new module may be compiled upon reception or soon after being received at the BAT. Where the module is in executable form at installation, the installation step may comprise execution of the module. Accordingly, implementation of a particular feature may be seen as being realized by compiling or installing the relevant module or modules. The new module may be linked by a linker. The new module may utilize a JIT compiler that may perform construction on-the-fly, e.g., at run time. In some embodiments, a JIT compiler may operate upstream of BAT 104, where artifacts are combined for distribution into a packaged broadcast app distribution. In these or other embodiments, the JIT compiler may be used to perform on-the-fly construction at runtime. Other processes and approaches for implementing, compiling, updating, installing, or otherwise integrating the new module with the terminal application are contemplated as well.
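The install-and-integrate step described above can be illustrated, under the assumption of a Python-based terminal application, by dynamically loading a received module at runtime. The module name, source text, and `install_module` helper below are hypothetical.

```python
import importlib.util
import os
import sys
import tempfile

# Hypothetical module source as it might arrive via emissions 108.
MODULE_SOURCE = "def feature():\n    return 'channel-scan ready'\n"

def install_module(name, source):
    """Write received module source to disk and load it into the running app."""
    path = os.path.join(tempfile.mkdtemp(), name + ".py")
    with open(path, "w") as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module       # register so later imports resolve
    spec.loader.exec_module(module)  # comparable to the install/execute step
    return module

mod = install_module("new_feature", MODULE_SOURCE)
print(mod.feature())
```

This mirrors the loader behavior described later in paragraph [00460]: memory setup, linking, and execution from the module's entry point happen at run time rather than at build time.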
[00459] Once one or more of BATs 104 have completed implementing, compiling, updating, installing, and/or the like the new module with the terminal application, the one or more of BATs may execute the terminal application implementing the new module as illustrated by operation 810. Thereafter, process 800 may return to operation 802.
[00460] The modules of the terminal application may be implemented during a runtime or execution time of the terminal application. In this regard, when the terminal application is to be executed, a loader may perform the necessary memory setup and link the terminal application with any dynamically linked libraries and modules the terminal application may need. Thereafter, execution may begin starting from an entry point of the terminal application.
[00461] A feature provided by the terminal application implementing the modules may include channel scanning, channel list creation, signal standard type determination (ATSC 1 standard, ATSC 3.0 standard, or the like), channel logo presentation, audio track switching capabilities, subtitle display capabilities, gateway connection capabilities, gateway connection discovery capabilities, information presentation regarding current broadcast events, full-screen player capabilities, and the like.
[00462] When BATs 104 and next generation TVs 103 power on and/or upon a reset, application 246 may be downloaded. Upon an initial scan, such a device may generate a list of all available channels. The channels may each be virtual and at any suitable frequency or band. Without this application yet running, another more general application preloaded at the receiver may display information of previous tunes or content consumption. This content may be based on MMT emissions. Application 246 may be previously obtained via emissions 108, e.g., of a carousel in periodic emissions (e.g., every ten minutes). In this example, these example emissions may not comprise the other, more-general application.
[00463] A feature provided by the terminal application implementing the modules may include obtaining a list of files, with file identifications, in timestamp order.
[00464] A feature provided by the terminal application implementing the modules may include functionality to subscribe to file changes, functionality to receive notifications for file changes, implementation of file cache management, implementation of integrated testing, and/or the like.
[00465] A feature provided by the terminal application implementing the modules may include player modification, channel polling, error handling, platform optimization, Apple TV platform utilization, testing and bug fixing, stream testing, and the like.
[00466] A feature provided by the terminal application implementing the modules may include distribution of device software updates such as distribution of macOS and iOS software updates.
[00467] A feature provided by the terminal application implementing the modules may include distribution of application software updates (e.g., for Microsoft Windows or Office) for devices.
[00468] A feature provided by the terminal application implementing the modules may include distribution of streaming content to devices such as Apple TV, including Apple TV+ streaming video (VOD, live events, and the like).
[00469] A feature provided by the terminal application implementing the modules may include income determination and/or allocation from distribution of other OTT content within implementations such as Apple TV (Netflix, HBO, Showtime, etc.).
[00470] A feature provided by the terminal application implementing the modules may include distribution of emergency responder information and alerting on a more robust infrastructure.
[00471] A feature provided by the terminal application implementing the modules may include menus, graphical user interfaces, interactive menus, and the like.
[00472] A feature provided by the terminal application implementing the modules may include emergency messaging functionality.
[00473] A feature provided by the terminal application implementing the modules may include collection of use data.
[00474] A feature provided by the terminal application implementing the modules may include ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like.
[00475] In this regard, ad reception functionality, ad storage functionality, ad selection functionality, ad insertion functionality, and the like may be based on use data. In one aspect, ad selection functionality and ad insertion functionality may be based on one or more of viewer behavior, viewer identification, viewer demographics, viewer age, viewer gender, viewer location, viewer interests, viewer market segment, content viewed, time of viewing, length of time viewing, channels viewed, system interaction, features used, and the like.
[00476] A feature provided by the terminal application implementing the modules may include OTT content or functionality including OTT television, OTT messaging, OTT voice calling, and the like.
[00477] A feature provided by the terminal application implementing the modules may include video streaming platform functionality. The video streaming platform functionality may include a video hosting platform to organize video content including receiving video content, uploading video content, hosting video content, managing video content, tagging video content, recognizing tagged video content, delivering video content, and/or the like.
[00478] A feature provided by the terminal application implementing the modules may include closed captions for video content.
[00479] A feature provided by the terminal application implementing the modules may include implementation of chapter markers that may enable navigation within video content.
[00480] A feature provided by the terminal application implementing the modules may include implementation of real-time analytics. The real-time analytics may track revenue, profit, subscribers, growth, a success of any individual piece of content based on viewership of that video, hits for a given piece of content, revenue data for a given video, breakdowns by geographic region of viewers, average length of viewership, and the like.
[00481] A feature provided by the terminal application implementing the modules may include customer support, professional assistance, and the like support options.
[00482] A feature provided by the terminal application implementing the modules may include statistics for sports betting. In this regard, the modules may generate a graphical user interface that may be utilized by users to input or select various teams, matches, and the like and the terminal application implementing the statistics for sports betting may provide statistics based on the input. In this regard, the statistics for sports betting may include statistics regarding total (Over/Under) values based on the total score between two teams; the statistics for sports betting may include statistics regarding a proposition on a specific outcome of a match; the statistics for sports betting may include statistics on parlays that involve multiple bets that reward successful bettors with a greater payout only if all bets in the parlay win; and the statistics for sports betting may include statistics regarding other forms of betting.
[00483] A feature provided by the terminal application implementing the modules may include interactive modules for betting. In this regard, the interactive modules may
include betting for sports, games, and/or the like. The modules may generate a graphical user interface that may be utilized by users to input or select various teams, matches, games, and the like. The interactive modules for betting may utilize the graphical user interface for implementing funds transfers based on credit card, electronic check, certified check, money order, wire transfer, cryptocurrencies, and/or the like. For example, bettors may upload funds, make bets, play the games offered, and the like through the graphical user interface. Thereafter, bettors may cash out any winnings through the graphical user interface. In this regard, betting may include total (Over/Under) values based on the total score between two teams, a proposition on a specific outcome of a match, parlays that involve multiple bets that reward successful bettors with a greater payout only if all bets in the parlay win, and the like.
[00484] A feature provided by the terminal application implementing the modules may include interactive modules for purchasing items represented through product placement. In this regard, the modules may generate a graphical user interface that may be utilized by users to input or select products. In this regard, the modules may include electronically buying items represented through product placement through electronic commerce. The electronic commerce may utilize mobile commerce, electronic funds transfer, supply chain management, marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, automated data collection systems, and the like.
[00485] A feature provided by the terminal application implementing the modules may include interactive modules for social media use. In this regard, the modules may generate a graphical user interface that may be utilized by users for social media. More specifically, the social media may be interactive computer-mediated technologies that facilitate the creation and sharing of information, ideas, career interests, and other forms of expression via virtual communities and networks. The variety of social media services may include social media interactive Internet-based applications, user-generated content, such as text posts or comments, digital photos or videos, and data generated through all online interactions. The variety of social media services may include generation of user service-specific profiles and identities for a website, an application, and/or the like that are designed and maintained by a social media organization. The variety of social media services may include the development of online social networks by connecting a user's profile with those of other individuals or groups.
[00486] BTS 102 may utilize a carousel rebroadcast process to deliver one or more new modules and/or the terminal application to one or more BAT systems 104. In this regard, the carousel rebroadcast process may implement a data and object carousel that may be used for repeatedly delivering one or more new modules in a continuous cycle. The carousel rebroadcast process may allow the one or more new modules to be pushed from BTS 102 to the one or more BATs by transmitting the one or more new modules repeatedly in a standard format. For example, BTS 102 may periodically rebroadcast the one or more new modules to the BAT. The BAT may monitor for the one or more new modules and obtain the one or more new modules during the periodic rebroadcast of the data by BTS 102.
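The carousel delivery described above can be sketched as a repeating cycle from which a receiver, tuning in at an arbitrary point, collects the objects it needs within one full rotation. Object names below are illustrative.

```python
import itertools

# Hypothetical carousel contents cycled continuously by the BTS.
CAROUSEL = ["module_a.bin", "module_b.bin", "terminal_app.bin"]

def receive_from_carousel(wanted, start_offset, carousel=CAROUSEL):
    """Collect wanted objects from a repeating broadcast, starting mid-cycle."""
    received = []
    # itertools.cycle models the endless rebroadcast; islice models tuning in
    # at an arbitrary offset and listening for one full rotation.
    for obj in itertools.islice(itertools.cycle(carousel), start_offset,
                                start_offset + len(carousel)):
        if obj in wanted:
            received.append(obj)
    return received

# Tuning in one object into the cycle still yields every wanted object
# within a single full rotation.
print(receive_from_carousel({"module_a.bin", "terminal_app.bin"}, 1))
```

The key property, as in the paragraph above, is that delivery is push-based and requires no request from the BAT: waiting at most one rotation guarantees reception of any carouseled object.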
[00487] The terminal application may implement modular programming. In this regard, the terminal application may implement modules that separate various functionalities of the terminal application into independent and/or interchangeable modules. In one aspect, each of the modules of the terminal application may contain everything necessary to execute only one aspect of the desired functionality of the terminal application.
[00488] The modules of the terminal application may include a module interface. The module interface may express various elements that may be provided and required by the module. The elements may be defined in the interface and may be detectable by other modules of the terminal application.
[00489] The modules of the terminal application may include an implementation. The implementation may contain working code that corresponds to the elements. The elements may be declared in the interface.
[00490] The modules of the terminal application may include programming utilizing structured type programming, object-oriented type programming, and the like. The modules of the terminal application may facilitate construction of one or more of the features as described herein for the terminal application by decomposition of various features.
[00491] The modules of the terminal application may refer to high-level decomposition of the code of the terminal application into pieces having structured control flow, object-oriented programming that may use objects, assemblies, components, packages, and/or the like.
[00492] The modules of the terminal application may utilize programming that may include one or more of the following languages: Ada, Algol, BlitzMax, C#, Clojure, COBOL, D, Dart, eC, Erlang, Elixir, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, IBM i Control Language (CL), IBM RPG, Java, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal, Perl, PL/I, PureBasic, Python, Ruby, Rust, JavaScript, Visual Basic .NET, WebDNA, and/or the like.
CDN Micro-POP in the Home for OTA Data Delivery
[00493] A client receiver, such as a BAT 104, may serve as a content delivery network (CDN) point of presence (PoP) for one or more TVs 103 and/or UE 170 connected to the BAT (e.g., via network 294, 296) at a home or other facility, e.g., by collecting OTA broadcast data from a variety of channels, then storing and redelivering data as required via a variety of mechanisms. As such, there may be provided a CDN PoP for ad hoc delivery of OTA data. Broadcast data may encompass not just broadcast media, but also application extensions and public and private data casting, including such disparate information types as ad pre-positioning and emergency broadcast information. CDN PoP 242 may be established, for example, in a home receiver unit, or in a mobile device such as a phone or tablet, for example, via the use of a software defined radio receiver or transceiver chip or integrated circuit (as described with reference to FIG. 3, for example), optionally along with associated software. Within a home network, a single PoP and/or multiple PoPs may service all devices - including Internet of things (IoT) devices. Candidate integrations may include various devices such as iPhone, Macs, Apple TVs, Third-party Home Gateway devices, wireless devices, and the like.
[00494] Referring again to FIG. 2, BAT 104 may act as a PoP by hosting a data services function 240. The data services function 240 may act more generally than either the API services 244 or the applications 246, inasmuch as the data services function 240 may provide general functionality for managing the reception and distribution of data in the manner, for example, of a PoP in a CDN comprising the transmission 108 and downstream devices.
[00495] The PoP functionality may underlie many of the data services of BAT 104. For example, by coordinating the processing and storage of received transmission data, supplementing received transmission data with data gathered by retransmission by other BATs and locally available networks, managing received data queries from users, handling dedicated return channel communications with the BTS, implementing data retention policies, and supporting applications and APIs, the PoP functionality may directly or indirectly support legacy devices, OTA API services, progressive video enhancement, gleaning data packaged in video baseband padding packets, collaborative object delivery and recovery, data delivery via fragmentation, progressive OTA application download and runtime, and flash channel requests and processing.
[00496] In some embodiments, CDN PoP 242 may be context-aware (e.g., in terms of what the fragments represent). For example, the CDN PoP may recognize different kinds of content, e.g., by file type or subcategory within files (such that ancillary metadata is deemed to be of lower importance whereas a content decryption key may be deemed of substantially higher importance, for making sure the latter is received or recovered), and take action accordingly to obtain missing fragments, alert users to conditions, or otherwise act automatically on received data, or lacunae in received data, in accordance with significance or priority of the data and available options or corrective or preventative action. In this or another example, PoP 242 may not make available any file or object that is not valid (e.g., based on a checksum, another integrity checker, signature, Widevine encryption, or another criterion). As such, contents of the CDN may be preloaded, e.g. by the broadcast.
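The context-aware prioritization and validity gating described above might be sketched as follows; the priority table, fragment fields, and the choice of SHA-256 as the integrity check are illustrative assumptions (the paragraph names checksums, signatures, or Widevine as possible criteria).

```python
import hashlib

# Hypothetical priority table: a decryption key gets higher recovery priority
# than media, which outranks ancillary metadata. Lower value = fetched sooner.
PRIORITY = {"decryption_key": 0, "media": 1, "metadata": 2}

def recovery_order(missing_fragments):
    """Order missing fragments so high-importance content is fetched first."""
    return sorted(missing_fragments, key=lambda f: PRIORITY.get(f["kind"], 9))

def publish_if_valid(payload, expected_sha256):
    """Make an object available only when its integrity check passes."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

missing = [{"id": 7, "kind": "metadata"},
           {"id": 3, "kind": "decryption_key"},
           {"id": 5, "kind": "media"}]
print([f["id"] for f in recovery_order(missing)])  # key first, metadata last
```

The second function models the rule that PoP 242 does not make available any file that fails validation; an invalid object would simply remain unpublished until a valid copy is recovered (e.g., from the next carousel pass).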
[00497] CDN PoP 242 may be a potentially potent entity that can be expert in understanding users' needs. For example, this CDN PoP may be allowed to remediate data, e.g., by reactively plugging holes in data, and it may likewise be arranged to proactively attend to a user's experience. Thus, the aforementioned preventative action may imply that if the CDN PoP notes a bad trend in connection quality, it may shift functionality to pre-positioning objects that would otherwise be expected to appear in a normal rotation of broadcasted content. In an example, there may be 10 CDN PoPs consuming content and serving a region via the cached prepositioning rather than requiring 1000 copies to be emitted and consumed therein and rather than vying for scarce bandwidth resources of a broadband ISP (e.g., using 4G/5G antennas). In this example, the CDN PoPs may intelligently and automatically determine deduplications of objects requested in the region (e.g., without requiring user involvement) by being able to transparently preposition the content to the audience set via excess capacity.
[00498] The PoP may act as an intermediary for many types of data received from various sources and used by various users, APIs, and applications supported by a BAT. This may include, for example, traditional DVR functions of semi-permanent media content, and permanent storage and support of tools and data for various applications, and various data needs of users, such as storage of home computer and personal device application and OS upgrades. PoP 242 may opportunistically identify and acquire various information broadcast in rotation by BTS 102, even before any user or application of BAT 104 is aware of the need to acquire such information, e.g., for the distribution of a large periodic upgrade (e.g., annual Microsoft Office 365 update). In this way, the PoP may provide, via a substantially (or even
exclusively) unidirectional OTA broadcast data service, many of the advantages traditionally associated with terrestrial bi-directional networks.
[00499] In some embodiments, one or more CDNs 242 may perform file cache management for local UE 170 consumption and/or consumption via UI device 118 at BAT 104. In some embodiments, broadcast application 246 and other data may be stored via CDN PoP 242 with a much longer TTL than the other data. In these or other embodiments, CDN PoP may interoperate with a file cache manager to have a substantially complete or whole database (e.g., an encoded copy or snapshot of the Netflix library). For example, if plugged in and watching for several months, BAT 104 may locally emit to one or more UE 170 and/or visually display any of the movies previously obtained via emissions 108 during the several months. Such consumption may be necessary for the user, e.g., when BAT 104 and/or the UE are located or configured such that they have no other viable means of access (e.g., via the Internet or another network), or only resource-poor or otherwise limited means. The resource-poor or otherwise limited means (e.g., dial-up or 2G cellular) may support key exchange, but cannot support UHD, for example.
[00500] A receiving device (e.g., BAT 104) may possess sufficient storage, e.g., when implementing aspects of the A/331 standard, including a distribution window descriptor (DWD), to provide relevant start/end times of NRT emissions. Further, filter codes (e.g., for selective application-specific caching of resources) and an overall object expiration mechanism may be provided for management in an extended file delivery table (EFDT) FDT-Instance element. In an embodiment, a conditional access mechanism for client-side DRM (e.g., Widevine content license key acquisition lifecycle) may be provided along with license renewal from a connected license server, e.g., to ensure the integrity and protection of the content at rest such that only authorized subscribers would be able to play back the content.
As such, content provided by CDN 242 may be digitally signed via Widevine DRM or SSL/TLS DRM. Widevine’s DRM solution provides the capability to license, securely distribute, and protect playback of content at or in relation to BAT 104.
[00501] In some embodiments, CDN PoP 242 may perform file cache management using storage policies obtained OTA (e.g., with NRT files), e.g., for informing the local, home gateway of BAT 104 how (e.g., in terms of TTL) to persist each of those files. For example, when storage 236 nears its capacity, an internal storage policy may operate (e.g., as first-in-first-out (FIFO), least-commonly-used, or another algorithm) so as not to overflow its own cache. Implementations using DWD may thus provide guidance of when to persist, when to purge, and other attributes.
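The FIFO storage policy mentioned above can be sketched with a simple ordered cache that evicts the oldest file when capacity is exceeded; the file-count capacity (rather than a byte budget) and the class name are simplifying assumptions, and TTL handling is omitted.

```python
from collections import OrderedDict

class FifoFileCache:
    """Minimal FIFO eviction sketch: oldest stored file is purged first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.files = OrderedDict()  # insertion order = arrival order

    def store(self, name, data):
        if name in self.files:
            del self.files[name]    # re-store counts as a fresh arrival
        self.files[name] = data
        while len(self.files) > self.capacity:
            self.files.popitem(last=False)  # evict oldest (first in)

cache = FifoFileCache(capacity=2)
cache.store("ad1.mp4", b"...")
cache.store("ad2.mp4", b"...")
cache.store("ad3.mp4", b"...")  # exceeds capacity; ad1.mp4 is evicted
print(list(cache.files))
```

A DWD-informed variant could additionally purge any file whose distribution window has expired before falling back to FIFO order.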
[00502] In some embodiments, a single BAT 104 may implement a CDN. In other embodiments, a plurality of geographically distributed BATs 104 may implement the CDN.
In either of the embodiments, the CDN may provide high availability by being spatially proximate to end users (e.g., users of BATs 104, UE 170, or next generation TVs 103). CDN 242 may, for example, provide web objects (e.g., text, graphics, and/or scripts), downloadable objects (e.g., media files, software, and/or documents), applications (e.g., e-commerce, portals, etc.), live streaming media, on-demand streaming media, and/or social media sites. CDN 242 may be, for example, configured to receive payment from content owners to deliver their content to the end users. In some embodiments, CDN 242 may be implemented standalone. In other embodiments, CDN 242 may be at least partially hosted at a datacenter of an Internet service provider (ISP).
[00503] In some embodiments, CDN 242 may perform such content delivery services as video streaming, software downloads, web and mobile content acceleration, license management, transparent caching, and performance measurement (e.g., load balancing, switching and analytics, and cloud intelligence). CDN 242 may further perform security, such as distributed denial-of-service (DDoS) protection, a web application firewall (WAF), and WAN optimization.
[00504] In some embodiments, by being in a same building, home, or other structure of end user(s), CDN PoP 242 may be optimally positioned at the edge to serve content (e.g., over network 294). As such, CDN PoP 242 may implement a demarcation point or interface point between communicating entities. In doing so, CDN PoP 242 may implement a router, a network switch, a multiplexer, and/or other network interface equipment otherwise located in a server or datacenter. And CDN PoP 242 may implement decompression at the edge.
[00505] By being a local access point for end users, CDN PoP 242 may operate as an ISP or as equipment that enables users to connect to the Internet via a more typical ISP. CDN PoP may, for example, implement one or more unique IP addresses and a set of other, assignable IP addresses for the end users.
[00506] In some embodiments, CDN 242 may operate responsive to application calls it intercepts (e.g., from application 246 or from local UE 170). For example, a URL request may be responded to with a hit if CDN 242 stores the requested content (and an error 404 miss otherwise). As such, CDN 242 may create a facade simulating an Internet connection, which is filled mostly from live broadcasts, including carousel emissions 108, VOD, etc. Thus, contrary to known CDNs, which are filled actively responsive to URL
misses, CDN 242 may implement a home gateway that is passively pre-filled via broadcasted emissions 108. For example, BAT 104 may operate with two tuners, one that the user tunes to a station, and the other may be always tuned to another station for continual storage of that data.
[00507] In some embodiments, CDN 242 may respond with any suitable response code depending on the requested content being in cache or another scenario, such as a 204 no content, a 305 use proxy, a 400 bad request, a 404 not found, or a 5XX gateway error.
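The hit/miss behavior described in the two paragraphs above might be sketched as follows; the URLs, cache contents, and `handle_request` helper are hypothetical, and a cache hit is assumed to map to a 200 response.

```python
# Hypothetical pre-filled cache, populated passively from broadcast emissions
# rather than on URL misses as in a conventional CDN.
CACHE = {"https://example.test/app.js": b"console.log('hi')"}

def handle_request(url, cache=CACHE):
    """Return (status, body) the way a simple CDN edge might."""
    if url in cache:
        return 200, cache[url]       # hit: serve from broadcast-filled cache
    return 404, b""                  # miss: no backfill in this sketch

status, body = handle_request("https://example.test/app.js")
print(status)
status, _ = handle_request("https://example.test/missing.js")
print(status)
```

A fuller sketch could return the other codes the paragraph lists (204, 305, 400, 5XX) for scenarios such as empty objects or upstream gateway failures.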
[00508] In some embodiments, CDN PoP 242 may be filled via live and NRT broadcast (e.g., including MMT MPUs and application extensions) and facilitate data casting, ad prepositioning (e.g., by superseding or overlaying an ad received with broadcasted content by a targeted ad previously obtained by emissions 108 and stored at CDN 242), and emergency information delivery. As such, CDN 242 may minimize latencies in users loading web content and may offload traffic from content servers to seamlessly improve users’ web experience.
[00509] In some embodiments, ads may be obtained via emissions 108 in real-time or via previous such emissions. These ads may be displayed similarly to regular content via UI devices 118. Ads, though, may also be displayed in an L-bar (or another shape) of broadcast app 246, e.g., by not being part of the regular broadcast but rather by being an NRT ad. Alternatively or additionally, an alert may be displayed in the L-bar. The regular video may be shrunk down to fit the L-bar. The ad may be a video ad or a static ad.
[00510] In some embodiments, CDN PoP 242 may be filled with fragmented content via OTA but also OTT. This PoP may, for example, determine how to obtain content requested at the CDN. For example, due to the BitTorrent fragmentation, a portion may be available from a previous broadcast, but the remaining portion may be obtained via carousel reemission, a collaborative peer’s DRC, or an available IP (e.g., broadband) backchannel. As such, PoP 242 itself may determine to obtain all of these files’ fragments based on different rules for different files. For example, some files may warrant taking advantage of a different distribution pass, whereas others may wait for the next round on the carousel to potentially take a less expedited approach.
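The per-file source selection described above can be sketched as a small rule function; the `urgent` flag, the rule ordering, and the source names are illustrative assumptions.

```python
def choose_source(file_info, has_backchannel, peer_has_fragment):
    """Pick where to recover a missing fragment from, per file-specific rules."""
    if file_info["urgent"] and has_backchannel:
        return "ip_backchannel"      # most expedited path when available
    if file_info["urgent"] and peer_has_fragment:
        return "peer_drc"            # collaborative peer's return channel
    return "carousel_rebroadcast"    # default: wait for the next rotation

print(choose_source({"urgent": True}, True, False))
print(choose_source({"urgent": False}, True, True))
```

Non-urgent files take the least expedited approach even when faster paths exist, which matches the paragraph's point that some files may simply wait for the next carousel round.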
[00511] In some embodiments, BTS 102 may perform delivery and synchronization of media and non-timed data in system 100. For example, the delivery functionality may include mechanisms for the synchronization of media components delivered on the same or different transport networks, and application-layer FEC methods that enable error-free reception and consumption of media streams or discrete file objects.
[00512] In some embodiments, some FEC may be implemented at the software defined radio of BAT 104, whereas other FEC may be implemented at the application layer (e.g., in the layered coding transport (LCT), as per the A/331 standard).
[00513] FIG. 11 illustrates an example method 1100 for implementing a CDN PoP. Method 1100 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 1100 presented below are intended to be illustrative. Method 1100 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1100 are illustrated in FIG. 11 and described below is not intended to be limiting. Method 1100 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 1100 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1100. The following operations may each be performed using a system the same as or similar to CDN PoP 242 (shown in FIG. 2).
[00514] At operation 1102 of method 1100, metadata may be determined, at an orchestration unit of a BTS. The metadata may be associated with content in the broadcast. As an example, CDN PoP 242 may be pre-specified (e.g., via metadata from an orchestration unit of BTS 102) with policies on how the obtained data is to be retained, stored, and locally managed. In this or another example, the CDN PoP may transparently mirror elements, including HTML, CSS, software downloads, and media objects originally from third party servers. And this CDN PoP may be automatically chosen based on a type of requested content and a location of a user making the request.
[00515] At operation 1104 of method 1100, data may be fragmented via the BitTorrent protocol at BTS 102 (e.g., via fragmentation component 207) such that downstream BAT 104’s fragmentation component 237 is operable to attempt the fragments’ reassembly.
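By way of a non-limiting sketch, the fragmentation of operation 1104 may be illustrated as splitting a payload into indexed pieces, each carrying a digest in the style of the BitTorrent protocol. The 4-byte piece size, the `fragment` helper name, and the dictionary layout are illustrative assumptions (real BitTorrent pieces are typically 256 KiB or larger), not the actual implementation of fragmentation component 207.

```python
import hashlib

PIECE_SIZE = 4  # bytes; illustrative only - real pieces are typically 256 KiB or more

def fragment(payload: bytes, piece_size: int = PIECE_SIZE):
    """Split a payload into indexed pieces, each carrying a SHA-1 digest
    so a downstream BAT can verify and reorder them (BitTorrent-style)."""
    pieces = []
    for offset in range(0, len(payload), piece_size):
        chunk = payload[offset:offset + piece_size]
        pieces.append({
            "index": offset // piece_size,   # position within the original file
            "sha1": hashlib.sha1(chunk).hexdigest(),  # integrity check per piece
            "data": chunk,
        })
    return pieces
```

Because each piece is self-describing (index plus digest), the pieces may be broadcast in any order and still be verified and reassembled downstream.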
[00516] At operation 1106 of method 1100, the fragments may be broadcast in an IP multicast network (e.g., using the MMT protocol) at BTS 102.
[00517] At operation 1108 of method 1100, the broadcast may be temporarily preempted with a higher-priority broadcast (e.g., an alert or other content).
[00518] At operation 1110 of method 1100, it may be determined whether the CDN PoP is to supplement a subset of the content actually received at the BAT, in which case a determination as to whether to obtain the data via carousel rebroadcast network 108, DRC peer network 292, or another IP-based connection 106 may be performed at micro CDN PoP 242 based on the metadata. The supplementing may be on-demand and may be performed using data stored at the CDN PoP, as discussed above.
[00519] At operation 1112 of method 1100, the data may be reconstructed, at micro CDN PoP 242 using component 237, by reordering the fragments (e.g., to provide VOD and/or application extension updates) in NRT. As an example, a user may be notified when all the fragments are obtained, and/or a subset of the content may be forwarded to the user via UI devices 118. An original file may be fragmented into a plurality of content portions at the BTS (e.g., as described above based on a peer-to-peer file sharing protocol that is decentralized), and portions broadcast via the IP multicast traffic (or otherwise) may be reconstituted by the CDN PoP.
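The reconstruction of operation 1112 may be sketched as the inverse of the fragmentation described above: received fragments are verified against their digests, reordered by index, and any missing indices are reported so they can be supplemented out-of-band (e.g., via the CDN PoP). The `reassemble` helper and the piece dictionary layout (`index`, `sha1`, `data` keys) are illustrative assumptions.

```python
import hashlib

def reassemble(pieces, expected_count):
    """Reorder received fragments by index, verify each SHA-1 digest, and
    report any missing indices (to be supplemented, e.g., via a CDN PoP)."""
    by_index = {}
    for p in pieces:
        # discard corrupted pieces - their digest will not match
        if hashlib.sha1(p["data"]).hexdigest() == p["sha1"]:
            by_index[p["index"]] = p["data"]
    missing = [i for i in range(expected_count) if i not in by_index]
    if missing:
        return None, missing  # caller fetches the missing pieces out-of-band
    return b"".join(by_index[i] for i in range(expected_count)), []
```

A user notification, as described above, could then be issued once `missing` is empty.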
[00520] At operation 1114 of method 1100, data casting, ad prepositioning, and/or emergency information delivery may be performed from BAT 104.
Flash Channels Using Dynamic ALP Management
[00521] There is a need for a broadcaster to immediately multiplex-in a certain type of data when there is little or no available capacity in the broadcast emissions. ATSC 3.0 transmissions 108 may thus encompass dynamically generated flash channels.
[00522] A flash channel is a channel that may be generated and automatically added to the broadcast stream based on events occurring in real-time, such as the unexpected prolonged coverage of a live event (e.g., a football game running into overtime) or the unexpected need to cover breaking news or warnings to the public (e.g., a tornado warning). ATSC 3.0 transmissions may also encompass dynamic reallocation of transmission bandwidth among channels (or components of services) based on the manner in which the offered services are consumed. For example, a client’s viewing mode may involve heavy consumption (e.g., for a prolonged period of past time) such that the dynamic reallocation only mildly affects quality of the emission (e.g., as observed by the user), or the viewing mode may involve light consumption (e.g., for a brief period of past time) such that the dynamic reallocation is more aggressive. Dynamic reallocation of bandwidth - both for
allowing the flashing in (adding) of a new channel (service) into the broadcast stream and for reprioritizing service components by reassigning each a different transmission resiliency level - may be carried out by automatically reconfiguring the PLP, including reconfiguring the modulation modes, coding, or bitrate used to encode each service component. This dynamic reallocation of resources may be performed by the transmitter based on metrics that may be aggregated in real-time from viewers’ receivers or from third parties; these metrics may then facilitate the transmitter’s real-time decision-making on bandwidth reallocation.
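The metric-driven bandwidth reallocation described above may be sketched as follows. The proportional-share policy, the per-service floor, and the `reallocate` helper name are illustrative assumptions for exposition, not the transmitter’s actual algorithm.

```python
def reallocate(total_kbps, viewers, floor_kbps=500):
    """Split a fixed emission budget among services in proportion to
    real-time viewer counts, never dropping a service below a floor."""
    services = list(viewers)
    remaining = total_kbps - floor_kbps * len(services)
    assert remaining >= 0, "budget too small for the per-service floors"
    total_viewers = sum(viewers.values()) or 1  # avoid division by zero
    # every service gets the floor; the remainder is shared by viewership
    return {s: floor_kbps + remaining * viewers[s] // total_viewers
            for s in services}
```

Under this sketch, a lightly consumed service is reallocated more aggressively (toward the floor), while a heavily consumed service retains most of its bitrate, mirroring the viewing-mode distinction above.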
[00523] In general, content, including linear audio/video (AV) services for example, may be distributed by one or more transmitters and in more than one RF channel (e.g., broadcast stream). Multiple transmitters, emitting broadcast streams, may be operating from one facility or may each be operating from multiple facilities, in accordance with the ATSC 3.0 standard. Each broadcast stream may comprise one or more PLPs, each having its own modulation mode and coding parameters. A PLP may contain one or more service components (e.g., video, audio, and NRT data); however, a service may be delivered in more than one PLP.
[00524] In an aspect, a transmitter may use information collected in real-time from receivers on a grid to manage the allocation of bandwidth among services provided in its broadcast stream. Such information, for example, may be a viewing mode used by a user of a receiver - e.g., what service (e.g., station) or service component is currently being viewed and on what platform. Accordingly, a real-time data distribution of users’ viewing modes may affect the transmitter’s bandwidth allocation when encoding each component of service in its broadcast stream. For example, a video component’s encoding parameters - e.g., frame-rate, frame-resolution, or bit-depth - or parameters of FEC techniques may be determined based on the current viewing modes of receivers on the grid.
[00525] Real-time data, such as data received from users of receivers on the grid (e.g., viewing modes), may be used by a transmitter to reconfigure the modulation modes that are applied to the transmission of each service in the PLP. For example, ATSC 3.0 allows for dynamic adjustment of modulation characteristics that may control the reach of the transmitted data. A transmitter, based on real-time data received from receivers on the grid or a third party, may decide to use a modulation method that is set to perform better with respect to receivers at a certain locality (e.g., in a car outdoors versus in a home indoors) or with respect to a type of receiver (e.g., mobile receivers versus stationary receivers). Thus, a transmitter may use real-time data to decide whether to give preference to certain devices by using a modulation method or mode that delivers a more reliable data transmission to those
devices. For example, modulation and/or coding of a PLP may be determined based on a manner in which to-be-replaced content is previously consumed such that preference is given to first devices by delivering a more reliable data emission than that delivered to second devices.
[00526] In an aspect, viewing mode data may be available to the transmitter in real time by means of a dedicated return channel (DRC) as defined by the ATSC 3.0 A/323 standard. Alternatively, any other communication link may be used by a receiver to inform the transmitter about its current viewing mode. In an aspect, the viewing mode data of receivers on the grid may be sent to an associated server, where the data may be aggregated and analyzed; recommendations (or controls) may then be sent forward to the transmitter for the latter to base its resource allocation on as it encodes the services in its broadcast stream.
[00527] In an aspect, a transmitter may be transmitting a broadcast stream containing several services (e.g., stations) each providing one or more linear AV services - where each service may comprise multiple video components and one or more corresponding audio components. Based on analysis of viewing mode data, the transmitter (or associated server) may find that a large number of users of receivers on the grid consume a certain service using platforms with limited display capabilities - e.g., a low-resolution display or a standard dynamic range (SDR) display. In such a case, the transmitter may encode that service at low frame-rate, low resolution, or low bit-depth, thereby reserving encoding resources (bits) for the encoding of another service in the broadcast stream (or another component of the same service, such as audio).
[00528] In a further aspect, the viewing mode data may show that a certain service is not being viewed at all by the majority of the users of the receivers on the grid, in which case the transmitter may encode that service at a reduced bitrate, by reducing the frame-rate, the resolution, or the bit-depth, for example. In an aspect, the viewing mode data may reveal that most users do not view a provided service on a 3DTV; therefore, the transmitter may decide to transmit in the broadcast stream only one component of a stereoscopic or multi-view content. A similar approach may be used with respect to a service providing, in addition to a video component, multiple audio components each of which is associated with a different language. In such a case, when the viewing mode data reveal that most users of the receivers on the grid do not use a certain audio component, the transmitter of that service may use fewer resources (bits) in encoding it. Alternatively, the transmitter may decide to remove that audio component altogether from the service, at least until the viewing mode data indicate a change in demand for that audio component in the service.
[00529] In some embodiments, feed sources 202 may cause resolution reduction, e.g., for content emitted via network 108. In these or other embodiments, feed sources 202 may perform corrective behavior and/or increase the codec efficacies (e.g., by increasing the GOP length, by increasing the codec complexity by switching to a more computationally intensive codec in-flight, and/or by another suitable approach). In implementations involving HEVC/H.265 as the codec core (e.g., having suitable encode CPU and decode CPU utilization factors), a license cost or intrinsic tax may be substantial. In other implementations, Alliance for Open Media (AOMedia) Video 1 (AV1) may be used (e.g., which may be more compute heavy in comparative analysis but open source). That is, an AV1 video codec computation cost may, for example, be less than a cost of HEVC licensing. There may thus be model 203-2 used to predict optimal input characteristics (e.g., resolution, an external variable that has a cost basis tied to it, another characteristic, etc.) for video encoding. In these or other implementations, the VP9 video coding format may be used (e.g., which may be substantially more efficient at decoding than the aforementioned and/or other formats). In these or other implementations, the MPEG-5 video encoder may be used.
[00530] In some embodiments, feed sources 202 may determine a set (e.g., matrix) of options for optimal ALP management (e.g., a more efficient change to an upstream configuration of different portions of the encoding and the ecosystem), to meet one or more objectives of adding in a new resource. That is, an example implementation may include selective discard of data units that may be below a visual perception quality metric. But some embodiments may, for example, optimize for content-aware use cases (e.g., by knowing that an extra couple hundred kilobits out of the reference emission flow are needed), via a deterministic, flow-management determination to perform selective discard (e.g., of a compressed data essence). In this or another example, a visual analysis engine at BTS 102 may make a determination that a channel falls above an SNR and/or satisfies another reception quality criterion, so there is room for this channel to have its modulation and/or coding pared down to a lower quality for squeezing in the new channel. As such, a number of different methodologies are contemplated (e.g., as learned by trained model 203-2) in supporting the extra channel, which is significant because previously in ATSC 1.0 the only degrees of freedom were adjustment of the tower’s height and output power level.
[00531] In some embodiments, dynamic ALP management may include configuring parameters of the encoder and leveraging, from a yield perspective, non-monetizable content such as emergency alerting or other breaking news coverage. An arbitrage model may have provisos for data emission that does not have a directly monetizable unit of value but may be
represented in a monetizable unit of value by defining it as a goodwill emission, which effectively balances its weight as to its value as content.
[00532] An example prediction of AI model 203-2 may cause a dropping of data files that are otherwise to be transmitted (e.g., based on respective importance). Another example prediction of model 203-2 may cause a dropping of one or more ALPs (e.g., another video asset) or channels. In these or other examples, emissions 108 may comprise a plurality of subchannels, e.g., with one including an emergency event; the machine learning of feed sources 202 may, for example, determine not to make an adjustment for the new channel. Herein contemplated are thus several different ways, and combinations of ways, of responding to a request (e.g., from feed sources 202 or supplemental sources 206) for creating room for an extra channel (e.g., besides only compressing and squeezing). In an example implementation targeting mobile receptivity, certain content transmission (e.g., an audio portion) may be placed on a much more robust PLP so that, even though a receiver may only get one out of ten video frames (e.g., due to the MODCOD not being robust enough or the device being in motion), reception of the audio frames may be uninterrupted and without loss.
The matrix of parameters/options may thus be, for example, adjusted for durability reasons.
[00533] The flash channel may be, for example, added for a sporting event’s overtime, e.g., when the 11 o’clock news is otherwise supposed to start. In these implementations, the game may be continued, and the 11 o’clock news need not be preempted if emitted as the flash channel, e.g., without disturbing either audience. Such delivery of a plurality of services may be indefinitely continued. In other implementations, the flash channel may comprise emergency alerting or the like, e.g., while concurrently emitting regular programming (e.g., without having to destroy a media essence in full-blown or fully-filled coverage of an event). Even in example breaking news with wall-to-wall coverage, no ads may be displayed; in another example, ads may be added as part of emissions 108 (e.g., by compressing or adjusting configurations of existing emissions). And the flash channel may be contractually (monetarily or via another type of value) provided as a service (e.g., for a local community or geographic region). This service may be of higher quality (e.g., by being near the event) for, for example, enhancing goodwill and brand awareness.
[00534] Multiple transmitters, possibly located at multiple facilities, may transmit multiple RF channels (or broadcast streams). Receivers on a grid within coverage of multiple broadcast streams may send viewing mode data to a transmitter including viewing modes with respect to the services transmitted by another transmitter. Alternatively, such data that include the viewing modes with respect to services transmitted by multiple broadcast streams
(or transmitters) may be aggregated and may be analyzed by a server in communication with one or more of the transmitters. The server may then manage and optimize bandwidth allocation for all the broadcast streams emitted from the different transmitters, possibly located at different broadcast facilities. Such a server may utilize the temporal availability of bandwidth in a certain transmitter and direct the distribution of packets of NRT data through that transmitter’s broadcast stream.
[00535] In a situation wherein a delay in time exists between the generation of viewing modes by the various receivers and the server’s delivery of bandwidth optimization recommendations (or controls) to a certain transmitter, the server may operate to predict a future optimal bandwidth allocation based on statistics of the viewing modes - computed, for example, based on viewing mode data that had been received within a preceding window of time. Thus, if the server aggregates viewing mode data from the receivers within a time segment of t0-t1, it may predict the optimal bandwidth allocation for a certain transmitter at a time t2 (> t1) based on statistics of viewing mode data formed within a window preceding t0, for example.
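The windowed prediction described above may be sketched as a sliding window of per-service viewer samples whose average stands in for the predicted future share of each service. The window length, class name, and averaging policy are illustrative assumptions, not the server’s actual statistical model.

```python
from collections import deque

class ViewingSharePredictor:
    """Keep a sliding window of per-service viewer counts and predict the
    future share of each service as the window average - a simple stand-in
    for the viewing-mode statistics described above."""
    def __init__(self, window=3):
        self.samples = deque(maxlen=window)  # older samples fall out

    def observe(self, counts):
        """Record one sample, e.g., {"service": viewer_count, ...}."""
        self.samples.append(counts)

    def predict(self):
        """Return each service's predicted share of total viewership."""
        if not self.samples:
            return {}
        services = self.samples[-1].keys()
        totals = {s: sum(sample.get(s, 0) for sample in self.samples)
                  for s in services}
        grand = sum(totals.values()) or 1  # avoid division by zero
        return {s: totals[s] / grand for s in services}
```

A server aggregating data over t0-t1 could feed such predicted shares into its recommendation for time t2.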
[00536] Aspects disclosed in this disclosure may also be utilized to optimize bandwidth utilization across multiple broadcast streams that cover high-scale live events (e.g., the Super Bowl). Delivering content covering a live event may involve large numbers of content providers each distributing services of multiple live feeds - such as video feeds from multiple cameras, audio feeds of commentaries, and event-dependent computer-generated feeds. A server, based on viewing mode data received from receivers of viewers of the live event, may, for example, prioritize the services (the various live feeds) and may recommend optimal resource allocation to the transmitters of the broadcast streams that encode these services. A server may also use other sources of information to prioritize the services. For example, other sources may include manual or automatic means indicating the priority of a certain feed at a certain time based on analysis of its content or based on other context derived from the live event’s activities. The server may identify temporal segments of a service with low priority and may utilize those segments to distribute NRT data.
[00537] In some embodiments, feed sources component 202 may increase data availability (e.g., at the facility through network 108). For example, this component may algorithmically implement the A/331 standard to dynamically bring up and emit a linear AV service. In this or another example, the bring-up and/or emission of this service may be expedited via cloud computing. As such, the dynamic allocation of a linear AV service may be facilitated between an on-premise facility and the virtualized cloud environment.
Components of system 100 may thus facilitate the automatic coordination and collaboration for an emission on network 108.
[00538] In some embodiments, feed sources component 202 may implement wide-scale data distribution, e.g., via a linear AV service and/or other MMT services. That is, an automation between service provisioning and data emission/delivery across network 108 may be performed in a single facility and/or in the cloud.
[00539] In some embodiments, feed sources component 202, via a statistical multiplex, may distribute bitrates between channels including the new, flash channel. For example, this component may determine bandwidth availability, distributing one data emission at one facility. In another example, herein contemplated is a distribution of a data emission at all of a plurality of facilities. That is, the data of the new channel may fit for distribution into a remaining, available bandwidth of each of the plurality of different facilities together serving a larger region or nation. Such adaptation to particularly available capacities may be implemented with the distributive statistical multiplex to create emissions that may be managed and then multiplexed back in according to the PLP characteristics or the additional bandwidth configuration and utilization throughout from a national distribution feed. For example, feedback may be provided indicating resource availability from the plurality of locations, to dynamically manage delivery of a content emission that may fit in through a plurality of locations.
[00540] Some embodiments may optimize resources in a linear, AV service of one facility, e.g., as a set of allocations that may not match another facility’s set of allocations. In an NRT service, data may be provided into opportunistic windows that are available. In a linear AV service, that same degree of temporal elasticity may not exist; a quantized projection of what that data availability looks like from the network perspective may be provided as a whole, being, for example, part of the distributive statistical multiplex. As such, a leading indicative signal may be provided to a network encoder, which may then produce derivative outputs that may be relevant for each one of those quantized units of channel capacity and availability, e.g., in real-time.
[00541] Some example embodiments of a statistical multiplex may include communication link sharing and adaptations to instantaneous traffic demands of the data streams that are transferred over each channel. That is, a communication channel may be, for example, divided into a set of variable-bitrate, digital channels or data streams. Example statistical multiplexing may provide a link utilization improvement or gain.
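The link utilization gain mentioned above may be quantified, under one common convention, as the capacity needed to carry each variable-bitrate channel at its own peak divided by the capacity needed for the peak of the aggregate stream. The sketch below is a minimal illustration with made-up bitrate traces; the function name and trace format are assumptions.

```python
def statmux_gain(per_channel_bitrates):
    """Statistical multiplexing gain: sum of per-channel peak bitrates
    divided by the peak of the aggregate (summed) bitrate over time.
    Each element of per_channel_bitrates is one channel's bitrate trace."""
    sum_of_peaks = sum(max(trace) for trace in per_channel_bitrates)
    # sum the channels sample-by-sample, then take the aggregate's peak
    peak_of_sum = max(sum(samples) for samples in zip(*per_channel_bitrates))
    return sum_of_peaks / peak_of_sum
```

When channels peak at different times (as is typical for independent video streams), the aggregate peak is below the sum of peaks, and the gain exceeds 1.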
[00542] In some embodiments, a statistical multiplex may operate on a group-of-pictures by group-of-pictures basis, e.g., with an active feedback mechanism from the broadcast scheduler. In these or other embodiments, the statistical multiplexer may be used to optimize or reallocate channel utilization in a fixed model. As such, this statistical multiplexer may be operable to fill the PLPs when everything is stable, e.g., getting from 95 to 99 percent utilization of a pipe. And applied machine learning of the core may complement its functionality to manage what to decrease, what to adjust, or any other control parameter at BTS 102. Dynamic allocation may thus be performed by a statistical multiplexer of a respective transmitter.
[00543] In some embodiments, feed sources 202 may generate a new channel, including new service components. For example, BTS 102 may increase (e.g., incrementally or abruptly) available capacity in a window of time for extra content in the new channel, e.g., as the extra content replaces, during the window, primary content contemporaneously and/or previously emitted. In this or another example, an L-shaped bar may display an emergency alert; and, by the user clicking or selecting that emergency alert, they may be taken into a micro webpage that has all the different artifacts about that new (flash) channel. Example temporary mechanisms for the flash channel, in one or more video or audio essence emissions, may include: decreasing the average bitrate of the video encoding essence (e.g., reduce bitrate); decreasing the spatial or temporal resolution of the video encoding essence; removal of HDR metadata of the video encoding essence; increasing the GOP length for the video essence; application of a hard-cap (e.g., not to exceed N kilobits/sec., N being any number) to the video output profile, resulting in extra encoder utilization to meet this variable Q target; reducing the bit-depth (audio) of the encoding essence; and/or removal of tertiary audio tracks.
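One way to reason about the menu of temporary mechanisms listed above is a greedy plan that applies mechanisms in order until enough capacity is reclaimed for the flash channel. The savings figures, the ordering, and the `plan_capacity` helper are illustrative placeholders, not measured values or the actual selection logic of feed sources 202.

```python
def plan_capacity(needed_kbps, mechanisms):
    """Greedily apply the listed degradation mechanisms (bitrate cuts,
    GOP-length increase, tertiary-audio removal, ...) until enough
    capacity is reclaimed for the flash channel.
    mechanisms: list of (name, estimated_saving_kbps) pairs, in the
    order the broadcaster prefers to apply them."""
    applied, reclaimed = [], 0
    for name, saving in mechanisms:
        if reclaimed >= needed_kbps:
            break  # enough room already reclaimed
        applied.append(name)
        reclaimed += saving
    return applied, reclaimed
```

Ordering the list by viewer impact (least disruptive first) would implement the preference, expressed elsewhere in this disclosure, for incremental adjustments before aggressive ones.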
[00544] In some embodiments, a carousel schedule may comprise a set of files to be emitted via emissions 108. And the emissions may, for example, have a bit rate that is reduced in the carousel. For example, the bit rate may be slowed to support inclusion of a new channel. In another example, the bit rate may be reduced substantially more, e.g., with 50 percent or more of the items on that carousel not being determined to be of high importance, thus resulting in a further reduction of their bit rate to dynamically make room for that flash channel.
[00545] Feed sources 202 of BTS 102 may, for example, determine how to fit a new channel such that flash content handler 241 of BAT 104 (or another downstream component such as TV 103) is operable to obtain it. For example, when the carousel includes a set of
data files, only one data file may need to be dropped. In another example, five SD channels may be emitted but one of them is being watched substantially less than the others. That is, BTS 102 may have trajectory information about how BATs 104 are consuming content of emissions 108 to dynamically adjust MODCOD (e.g., for improving or decreasing penetrative reach of the respective emission) and/or other allocation configuration (e.g., preempt a live transmission, pause an NRT transmission, or adjust another parameter) changes. In this or another example, feed sources 202 may drop data channel(s) (or an ALP) to fit new channel(s), e.g., which may comprise a new HD channel. In any of these or another example, feed sources 202 may, for example, determine that a set of content items is of substantially more importance as data payload than one or more other content items. The bit rate for the other content may thus be reduced and another set of content of less importance may even be dropped altogether. For example, one or a couple of users may have their content stream interrupted while tornado evacuation information becomes available to everybody.
[00546] In some embodiments, feed sources 202 may subject the content of emissions 108 to one or more grading criteria. When an additional (e.g., fixed) bandwidth is required to facilitate flash content distribution with existing emissions, the modulation and/or encoding may be, for example, adjusted (e.g., to provide a more even share of the resources).
[00547] In some embodiments, feed sources 202 may make an adjustment to increase an amount of available capacity only temporarily, e.g., until the flash, additional channel takes over as the primary channel. As such, certain content may temporarily be degraded when increasing the available capacity incrementally for a window of time, for example, reverting back to a base configuration thereafter.
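The temporary degradation and subsequent reversion to a base configuration described above may be modeled as a scoped change: the adjusted parameters are applied for the duration of the flash window and the prior values are restored afterward. The configuration keys and the context-manager form are hypothetical illustrations.

```python
from contextlib import contextmanager

@contextmanager
def temporary_degradation(config, changes):
    """Apply capacity-freeing changes (e.g., a lower bitrate cap) for the
    duration of a flash channel, then revert to the base configuration."""
    saved = {k: config[k] for k in changes}  # remember the base values
    config.update(changes)                   # degrade for the flash window
    try:
        yield config
    finally:
        config.update(saved)                 # revert unconditionally
```

The `finally` clause guarantees reversion even if emission of the flash channel fails mid-window.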
[00548] One or more aspects of the MODCOD may be (e.g., incrementally) increased in some instances, and in others decreased. For example, few bits may be sent to lots of receivers, or lots of bits may be sent to a few receivers, e.g., for the flash or existing programming. In another example, less bandwidth may be, for example, made available as a whole to the flash channel and/or the existing emissions, to provide a substantially higher degree of reach to the potential universe of receivers. The tradeoff-oriented adjustment(s) to preemptible data may be incremental, e.g., to provide enough sharing of bandwidth in an encoder configuration for the additional channel (feed). In an implementation where an encoding profile cannot or will not be adjusted, the flash channel may be emitted with less robust modulation; in another implementation, no change may be performed in the modulation, but a relatively impactful FEC as a marginal change may open up enough
capacity for that additional channel. In other, more aggressive implementations (e.g., for a wider or widest reach/distribution of important content), a profile may be configured causing some other ALP transport(s) to be shut down while the flash channel is sent with a much more robust MODCOD (e.g., having a lower net bandwidth by lowering the resolution, spectral efficiency, and/or bit-rate). In an example, everything may be dropped except for the broadcast app that facilitates display of the flash channel. In this or another example, one or more types of NRT files on a data carousel may be kept emitting over network 108 while supporting emission of the flash channel.
[00549] Adjustments to the modulation and/or coding may be based on real-time viewership information. For example, a machine learning model may be used to learn how viewers are consuming services, such as the modes of the receivers (e.g., BATs 104, TVs 103, UE 170, etc.) and/or their reception quality (e.g., packet loss, jitter, transmission delay, etc.). And this information may be developed at a server that forwards findings to BTS 102 (e.g., through DRC network 292 or other means such as a VAST channel). For example, live linear transport distribution via the MMT protocol may include a series of measurement messages that may be signaled via network 108 to a set of receiving devices. As such, a receiver or a plurality of receivers may be predicted by model 203-2 to be in a marginal reception area or some other scenario that is causing a degree of packet loss, e.g., to make a determination whether additional FEC would be beneficial and/or whether other parameters in the RF transmission may be beneficially adjusted (e.g., reducing the modulation from 256 QAM to 16 QAM to match available transmission capacity). In some embodiments, model 203-2 may perform micro or incremental tests to determine whether adjustments (e.g., increasing robustness of the FEC or adjusting bit-depth) have a positive, neutral, or negative impact upon a downstream device’s ability to receive through network 108. For example, feed sources 202 may perform such real-time reach management by sending a message to the universe of receiving devices for them to respond back with telemetry metrics (e.g., packet loss).
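The micro or incremental tests described above may be sketched as a single A/B step: measure receiver-reported packet loss, apply one candidate adjustment (e.g., stronger FEC), measure again, and keep the change only if loss improves beyond a small threshold. The callback structure, the epsilon value, and the keep/revert labels are illustrative assumptions.

```python
def micro_test(apply_adjustment, revert_adjustment, measure_loss,
               epsilon=0.005):
    """Incremental test of one RF adjustment (e.g., increasing FEC
    robustness): compare packet loss reported by receivers before and
    after, keeping the change only if loss improves by more than epsilon."""
    before = measure_loss()       # telemetry from the receiver population
    apply_adjustment()            # candidate change to the emission
    after = measure_loss()        # telemetry after the change
    if before - after > epsilon:  # meaningful improvement observed
        return "keep"
    revert_adjustment()           # neutral or negative impact: roll back
    return "revert"
```

Chaining such tests over a menu of candidate adjustments would approximate the positive/neutral/negative classification described above.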
[00550] In some embodiments, feed sources 202 may have access to models 203-2 implementing AI for the new-channel-handling determinations. For example, a prediction may be made that results in dropping the TV show Grey’s Anatomy from HD to SD. The prediction may be based on a learned pattern wherein dropping Grey’s Anatomy from HD to SD loses more viewers and more ad revenue than if this channel were dropped completely.
[00551] Artificial neural networks (ANNs) are models used in machine learning and may include statistical learning algorithms conceived from biological neural networks
(particularly of the brain in the central nervous system of an animal) in machine learning and cognitive science. ANNs may refer generally to models that have artificial neurons (nodes) forming a network through synaptic interconnections (weights), and that acquire problem-solving capability as the strengths of the interconnections are adjusted, e.g., at least throughout training.
[00552] An ANN may be configured to determine a classification based on input data (e.g., from feed sources 202 or another component associated with BTS 102). An ANN is a network or circuit of artificial neurons or nodes. Such artificial networks may be used for predictive modeling.
[00553] The prediction models may be and/or include one or more neural networks (e.g., deep neural networks, artificial neural networks, or other neural networks), other machine learning models, or other prediction models. As an example, the neural networks referred to variously herein may be based on a large collection of neural units (or artificial neurons). Neural networks may loosely mimic the manner in which a biological brain works (e.g., via large clusters of biological neurons connected by axons). Each neural unit of a neural network may be connected with many other neural units of the neural network. Such connections may be enforcing or inhibitory in their effect on the activation state of connected neural units. These neural network systems may be self-learning and trained, rather than explicitly programmed, and may perform significantly better in certain areas of problem solving, as compared to traditional computer programs. In some embodiments, neural networks may include multiple layers (e.g., where a signal path traverses from input layers to output layers). In some embodiments, back propagation techniques may be utilized to train the neural networks, where forward stimulation is used to reset weights on the front neural units. In some embodiments, stimulation and inhibition for neural networks may be more free-flowing, with connections interacting in a more chaotic and complex fashion.
[00554] Disclosed implementations of artificial neural networks may apply a weight and transform the input data by applying a function, this transformation being a neural layer. The function may be linear or, more preferably, a nonlinear activation function, such as a logistic sigmoid, Tanh, or rectified linear unit (ReLU) activation function. Intermediate outputs of one layer may be used as the input into a next layer. The neural network, through repeated transformations, learns multiple layers that may be combined into a final layer that makes predictions. This learning (e.g., training) may be performed by varying weights or parameters to minimize the difference between the predictions and expected values. In some embodiments, information may be fed forward from one layer to the next. In these or other
embodiments, the neural network may have memory or feedback loops that form, e.g., a neural network. Some embodiments may cause parameters to be adjusted, e.g., via back- propagation.
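The layer-by-layer transformation described above can be sketched minimally as follows. This is an illustrative, stdlib-only Python sketch with hypothetical weights and inputs, not part of the disclosed implementation: each layer applies weights and biases and a nonlinear activation (ReLU here), and the intermediate output of one layer feeds the next.

```python
# Minimal feed-forward sketch: weighted sum + bias per unit, then ReLU.
def relu(x):
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # Each output unit is a weighted sum of the inputs plus a bias.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Hypothetical 2-input -> 2-hidden -> 1-output network.
w1, b1 = [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0]
w2, b2 = [[1.0, -1.0]], [0.0]

hidden = relu(layer([1.0, 2.0], w1, b1))   # intermediate output
output = layer(hidden, w2, b2)             # final layer makes the prediction
```

Training would then adjust `w1`, `b1`, `w2`, `b2` (e.g., via back-propagation) to minimize the difference between `output` and an expected value.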
[00555] An ANN is characterized by features of its model, the features including an activation function, a loss or cost function, a learning algorithm, an optimization algorithm, and so forth. The structure of an ANN may be determined by a number of factors, including the number of hidden layers, the number of hidden nodes included in each hidden layer, input feature vectors, target feature vectors, and so forth. Hyperparameters may include various parameters which need to be initially set for learning, much like the initial values of model parameters. The model parameters may include various parameters sought to be determined through learning. That is, the hyperparameters are set before learning, while the model parameters are set through learning to specify the architecture of the ANN.
[00556] Learning rate and accuracy of an ANN rely not only on the structure and learning optimization algorithms of the ANN but also on the hyperparameters thereof. Therefore, in order to obtain a good learning model, it is important not only to choose a proper structure and learning algorithms for the ANN, but also to choose proper hyperparameters.
[00557] The hyperparameters may include initial values of weights and biases between nodes, mini-batch size, iteration number, learning rate, and so forth. Furthermore, the model parameters may include a weight between nodes, a bias between nodes, and so forth.
[00558] In general, the ANN is first trained by experimentally setting hyperparameters to various values, and based on the results of training, the hyperparameters can be set to optimal values that provide a stable learning rate and accuracy.
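The experimental hyperparameter setting described in [00558] can be sketched as a simple grid search. This is an illustrative sketch with hypothetical values; `train_and_validate` is a stand-in for a real training run and is not from the disclosure.

```python
# Try several hyperparameter settings experimentally and keep the one
# that yields the best validation accuracy.
import itertools

def train_and_validate(learning_rate, batch_size):
    # Stand-in for a real training run; returns a mock validation accuracy
    # that peaks near learning_rate=0.01 and small batch sizes.
    return 1.0 - abs(learning_rate - 0.01) - batch_size / 10000.0

grid = itertools.product([0.1, 0.01, 0.001], [32, 64, 128])
best = max(grid, key=lambda hp: train_and_validate(*hp))
# best holds the (learning_rate, batch_size) pair with the highest mock accuracy
```

In practice each candidate setting would run a full training pass, and the setting providing a stable learning rate and accuracy would be retained.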
[00559] In some embodiments, the learning of models 203-2 may be of reinforcement, supervised, and/or unsupervised types. For example, there may be a model for certain predictions that is learned with one of these types but another model for other predictions that is learned with another of these types.
[00560] Reinforcement learning is a technique in the field of artificial intelligence where a learning agent interacts with an environment and receives observations characterizing a current state of the environment. Namely, a deep reinforcement learning network is trained in a deep learning process to improve its intelligence for effectively making predictions. The training of a deep learning network may be referred to as a deep learning method or process. The deep learning network may be a neural network, Q-learning network, dueling network, or any other applicable network.
[00561] Reinforcement learning may be based on a theory that given the condition under which a reinforcement learning agent can determine what action to choose at each time instance, the agent can find an optimal path to a solution solely based on experience of its interaction with the environment. For example, reinforcement learning may be performed mainly through a Markov decision process (MDP). MDP may comprise four stages: first, an agent is given a condition containing information required for performing a next action; second, how the agent behaves in the condition is defined; third, which actions the agent should choose to get rewards and which actions to choose to get penalties are defined; and fourth, the agent iterates until a future reward is maximized, thereby deriving an optimal policy.
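The MDP loop described above can be illustrated with a tabular Q-learning update (a common, simple instance of the family that includes the Q-learning networks mentioned in [00560]). The toy environment, states, and rewards below are hypothetical, not from the disclosure: the agent's value estimate for each (state, action) pair is nudged toward reward plus discounted best future value, iterating until an optimal policy emerges.

```python
# Tabular Q-learning sketch over a 3-state toy environment.
ALPHA, GAMMA = 0.5, 0.9                       # learning rate, discount factor
q = {(s, a): 0.0 for s in range(3) for a in range(2)}

def step(state, action):
    # Toy dynamics: action 1 advances toward state 2; only (2, 1) is rewarded.
    next_state = min(state + action, 2)
    reward = 1.0 if (state, action) == (2, 1) else 0.0
    return next_state, reward

for _ in range(10):                           # repeated sweeps stand in for episodes
    for s in range(3):
        for a in range(2):
            nxt, reward = step(s, a)
            best_next = max(q[(nxt, b)] for b in range(2))
            # Nudge toward reward + discounted best future value.
            q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
```

After iterating, the table favors the actions leading to the reward, from which the optimal policy can be read off greedily.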
[00562] Deep reinforcement learning (DRL) techniques capture the complexities of the RF environment in a model-free manner and learn about it from direct observation. DRL can be deployed in different ways, such as, for example, via a centralized controller, hierarchically, or in a fully distributed manner. There are many DRL algorithms and examples of their applications to various environments. In some embodiments, deep learning techniques may be used to solve complicated decision-making problems in wireless network optimization.
For example, deep learning networks may be trained to adjust one or more parameters of a wireless network, or a plurality of cells in the wireless network so as to achieve optimization of the wireless network with respect to an optimization goal.
[00563] Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It may infer a function from labeled training data comprising a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. And the algorithm may correctly determine the class labels for unseen instances.
[00564] Unsupervised learning is a type of machine learning that looks for previously undetected patterns in a dataset with no pre-existing labels. In contrast to supervised learning, which usually makes use of human-labeled data, unsupervised learning does not; instead, it may employ principal component analysis (e.g., to preprocess and reduce the dimensionality of high-dimensional datasets while preserving the original structure and relationships inherent to the original dataset) and cluster analysis (e.g., which identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data).
Semi-supervised learning is also contemplated, which makes use of both supervised and unsupervised techniques.
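The cluster analysis mentioned above can be illustrated with a minimal two-centroid clustering over unlabeled one-dimensional data. This is a stdlib-only sketch with toy data; the function name and values are hypothetical, not from the disclosure.

```python
# Minimal 2-means clustering: find commonalities in unlabeled data.
def two_means(points, iters=10):
    a, b = min(points), max(points)          # initial centroids
    for _ in range(iters):
        # Assign each point to its nearest centroid, then re-center.
        ca = [p for p in points if abs(p - a) <= abs(p - b)]
        cb = [p for p in points if abs(p - a) > abs(p - b)]
        a, b = sum(ca) / len(ca), sum(cb) / len(cb)
    return a, b

# Unlabeled measurements that happen to form two groups.
data = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
centroids = two_means(data)
```

No labels are supplied; the two groups emerge from the data's own structure, and a new data point can then be reacted to based on which commonality (centroid) it is closest to.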
[00565] Feed sources 202 of FIG. 2 may prepare one or more prediction models to generate predictions. Models 203-2 may analyze predictions made against a reference set of data called the validation set. In some use cases, the reference outputs may be provided as input to the prediction models, which the prediction models may utilize to determine whether their predictions are accurate, to determine the level of accuracy or completeness with respect to the validation set data, or to make other determinations. Such determinations may be utilized by the prediction models to improve the accuracy or completeness of their predictions. In another use case, accuracy or completeness indications with respect to the prediction models’ predictions may be provided to the prediction model, which, in turn, may utilize the accuracy or completeness indications to improve the accuracy or completeness of its predictions with respect to input data. For example, a labeled training dataset may enable model improvement. That is, the training model may use a validation set of data to iterate over model parameters until the point where it arrives at a final set of parameters/weights to use in the model.
[00566] In some embodiments, feed sources 202 may implement an algorithm for building and training one or more deep neural networks. A used model may follow this algorithm and already be trained on data. In some embodiments, feed sources 202 may train a deep learning model on training data 203-1 providing even more accuracy, after successful tests with these or other algorithms are performed and after the model is provided a large enough dataset.
[00567] A model implementing a neural network may be trained using training data obtained by feed sources 202 from training data 203-1 of a storage/database. The training data may include many attributes of a plurality of content moving towards and through downstream receivers. For example, this training data obtained from prediction database 203 of FIG. 2 may comprise hundreds, thousands, or even many millions of pieces of information (e.g., continually learning new patterns, second by second at the microsecond level) describing content consumption. The dataset may be split between training, validation, and test sets in any suitable fashion. For example, some embodiments may use about 60% or 80% of the information for training or validation, and the other about 40% or 20% may be used for validation or testing. In another example, feed sources 202 may randomly split the labelled information, the exact ratio of training versus test data varying throughout. When a
satisfactory model is found, feed sources 202 may train it on 95% of the training data and validate it further on the remaining 5%.
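The dataset splitting described in [00567] can be sketched as follows. This is an illustrative stdlib-only sketch using a 60/20/20 split and stand-in records; the proportions are one of the example ratios from the text, and the data is hypothetical.

```python
# Shuffle the labeled dataset, then hold out portions for validation
# and testing; train on the remainder.
import random

samples = list(range(100))               # stand-in for labeled records
random.seed(42)                          # reproducible shuffle for the sketch
random.shuffle(samples)

train = samples[:60]                     # ~60% for training
validation = samples[60:80]              # ~20% kept hidden to tune the model
test = samples[80:]                      # ~20% entirely new to the model
```

As the text notes, the exact ratio may vary (e.g., 80/20, or a final 95/5 train/validate pass once a satisfactory model is found).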
[00568] The validation set may be a subset of the training data, which is kept hidden from the model to test accuracy of the model. The test set may be a dataset, which is new to the model to test accuracy of the model. The training dataset used to train prediction models 203-2 may leverage an SQL server and a Pivotal Greenplum database for data storage and extraction purposes.
[00569] In some embodiments, feed sources 202 may be configured to obtain training data from any suitable source, via local electronic storage, external resources, network 70, and/or UI devices. The connection to network 70 may be wireless or wired. In some embodiments, feed sources 202 may enable one or more prediction models to be trained. The training of the neural networks may be performed via several iterations. For each training iteration, a classification prediction (e.g., output of a layer) of the neural network(s) may be determined and compared to the corresponding, known classification. For example, information known to describe content consumption may be input, during the training or validation, into the neural network to determine whether the prediction model may dynamically predict its presence. As such, the neural network is configured to receive at least a portion of the training data as an input feature space. Once trained, the model(s) may be stored in database/storage 203-2 of prediction database 203, as shown in FIG. 2, and then used to classify samples of content consumption information based on observed attributes.
[00570] Training data augmentation may be performed to improve the training process, e.g., by giving the model a greater diversity of consumption information. And the data augmentation may help teach the network desired invariance and robustness properties, e.g., when only a few training samples are available.
[00571] In some embodiments, trained model 203-2 may be used to perform yield management (e.g., to maximize value of each bit emitted via emissions 108). For example, a BBP may have a par value (e.g., a fraction of a penny) based on its reach (e.g., number of potential content consumers) and frequency (e.g., number of times that the customer may be exposed to the content). This value or model, though, in ATSC 3.0 may be substantially changed (e.g., including arbitrage), being, for example, unpredictable. For example, some implementations may include broadcast market exchange (BMX) policy running and cognitive spectrum resource management for cost basis management and control of data pieces. As such, herein contemplated is revenue optimization and yield extraction, e.g., as an options contract or options fulfillment exercise, each bit having a value in the future. At some
point, that option for that bit will expire and it may be up to the broadcast scheduler to make a best determination for optimum execution of those options in place, effectively setting up (e.g., not just a channel sharing ecosystem and arrangement) an ATSC 3.0 resource sharing and real-time vending ecosystem to determine what are the best distribution characteristics, what are the best reach characteristics, and what is the temporal priority (e.g., whether needing to come out now or waiting 30 seconds). These metrics may come into play in a cognitive ATSC 3.0 scheduler ecosystem, with the objective of yield management for the business. The flash channel itself may have intrinsic value, e.g., when alerting to a looming crisis or disaster (or airing a presidential debate) to serve the public trust, so the trained model may nevertheless help manage the PLPs by properly performing a tradeoff based on a goodwill attribute in emitting a non-revenue-generating alert.
[00572] In some embodiments, trained model 203-2 may be used to perform dynamic ad insertions (e.g., not by preempting a pre-purchased, upfront ad position in an ad break), e.g., to provide audience segmentation for a traditional media buyer in a digital marketplace. As such, ad position inventory buyers may be provided an opportunity to carve up a same inventory ad break position for different demographics. That is, different people may be given different ads at a same time, e.g., based on a viewer’s demographic so that the media buyer is aligned with the position that they purchased, without selling out their inventory from underneath them for digital media. For example, some implementations may correlate an initial linear insertion with a respective ad-ID and then determine a plurality of derivative ad-IDs for subsequent filter-code matching, or for profile or persona matching. A record may, for example, reflect a plurality of derivative insertions that may be allowed for preemption in a linear essence. And feed sources 202 may, for example, take metadata (e.g., between ATSC 1 emissions) and supply it for this ad decisioning process, applying those persona profiles, behavior, etc., and then preempting what is only in that universe of available creatives to match that demographic for that audience. In sum, a correlation of that linear insertion may be made to that additional plurality of potentially preemptible ad creatives that are compatible with it for digital distribution.
[00573] In some embodiments, EIDR may be used for broadcast ad insertions, and some of those linear insertions may be marked for replacement, e.g., with a segmentation marker that would include an ad ID. Ad ID has traditionally been a digital attribute that allows for utilization of creatives across multiple different ad networks and exchanges. In these or other embodiments, feed sources 202 may glue that ad ID that comes in through traditional linear insertion with what an opportunistic digital ecosystem and experience would be. For
example, there may be a set of ancillary creatives compatible with the linear ad that are from this same advertiser or media buyer but that are more refined to the specific demographic that is consuming this content. So rather than a highest deal or purchase for this ad break winning the time slot across the whole network, the ancillary creatives may be used and replaced if there is demographic targeting that warrants a more personalized or more relevant audience impression.
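The preemption decision described above can be sketched as a simple lookup: a linear insertion's ad-ID is pre-correlated with a set of compatible derivative creatives, and the linear creative is preempted only when one of them matches the viewer's demographic. All field names, ad-IDs, and demographic codes below are hypothetical, not from the disclosure.

```python
# Map each linear insertion's ad-ID to its compatible derivative creatives.
derivatives = {
    "AD-LINEAR-1": [
        {"ad_id": "AD-D-18-34", "demo": "18-34"},
        {"ad_id": "AD-D-35-54", "demo": "35-54"},
    ],
}

def select_creative(linear_ad_id, viewer_demo):
    # Only creatives pre-correlated with this linear insertion are eligible.
    for creative in derivatives.get(linear_ad_id, []):
        if creative["demo"] == viewer_demo:
            return creative["ad_id"]       # personalized preemption
    return linear_ad_id                    # fall back to the purchased spot
```

The fallback keeps the media buyer aligned with the position they purchased: when no demographic-targeted creative warrants a more relevant impression, the original linear creative airs.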
[00574] Feed sources 202 may, for example, form part of an AI core that, for example, controls a scheduler (e.g., of FIG. 4A), packagers, and encoders. For example, model 203-2 of the core may predict an optimal degradation (e.g., reduction in the resolution or quality) of one of the video services to create more bandwidth for other data and thus to effectively optimize the scarce capacity of emissions 108 for different services. And that may be based on the encoders and packagers, e.g., before the scheduler puts it back together.
Then, the core may, for example, inform the scheduler of a learned configuration set-up for the PLPs and ALPs for the different services. In another example, model 203-2 of the core may, rather than (or in addition to) adjusting resolution or quality, implement other mechanisms for creating additional capacity for the flash channel, including creating a base state and then pausing, discarding, or offloading (e.g., to an IP backchannel transmission) an existing OTA data distribution. Once the flash channel transient is complete, the base state may be resumed.
[00575] In some embodiments, models 203-2 may be trained for predicting a value of different types of services whether it be monetary or some other kind of value such that a better determination as to how to balance resources is made.
[00576] In some embodiments, flash content from feed sources 202 may be configured for delivery via a plurality of PLPs (e.g., one having high penetration into buildings and parking garages at 576p, one with video optimized for mobile receivers traveling at 40 miles per hour, another for stationary receivers or TVs at 720p, audio on a more robust PLP, a scalable rendition for fixed devices, and/or another configuration based on similar gradation). And some implementations may support devices that have multiple tuners by determining whether to split excess channel capacity across different RF frequency transmissions. For example, one may be at 587 MHz, and one may be at 593 MHz, but they may not by themselves have enough space to be able to facilitate three PLP grading models (e.g., high-durable audio, mobile optimized video, and scalable renditions for fixed receivers). A channel-bonded PLP may thus be used, which is a split between those two channels for any essence delivery. For example, the audio may be on a more robust
channel that is on 587 MHz, and the video may be split between the 587 and 593 MHz channels in the video configuration. This may be a function of not just what resources are available in one single RF emission block, but what resources would be available across a whole market transmission capability in the ATSC 3.0 network space. In an example of the channel bonding, a resulting, synthesized PLP may be created from 2 channels for flash data delivery. In an example of channel diversity, a more robust VHF band may be used for audio transmission, while UHF is used for higher capacity video transmission.
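The channel-bonding example above can be sketched as a simple allocation: audio is pinned to the more robust channel, and video is split across whatever remains on both channels when neither alone can carry it. The capacities and rates below (in Mbps) are hypothetical, not from the disclosure.

```python
# Hypothetical per-channel capacities and essence rates, in Mbps.
capacity = {"587MHz": 8.0, "593MHz": 6.0}
audio_rate, video_rate = 1.0, 12.0

# Audio pinned to the more robust 587 MHz channel.
plan = {"587MHz": {"audio": audio_rate}, "593MHz": {}}
remaining_587 = capacity["587MHz"] - audio_rate

# Video split between the two channels: fill 587 MHz first, spill to 593 MHz.
video_on_587 = min(video_rate, remaining_587)
video_on_593 = video_rate - video_on_587
plan["587MHz"]["video"] = video_on_587
plan["593MHz"]["video"] = video_on_593
```

The synthesized, channel-bonded PLP thus draws on resources across the whole market transmission capability rather than a single RF emission block.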
[00577] FIG. 12 illustrates an example method 1200 for instantly adding a new channel (e.g., using dynamic ALP management). Method 1200 may be performed with a computer system comprising one or more computer processors and/or other components. The processors are configured by machine readable instructions to execute computer program components. The operations of method 1200 presented below are intended to be illustrative. Method 1200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1200 are illustrated in FIG. 12 and described below is not intended to be limiting. Method 1200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The processing devices may include one or more devices executing some or all of the operations of method 1200 in response to instructions stored electronically on an electronic storage medium. The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 1200. The following operations may each be performed using a component the same as or similar to feed sources 202 (shown in FIG. 2).
[00578] At operation 1202 of method 1200, available capacity information may be obtained, from each of a plurality of differently-located transmitters (e.g., BTSs 102).
[00579] At operation 1204 of method 1200, a same-sized emission may be determined, for each to-be-added channel, e.g., based on the available capacity information of the transmitters. In an example, MODCOD may be adjusted and/or other, planned traffic may be preempted (e.g., eliminated from the mix). The same-sized emission may be determined, for example, at a virtualized environment of the overall network, to be less than the available capacity information of each transmitter.
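Operation 1204's determination can be sketched as picking one emission size bounded by the most constrained transmitter, so the same-sized emission fits within every transmitter's reported available capacity. The capacities, headroom factor, and BTS names below are hypothetical, not from the disclosure.

```python
# Determine a common emission size that is less than the available
# capacity reported by EACH differently-located transmitter.
def same_sized_emission(available, headroom=0.9):
    # Bounded by the most constrained transmitter, with a safety margin.
    return headroom * min(available.values())

# Hypothetical available capacities (Mbps) reported by each BTS.
capacities = {"BTS-A": 4.0, "BTS-B": 2.5, "BTS-C": 3.2}
size = same_sized_emission(capacities)
```

Where the bound is too small for the to-be-added channel, the text's other levers apply: adjusting MODCOD or preempting planned traffic to raise the reported capacities before re-determining the size.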
[00580] At operation 1206 of method 1200, a bandwidth portion may be dynamically allocated, to regionally emit in one or more PLPs a set of linear, AV content dynamically generated based on the size. As an example, the dynamic allocation may be based on a number of BATs 104 intended to receive the emission (e.g., a threshold number of BATs being affected by the flash channel). In this or another example, the dynamic allocation may be based on metrics aggregated in real-time from viewers’ receivers and/or from third parties. In some embodiments, flash content handler 241 may obtain flash information that is emitted from feed sources 202 via broadcast network 108. In some embodiments, bandwidth adjustment (e.g., via changing encoding) may be performed to fit the flash information. In these or other embodiments, a canceling of other content stream(s) within a same PLP may be performed. In other embodiments, the flash content may otherwise be obtained at BAT 104 using an IP backchannel.
[00581] At operation 1208 of method 1200, available capacity at a transmitter may be incrementally increased, for a portion of the emission, as the linear, AV content replaces emission of existing services. The duration to generate and emit the at least one portion may satisfy a criterion. The criterion may be defined in terms of a number of bits, a number of packets, a time duration, or the like. In this or another example, a flash channel may be created within 1 second. In some embodiments, feedback may be generated for certain decision making, e.g., as to how the flash content will be emitted using a certain bandwidth or space made available responsive to the decision.
[00582] At operation 1210 of method 1200, at least one of: (i) adjusting resolution, (ii) adjusting bit rate, and (iii) removing an existing channel may be performed, via a prediction made by a machine learning model (e.g., model 203-2).
[00583] At operation 1212 of method 1200, coordination and collaboration may be performed between the transmitters and ISP(s) for an overall network (e.g., as discussed above), to deliver the existing services and the new linear, AV content.
[00584] At operation 1214 of method 1200, data emissions may be prioritized based on analysis of the content and/or a context derived from a live event, to replace low-priority data with the linear, AV content. As an example, a set of primary audio data may be subsequently emitted using fewer resources than a set of secondary audio data based on the consumption manner indicating less consumption of the primary emission, wherein each of the sets is associated with a different language.
[00585] Some embodiments may perform a method for dynamic allocation of bandwidth of a broadcast stream. This method may comprise: reallocating, among service
components transmitted in one or more PLPs of the broadcast stream, bandwidth by: analyzing one or more metrics, including a viewing mode obtained from each of a plurality of receivers; and determining, based on the one or more analyzed metrics, the reallocation of the bandwidth among the service components of currently delivered services in the broadcast stream such that a new service component is added. This reallocation may further be performed by: obtaining, from each of a plurality of differently-located transmitters, available capacity information; and replacing, based on the information, at least one of the service components with more content of the new service component.
Claims
1. A method, comprising: determining, based on a plurality of capabilities at least one of which is different from at least one input requirement, a set of mechanisms and corresponding processes configured to translate emissions received based on a substantially static plurality of input requirements that include the at least one requirement; obtaining, from user equipment (UE), a request for the translated emissions, the request including information about the capabilities; and responsive to the request, generating, by a broadcast access terminal (BAT), the set of mechanisms and corresponding processes just in time (JIT).
2. The method of claim 1, wherein the capabilities comprise (i) transport attributes of a network connecting the BAT and the UE and/or (ii) characteristics of the UE.
3. The method of claim 1, wherein the determination is further based on a licensing contract associated with a component of the mechanisms and corresponding processes.
4. The method of claim 1 or 2, wherein the determination is further based on computation power necessary for performing the translation.
5. The method of any of claims 1 to 4, wherein the translation comprises at least one of a transport translation, a packaging translation, and an encoding translation.
6. The method of claim 5, wherein the translation is performed in a manner that diverges from a standard or protocol, and wherein the set of mechanisms and corresponding processes adapts to the divergence.
7. The method of any of claims 1 to 6, wherein the substantially static plurality of input requirements is periodically revised.
8. The method of any of claims 1 to 7, wherein the emissions comprise Internet protocol (IP) multicast traffic based on advanced television systems committee (ATSC) version 3 emissions.
9. The method of claim 8, wherein the UE implements hypertext transfer protocol (HTTP) live streaming (HLS), and wherein the JIT generation comprises determining a package label suitable for at least one of (i) moving picture experts group (MPEG) media transport (MMT) protocol to HLS translation, (ii) reliable internet streaming transport (RIST) protocol to HLS translation, (iii) secure reliable transport (SRT) protocol to HLS translation, and (iv) real-time object delivery over unidirectional transport (ROUTE) / dynamic adaptive streaming over HTTP (DASH) to HLS translation.
10. The method of claim 1, further comprising: the translating, by the JIT-generated set of mechanisms and corresponding processes, of an MPEG transport stream (MPEG-TS) to HLS, wherein the emissions are ATSC version 1 emissions.
11. The method of any of claims 1 to 10, further comprising: determining, based on the request, whether to perform JIT generation; and performing the JIT generation based on the determination that the request indicates that the UE does not have a compatible or matching decoder set for the emissions.
12. The method of any of claims 1 to 10, further comprising: selecting, from among a master manifest received in the emissions, playback characteristics for the UE based on the information of the request.
13. The method of any of claims 1 to 10, further comprising: generating a manifest, the manifest not having been received in the emissions.
14. An apparatus, comprising a processor, a memory, communication circuitry, and instructions stored in the memory which, when executed, cause the processor to: determine, based on a plurality of capabilities at least one of which is different from at least one input requirement, a set of mechanisms and corresponding processes configured to translate emissions received based on a substantially static plurality of input requirements that include the at least one requirement;
obtain, from UE, a request for the translated emissions, the request including information about the capabilities; and responsive to the request, generate the set of mechanisms and corresponding processes just in time (JIT).
15. The apparatus of claim 14, wherein the capabilities comprise (i) transport attributes of a network connecting the BAT and the UE and/or (ii) characteristics of the UE.
16. The apparatus of claim 14 or 15, wherein the translation comprises two or more of a transport translation, a packaging translation, and an encoding translation.
17. The apparatus of any of claims 14 to 16, wherein the emissions comprise IP multicast traffic based on ATSC version 3 emissions.
18. The apparatus of any of claims 14 to 17, wherein the UE implements HLS, and wherein the JIT generation comprises determining a package label suitable for at least one of (i) MMT protocol to HLS translation, (ii) RIST protocol to HLS translation, (iii) SRT protocol to HLS translation, and (iv) ROUTE/DASH to HLS translation.
19. The apparatus of any of claims 14 to 18, wherein the processor is further caused to: determine, based on the request, whether to perform JIT generation; and perform the JIT generation based on the determination that the request indicates that the UE does not have a compatible or matching decoder set for the emissions.
20. The apparatus of any of claims 14 to 19, wherein the processor is further caused to: select, from among a master manifest received in the emissions, playback characteristics for the UE based on the information of the request.
21. An apparatus supporting over-the-air (OTA) application programming interface (API) services, the apparatus comprising a processor, a memory, communication
circuitry, and instructions stored in the memory which, when executed, cause the processor to: receive, from a broadcast access terminal (BAT), OTA broadcast data, the OTA broadcast data being from a broadcast television station (BTS); and provide, via the communication circuitry to one or more local devices, the API services, which comprise (i) an OTA capabilities query API, (ii) a broadcast content directory API, (iii) a BAT tuner API, and (iv) a media-content delivery API.
22. The apparatus of claim 21, wherein an application running on one of the one or more local devices coordinates data reception via a set of the APIs.
23. The apparatus of claim 22, wherein the reception is (i) controlled by a gateway or by the application and (ii) based on the apparatus being bound to a region.
24. The apparatus of any of claims 21 to 23, wherein the received OTA broadcast data, which comprises an emergency alert, is discarded based on the apparatus being irrelevantly outside of a region.
25. The apparatus of any of claims 21 to 24, wherein the BTS informs the apparatus of a duration during which to store a set of non-real-time (NRT) emissions.
26. The apparatus of claim 25, wherein a contractual obligation for the NRT emissions is determined based on the storage duration.
27. The apparatus of any of claims 21 to 26, wherein the apparatus comprises a gateway for the one or more local devices, and wherein the apparatus and the one or more local devices are located in a same building or home.
28. The apparatus of any of claims 21 to 27, wherein the processor is further caused to: resolve an accessibility conflict between a plurality of tuners and a plurality of channels requested by a plurality of devices from the apparatus, a number of the plurality of devices being greater than a number of the plurality of tuners.
29. The apparatus of any of claims 21 to 28, wherein an application running on one of the one or more local devices reconstructs data missing from the OTA broadcast data using data obtained from an Internet protocol (IP) backchannel.
30. The apparatus of any of claims 21 to 28, wherein the apparatus reconstructs data missing from the OTA broadcast data using an IP backchannel such that the one or more local devices do not detect the missing data as being missing.
31. The apparatus of any of claims 21 to 30, wherein the broadcast content directory API obtains a list of channels available for live viewing, wherein the BAT tuner API adjusts a tuner operation of the BAT, wherein the media-content delivery API receives video data and corresponding metadata, and wherein the OTA capabilities query API determines the OTA capabilities of the BAT.
32. The apparatus of any of claims 21 to 31, wherein each of the one or more local devices interfaces with the APIs via a wireless fidelity (Wi-Fi) connection.
33. The apparatus of claim 21, wherein a content item is obtained as part of the OTA broadcast data in real-time or from a storage of a content delivery network (CDN), when the obtained content item was previously broadcast.
34. The apparatus of claim 33, wherein another content item is obtained and then displayed overlaying or replacing the obtained content item, when the other content item is better suited to a demographic of a user of at least one of the one or more local devices.
35. A method for providing OTA API services, the method comprising: receiving, from a BAT, OTA broadcast data, the OTA broadcast data being from a
BTS; and providing, by the BAT to one or more local devices, the API services, which comprise (i) a broadcast content directory API, (ii) a BAT tuner API, (iii) a media-content delivery API, and (iv) an OTA capabilities query API.
36. The method of claim 35, wherein the received OTA broadcast data, which comprises an emergency alert, is discarded based on the BAT being outside of a region to which the emergency alert is relevant.
37. The method of claim 35 or 36, wherein the BTS informs the BAT of a duration during which to store a set of NRT emissions.
38. The method of claim 37, wherein a payment for the NRT emissions is determined based on the storage duration.
39. The method of any of claims 35 to 38, wherein the BAT comprises a gateway for the one or more local devices, and wherein the BAT and the one or more local devices are located in a same building or home.
40. The method of any of claims 35 to 39, wherein an application running on one of the one or more local devices reconstructs data missing from the OTA broadcast data using data obtained from an IP backchannel.
41. A method for providing progressive video enhancement, the method comprising: determining an attribute for each of a base layer and one or more enhancement layers to be combined with the base layer such that a stream quality metric satisfies a criterion, the one or more enhancement layers being determined such that the combination is needed for displaying a higher quality stream, wherein each of the enhancement layers is determined based on changes from the base layer.
42. The method of claim 41, wherein one of the enhancement layers comprises changes from another of the enhancement layers that is associated with a lower quality stream.
43. The method of claim 41 or 42, wherein the attributes comprise an I-Frame, an emission frequency of which is determined differently for each of the base layer and the one or more enhancement layers.
44. The method of any of claims 41 to 43, wherein the layers are obtained from over-the-air (OTA) emissions broadcast over a multicast network, and wherein at least one of the enhancement layers is obtained via unicast over an over-the-top (OTT) network.
45. The method of any of claims 41 to 44, wherein a group of pictures (GOP) size of one of the enhancement layers is larger than a GOP size of the base layer.
46. The method of any of claims 41 to 45, wherein the enhancement layers are enhancements of the base layer both temporally and spatially, the spatial enhancement being based on resolution, and the temporal enhancement being based on frames per second (FPS).
47. The method of claim 43, wherein an interval of video represents a GOP that is to be independently decoded.
48. The method of claim 47, wherein the decoding is based on the I-Frame.
49. The method of claim 46, wherein the satisfaction is further based on at least one of high-dynamic-range (HDR) achievement from a standard dynamic range (SDR), a wider color gamut, and GOP size increase.
50. The method of any of claims 41 to 49, wherein the satisfaction is based on a difference mean opinion score (DMOS) satisfying one or more criteria.
51. The method of claim 45, wherein each of the GOPs is determined based on a user-selected resolution to be used at a downstream user device.
52. The method of claim 41, further comprising: incorporating elements of another stream such that a first stream is enhanced by incorporating information in a coding tree unit (CTU) of the other stream.
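As a non-limiting sketch of the layer selection in claims 41 and 50, a receiver might greedily stack enhancement layers on the base layer until a quality criterion is met; the dictionary shape and quality values are illustrative assumptions:

```python
def select_layers(base, enhancements, quality_needed):
    """Greedy sketch of claim 41: combine enhancement layers with the
    base layer until the stream quality metric satisfies the criterion.
    Each enhancement layer carries only changes from the layer below it
    (claims 41-42), so layers are stacked in order of rising quality."""
    chosen, quality = [base], base["quality"]
    for layer in sorted(enhancements, key=lambda l: l["quality"]):
        if quality >= quality_needed:
            break  # criterion satisfied; no further layers needed
        chosen.append(layer)
        quality = layer["quality"]
    return chosen, quality
```

The returned list models the combination that a decoder would present; a DMOS-style score (claim 50) would replace the scalar `quality` in practice.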
53. An apparatus, comprising a processor, a memory, communication circuitry, and instructions stored in the memory which, when executed, cause the processor to: receive, from a broadcast access terminal (BAT), over-the-air (OTA) broadcast data, the OTA broadcast data being from a broadcast television station (BTS), the OTA broadcast data comprising multiple encodings for a single media asset, the multiple encodings for the single media asset comprising a first encoding and a second encoding, the first encoding being a base viewing quality encoding, and the second encoding being additive data for the construction of a higher quality viewing stream in combination with the first encoding; and create the higher quality viewing stream by combining the first encoding and the second encoding.
54. The apparatus of claim 53, wherein the processor is further caused to: assess a viewing quality need of a viewer; and assess a transmission quality of the first encoding and the second encoding.
55. The apparatus of claim 53 or 54, wherein the creation is performed when the higher quality viewing stream meets the viewing quality need of the viewer, and when the transmission quality of the first encoding and the second encoding meet a quality requirement.
56. The apparatus of claim 55, wherein the second encoding is dependent on the first encoding for a presentation of a video quality stream higher than a quality stream of the first encoding alone.
57. A broadcast television encoder arranged to provide multiple encodings for a single media asset, the multiple encodings for the single media asset comprising a first encoding and a second encoding, the first encoding being a base viewing quality encoding, and the second encoding being additive data for the construction of a higher quality viewing stream in combination with the first encoding.
58. The encoder of claim 57, wherein each of the first and second encodings comprises an I-Frame having a different emission frequency.
59. The encoder of claim 57 or 58, wherein the single media asset is obtained from OTA emissions broadcast over a multicast network.
60. The encoder of claim 57, wherein a GOP size of the second encoding is larger than a GOP size of the first encoding.
61. A device for repurposing padding in baseband packets by dynamically injecting opportunistic data at a studio-to-transmitter link tunneling protocol (STLTP) feed, the device comprising: non-transitory memory; and a processor coupled to the memory storing instructions that, when executed, cause the processor to: obtain a set of non-real-time (NRT) data; obtain a plurality of baseband packets from the STLTP feed; determine an amount of excess capacity within each of the baseband packets; incrementally extract, for each of the baseband packets, portions of the NRT data, each of the portions having a size determined based on the respectively determined amount; and multiplex the extracted portions into the STLTP feed.
62. The device of claim 61, wherein the multiplexed portions replace the padding.
63. The device of claim 61 or 62, wherein the baseband packets are obtained after STLTP formatting and error correction code (ECC) encoding, and wherein the multiplexing is performed before ECC decoding and STLTP demultiplexing of the STLTP feed.
64. The device of any of claims 61 to 63, wherein the processor is further caused to: decode each of the baseband packets to determine injection locations, wherein the extracted portions of the NRT data are multiplexed into the STLTP feed at the determined locations.
65. The device of any of claims 61 to 64, wherein the excess capacity is determined by identifying capacity that has been flagged as padding in one or more of the baseband packets, the identification being based on at least one of (i) a previous baseband packet, (ii) a next baseband packet header, and (iii) a pointer value attribute of a current baseband packet header.
66. The device of any of claims 61 to 65, wherein the excess capacity comprises a trailing or tail-end portion of the baseband packet.
67. The device of any of claims 61 to 66, wherein downstream user equipment becomes aware of the null data replacement by emitting, from a broadcast television station (BTS), a service location table (SLT) that identifies the NRT data in the emission.
68. The device of any of claims 61 to 67, wherein the baseband packets are obtained by decoding inner and outer real-time transport protocol (RTP) headers.
69. The device of any of claims 61 to 68, wherein each of the baseband packets comprises at most 8191 bytes (B) of payload.
70. The device of any of claims 61 to 69, wherein the processor is further caused to: obtain (i) a baseband packet header pointer, which indicates a start of the padding to be replaced, and (ii) an extension field, which indicates a trailing portion of a baseband packet that is to be similarly replaced, the replacements being performed using the NRT data portions.
71. The device of claim 70, wherein the processor is further caused to: determine, from among the indications, where to insert each of the NRT data portions.
72. The device of any of claims 61 to 71, wherein the processor is further caused to: obtain a packet header indicating that an entire payload of a packet, which comprises the packet header, is useable for replacing the padding.
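The incremental extraction of claims 61 and 62 can be sketched as follows, modeling each baseband packet as a (payload length, padding length) pair; the packet model and function name are illustrative assumptions:

```python
def inject_nrt(packets, nrt_data):
    """Sketch of claims 61-62: replace each baseband packet's padding
    with an incrementally extracted portion of the NRT data, sized to
    the excess capacity determined for that packet."""
    offset, out = 0, []
    for payload_len, padding_len in packets:
        portion = nrt_data[offset:offset + padding_len]  # may be short at end
        offset += len(portion)
        out.append((payload_len, portion))
    # packets with injected data, plus leftover NRT data for later packets
    return out, nrt_data[offset:]
```

In the claimed device the resulting portions would then be multiplexed back into the STLTP feed before ECC decoding (claim 63).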
73. A device for repurposing null data, the device comprising: non-transitory memory; and a processor coupled to the memory storing instructions that, when executed, cause the processor to: obtain a plurality of baseband packets from an STLTP feed; determine an amount of excess capacity within each of the baseband packets; and adjust a physical layer pipe (PLP) configuration for a more robust modulation and coding (MODCOD) of a subsequent downstream emission based on the determined amounts.
74. The device of claim 73, wherein the baseband packets are obtained from among a plurality of timing and management packets and a plurality of preamble packets.
75. The device of claim 73, wherein the STLTP feed comprises data to be broadcast from each of a plurality of different entities or content sources.
76. The device of any of claims 73 to 75, wherein the MODCOD is adjusted to increase capacity or frequency of content emission.
77. The device of any of claims 73 to 75, wherein the MODCOD is adjusted to increase an audience universe able to successfully receive content emission.
78. The device of any of claims 73 to 77, wherein the excess capacity is determined by identifying capacity that has been flagged as padding in one or more of the baseband packets.
79. The device of claim 78, wherein the identification is based on at least one of (i) a previous baseband packet, (ii) a next baseband packet header, and (iii) a pointer value attribute of a current baseband packet header.
80. The device of claim 73, wherein the excess capacity comprises a trailing portion of at least one of the baseband packets.
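A hedged sketch of the MODCOD adjustment in claims 73, 76, and 77: if the padding observed across baseband packets exceeds the capacity cost of a more robust modulation and coding, step down to it. The MODCOD table and capacity fractions below are illustrative assumptions, not values from any standard:

```python
def choose_modcod(excess_fractions, modcods):
    """Pick the most robust MODCOD whose capacity cost, relative to the
    current (first) entry, fits within the average observed padding.
    modcods is ordered least to most robust as (name, capacity) pairs."""
    avg_excess = sum(excess_fractions) / len(excess_fractions)
    base_name, base_cap = modcods[0]
    chosen = base_name
    for name, cap in modcods[1:]:
        if base_cap - cap <= avg_excess:
            chosen = name  # robustness affordable given spare capacity
    return chosen
```

Choosing a more robust point trades the repurposed null capacity for a larger audience universe able to receive the emission (claim 77).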
81. A method for augmenting data reception integrity via collaborative object delivery, the method comprising: coordinating a broadcast television station (BTS) and a scheduler such that at least one of a set of broadcast access terminals (BATs) is operable to emit recovery data via IP multicast to a subset of the BATs in a spatial region and in a licensed portion of a spectrum otherwise utilized by the BTS.
82. The method of claim 81, wherein the emission (i) comprises a plurality of content portions determined based on a peer-to-peer file sharing protocol that is decentralized and (ii) is performed in a bandwidth made available for a dedicated return channel (DRC), into which the at least one BAT performs the emission, and in a time interval or segment determined by the BTS, and wherein the emission of the at least one BAT is managed by the BTS, which operates as a spectrum manager and a resource manager, such that emission of the at least one BAT is performed in a carousel scheduled differently from a carousel of the BTS’ emission.
83. The method of claim 82, wherein the management is based on an identification of at least one data portion, among data emitted by the BTS, that requires a data reception integrity satisfying a criterion.
84. The method of claim 82, wherein the management comprises sending, to the at least one BAT, a signal directing the emission of the data, which operates as a reemission of the data, the data being originally emitted by the BTS.
85. The method of claim 84, further comprising: determining a radio frequency (RF) interval in coded orthogonal frequency-division multiplexing (COFDM) modulation into which the at least one BAT reemits.
86. The method of claim 84, wherein the signal is sent responsive to receiving, from another BAT at the BTS, a request for one or more portions of missing content, and wherein the request is emitted via an IP-based backchannel or the DRC.
87. The method of any of claims 81 to 86, wherein the emission of the at least one BAT is autonomously managed by a BAT, which operates as a resource manager, the BTS being a spectrum manager, and wherein the manager of the emission determines the emission based on information about one or more missing pieces of content.
88. The method of claim 87, wherein the manager of the emission further implements transport management, and wherein the information is determined locally at the at least one BAT by the at least one BAT discovering that data emitted by the BTS requires recovery.
89. The method of claim 87, wherein the information is determined by one or more other BATs in the region and emitted to the manager of the emission by the one or more other BATs.
90. The method of any of claims 81 to 89, wherein the emission of the at least one BAT is performed in compliance with a set of ATSC version 3 standards, and wherein the emission of the at least one BAT is substantially similar to an emission of the BTS.
91. The method of any of claims 81 to 90, wherein the emission of the at least one BAT (i) is performed via wireless fidelity (Wi-Fi) direct such that use of a wireless access point (WAP) is rendered unnecessary and (ii) comprises non-real-time (NRT) video on demand (VOD) that was stored from previous emissions of the BTS.
92. The method of any of claims 81 to 91, further comprising: performing, via another BAT in the region, at least one of (i) storage of the emitted data; (ii) a forwarding of the emitted data; and (iii) an acknowledgment of previous storage of the emitted data.
93. The method of claim 92, further comprising: emitting, via the other BAT in the region, metadata in an IP-based backchannel or in a DRC such that at least one selection is made from among a list of remedies, wherein the selection is based on a type or importance of the data.
94. The method of claim 93, wherein the at least one selection causes the BTS to perform a reemission of the data.
95. The method of claim 93, wherein the at least one selection causes the BTS’ forward error correction (FEC) to be adjusted.
96. The method of claim 93, wherein the at least one selection causes the at least one BAT to perform the emission of the data on demand.
97. The method of claim 93, wherein the at least one selection causes the at least one BAT to perform the emission of the data under control of the BTS.
98. The method of any of claims 81 to 97, wherein the augmentation comprises: determining whether to increase FEC in a portion of an initial emission or in a portion of a carousel reemission.
99. The method of any of claims 81 to 97, wherein the augmentation comprises: identifying an ability to use layered division multiplexing (LDM) or transmit carrier offset of an RF emission from the BTS.
100. The method of any of claims 81 to 99, wherein the spatial region is determined based on a transmit power of the at least one BAT.
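The BTS-as-scheduler coordination of claims 81 to 86 might be sketched as a simple assignment of reported missing objects to holder BATs and free DRC time slots; the data shapes and names are illustrative assumptions:

```python
def schedule_reemission(missing_reports, drc_slots):
    """Sketch of claims 81-86: for each object reported missing, direct
    the first peer BAT known to hold it to reemit recovery data in the
    next free DRC time slot, with the BTS acting as spectrum manager."""
    assignments, free = [], iter(drc_slots)
    for obj, holders in missing_reports.items():
        if holders:  # at least one peer BAT stored the object
            assignments.append((obj, holders[0], next(free)))
    return assignments
```

Each assignment models the signal of claim 84 directing a BAT's reemission within the time interval or segment determined by the BTS.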
101. A BTS for supporting hybrid delivery of fragmented data, the BTS comprising a processor, memory, communication circuitry, and instructions stored in the memory that, when executed, cause the processor to: obtain information indicating that a number of devices, which satisfies a predetermined criterion, received all emitted content; and determine whether to rebroadcast, in a carousel, one or more portions of the emitted content based on the information, wherein the emitted content is previously fragmented into a plurality of content portions based on a peer-to-peer file sharing protocol that is decentralized.
102. The BTS of claim 101, wherein the one or more content portions are missing at a BAT but obtained in the carousel rebroadcast at a later time.
103. The BTS of claim 101, wherein a set of missing content portions is obtained via an Internet protocol (IP) backchannel.
104. The BTS of any of claims 101 to 103, wherein the determination is further based on a request for missing content being obtained at the BTS.
105. The BTS of claim 104, wherein the processor is further caused to: inform a BAT, which made the request, of a next time at which the carousel rebroadcast of the missing content will occur.
106. The BTS of any of claims 101 to 104, wherein the processor is further caused to: determine a periodicity of the carousel rebroadcast based on whether a user fulfilled a contractual obligation for the previously emitted content or at least a portion thereof.
107. The BTS of any of claims 101 to 104, wherein the processor is further caused to: determine a periodicity of the carousel rebroadcast based on a priority or urgency of missing content.
108. The BTS of any of claims 101 to 107, wherein a BAT implements the decentralized protocol by rearranging received content portions into an intended ordering.
109. The BTS of any of claims 101 to 108, wherein the emitted content comprises at least one of: software, applications, information, and documents.
110. A BAT, comprising a processor, memory, communication circuitry, and instructions stored in the memory that, when executed, cause the processor to: receive a plurality of content portions emitted from a BTS as IP multicast traffic; identify a set of missing portions based on the received content portions;
determine whether to utilize a rebroadcast and/or another connection to obtain the set of missing portions; and responsive to the determination, obtain the set of missing portions via the rebroadcast in other IP multicast traffic from the BTS and/or via the other connection.
111. The BAT of claim 110, wherein the determination is based on at least one of: a time threshold until the rebroadcast, an urgency of the set of missing portions, a cost of obtaining the set of missing portions, and one or more user preferences.
112. The BAT of claim 111, wherein the time threshold is predetermined.
113. The BAT of claim 111, wherein the time threshold is set by a user via a user interface.
114. The BAT of any of claims 110 to 113, wherein the received content portions comprise a cryptographic hash contained in a descriptor.
115. The BAT of claim 110, wherein the other connection comprises another IP- based connection.
116. The BAT of claim 110, wherein the other connection is performed by a peer via a DRC.
117. The BAT of claim 115, wherein the other IP-based connection is a unicast IP backchannel.
118. The BAT of any of claims 110 to 117, wherein the processor is further caused to: implement a decentralized protocol by combining and rearranging both the received content portions and the set of missing portions into an ordering intended by the BTS.
119. The BAT of any of claims 110 to 118, wherein the processor is further caused to:
emit, via a DRC or another IP-based connection to the BTS, information indicating a number of the received content portions.
120. The BAT of any of claims 110 to 119, wherein the received content portions form part of a module that enhances an application currently running at the BAT.
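The recovery decision of claims 110 and 111 can be sketched as a small policy function; the threshold default and return labels are illustrative assumptions:

```python
def recovery_path(seconds_to_rebroadcast, urgent, backchannel_available,
                  max_wait=300):
    """Sketch of claim 111: prefer the free carousel rebroadcast unless
    it arrives too late or the missing portions are urgent, in which
    case fall back to the unicast IP backchannel when one is available
    (claims 115 and 117)."""
    too_slow = seconds_to_rebroadcast > max_wait
    if (urgent or too_slow) and backchannel_available:
        return "ip_backchannel"
    return "carousel_rebroadcast"
```

A real implementation would also weigh the cost of obtaining the portions and the user preferences recited in claims 111 and 113.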
121. A BAT supporting progressive OTA application download and runtime, the BAT comprising a processor, memory, communication circuitry, and instructions stored in the memory that, when executed, cause the processor to: responsive to reception of at least one module of a terminal application, install the at least one module such that the terminal application begins to run; responsive to reception of at least one other module of the terminal application, integrate into the terminal application additional functionality based on information of the at least one other module, wherein the modules are obtained at the BAT via emissions comprising IP multicast traffic.
122. The BAT of claim 121, wherein the processor is further caused to: tune to a second channel to obtain at least a portion of the terminal application in less time than by tuning to a first channel within which a rebroadcast of the at least one module and/or the at least one other module is performed.
123. The BAT of claim 121 or 122, wherein the terminal application is a main application for content distribution opportunities via a set of channels, and wherein the additional functionality comprises one or more of weather, entertainment, news, and emergency information.
124. The BAT of any of claims 121 to 123, wherein the installation comprises executing the at least one module, which is pre-compiled.
125. The BAT of any of claims 121 to 124, wherein at least one of the modules has an expiration date.
126. The BAT of any of claims 121 to 125, wherein the terminal application, which has not yet implemented the at least one other module, is only operable to perform rudimentary functionality as compared to the terminal application, which has already implemented the at least one other module.
127. The BAT of any of claims 121 to 126, wherein the emissions comprise ATSC version 3 emissions, including services based on a ROUTE/DASH and/or MMT protocol.
128. The BAT of any of claims 121 to 127, wherein the terminal application implementing the at least one other module collects user-interaction data for transmission to at least one BTS or associates thereof.
129. The BAT of any of claims 121 to 128, wherein the additional functionality provided by the terminal application implementing the at least one other module includes at least one of: channel scanning, channel list creation, signal standard type determination, channel logo presentation, audio track switching capabilities, subtitle display capabilities, gateway connection capabilities, gateway connection discovery capabilities, information presentation regarding current broadcast events, and full-screen player capabilities.
130. The BAT of any of claims 121 to 129, wherein the additional functionality provided by the terminal application implementing the at least one other module includes obtaining a list of files with file identifications in timestamp order.
131. The BAT of any of claims 121 to 130, wherein an emission carousel periodicity of the at least one module is greater than an emission carousel periodicity of the at least one other module.
132. The BAT of any of claims 121 to 130, wherein an emission carousel periodicity of the at least one other module is based on an urgency or importance of the at least one other module.
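Claims 121 and 126 describe a terminal application that starts with rudimentary functionality and grows as modules arrive over the carousel; a minimal sketch, with class and feature names as illustrative assumptions:

```python
class TerminalApp:
    """Sketch of claims 121 and 126: the application begins running
    once the first module is installed and integrates additional
    functionality as further modules are received."""
    def __init__(self):
        self.running = False
        self.features = set()

    def receive_module(self, features):
        if not self.running:
            self.running = True  # first module: install and start
        self.features.update(features)  # integrate added functionality

    def supports(self, feature):
        return self.running and feature in self.features
```

The feature set would grow to cover items such as channel scanning or subtitle display (claim 129) as their modules arrive.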
133. A BTS, comprising a processor, memory, communication circuitry, and instructions stored in the memory that, when executed, cause the processor to:
transmit at least one module for a terminal application to implement additional functionality, wherein the terminal application initially only provides rudimentary functionality, and wherein the at least one module for the terminal application is transmitted by broadcast utilizing one of the following: ROUTE/DASH-based services and MMT-based services.
134. The BTS of claim 133, wherein the terminal application comprises a main application that facilitates (i) content consumption by a downstream BAT and (ii) provisioning of information about the content being consumed.
135. The BTS of claim 133 or 134, wherein the terminal application implementing the additional functionality emulates an application download store or platform for digital distribution on behalf of different third parties.
136. The BTS of claim 133, wherein the processor is further caused to: transmit another application developed by a third party that contracts out a broadcast service, including periodic rebroadcasts.
137. The BTS of claim 133, wherein the implementation is performed by compiling and/or installing the at least one module.
138. The BTS of any of claims 133 to 137, wherein the terminal application is initially transmitted by the BTS in at least one other module such that a downstream BAT pre-installs the terminal application before obtaining the at least one module.
139. The BTS of any of claims 133 to 138, wherein the at least one module and the at least one other module are provided at different channels to which a downstream BAT is operable to tune.
140. The BTS of any of claims 133 to 139, wherein the processor is further caused to:
determine a periodicity for emitting, in a carousel, the at least one module, the periodicity being greater than a periodicity for emitting, in the carousel, any other type of data.
141. A method for implementing a content delivery network (CDN) point of presence (PoP) for ad hoc delivery of OTA data, the method comprising: providing a CDN PoP that comprises a software defined radio on an integrated circuit (IC) that obtains broadcasted IP multicast traffic, wherein contents of the CDN are pre-loaded by the broadcast.
142. The method of claim 141, wherein the CDN PoP transparently mirrors elements, including hypertext markup language (HTML), cascading style sheets (CSS), software downloads, and media objects originally from third party servers, and wherein the CDN PoP is automatically chosen based on a type of requested content and a location of a user making the request.
143. The method of claim 141 or 142, wherein the radio implements both a receiver operable to receive the traffic from an antenna and a transmitter operable to transmit supplemental traffic to a set of peers in a regional subset of the broadcast.
144. The method of any of claims 141 to 143, wherein the CDN PoP is formed within a mobile device, wherein the IP multicast traffic is broadcast from a tower or another antenna, and wherein the IC consumes 0.1 watts or less.
145. The method of any of claims 141 to 144, further comprising: obtaining at least a portion of metadata associated with content in the broadcast.
146. The method of claim 145, further comprising: determining whether the CDN PoP is to supplement a subset of the content actually received at a set of BATs, the supplementing being based on at least one of (i) a carousel rebroadcast, (ii) a peer utilizing a DRC, and (iii) another IP-based connection, wherein the determination is based on the metadata.
147. The method of claim 146, further comprising: obtaining the subset of the content; and forwarding the subset of the content to a user interface.
148. The method of claim 146, wherein the supplementing is on-demand and is performed using data stored at the CDN PoP.
149. The method of claim 141, wherein the storage of the CDN PoP comprises at least 1 terabyte (TB).
150. The method of claim 146, wherein the supplementing comprises an alert that has higher priority.
151. The method of claim 145, further comprising: determining, in an orchestration unit at a BTS, the metadata.
152. The method of claim 151, further comprising: fragmenting, at the BTS, an original file into a plurality of content portions based on a peer-to-peer file sharing protocol that is decentralized; broadcasting, via the IP multicast traffic, the portions; and reconstituting, by the CDN PoP, the original file.
153. The method of claim 152, wherein the original file or each of the content portions is digitally signed using digital rights management (DRM).
154. The method of claim 141, further comprising: preempting a lower-priority broadcast with an emergency broadcast; and returning to the lower-priority broadcast after completing the emergency broadcast, the lower-priority broadcast being an application update.
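The fragmentation, descriptor hashing, and reconstitution of claims 114, 152, and 153 might look like the following sketch, using SHA-256 in place of whatever digest or DRM signature the protocol actually specifies:

```python
import hashlib

def fragment(data, size):
    """Sketch of claim 152: split an original file into fixed-size
    portions, each paired with a digest carried in its descriptor
    (cf. the cryptographic hash of claim 114)."""
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    return [(p, hashlib.sha256(p).hexdigest()) for p in parts]

def reconstitute(described_parts):
    """Verify each portion against its descriptor, then reassemble the
    original file in order (the last step of claim 152)."""
    out = bytearray()
    for part, digest in described_parts:
        if hashlib.sha256(part).hexdigest() != digest:
            raise ValueError("corrupt portion")
        out += part
    return bytes(out)
```

In the claimed method the broadcast carries the portions as IP multicast traffic, and the CDN PoP performs the reconstitution.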
155. A mobile apparatus, comprising a processor, memory, communication circuitry, and instructions stored in the memory that, when executed, cause the processor to:
store OTA broadcast data as a CDN PoP, the OTA broadcast data being from a BTS; and provide, to one or more local devices, content delivery services using at least a portion of the OTA broadcast data.
156. The apparatus of claim 155, wherein the OTA broadcast data comprises an operating system (OS) module for upgrading functionality of the mobile apparatus.
157. The apparatus of claim 155, wherein the OTA broadcast data is stored by the apparatus in accordance with a policy from the BTS.
158. The apparatus of claim 155, wherein the processor interfaces with a software defined radio implemented on an IC contained in a same enclosure.
159. The apparatus of claim 155, wherein the CDN PoP transparently mirrors elements, including hypertext markup language (HTML), cascading style sheets (CSS), software downloads, and media objects originally from third party servers.
160. The apparatus of claim 155, wherein the CDN PoP provides (i) NRT data using the MMT protocol, (ii) ad prepositioning, and (iii) emergency information delivery.
161. A method for adding one or more channels to a transmission, the method comprising: obtaining, from each of a plurality of differently-located transmitters, available capacity information; determining, for each of the channels based on the available capacity information of the transmitters, an emission having a same size; and dynamically allocating a bandwidth portion to regionally emit a set of linear, audio video (AV) content dynamically generated based on the size.
162. The method of claim 161, further comprising: incrementally increasing, for at least one of the transmitters, available capacity for at least one portion of the emission as the linear, AV content replaces, during the at least one
portion, emission of other content, wherein a duration to generate and emit the at least one portion satisfies a criterion.
163. The method of claim 161 or 162, wherein the dynamic allocation comprises: performing, via a machine learning model, at least one of: (i) adjusting resolution, (ii) adjusting bit rate, and (iii) removing an existing channel.
164. The method of any of claims 161 to 163, wherein the dynamic allocation is based on a number of BATs intended to receive the emission.
165. The method of any of claims 161 to 164, wherein the one or more channels are added responsive to an entertainment event running beyond a schedule or to occurrence of a natural disaster or health crisis.
166. The method of any of claims 162 to 165, wherein modulation and/or coding of a PLP is determined based on a manner in which the other content is being consumed such that preference is given to first devices by delivering a more reliable data emission than that delivered to second devices, wherein the first devices are indoor, and wherein the second devices are outdoor.
167. The method of claim 166, wherein the other content is reprioritized such that a different emission resiliency level is assigned by automatically adjusting the modulation and/or coding.
168. The method of claim 161, wherein the dynamic allocation comprises: balancing consumptive value of at least a portion of the other content with a determined goodwill value of the linear AV content.
169. The method of any of claims 161 to 168, wherein the dynamic allocation is based on metrics aggregated in real-time from viewers’ receivers and/or from third parties.
170. The method of claim 166, wherein the modulation and/or coding comprise at least one of frame-rate, frame-resolution, bit-depth, and FEC.
171. The method of claim 166, further comprising: providing, to a transmitter of the one or more channels, the consumption manner in real-time via a DRC.
172. The method of claim 166, further comprising: emitting only one component of stereoscopic or multi-view content in a broadcast stream based on the consumption manner.
173. The method of claim 166, further comprising: subsequently emitting a set of primary audio data using less resources than a set of secondary audio data based on the consumption manner indicating less consumption of the primary emission, wherein each of the sets is associated with a different language.
174. The method of claim 166, wherein a server predicts an optimal bandwidth allocation for a later time based on current statistics of the consumption manner.
175. The method of claim 161, further comprising: prioritizing data being emitted based on an analysis of content therein and/or on a context derived from an associated ongoing live event such that low priority data of a PLP are replaced with the linear, AV content.
176. The method of claim 162, further comprising: coordinating and collaborating between the differently-located transmitters and an Internet service provider such that the other content, which comprises data delivery, is emitted and the linear, AV content, which comprises service delivery, is obtained, the coordination and the collaboration being performed for an overall network.
177. The method of claim 162, wherein the dynamic allocation comprises reallocating bandwidth for the linear, AV content and for the other content, in a same broadcast stream.
178. The method of claim 176, wherein the same-sized emission is determined, at a virtualized environment of the overall network, to be less than the available capacity information of each transmitter.
179. A method for dynamic allocation of bandwidth of a broadcast stream, the method comprising: reallocating, among service components transmitted in one or more PLPs of the broadcast stream, bandwidth by: analyzing one or more metrics, including a viewing mode obtained from each of a plurality of receivers; and determining, based on the one or more analyzed metrics, the reallocation of the bandwidth among the service components of currently delivered services in the broadcast stream such that a new service component is added.
180. The method of claim 179, wherein the reallocation is further performed by: obtaining, from each of a plurality of differently-located transmitters, available capacity information; and replacing, based on the information, at least one of the service components with more content of the new service component.
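Claim 179 describes analyzing a per-receiver metric (the reported viewing mode) and reallocating bandwidth among the service components of a broadcast stream so that a new service component can be added. The following is a rough, non-normative sketch of that kind of reallocation logic, not the patented implementation: the function and type names, the proportional weighting by reported consumption, and the fixed 10% reservation for the new component are all assumptions made for illustration.

```python
# Illustrative sketch of claim-179-style bandwidth reallocation.
# Names and the 10% reservation policy are assumptions, not from the patent.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ServiceComponent:
    name: str
    bandwidth_mbps: float

def reallocate_bandwidth(components, receiver_viewing_modes, total_mbps, new_name):
    """Analyze one metric (the viewing mode reported by each receiver) and
    redistribute the stream's bandwidth among the currently delivered service
    components so that a new component can be added."""
    counts = Counter(receiver_viewing_modes)
    new_share = 0.10 * total_mbps  # reserve a share for the new component (assumed policy)
    remaining = total_mbps - new_share
    total_views = sum(counts.get(c.name, 0) for c in components)
    reallocated = []
    for c in components:
        # Weight each component by reported consumption; fall back to an equal split
        # when no receiver has reported a viewing mode yet.
        weight = (counts.get(c.name, 0) / total_views) if total_views else 1.0 / len(components)
        reallocated.append(ServiceComponent(c.name, remaining * weight))
    reallocated.append(ServiceComponent(new_name, new_share))
    return reallocated
```

For example, with a 25 Mb/s stream carrying a 20 Mb/s video component and a 5 Mb/s audio component, and three of four receivers reporting the video viewing mode, the sketch shrinks both existing components proportionally (video to 16.875 Mb/s, audio to 5.625 Mb/s) and hands the reserved 2.5 Mb/s to the newly added component.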
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201962936828P | 2019-11-18 | 2019-11-18 | |
| US62/936,828 | 2019-11-18 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021101934A1 true WO2021101934A1 (en) | 2021-05-27 |
Family
ID=75981457
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2020/060963 (Ceased) WO2021101934A1 (en) | Adaptive broadcast media and data services | 2019-11-18 | 2020-11-18 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021101934A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030018491A1 (en) * | 2001-07-17 | 2003-01-23 | Tohru Nakahara | Content usage device and network system, and license information acquisition method |
| US20080141303A1 (en) * | 2005-12-29 | 2008-06-12 | United Video Properties, Inc. | Interactive media guidance system having multiple devices |
| US20120287883A1 (en) * | 2009-12-29 | 2012-11-15 | Telecom Italia S.P.A. | Adaptive scheduling data transmission based on the transmission power and the number of physical resource blocks |
| US20130111528A1 (en) * | 2011-10-31 | 2013-05-02 | Verizon Patent And Licensing, Inc. | Dynamic provisioning of closed captioning to user devices |
| US20160249116A1 (en) * | 2015-02-25 | 2016-08-25 | Rovi Guides, Inc. | Generating media asset previews based on scene popularity |
2020
- 2020-11-18 WO PCT/US2020/060963 patent/WO2021101934A1/en not_active Ceased
Cited By (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11671635B2 (en) * | 2020-01-02 | 2023-06-06 | Board Of Trustees Of Michigan State University | Systems and methods for enhanced multimedia signal broadcast, reception, data delivery, and data collection |
| US12389052B2 (en) | 2020-01-02 | 2025-08-12 | Board Of Trustees Of Michigan State University | Systems and methods for enhanced multimedia signal broadcast, reception, data delivery, and data collection |
| US12198729B2 (en) * | 2021-01-10 | 2025-01-14 | Blings Io Ltd | System and method for dynamic, data-driven videos |
| US20240055025A1 (en) * | 2021-01-10 | 2024-02-15 | Blings Io Ltd | System and method for dynamic, data-driven videos |
| US20220321627A1 (en) * | 2021-03-31 | 2022-10-06 | Tencent America LLC | Methods and apparatus for just-in-time content preparation in 5g networks |
| US12219002B2 (en) * | 2021-03-31 | 2025-02-04 | Tencent America LLC | Methods and apparatus for just-in-time content preparation in 5G networks |
| US11282609B1 (en) * | 2021-06-13 | 2022-03-22 | Chorus Health Inc. | Modular data system for processing multimodal data and enabling parallel recommendation system processing |
| US11695488B2 (en) * | 2021-07-31 | 2023-07-04 | Sony Interactive Entertainment Inc. | ATSC over-the-air (OTA) broadcast of public volumetric augmented reality (AR) |
| US11818414B2 (en) | 2021-08-06 | 2023-11-14 | Sony Group Corporation | Telepresence through OTA VR broadcast streams |
| CN113656370B (en) * | 2021-08-16 | 2024-04-30 | China Southern Power Grid Digital Grid Group Co., Ltd. | Data processing method and device for electric power measurement system and computer equipment |
| CN113656370A (en) * | 2021-08-16 | 2021-11-16 | China Southern Power Grid Digital Grid Research Institute Co., Ltd. | Data processing method and device for power measurement system and computer equipment |
| CN113886449A (en) * | 2021-08-30 | 2022-01-04 | Dijieman Technology Co., Ltd. | Big data information analysis system based on Internet of Things |
| CN113691812A (en) * | 2021-10-25 | 2021-11-23 | Guangzhou Lango Electronics Technology Co., Ltd. | HarmonyOS-based distributed video processing method, terminal and readable medium |
| US12225251B2 | 2022-04-22 | 2025-02-11 | Trilogy 5G, Inc. | Return path for broadcast system and method |
| CN115333843A (en) * | 2022-08-16 | 2022-11-11 | China Telecom Corporation Limited | Information security system and information security data processing method |
| WO2024067076A1 (en) * | 2022-09-29 | 2024-04-04 | ZTE Corporation | Media data transmission method and device, storage medium, and electronic device |
| CN115361367A (en) * | 2022-10-20 | 2022-11-18 | Hunan Kangtong Electronics Co., Ltd. | Dual-channel broadcasting system for emergency broadcasting |
| WO2024239247A1 (en) * | 2023-05-23 | 2024-11-28 | Shenzhen Tcl New Technology Co., Ltd. | Hybrid communication system and communication method |
| CN116545455B (en) * | 2023-07-04 | 2023-11-03 | Beijing Ziguang Qingteng Microsystems Co., Ltd. | Method and device for adjusting energy dissipation of transmitter antenna and transmitter |
| CN116545455A (en) * | 2023-07-04 | 2023-08-04 | Beijing Ziguang Qingteng Microsystems Co., Ltd. | Method and device for adjusting energy dissipation of transmitter antenna and transmitter |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2021101934A1 (en) | Adaptive broadcast media and data services | |
| US10560756B2 (en) | Terrestrial broadcast market exchange network platform and broadcast augmentation channels for hybrid broadcasting in the internet age | |
| US20240048791A1 (en) | Apparatus and methods for recording a media stream | |
| US10790917B2 (en) | Apparatus for transmitting broadcast signal, apparatus for receiving broadcast signal, method for transmitting broadcast signal and method for receiving broadcast signal | |
| US10687121B2 (en) | Method for a primary device communicating with a companion device, and a primary device communicating with a companion device | |
| US20190268641A1 (en) | Methods and apparatus for content delivery and replacement in a network | |
| US9043849B2 (en) | Method for linking MMT media and DASH media | |
| US20140351871A1 (en) | Live media processing and streaming service | |
| US9420027B1 (en) | Systems and methods of communicating platform-independent representation of source code | |
| US11671635B2 (en) | Systems and methods for enhanced multimedia signal broadcast, reception, data delivery, and data collection | |
| US12120365B2 (en) | Reception device, reception method, transmission device, and transmission method | |
| US20210288735A1 (en) | Information processing apparatus, client apparatus, and data processing method | |
| US10878076B2 (en) | Receiving apparatus, transmitting apparatus, and data processing method | |
| US20160261912A1 (en) | Collaborative place-shifting of video content from a plurality of sources to a video presentation device | |
| US20250193470A1 (en) | System and method for generating a live output stream manifest based on an event | |
| KR20170109296A (en) | Apparatus and method for service validating in atsc 3.0 based broadcasting system | |
| US11336967B2 (en) | Receiver apparatus, transmitter apparatus, and data processing method | |
| KR20100129816A (en) | Multiplatform Digital Broadcasting System and Method | |
| Vaz et al. | Integrated broadband broadcast video scalability usage proposal to next-generation of Brazilian DTTB system | |
| Fay et al. | Next-generation broadcast television: An overview of enabling technology | |
| Gurjão et al. | Reference Architectures for Telecommunications Systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20890672; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20890672; Country of ref document: EP; Kind code of ref document: A1 |