US20160330398A1 - Media Content Creation Application - Google Patents
- Publication number
- US20160330398A1 (application Ser. No. 15/216,996)
- Authority
- US
- United States
- Prior art keywords
- video
- user
- videos
- topic
- content creation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N5/772—Interface circuits between a recording apparatus and a television camera placed in the same enclosure
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/04842—Selection of displayed objects or displayed text elements
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B31/006—Associated working of recording or reproducing apparatus with a video camera or receiver
- H04N21/21805—Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
- H04N21/23109—Content storage operation by placing content in organized collections, e.g. EPG data repository
- H04N21/23418—Operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/23424—Splicing one content stream with another, e.g. for inserting or substituting an advertisement
- H04N21/23617—Multiplexing of additional data and video streams by inserting additional data into a data carousel, e.g. inserting software modules into a DVB carousel
- H04N21/25891—Management of end-user data being end-user preferences
- H04N21/2743—Video hosting of uploaded data from client
- H04N21/4223—Cameras as input peripherals of client devices
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
- H04N21/47202—End-user interface for requesting content on demand, e.g. video on demand
- H04N21/4756—End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie
- H04N21/812—Monomedia components thereof involving advertisement data
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
- H04N5/2257—
- H04N5/77—Interface circuits between a recording apparatus and a television camera
- Some embodiments described herein relate generally to methods and apparatus for a media content creation application.
- Traditional news/entertainment programs provide multimedia presentations of curated content from professional content providers, presented by paid professional presenters/anchors. Said another way, current news programs produce content in-house or select third-party content to present on a television or online program. Current news programs do not, however, produce programs that consist predominantly of curated, professional-quality amateur submissions that were submitted in response to a request.
- Such media content creation applications are not, however, integrated with a media player or dynamically integrated with topic requests associated with a media player such that videos produced by users are automatically formatted according to the request and linked to the topic request via the media player.
- an apparatus includes a processor included within a compute device, operatively coupled to a memory, and configured to execute an application module and a recording module.
- the application module is configured to receive, at a compute device, an indication of a user selection of a video request from a list of video requests.
- the video request is associated with a list of characteristics and is associated with a video request identifier.
- the application module is configured to, based on the video request identifier, define a video recording environment in accordance with the list of characteristics.
- the recording module is configured to record, during a time and using a camera integrated with the compute device, a video in accordance with the list of characteristics.
- the application module is configured to associate the video with a relationship indicator indicative of the relationship between the video and the video request.
- the application module is configured to send, to a server device from the compute device, a copy of the video including the relationship indicator such that the server (1) stores a copy of the video and (2) associates, based on the relationship indicator, the video with other videos that each have a relationship with the video request.
- FIG. 1 is a block diagram showing a multimedia presentation system, according to an embodiment.
- FIG. 2 is a block diagram depicting a compute device from the multimedia presentation system shown in FIG. 1 .
- FIG. 3 is a block diagram depicting a server device from the multimedia presentation system shown in FIG. 1 .
- FIG. 4 is a block diagram depicting a compute device configured to execute a media content creation application and that is operatively coupled to a server device according to an embodiment.
- FIGS. 5A-5N are graphical representations of a content creation environment defined by the media content creation application shown in FIG. 4 .
- FIGS. 6A-6C depict examples of tables included in a database coupled to a media content creation application according to an embodiment.
- an apparatus includes a processor included within a compute device, operatively coupled to a memory, and configured to execute an application module and a recording module.
- the application module is configured to receive, at a compute device, an indication of a user selection of a video request from a list of video requests.
- the video request is associated with a list of characteristics and is associated with a video request identifier.
- the application module is configured to, based on the video request identifier, define a video recording environment in accordance with the list of characteristics.
- the recording module is configured to record, during a time and using a camera integrated with the compute device, a video in accordance with the list of characteristics.
- the application module is configured to associate the video with a relationship indicator indicative of the relationship between the video and the video request.
- the application module is configured to send, to a server device from the compute device, a copy of the video including the relationship indicator such that the server (1) stores a copy of the video and (2) associates, based on the relationship indicator, the video with other videos that each have a relationship with the video request.
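The tagging-and-association flow described above can be sketched as follows. This is a minimal illustrative sketch, not the application's implementation; all class, field, and function names (Video, Server, relationship_indicator, receive) are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    relationship_indicator: str  # links the video back to its video request

@dataclass
class Server:
    # videos grouped by the video request they respond to
    store: dict = field(default_factory=dict)

    def receive(self, video: Video) -> None:
        # (1) store a copy of the video and (2) associate it, based on the
        # relationship indicator, with other videos for the same request
        self.store.setdefault(video.relationship_indicator, []).append(video)

server = Server()
server.receive(Video("v1", "request-42"))
server.receive(Video("v2", "request-42"))
related = server.store["request-42"]  # both responses to the same request
```

Keying storage on the relationship indicator is what lets the server later assemble all responses to one video request without inspecting the video content itself.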
- a method includes receiving, at a mobile device, an indication of a user selection of a video request from a list of video requests.
- the video request is associated with a list of characteristics including a first characteristic, a second characteristic and a third characteristic.
- the method includes receiving, at the mobile device and from a compute device, (1) an indication of the first characteristic, (2) a list of multimedia elements associated with the second characteristic, and (3) an overlay associated with the third characteristic.
- the method includes receiving, at the mobile device, an indication of a user selection of a multimedia element from the list of multimedia elements.
- the method includes recording, during a time and using a camera integrated with the mobile device, a video such that a first characteristic of the video corresponds to the first characteristic from the list of characteristics.
- the method includes sending a signal to cause the mobile device to present (1) during at least a first portion of the time and using a multimedia output integrated with the mobile device, the overlay, and (2) during at least a second portion of the time and using the display, the multimedia element.
- the method includes sending, to the compute device from the mobile device, a copy of the video.
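The mobile-device method above can be sketched in simplified form. The names (VideoRequest, record_response) and the choice of "maximum length" as the first characteristic are illustrative assumptions; the application describes the characteristics only abstractly.

```python
from dataclasses import dataclass

@dataclass
class VideoRequest:
    request_id: str
    max_length_s: int          # first characteristic: a length limit
    multimedia_elements: list  # second characteristic: approved elements/cut-ins
    overlay: str               # third characteristic: overlay shown during recording

def record_response(request: VideoRequest, chosen_element: str,
                    raw_length_s: int) -> dict:
    # clamp the recording so it conforms to the length characteristic
    length = min(raw_length_s, request.max_length_s)
    # the chosen multimedia element must come from the request's approved list
    assert chosen_element in request.multimedia_elements
    return {
        "request_id": request.request_id,
        "length_s": length,
        "overlay": request.overlay,
        "element": chosen_element,
    }

req = VideoRequest("topic-7", 60, ["clip-a", "clip-b"], "lower-third")
video = record_response(req, "clip-a", 90)  # 90 s recording clamped to 60 s
```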
- a method includes receiving, at a compute device from a first mobile device, (1) a first video recorded at the first mobile device in response to a video request, (2) an indication of a selection of a first multimedia element, and (3) an overlay associated with the video request.
- the first video includes a relationship indicator indicative of a relationship between the first video and the video request.
- the method includes receiving, at the compute device from a second mobile device, (1) a second video recorded at the second mobile device in response to the video request, (2) an indication of a selection of a second multimedia element, and (3) the overlay associated with the video request.
- the second video includes the relationship indicator indicative of a relationship between the second video and the video request.
- the method includes defining, at the compute device, a list to include the first video and the second video based on both the first video and the second video including the relationship indicator.
- the method includes sending, from the compute device, an indication of a ranking of the first video relative to the second video.
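The server-side grouping and ranking can be sketched as below. The patent does not specify a ranking criterion, so a simple vote count is assumed here purely for illustration.

```python
def rank_responses(videos: list, request_id: str) -> list:
    # collect only the videos whose relationship indicator ties them
    # to this video request, then rank them (here: by vote count)
    responses = [v for v in videos if v["relationship_indicator"] == request_id]
    return sorted(responses, key=lambda v: v["votes"], reverse=True)

videos = [
    {"id": "first",  "relationship_indicator": "req-1", "votes": 3},
    {"id": "second", "relationship_indicator": "req-1", "votes": 9},
    {"id": "other",  "relationship_indicator": "req-2", "votes": 5},
]
ranked = rank_responses(videos, "req-1")  # "other" is excluded; "second" leads
```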
- a module can be, for example, any assembly and/or set of operatively-coupled electrical components associated with performing a specific function(s), and can include, for example, a memory, a processor, electrical traces, optical connectors, software (that is stored in memory and/or executing in hardware) and/or the like.
- a compute device is intended to mean a single compute device or a combination of compute devices.
- a media content creation application (1) can be linked to a media player, (2) can receive parameters associated with a topic request, and (3) can package user-generated content in accordance with the received parameters for export to, and viewing in, the media player and/or other video-sharing applications.
- the media content creation application can be linked to a media player. In this manner, if a user is viewing a topic request in the media player and desires to make a video in response to the topic request, clicking on the appropriate embedded link in the media player will open the media content creation application, and the media content creation application, if opened in response to a topic request, will open preloaded with that topic request's parameters.
- the media content creation application can receive parameters associated with a topic request.
- the parameters can be pushed to or pulled from the media player and/or other storage location (e.g., a database) in response to the user accepting a request to respond to a topic request.
- the parameters can be, for example, length of video, formatting of video (filters, etc.), formatting of the intro page and trailer, additional media (video, still pictures, audio, music, etc.) approved for use in the response, for example, as a cut-in.
- the media content creation application can package (i.e., assemble) the final user video based on, among other things, the received parameters.
- the media content creation application can restrict the maximum length of the video and automatically insert the selected cut-in(s), overlays, music, etc., to fit the parameters.
- the media content creation application can optionally submit the user's content for curation/selection and inclusion in a produced “show” including commentary and selected responses related to the topic.
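The packaging step (length restriction plus automatic cut-in insertion) can be sketched with a toy timeline of named segments. Real packaging would operate on media streams; this sketch, including the rule of inserting the cut-in after the first segment, is an assumption for illustration.

```python
def package_video(segments: list, max_length_s: int, cut_in: tuple) -> list:
    """segments: (name, duration_s) pairs; the cut-in is spliced in
    after the first segment, then everything is trimmed to fit."""
    timeline = [segments[0], cut_in] + segments[1:]
    packaged, total = [], 0
    for name, dur in timeline:
        if total + dur > max_length_s:
            dur = max_length_s - total  # trim the final segment to fit
        if dur > 0:
            packaged.append((name, dur))
            total += dur
    return packaged

# a 65 s recording plus an 8 s cut-in, packaged to a 30 s limit
out = package_video([("intro", 5), ("body", 50), ("trailer", 10)], 30, ("cut-in", 8))
```

The length parameter acts as a hard budget: segments are kept in order, and whichever segment crosses the limit is trimmed, with everything after it dropped.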
- FIG. 1 is a block diagram showing a multimedia presentation system (“system”) 100 according to an embodiment.
- the system 100 includes a compute device 102 and a server device 120 that are coupled via a network 116 .
- Compute device 102 includes an application module 108 and is operatively coupled to a multimedia input 104 and a multimedia output 106 .
- the compute device 102 (e.g., a mobile compute device) and the server device 120 are in communication via the network 116 .
- the network 116 can be any suitable network or combination of networks.
- the network 116 can be a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a worldwide interoperability for microwave access network (WiMAX), an intranet, the Internet, an optical fiber (or fiber optic)-based network, a virtual network, and/or any combination thereof.
- at least a portion of the network 116 can be implemented as a wireless network.
- the compute device 102 can be in communication with the network 116 via a wireless access point or the like (not shown in FIG. 1 ) that is operably coupled to the network 116 .
- the server device 120 can similarly be in communication with the network 116 via a wired and/or wireless connection.
- the compute device 102 can be any suitable compute device.
- the compute device 102 is a mobile compute device (smartphone, tablet, laptop, etc.) that is wirelessly in communication with the network 116 and/or the server device 120 .
- compute device 102 is a desktop computer, television, set-top box, etc.
- the compute device 102 includes the application module 108 .
- the compute device 102 includes a memory 112 , a processor 114 , and a communication interface 110 .
- multimedia input 104 and multimedia output 106 can be integral with the compute device 102 (shown as dashed lines in FIG. 2 , by way of example, a smartphone or tablet).
- multimedia input 104 and multimedia output 106 can be separate from the compute device 102 (by way of example, a desktop computer).
- the memory 112 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like.
- the memory 112 can store, for example, one or more software modules and/or code, for example application module 108 , that can include instructions to cause the processor 114 to perform one or more processes, functions, and/or the like.
- the memory 112 can include a software module and/or code that can include instructions to cause the processor 114 to operate a media player application and/or a multimedia content creation application.
- the memory 112 can further include instructions to cause the communication interface 110 to send and/or receive one or more signals associated with the input to or output from, respectively, the server device 120 , as described in further detail herein.
- the processor 114 can be any suitable processing device configured to run or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or the like.
- the memory 112 can store instructions, for example, application module 108 , to cause the processor 114 to execute modules, processes, and/or functions associated with, for example, a media player application and/or a multimedia content creation application, as described in further detail herein.
- the multimedia input 104 can be any suitable component, subsystem, device and/or combination of devices.
- the multimedia input device 104 can be an input port or the like that can be operably coupled to the memory 112 and the processor 114 , as well as, for example, a camera, a haptic input device, an audio input device, an accelerometer, and/or the like (not shown in FIGS. 1 and 2 ).
- the multimedia input 104 can be configured to receive a signal (e.g., from a camera) associated with a media player application and/or a multimedia content creation application, and can forward the signal and/or otherwise send another signal representing that signal to the processor 114 for any suitable processing and/or analyzing process, as described in further detail herein.
- the multimedia input 104 can be an integrated camera, for example, a camera that shares a housing with compute device 102 (e.g., a smartphone, tablet, laptop, etc.).
- the multimedia input 104 can be a peripheral camera, for example, a camera having a housing distinct from compute device 102 , but that is coupled to and co-located with compute device 102 (e.g., an add-on webcam, a digital camera or camcorder, etc.).
- the multimedia input 104 can be a combination of elements, for example, a camera coupled to a microphone and/or an accelerometer.
- the multimedia output 106 of the compute device 102 can be any suitable component, subsystem, device and/or combination of devices.
- the multimedia output 106 can provide an audio-visual user interface, haptic output, etc., for the compute device 102 .
- the multimedia output 106 can be at least one display.
- the multimedia output 106 can be a cathode ray tube (CRT) display, a liquid crystal display (LCD), a light emitting diode (LED) display, and/or the like.
- the multimedia output device 106 can be a speaker that can receive a signal to cause the speaker to output audible sounds such as, for example, instructions, verification questions, confirmations, etc.
- the multimedia output device 106 can be a haptic device that can receive a signal to cause the haptic output device to vibrate at any number of different frequencies.
- the multimedia output 106 can provide the user interface for a software application (e.g., mobile application, Internet web browser, and/or the like).
- the multimedia output 106 can be a combination of elements, for example, a display coupled to a speaker and/or a haptic output device.
- the communication interface 110 of the compute device 102 can be any suitable component, subsystem, and/or device that can communicate with the network 116 . More specifically, the communication interface 110 can include one or more wired and/or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, and/or asynchronous transfer mode (ATM) interfaces. In some embodiments, the communication interface 110 can be, for example, a network interface card and/or the like that can include at least a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.). As such, the communication interface 110 can send signals to and/or receive signals from the server device 120 .
- the server device 120 can include and/or can otherwise be operably coupled to a database 126 .
- the database 126 can be, for example, a table, a repository, a relational database, an object-oriented database, an object-relational database, a SQL database, an XML database, and/or the like.
- the database 126 can be stored in a memory of the server device 120 and/or the like.
- the database 126 can be stored in, for example, a network attached storage (NAS) device and/or the like that is operably coupled to the server device 120 .
- the database 126 can be in communication with the server device 120 via the network 116 .
- the database 126 can communicate with the network 116 via a wired or a wireless connection.
- the database 126 can be configured to at least temporarily store data such as, for example, data associated with multimedia presentations.
- at least a portion of the database 126 can be stored in, for example, the memory 112 of the compute device 102 .
- the server device 120 can be any type of device that can send data to and/or receive data from one or more compute devices (e.g., the compute device 102 ) and/or databases (e.g., the database 126 ) via the network 116 .
- the server device 120 can function as, for example, a server device (e.g., a web server device), a network management device, an administrator device, and/or so forth.
- the server device 120 can be located within a central location, distributed in multiple locations, and/or a combination thereof.
- some or all of a set of components of the server device 120 can be located within a user device (e.g., the compute device 102 ) and/or any other device or server in communication with the network 116 .
- the server device 120 includes a communication interface 128 , a memory 122 , a processor 124 , and the database 126 .
- the communication interface 128 of the server device 120 can be any suitable device that can communicate with the network 116 via a wired or wireless communication.
- the communication interface 128 can include one or more wired or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, and/or asynchronous transfer mode (ATM) interfaces.
- the communication interface 128 can be, for example, an Ethernet port, a network interface card, and/or the like.
- the communication interface 128 can include a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.) that can communicate with the network 116 .
- the memory 122 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like.
- the memory 122 can be configured to store, for example, one or more software modules and/or code that can include instructions to cause the processor 124 to perform one or more processes, functions, and/or the like.
- the memory 122 can include a software module and/or code that can include instructions to cause the communication interface 128 to receive and/or send one or more signals from or to, respectively, the compute device 102 (via the network 116 ).
- the one or more signals can be associated with media player applications, multimedia content creation applications, and/or the like.
- the memory 122 can further include instructions to cause the processor 124 to analyze, classify, compare, verify, and/or otherwise process data received from the compute device 102 .
- the memory 122 can include instructions to cause the processor 124 to query, update, and/or access data stored in the database 126 , as described in further detail herein.
- the processor 124 of the server device 120 can be any suitable processing device configured to run or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a front end processor, a network processor, and/or the like.
- the memory 122 can store instructions to cause the processor 124 to execute modules, processes, and/or functions associated with, for example, sending and/or receiving signals via the network 116 ; analyzing, classifying, comparing, verifying, and/or processing data; and/or querying, updating, and/or otherwise accessing data stored in the database 126 ; and/or the like.
- FIG. 4 is a block diagram depicting a compute device 202 operatively coupled to a server device 220 via a network 216 .
- Compute device 202 , network 216 and server device 220 can be similar to and include similar elements as compute device 102 , network 116 and server 120 , respectively.
- compute device 202 includes a processor 212 configured to execute a media content creation application 208 .
- FIGS. 5A-5N are graphical representations of a user interface for, and an output of, a content creation environment 440 defined by the media content creation application 208 .
- the media content creation application 208 can include software modules and/or code (stored in memory or implemented in hardware such as processor 212 ) that can include instructions to cause processor 212 to define the content creation environment 440 .
- the media content creation application can be a native application on a desktop and/or mobile computing device.
- the media content creation application can be a web (browser) based application.
- the media content creation application 208 includes a display module 230 , a selection module 232 , and a recording module 234 .
- the display module 230 can be configured to send signals to cause a display (not shown) of compute device 202 to render a graphical representation of the content creation environment 440 .
- the selection module 232 can receive signals indicative of a selection by a user of compute device 202 .
- the selection can be any of a number of inputs, for example, backgrounds, music, cut-ins, teleprompter configurations, recording, previewing, editing, uploading, etc.
- the selection module 232 can communicate with the display module 230 to cause the compute device 202 to display the content creation environment 440 .
- the recording module 234 can receive signals from a multimedia input (not shown) of compute device 202 to capture audio, video and/or other inputs from the user of compute device 202 .
- content creation environment 440 includes a graphical representation of a campaign request 442 , a toolbar 444 , and a user interaction portion 446 .
- the user interaction portion can include one or more fields and/or selections 448 .
- Content creation environment 440 defined by media content creation application 208 can be configured to allow a user of the compute device to produce multimedia content, for example, in response to a campaign/topic request, in response to a response to the campaign request (and responses to responses), etc.
- Content creation environment 440 can allow a user or administrator to define the campaign request itself, and define the characteristics of the responses, and response to responses.
- Content creation environment 440 can allow a user to define a user account for use with media content creation application 208 and associated media players, websites, etc.
- a user can enter user information (name, physical location, place of birth, nationality, race, gender, age, sexual orientation, education level, area of career, hobby expertise, income level, political affiliation, religious affiliation, social influence level, a graphical representation of the user or an opinion of the user and/or etc.).
- the content creation environment 440 can allow the user to associate a picture with their user profile, for example, by taking a photograph of the user using a multimedia input element of compute device 202 (integrated camera, webcam, etc., as described above).
- Content creation environment 440 includes the graphical representation of a campaign request 442 , the toolbar 444 , and the user interaction portion 446 .
- the user interaction portion 446 can include one or more fields and/or selections 448 .
- the graphical representation of a campaign request 442 can display “Where will the Dow Close at the End of 2015?”
- the toolbar 444 can include one or more icons indicative of one or more interactive buttons configured to navigate a user through defining a video response to the campaign request, for example, selecting backgrounds, votes, music, etc.
- the toolbar includes “Background,” “Music,” “Cut-in,” “Quote,” “Teleprompter/Studio Settings,” and “Takes.” Additional icons can be made available on the toolbar by scrolling through the toolbar, and/or based on the characteristics selected by the definer of the campaign request.
- On a background/voting page, the user interaction portion 446 includes a first field 448 that prompts a user to manually enter their prediction, and a second field 448 that prompts the user to select a background picture based on their prediction. While FIGS. 5B-5N omit reference numbers for clarity, the content creation environments depicted in FIGS. 5B-5N include similar features, and similar reference numbers are applicable throughout.
- FIGS. 5B-5D depict alternative background/voting pages, for example, selecting a background only ( FIG. 5B ), selecting a stance including a preselected background based on the selected stance ( FIG. 5C ), and selecting a vote including a preselected background based on the selected vote ( FIG. 5D ).
- a user can also select a background that is not shown, for example, from the user's photo library.
- multimedia content creation application 208 can cause content creation environment 440 to change such that the displayed user interaction portion allows the user to select music for the video response ( FIG. 5E ).
- the user can check one of fields 448 .
- the user can listen to a preview of the music prior to, during, or after the selection.
- Multimedia content creation application 208 can similarly cause content creation environment 440 to change to display different user interaction portions 446 depending on, among other things, which icon from toolbar 444 is selected. For example, a user can:
- select a cut-in ( FIG. 5F );
- configure various studio settings (teleprompter, camera, countdowns, lighting, etc.) ( FIGS. 5G and 5H );
- compose a quote/title to describe the video response ( FIGS. 5G and 5H );
- manage recorded takes ( FIG. 5I );
- record the video response ( FIG. 5J );
- preview the video response, i.e., including selected background, music, cut-ins, etc. ( FIGS. 5K and 5L );
- edit the video response, i.e., cut out portions, set beginning and end, zoom, etc. ( FIG. 5M ); and
- upload the video response ( FIG. 5N ).
- multimedia content creation application 208 includes features to improve the quality of the user video, for example: an ability to load a teleprompter script to be displayed on the recording device as the user is recording the video; a delay countdown timer to allow the user to properly set up before recording begins; a recording time countdown to the end of the specified recording length; overlays; and green screen instructions and implementation to make the user appear to be in a specific environment commensurate with the video topic.
- an overlay can be tools for assisting a user in creating content, for example, an indication where the user's face should be located on the screen, teleprompter text, etc.
- an overlay can include information about the user (name, location, etc.) and/or about the campaign request (title, etc.).
- overlays can include multimedia elements that are added to a video response and representative of, for example, a user's name and/or location (hometown and/or current), a user's prediction and/or other quote, additional text, audio, pictures and/or video selected by the generator of the campaign request.
- the various campaign request and response video characteristics described herein can include multimedia elements, for example, images, videos, sounds, text, and combinations of elements.
- multimedia elements can be associated with the various aspects of the content creation environment 440 , for example, those features described herein with respect to FIGS. 5A-5N (backgrounds, cut-ins, music, etc.).
- the multimedia elements can be stored in a database (not shown in FIGS. 5A-5N ) accessible by both a user compute device (not shown in FIGS. 5A-5N ) and a server device (not shown in FIGS. 5A-5N ).
- when a user of a compute device selects a campaign request to which to respond, the compute device can retrieve the multimedia elements, or representations and/or identifications of the multimedia elements. In this manner, the compute device can define both a raw and/or transcoded/encoded response video. In some embodiments, when a server device receives a raw response video, the server device can retrieve the multimedia elements from the database and can transcode/encode the raw response video to define a final response video. In this manner, the compute device and the server device may not need to transmit the multimedia elements between each other, which can reduce the size of transmissions between the compute device and the server device.
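The identifier-based exchange described above can be sketched as follows. This is a minimal illustration assuming a shared element store keyed by ID; the function and field names (`build_upload_payload`, `element_ids`, etc.) are hypothetical, not part of the disclosed system.

```python
# Both sides hold the same element database, so only IDs cross the network.
SHARED_ELEMENT_DB = {
    "bg-01": b"<background image bytes>",
    "music-07": b"<music track bytes>",
}

def build_upload_payload(raw_video: bytes, element_ids: list) -> dict:
    # Client side: ship the raw video plus element IDs, keeping the upload small.
    return {"video": raw_video, "element_ids": list(element_ids)}

def server_finalize(payload: dict) -> dict:
    # Server side: re-fetch the elements locally; a real system would composite
    # and transcode here, so this sketch just reports what would be combined.
    elements = [SHARED_ELEMENT_DB[eid] for eid in payload["element_ids"]]
    return {
        "elements_used": len(elements),
        "combined_bytes": len(payload["video"]) + sum(len(e) for e in elements),
    }

payload = build_upload_payload(b"raw-video", ["bg-01", "music-07"])
final = server_finalize(payload)
```

The upload itself never carries the background or music bytes, which is the transmission-size saving the paragraph above describes.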
- a user of a compute device can launch, i.e., open, start, etc., a media content creation application by, for example, clicking on an icon representative of the media content creation application on a display associated with the compute device.
- the user of the compute device can launch the media content creation application by, for example, clicking a link in an associated application, for example, a media player application.
- a user can be operating the media player application, for example a media player application as described in Attorney Docket No.
- the user can be viewing a campaign request video, and can click a link that is configured to provide an indication to define a video response to the campaign request video. Clicking on such a link can automatically (i.e., without specific additional input from the user) launch the media content creation application.
- the media content creation application can be provided with a campaign request identifier associated with the campaign request video and can request and/or otherwise retrieve characteristics that are to be associated with the campaign request video associated with the campaign request.
- the content creation application can associate the campaign request identifier with the finished video response such that the finished video response can have a linked relationship with the campaign request, as described herein.
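As a rough sketch of the linked relationship described above, a finished response video can carry the campaign request identifier as plain metadata. The record layout and names below are assumptions for illustration only:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class VideoResponse:
    # Identifier of the campaign request this video answers; carrying it with
    # the video is what gives the response its linked relationship.
    campaign_id: str
    user_id: str
    video_file: str
    response_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def finish_response(campaign_id: str, user_id: str, video_file: str) -> VideoResponse:
    # Stamp the finished video with the campaign identifier before upload.
    return VideoResponse(campaign_id=campaign_id, user_id=user_id, video_file=video_file)

resp = finish_response("ABCD", "1234", "take_final.mp4")
```

A server receiving `resp` can then group it with every other response that carries campaign ID "ABCD".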
- server device 220 includes a processor configured to execute a database module 272 , an analysis module 274 , and a curation module 270 .
- Processor 224 can be configured to execute database module 272 to store, organize, and/or retrieve user information, campaign characteristics, campaign request videos, response videos, and/or responses to responses.
- database module 272 can retrieve user information and/or campaign information from a database (not shown), and can provide retrieved information to the multimedia content creation application 208 on compute device 202 .
- database module 272 can cause the uploaded video to be stored in a database (not shown), along with appropriate identifiers.
- Processor 224 can be configured to execute analysis module 274 and curation module 270 to analyze campaign request videos, response videos, and responses to responses, in view of, among other things, users associated with those videos and responses and identifiers associated with those videos and responses.
- server device 220 can (a) approve or deny videos and responses, (b) store videos and responses, (c) encode and/or transcode videos and responses, (d) present the videos and responses, and user information, for curation.
- analysis module 274 can analyze the selected content, specifically analyze any identifiers associated with the selected content, and interface with a database, via, for example, database module 272 to retrieve identification of predefined characteristics associated with a response and send those identifications of predefined characteristics to the media content creation application 208 .
- when server 220 receives a response video from compute device 202 , analysis module 274 can analyze any identifier associated with the response video as well as, for example, the user's response to an opinion/vote/stance, any selected background, music, cut-in, etc., and can cause server 220 to encode and/or transcode the response video in accordance with those selections.
- database module 272 can then cause the video (raw and/or encoded/transcoded) to be stored.
- Processor 224 can execute curation module 270 to manage the curations, approval and/or ranking of this content based on, among other things, the user who produced a particular piece of content.
- curation module 270 can use a name, physical location, place of birth, nationality, race, gender, age, sexual orientation, education level, area of career, hobby expertise, income level, political affiliation, religious affiliation, social influence level, a graphical representation of the user and/or an opinion of the user to approve, disapprove, score, and/or otherwise rank the submitted content.
- curation module 270 can automatically approve future responses until such time as the user is flagged, for example, for inappropriate content. Conversely, if a user submits a number of pieces of content that are disapproved, that user can be banned from further submissions. All approved content can be made available to, for example, a media player substantially immediately upon approval. Curation module 270 can further present all approved content, ranked and/or otherwise organized (for example, by gender, political orientation, age, etc. of the user), to a curator or other user for inclusion in a multimedia presentation.
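The approval flow described above (auto-approval for users with approved history, disapproval on warnings, banning after repeated disapprovals) can be sketched as a simple status function. The thresholds and field names here are assumptions, not values from the disclosure:

```python
def curate_submission(user: dict, warnings: int, ban_after: int = 3) -> str:
    """Return a moderation status for a newly submitted response video.

    `user` is a hypothetical record holding prior-moderation counters.
    """
    if user.get("disapproved_count", 0) >= ban_after:
        return "banned"        # too many disapproved submissions
    if warnings > 0:
        return "disapproved"   # flagged content is pulled
    if user.get("approved_count", 0) > 0 and not user.get("flagged", False):
        return "approved"      # users with approved history are pre-approved
    return "pending"           # everything else awaits manual curation

statuses = [
    curate_submission({"approved_count": 5}, warnings=0),    # trusted user
    curate_submission({}, warnings=0),                       # new user
    curate_submission({"approved_count": 5}, warnings=1),    # flagged video
    curate_submission({"disapproved_count": 3}, warnings=0), # banned user
]
```

Keeping the rule as a pure function makes the "pending until approved or disapproved" lifecycle easy to audit.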
- FIGS. 6A-6C depict examples of tables that can be included in a database operatively coupled to a media content creation application.
- a table 680 includes a title column 682 , a user ID column 684 and a campaign ID column 686 ;
- a table 690 includes a title column 692 , a campaign ID column 694 and a multimedia presentation ID column 696 ; as shown in FIG. 6C , a table 680 ′ can be similar to table 680 , and can include a title column 682 ′, a user ID column 684 ′ and a campaign ID column 686 ′, and can additionally include an approval column 687 ′, a warning column 688 ′ and a ranking column 689 ′. While shown and described as having a particular number and type of columns, in other embodiments, any of table 680 , table 690 and/or table 680 ′ can include more, fewer and/or different combinations of columns, including different relationship identifiers.
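A minimal SQLite sketch of tables like 690 and 680′ illustrates how the approval, warning, and ranking columns support the queries described herein (e.g., returning only approved response videos in rank order). Table and column names are illustrative, not the patent's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE response_videos (   -- cf. table 680' in FIG. 6C
        title       TEXT,
        user_id     TEXT,
        campaign_id TEXT,
        approval    TEXT DEFAULT 'Pending',
        warning     INTEGER DEFAULT 0,
        ranking     INTEGER
    );
    CREATE TABLE campaigns (         -- cf. table 690
        title           TEXT,
        campaign_id     TEXT,
        presentation_id TEXT
    );
""")
conn.execute("INSERT INTO campaigns VALUES ('Campaign Request 1', 'ABCD', 'P-01')")
conn.executemany(
    "INSERT INTO response_videos (title, user_id, campaign_id, approval, ranking) "
    "VALUES (?, ?, ?, ?, ?)",
    [("Response Video 1", "1234", "ABCD", "Approved", 1),
     ("Response Video 2", "5678", "ABCD", "Pending", None)],
)
# Only approved responses to campaign ABCD, in rank order.
ranked = conn.execute(
    "SELECT title FROM response_videos "
    "WHERE campaign_id = 'ABCD' AND approval = 'Approved' ORDER BY ranking"
).fetchall()
```

The shared `campaign_id` value is the relationship identifier that ties each response row back to its campaign request row.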
- each user's content creation application can identify the Campaign ID of Campaign Request 1 as “ABCD,” and can cause a processor to display a user interface and a content creation environment, and to retrieve any characteristics associated with Campaign Request 1.
- a user's respective content creation application can transmit the video response and can append Campaign ID “ABCD” as well as a respective User ID to the video response such that a server device can organize, approve and/or rank the video responses as they are received.
- User 1234 may be “pre-approved,” such that when Response Video 1 is received, Response Video 1 is automatically approved.
- Response Video 2, Response Video 4, and Response Video 5 may be approved or disapproved automatically or manually depending on a status of the respective user.
- a video response can be “Pending” until that video response is approved or disapproved.
- a video response can be disapproved after being pending or approved, for example, after receiving a warning.
- a viewer of Response Video 4 may determine that Response Video 4 is offensive and push an interactive button of a media player indicative of a warning.
- the server device can change the status of Response Video 4 from approved (not shown) to disapproved as shown in FIG. 6C .
- the server device may additionally flag Response Video 4 as having a warning in column 688 ′.
- the server device can rank the response video as described herein, and can indicate that ranking in the ranking column 689 ′. As shown in FIG. 6C , only approved response videos are ranked. In some alternative embodiments, pending and/or disapproved videos can be ranked.
- the media player can also be used as a self-service platform.
- the media player can be used to show videos associated with a wedding, other party or celebration, internal business promotion, product testimonial, corporate survey, employment application, etc.
- a user can produce a request for videos for a selected group of users. The request can be, for example, to tell a story about the bride and/or groom. All videos, and/or only approved videos, can be viewable via one of the multimedia streams.
- a user, or a third party professional producer can prepare and present the final wedding video (multimedia presentation) that includes selected videos submitted in response to the request.
- the application can be used to create user profile or biographical videos, videos which respond to a specific user's previous video submission (e.g., instead of responding to the overall topic itself, User A creates a response to User B's response to the topic request), as well as responses to those responses. All of these can include their own parameters.
- Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations.
- The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable).
- the media and computer code may be those designed and constructed for the specific purpose or purposes.
- non-transitory computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
- Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter.
- embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (e.g., Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.), or other suitable programming languages and/or development tools.
- Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Abstract
In some embodiments an apparatus includes a processor included within a compute device, operatively coupled to a memory, and is configured to execute an application module and a recording module. The application module is configured to receive, at a compute device, an indication of a user selection of a video request from a list of video requests. The video request is associated with a list of characteristics and is associated with a video request identifier. The application module is configured to, based on the video request identifier, define a video recording environment in accordance with the list of characteristics. The recording module is configured to record, during a time and using a camera integrated with the compute device, a video in accordance with the list of characteristics. The application module is configured to associate the video with a relationship indicator indicative of the relationship between the video and the video request. The application module is configured to send, to a server device from the compute device, a copy of the video including the relationship indicator such that the server (1) stores a copy of the video and (2) associates, based on the relationship indicator, the video with other videos that each have a relationship with the video request.
Description
- This application is a continuation of non-provisional application Ser. No. 14/706,934, filed May 7, 2015. The above identified application is incorporated herein by reference in its entirety.
- This application is related to non-provisional application Ser. No. 14/706,933, filed May 7, 2015 (now U.S. Pat. No. 9,329,748), entitled “Single Media Player Simultaneously Incorporating Multiple Different Streams for Linked Content”, which is incorporated herein by reference in its entirety.
- Some embodiments described herein relate generally to methods and apparatus for a media content creation application.
- Traditional news/entertainment programs provide multimedia presentations of curated content from professional content providers and presented by paid professional presenters/anchors. Said another way, current news programs produce content in-house, or select third party content to present on a television or online program. Current news programs do not, however, produce programs that consist predominantly of curated professional quality amateur submissions that were submitted in response to a request.
- Current media content creation applications offer a platform for users to produce media for public viewing. Currently, a user can record audio and/or video and import that recorded audio and/or video into an editing portion of the content creation application or into a separate editing application. A user can then edit the media and then upload and/or otherwise share their edited content for public or limited sharing.
- Such media content creation applications are not, however, integrated with a media player or dynamically integrated with topic requests associated with a media player such that videos produced by users are automatically formatted according to the request and linked to the topic request via the media player.
- Accordingly, a need exists for a media content creation application that can link responses to topic requests.
- In some embodiments an apparatus includes a processor included within a compute device, operatively coupled to a memory, and is configured to execute an application module and a recording module. The application module is configured to receive, at a compute device, an indication of a user selection of a video request from a list of video requests. The video request is associated with a list of characteristics and is associated with a video request identifier. The application module is configured to, based on the video request identifier, define a video recording environment in accordance with the list of characteristics. The recording module is configured to record, during a time and using a camera integrated with the compute device, a video in accordance with the list of characteristics. The application module is configured to associate the video with a relationship indicator indicative of the relationship between the video and the video request. The application module is configured to send, to a server device from the compute device, a copy of the video including the relationship indicator such that the server (1) stores a copy of the video and (2) associates, based on the relationship indicator, the video with other videos that each have a relationship with the video request.
- FIG. 1 is a block diagram showing a multimedia presentation system, according to an embodiment.
- FIG. 2 is a block diagram depicting a compute device from the multimedia presentation system shown in FIG. 1 .
- FIG. 3 is a block diagram depicting a server device from the multimedia presentation system shown in FIG. 1 .
- FIG. 4 is a block diagram depicting a compute device configured to execute a media content creation application and that is operatively coupled to a server device, according to an embodiment.
- FIGS. 5A-5N are graphical representations of a content creation environment defined by the media content creation application shown in FIG. 4 .
- FIGS. 6A-6C depict examples of tables included in a database coupled to a media content creation application, according to an embodiment.
- In some embodiments an apparatus includes a processor included within a compute device, operatively coupled to a memory, and configured to execute an application module and a recording module. The application module is configured to receive, at a compute device, an indication of a user selection of a video request from a list of video requests. The video request is associated with a list of characteristics and is associated with a video request identifier. The application module is configured to, based on the video request identifier, define a video recording environment in accordance with the list of characteristics. The recording module is configured to record, during a time and using a camera integrated with the compute device, a video in accordance with the list of characteristics. The application module is configured to associate the video with a relationship indicator indicative of the relationship between the video and the video request. The application module is configured to send, to a server device from the compute device, a copy of the video including the relationship indicator such that the server (1) stores a copy of the video and (2) associates, based on the relationship indicator, the video with other videos that each have a relationship with the video request.
- In some embodiments a method includes receiving, at a mobile device, an indication of a user selection of a video request from a list of video requests. The video request is associated with a list of characteristics including a first characteristic, a second characteristic and a third characteristic. The method includes receiving, at the mobile device and from a compute device, (1) an indication of the first characteristic, (2) a list of multimedia elements associated with the second characteristic, and (3) an overlay associated with the third characteristic. The method includes receiving, at the mobile device, an indication of a user selection of a multimedia element from the list of multimedia elements. The method includes recording, during a time and using a camera integrated with the mobile device, a video such that a first characteristic of the video corresponds to the first characteristic from the list of characteristics. The method includes sending a signal to cause the mobile device to present (1) during at least a first portion of the time and using a multimedia output integrated with the mobile device, the overlay, and (2) during at least a second portion of the time and using the display, the multimedia element. The method includes sending, to the compute device from the mobile device, a copy of the video.
- In some embodiments a method includes receiving, at a compute device from a first mobile device, (1) a first video recorded at the first mobile device in response to a video request, (2) an indication of a selection of a first multimedia element, and (3) an overlay associated with the video request. The first video includes a relationship indicator indicative of a relationship between the first video and the video request. The method includes receiving, at the compute device from a second mobile device, (1) a second video recorded at the second mobile device in response to the video request, (2) an indication of a selection of a second multimedia element, and (3) the overlay associated with the video request. The second video includes the relationship indicator indicative of a relationship between the second video and the video request. The method includes defining, at the compute device, a list to include the first video and the second video based on both the first video and the second video including the relationship indicator. The method includes sending, from the compute device, an indication of a ranking of the first video relative to the second video.
- As used in this specification, a module can be, for example, any assembly and/or set of operatively-coupled electrical components associated with performing a specific function(s), and can include, for example, a memory, a processor, electrical traces, optical connectors, software (that is stored in memory and/or executing in hardware) and/or the like.
- As used in this specification, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, the term “a compute device” is intended to mean a single compute device or a combination of compute devices.
- As described herein, by way of example, a media content creation application (1) can be linked to a media player, (2) can receive parameters associated with a topic request, and (3) can package user generated content in accordance with the received parameters for export to, and viewing in, the media player and/or other video sharing applications.
- The media content creation application can be linked to a media player. In this manner, if a user is viewing a topic request in the media player and desires to make a video in response to the topic request, clicking on the appropriate embedded link in the media player will open the media content creation application preloaded with that topic request's parameters.
- The media content creation application can receive parameters associated with a topic request. The parameters can be pushed to or pulled from the media player and/or other storage location (e.g., a database) in response to the user accepting a request to respond to a topic request. The parameters can be, for example, length of video, formatting of video (filters, etc.), formatting of the intro page and trailer, additional media (video, still pictures, audio, music, etc.) approved for use in the response, for example, as a cut-in.
- The media content creation application can package, i.e., assemble, the final user video based on, among other things, the received parameters. By way of example, the media content creation application can restrict the maximum length of the video and automatically insert the selected cut-in(s), overlays, music, etc. to fit the parameters. Furthermore, the media content creation application can optionally submit the user's content for curation/selection and inclusion in a produced “show” including commentary and selected responses related to the topic.
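A minimal sketch of such parameter-driven packaging (Python; the parameter names and values are illustrative assumptions, not part of the disclosure):

```python
# Hypothetical parameters received for a topic request (illustrative names/values)
params = {
    "max_length_s": 60,
    "approved_cutins": ["chart.mp4"],
    "music": "theme.mp3",
}

def package_video(raw_length_s, selected_cutins, params):
    """Assemble a description of the final user video per the received parameters."""
    length = min(raw_length_s, params["max_length_s"])  # restrict maximum length
    # only additional media approved for use in the response is inserted
    cutins = [c for c in selected_cutins if c in params["approved_cutins"]]
    return {"length_s": length, "cutins": cutins, "music": params["music"]}

final = package_video(75, ["chart.mp4", "meme.gif"], params)
# the 75 s take is clamped to 60 s and the unapproved cut-in is dropped
```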
-
FIG. 1 is a block diagram showing a multimedia presentation system (“system”) 100 according to an embodiment. As shown in FIG. 1, the system 100 includes a compute device 102 and a server device 120 that are coupled via a network 116. Compute device 102 includes an application module 108 and is operatively coupled to a multimedia input 104 and a multimedia output 106. - The compute device 102 (e.g., a mobile compute device) and the
server device 120 are in communication via the network 116. The network 116 can be any suitable network or combination of networks. For example, in some embodiments, the network 116 can be a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a worldwide interoperability for microwave access (WiMAX) network, an intranet, the Internet, an optical fiber (or fiber optic)-based network, a virtual network, and/or any combination thereof. Moreover, at least a portion of the network 116 can be implemented as a wireless network. For example, in some embodiments, the compute device 102 can be in communication with the network 116 via a wireless access point or the like (not shown in FIG. 1) that is operably coupled to the network 116. The server device 120 can similarly be in communication with the network 116 via a wired and/or wireless connection. - The
compute device 102 can be any suitable compute device. For example, in some embodiments, the compute device 102 is a mobile compute device (smartphone, tablet, laptop, etc.) that is wirelessly in communication with the network 116 and/or the server device 120. In other embodiments, compute device 102 is a desktop computer, television, set-top box, etc. The compute device 102 includes the application module 108. - As shown in
FIG. 2, the compute device 102 includes a memory 112, a processor 114, and a communication interface 110. In some embodiments, such as, for example, as shown in FIG. 2, multimedia input 104 and multimedia output 106 can be integral with the compute device 102 (shown as dashed lines in FIG. 2, by way of example, a smartphone or tablet). In other embodiments, multimedia input 104 and multimedia output 106 can be separate from the compute device 102 (by way of example, a desktop computer). The memory 112 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like. In some embodiments, the memory 112 can store, for example, one or more software modules and/or code, for example application module 108, that can include instructions to cause the processor 114 to perform one or more processes, functions, and/or the like. For example, in some embodiments, the memory 112 can include a software module and/or code that can include instructions to cause the processor 114 to operate a media player application and/or a multimedia content creation application. The memory 112 can further include instructions to cause the communication interface 110 to send and/or receive one or more signals associated with the input to or output from, respectively, the server device 120, as described in further detail herein. - The
processor 114 can be any suitable processing device configured to run or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or the like. As such, the memory 112 can store instructions, for example, application module 108, to cause the processor 114 to execute modules, processes, and/or functions associated with, for example, a media player application and/or a multimedia content creation application, as described in further detail herein. - The
multimedia input 104 can be any suitable component, subsystem, device and/or combination of devices. For example, in some embodiments, the multimedia input 104 can be an input port or the like that can be operably coupled to the memory 112 and the processor 114, as well as, for example, a camera, a haptic input device, an audio input device, an accelerometer, and/or the like (not shown in FIGS. 1 and 2). The multimedia input 104 can be configured to receive a signal (e.g., from a camera) associated with a media player application and/or a multimedia content creation application, and can forward the signal and/or otherwise send another signal representing that signal to the processor 114 for any suitable processing and/or analyzing process, as described in further detail herein. In some embodiments, the multimedia input 104 can be an integrated camera, for example, a camera that shares a housing with compute device 102 (e.g., a smartphone, tablet, laptop, etc.). In other embodiments, the multimedia input 104 can be a peripheral camera, for example, a camera having a housing distinct from compute device 102, but that is coupled to and co-located with compute device 102 (e.g., an add-on webcam, a digital camera or camcorder, etc.). In some embodiments, the multimedia input 104 can be a combination of elements, for example, a camera coupled to a microphone and/or an accelerometer. - The
multimedia output 106 of the compute device 102 can be any suitable component, subsystem, device and/or combination of devices. For example, in some embodiments, the multimedia output 106 can provide an audio-visual user interface, haptic output, etc. for the compute device 102. In some embodiments, the multimedia output 106 can be at least one display. For example, the multimedia output 106 can be a cathode ray tube (CRT) display, a liquid crystal display (LCD) display, a light emitting diode (LED) display, and/or the like. In some embodiments, the multimedia output 106 can be a speaker that can receive a signal to cause the speaker to output audible sounds such as, for example, instructions, verification questions, confirmations, etc. In other embodiments, the multimedia output 106 can be a haptic device that can receive a signal to cause the haptic device to vibrate at any number of different frequencies. As described in further detail herein, the multimedia output 106 can provide the user interface for a software application (e.g., mobile application, internet web browser, and/or the like). In some embodiments, the multimedia output 106 can be a combination of elements, for example, a display coupled to a speaker and/or a haptic output device. - The
communication interface 110 of the compute device 102 can be any suitable component, subsystem, and/or device that can communicate with the network 116. More specifically, the communication interface 110 can include one or more wired and/or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, and/or asynchronous transfer mode (ATM) interfaces. In some embodiments, the communication interface 110 can be, for example, a network interface card and/or the like that can include at least a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.). As such, the communication interface 110 can send signals to and/or receive signals from the server device 120. - Referring back to
FIG. 1, the server device 120 can include and/or can otherwise be operably coupled to a database 126. The database 126 can be, for example, a table, a repository, a relational database, an object-oriented database, an object-relational database, a SQL database, an XML database, and/or the like. In some embodiments, the database 126 can be stored in a memory of the server device 120 and/or the like. In other embodiments, the database 126 can be stored in, for example, a network attached storage (NAS) device and/or the like that is operably coupled to the server device 120. In some embodiments, the database 126 can be in communication with the server device 120 via the network 116. In such embodiments, the database 126 can communicate with the network 116 via a wired or a wireless connection. The database 126 can be configured to at least temporarily store data such as, for example, data associated with multimedia presentations. In some embodiments, at least a portion of the database 126 can be stored in, for example, the memory 112 of the compute device 102. - The
server device 120 can be any type of device that can send data to and/or receive data from one or more compute devices (e.g., the compute device 102) and/or databases (e.g., the database 126) via the network 116. In some embodiments, the server device 120 can function as, for example, a server device (e.g., a web server device), a network management device, an administrator device, and/or so forth. The server device 120 can be located within a central location, distributed in multiple locations, and/or a combination thereof. Moreover, some or all of a set of components of the server device 120 can be located within a user device (e.g., the compute device 102) and/or any other device or server in communication with the network 116. - As shown in
FIG. 3, the server device 120 includes a communication interface 128, a memory 122, a processor 124, and the database 126. The communication interface 128 of the server device 120 can be any suitable device that can communicate with the network 116 via a wired or wireless communication. More specifically, the communication interface 128 can include one or more wired or wireless interfaces, such as, for example, Ethernet interfaces, optical carrier (OC) interfaces, and/or asynchronous transfer mode (ATM) interfaces. In some embodiments, the communication interface 128 can be, for example, an Ethernet port, a network interface card, and/or the like. In some embodiments, the communication interface 128 can include a wireless radio (e.g., a WiFi® radio, a Bluetooth® radio, etc.) that can communicate with the network 116. - The
memory 122 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like. In some embodiments, the memory 122 can be configured to store, for example, one or more software modules and/or code that can include instructions to cause the processor 124 to perform one or more processes, functions, and/or the like. For example, in some embodiments, the memory 122 can include a software module and/or code that can include instructions to cause the communication interface 128 to receive and/or send one or more signals from or to, respectively, the compute device 102 (via the network 116). In some instances, the one or more signals can be associated with media player applications, multimedia content creation applications, and/or the like. The memory 122 can further include instructions to cause the processor 124 to analyze, classify, compare, verify, and/or otherwise process data received from the compute device 102. In addition, the memory 122 can include instructions to cause the processor 124 to query, update, and/or access data stored in the database 126, as described in further detail herein. - The
processor 124 of the server device 120 can be any suitable processing device configured to run or execute a set of instructions or code such as, for example, a general purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a front end processor, a network processor, and/or the like. As such, the memory 122 can store instructions to cause the processor 124 to execute modules, processes, and/or functions associated with, for example, sending and/or receiving signals via the network 116; analyzing, classifying, comparing, verifying, and/or processing data; and/or querying, updating, and/or otherwise accessing data stored in the database 126, and/or the like. -
FIG. 4 is a block diagram depicting a compute device 202 operatively coupled to a server device 220 via a network 216. Compute device 202, network 216 and server device 220 can be similar to and include similar elements as compute device 102, network 116 and server 120, respectively. As shown in FIG. 4, compute device 202 includes a processor 212 configured to execute a media content creation application 208. FIGS. 5A-5N are graphical representations of a user interface for, and an output of, a content creation environment (“content creation environment”) 440 defined by the media content creation application 208. The media content creation application 208 can include software modules and/or code (stored in memory or implemented in hardware such as processor 212) that can include instructions to cause processor 212 to define the content creation environment 440. In some embodiments, the media content creation application can be a native application on a desktop and/or mobile computing device. In some embodiments, the media content creation application can be a web (browser) based application. As shown in FIG. 4, the media content creation application 208 includes a display module 230, a selection module 232, and a recording module 234. - The
display module 230 can be configured to send signals to cause a display (not shown) of compute device 202 to render a graphical representation of the content creation environment 440. The selection module 232 can receive signals indicative of a selection by a user of a compute device 202. In some embodiments, the selection can be any of a number of inputs, for example, backgrounds, music, cut-ins, teleprompter configurations, recording, previewing, editing, uploading, etc. The selection module 232 can communicate with the display module 230 to cause the compute device 202 to display the content creation environment 440. The recording module 234 can receive signals from a multimedia input (not shown) of compute device 202 to capture audio, video and/or other inputs from the user of compute device 202. - As shown in
FIG. 5A, content creation environment 440 includes a graphical representation of a campaign request 442, a toolbar 444, and a user interaction portion 446. The user interaction portion can include one or more fields and/or selections 448. Content creation environment 440 defined by media content creation application 208 can be configured to allow a user of a compute device to produce multimedia content, for example, in response to a campaign/topic request, in response to a response to the campaign request (and responses to responses), etc. Content creation environment 440 can allow a user or administrator to define the campaign request itself, and define the characteristics of the responses, and responses to responses. Content creation environment 440 can allow a user to define a user account for use with media content creation application 208 and associated media players, websites, etc. In this manner, a user can enter user information (name, physical location, place of birth, nationality, race, gender, age, sexual orientation, education level, area of career, hobby expertise, income level, political affiliation, religious affiliation, social influence level, a graphical representation of the user or an opinion of the user, and/or etc.). The content creation environment 440 can allow the user to associate a picture with their user profile, for example, by taking a photograph of the user using a multimedia input element of compute device 202 (integrated camera, webcam, etc., as described above). -
Content creation environment 440 includes the graphical representation of a campaign request 442, the toolbar 444, and the user interaction portion 446. The user interaction portion 446 can include one or more fields and/or selections 448. By way of example, and with reference to FIG. 5A, the graphical representation of a campaign request 442 can display “Where will the Dow Close at the End of 2015?” The toolbar 444 can include one or more icons indicative of one or more interactive buttons configured to navigate a user through defining a video response to the campaign request, selecting backgrounds, voting, selecting music, and/or etc. In this example, the toolbar includes “Background,” “Music,” “Cut-in,” “Quote,” “Teleprompter/Studio Settings,” and “Takes.” Additional icons can be available on the toolbar by scrolling through the toolbar, and/or based on the characteristics selected by the definer of the campaign request. In this example, a background/voting page, the user interaction portion 446 includes a first field 448 that prompts a user to manually enter their prediction, and a second field 448 that prompts the user to select a background picture based on their prediction. While FIGS. 5B-5N omit reference numbers for clarity, the content creation environments depicted in FIGS. 5B-5N include similar features and similar reference numbers are applicable throughout. FIGS. 5B-5D depict alternative background voting pages, for example, selecting a background only (FIG. 5B), selecting a stance including a preselected background based on the selected stance (FIG. 5C), and selecting a vote including a preselected background based on the selected vote (FIG. 5D). As shown in each of FIGS. 5A-5D, a user can also select a background that is not shown, for example, from the user's photo library. - In this example, a user can select the “Music” icon on
toolbar 444. In response, multimedia content creation application 208 can cause content creation environment 440 to change such that the user interaction portion displayed can change to allow the user to select music for the video response (FIG. 5E). In this example, the user can check one of fields 448. In some embodiments, the user can listen to a preview of the music prior to, during, or after the selection. Multimedia content creation application 208 can similarly cause content creation environment 440 to change to display different user interaction portions 446 depending on, among other things, which icon from toolbar 444 is selected. For example, a user can select a cut-in (FIG. 5F), configure various studio settings (teleprompter, camera, countdowns, lighting, etc.) (FIGS. 5G and 5H), compose a quote/title to describe their video response (FIG. 5I), manage recorded takes (FIG. 5J), record the video response (FIG. 5K), preview the video response, i.e., including selected background, music, cut-ins, etc. (FIG. 5L), edit the video response, i.e., cut out portions, set beginning and end, zoom, etc. (FIG. 5M), and/or upload the video response (FIG. 5N). - As shown in
FIGS. 5A-5N, and in addition to what is shown in FIGS. 5A-5N, multimedia content creation application 208 includes features to improve the quality of the user video, for example, an ability to load a teleprompter script to be displayed from the recording device as the user is recording the video, a delay countdown timer to allow the user to properly set up prior to recording beginning, a recording time countdown to the end of the specified recording length, overlays, and green screen instructions and implementation to make the user appear to be in a specific environment commensurate with the video topic. In some embodiments, an overlay can be a tool for assisting a user in creating content, for example, an indication where the user's face should be located on the screen, teleprompter text, etc. In some embodiments, an overlay can include information about the user (name, location, etc.) and/or about the campaign request (title, etc.). In this manner, overlays can include multimedia elements that are added to a video response and representative of, for example, a user's name and/or location (hometown and/or current), a user's prediction and/or other quote, or additional text, audio, pictures and/or video selected by the generator of the campaign request. - In addition to any multimedia elements associated with an overlay, the various campaign request and response video characteristics described herein can include multimedia elements, for example, images, videos, sounds, text, and combinations of elements. Such multimedia elements can be associated with the various aspects of the
content creation environment 440, for example, those features described herein with respect to FIGS. 5A-5N (backgrounds, cut-ins, music, etc.). The multimedia elements can be stored in a database (not shown in FIGS. 5A-5N) accessible by both a user compute device (not shown in FIGS. 5A-5N) and a server device (not shown in FIGS. 5A-5N). In some embodiments, when a user of a compute device selects a campaign request to which to respond, the compute device can retrieve the multimedia elements or representations and/or identifications of the multimedia elements. In this manner, the compute device can define both a raw and/or a transcoded/encoded response video. In some embodiments, when a server device receives a raw response video, the server device can retrieve the multimedia elements from the database and can transcode/encode the raw response video to define a final response video. In this manner, the compute device and the server device may not need to transmit the multimedia elements between each other, which can reduce the size of transmissions between the compute device and the server device. - In some embodiments, a user of a compute device can launch, i.e., open, start, etc., a media content creation application by, for example, clicking on an icon representative of the media content creation application on a display associated with the compute device. In some embodiments, the user of the compute device can launch the media content creation application by, for example, clicking a link in an associated application, for example, a media player application. In such an embodiment, a user can be operating the media player application, for example a media player application as described in Attorney Docket No.
SNME-001/00US 323152-2002, entitled “Single Media Player Simultaneously Incorporating Multiple Different Streams for Linked Content.” Specifically, the user can be viewing a campaign request video, and can click a link that is configured to provide an indication to define a video response to the campaign request video. Clicking on such a link can automatically (i.e., without specific additional input from the user) launch the media content creation application. In such an example, the media content creation application can be provided with a campaign request identifier associated with the campaign request video and can request and/or otherwise retrieve characteristics that are to be associated with the campaign request video associated with the campaign request. Furthermore, when the user uploads a finished video response, the content creation application can associate the campaign request identifier with the finished video response such that the finished video response can have a linked relationship with the campaign request, as described herein.
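The identifier-driven launch and upload flow described above might be sketched as follows (Python; the store, characteristic names, and message formats are assumptions for illustration, though the campaign ID “ABCD” follows the example of FIGS. 6A-6C):

```python
# Hypothetical store of campaign characteristics, keyed by campaign request identifier
CAMPAIGN_DB = {
    "ABCD": {"max_length_s": 60, "backgrounds": ["bg-1", "bg-2"], "music": ["track-3"]},
}

def launch_from_link(campaign_id):
    """Launching via an embedded link preloads the campaign's characteristics."""
    return {"campaign_id": campaign_id,
            "characteristics": CAMPAIGN_DB[campaign_id]}

def upload_response(session, video):
    """Associate the campaign request identifier with the finished video response."""
    return {"video": video, "campaign_id": session["campaign_id"]}

session = launch_from_link("ABCD")
response = upload_response(session, b"<finished video>")
# the response now carries a linked relationship with campaign request "ABCD"
```

Because only the identifier travels with the upload, the server can later resolve the campaign's multimedia elements from its own database rather than receiving them from the compute device.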
- As shown in
FIG. 4, server device 220 includes a processor configured to execute a database module 272, an analysis module 274, and a curation module 270. Processor 224 can be configured to execute database module 272 to store, organize, and retrieve user information, campaign characteristics, campaign request videos, response videos, and/or responses to responses. In this manner, when a user selects a particular campaign request video (or video response) to which to record a response, database module 272 can retrieve user information and/or campaign information from a database (not shown), and can provide the retrieved information to the multimedia content creation application 208 on compute device 202. Similarly, when a user uploads a video, database module 272 can cause the uploaded video to be stored in a database (not shown), along with appropriate identifiers. -
Processor 224 can be configured to execute analysis module 274 and curation module 270 to analyze campaign request videos, response videos, and responses to responses, in view of, among other things, users associated with those videos and responses and identifiers associated with those videos and responses. In this manner, server device 220 can (a) approve or deny videos and responses, (b) store videos and responses, (c) encode and/or transcode videos and responses, and (d) present the videos and responses, and user information, for curation. Accordingly, when a user selects content to respond to (campaign request, video response, response to response, etc.), analysis module 274 can analyze the selected content, specifically analyze any identifiers associated with the selected content, and interface with a database, via, for example, database module 272, to retrieve identification of predefined characteristics associated with a response and send those identifications of predefined characteristics to the media content creation application 208. Similarly, when server 220 receives a response video from compute device 202, analysis module 274 can analyze any identifier associated with the response video as well as, for example, the user's response to an opinion/vote/stance, any selected background, music, cut-in, etc., and can cause server 220 to encode and/or transcode the response video in accordance with those selections. As described above, database module 272 can then cause the video (raw and/or encoded/transcoded) to be stored. - When a campaign request is published, as described herein, users can submit related content, specifically responses to the campaign request, and responses to responses to the campaign requests.
Processor 224 can execute curation module 270 to manage the curation, approval and/or ranking of this content based on, among other things, the user who produced a particular piece of content. By way of example, curation module 270 can use a name, physical location, place of birth, nationality, race, gender, age, sexual orientation, education level, area of career, hobby expertise, income level, political affiliation, religious affiliation, social influence level, a graphical representation of the user and/or an opinion of the user to approve, disapprove, score, and/or otherwise rank the submitted content. In such an example, when a user has a number of previously approved responses, curation module 270 can automatically approve future responses until such a time as the user is flagged, for example, for inappropriate content. Conversely, if a user submits a number of pieces of disapproved content, that user can be banned from further submission. All approved content can be made available to, for example, a media player substantially immediately upon approval. Curation module 270 can further present all approved content, ranked and/or otherwise organized (for example, by gender, political orientation, age, etc. of the user), to a curator or other user for inclusion in a multimedia presentation. -
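One hypothetical form of the automatic approval policy just described can be sketched as follows (Python; the thresholds and status labels are assumptions, as the disclosure does not fix particular values):

```python
def curate(user_history, flagged):
    """Decide a submission's status from the submitting user's history.

    Thresholds (5 prior approvals, 3 disapprovals) are illustrative only.
    """
    if flagged:
        return "disapproved"  # e.g., flagged for inappropriate content
    if user_history.count("disapproved") >= 3:
        return "banned"       # repeated disapprovals bar further submission
    if user_history.count("approved") >= 5:
        return "approved"     # previously approved users are auto-approved
    return "pending"          # otherwise held for manual curation
```

A submission from a user with five prior approvals is approved automatically, while a flagged submission is disapproved regardless of history.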
FIGS. 6A-6C depict examples of tables that can be included in a database operatively coupled to a media content creation application. As shown in FIG. 6A, a table 680 includes a title column 682, a user ID column 684 and a campaign ID column 686; as shown in FIG. 6B, a table 690 includes a title column 692, a campaign ID column 694 and a multimedia presentation ID column 696; as shown in FIG. 6C, a table 680′ can be similar to table 680, and can include a title column 682′, a user ID column 684′ and a campaign ID column 686′, and can additionally include an approval column 687′, a warning column 688′ and a ranking column 689′. While shown and described as having a particular number and type of columns, in other embodiments, any of table 680, table 690 and/or table 680′ can include more, fewer and/or different combinations of columns including different relationship identifiers. - In one example with reference to
FIGS. 6A-6C, four users (1234, 5678, 2468, and 3579) can select to view Campaign Request 1 (ABCD), for example, by pushing an interactive button in a social media and/or media player application. In such an example, each user's content creation application can identify the Campaign ID of Campaign Request 1 as “ABCD,” and can cause a processor to display a user interface and a content creation environment (“content creation environment”) and to retrieve any characteristics associated with Campaign Request 1. After completion of a video response, a user's respective content creation application can transmit the video response and can append Campaign ID “ABCD” as well as a respective User ID to the video response such that a server device can organize, approve and/or rank the video responses as they are received. In this example, User 1234 may be “pre-approved,” such that when Response Video 1 is received, Response Video 1 is automatically approved. In such an example, Response Video 2, Response Video 4, and Response Video 5 may be approved or disapproved automatically or manually depending on a status of the respective user. A video response can be “Pending” until that video response is approved or disapproved. In some embodiments, a video response can be disapproved after being pending or approved, for example, after receiving a warning. In this example, a viewer of Response Video 5 may determine that Response Video 5 is offensive and push an interactive button of a media player indicative of a warning. In response, the server device can change the status of Response Video 5 from approved (not shown) to disapproved as shown in FIG. 6C. The server device may additionally flag Response Video 5 as having a warning in column 688′. Finally, the server device can rank the response videos as described herein, and can indicate that ranking in the ranking column 689′. As shown in FIG. 6C, only approved response videos are ranked.
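The approval-gated ranking shown in table 680′ can be sketched as follows (Python; the row values are illustrative assumptions in the style of FIGS. 6A-6C, not the figures' actual contents):

```python
# Illustrative rows in the style of table 680' (approval values are assumed)
rows = [
    {"title": "Response Video 1", "user_id": "1234", "campaign_id": "ABCD",
     "approval": "Approved", "warning": None, "ranking": None},
    {"title": "Response Video 2", "user_id": "5678", "campaign_id": "ABCD",
     "approval": "Approved", "warning": None, "ranking": None},
    {"title": "Response Video 5", "user_id": "3579", "campaign_id": "ABCD",
     "approval": "Disapproved", "warning": "Y", "ranking": None},
]

def rank_responses(rows):
    """Assign a ranking to approved responses only, as in table 680'."""
    approved = [r for r in rows if r["approval"] == "Approved"]
    for position, row in enumerate(approved, start=1):
        row["ranking"] = position
    return rows

rows = rank_responses(rows)
# the disapproved, warned response keeps no ranking
```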
In some alternative embodiments, pending and/or disapproved videos can be ranked. - While generally described with respect to news programs (politics, entertainment, etc.), the media player can also be used as a self-service platform. For example, the media player can be used to show videos associated with a wedding, other party or celebration, internal business promotion, product testimonial, corporate survey, employment application, etc. In such an example, a user can produce a request for videos for a selected group of users. The request can be to tell a story about the bride and/or groom. All videos, and/or only approved videos, can be viewable via one of the multimedia streams. Finally, a user, or a third party professional producer, can prepare and present the final wedding video (multimedia presentation) that includes selected videos submitted in response to the request.
- While generally described herein as users defining responses to topic requests, the application can also be used to create user profile or biographical videos, videos which respond to a specific user's previous video submission (e.g., instead of responding to the overall topic itself, User A creates a response to User B's response to the topic request), as well as responses to those responses. All of these can include their own parameters.
- Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
- Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.) or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
- While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where methods described above indicate certain events occurring in certain order, the ordering of certain events may be modified. Additionally, certain of the events may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above.
Claims (16)
1. A method, comprising:
receiving, at a compute device from a first mobile device, a first video recorded at the first mobile device in response to a topic video, the topic video having an associated topic identifier, the first video associated with the topic identifier such that the first video has a linked relationship with the topic video;
receiving, at the compute device from a second mobile device, a second video recorded at the second mobile device in response to the topic video, the second video associated with the topic identifier such that the second video has a linked relationship with the topic video; and
generating a multimedia presentation using a plurality of videos generated in response to the topic video, the plurality of videos including the first video and the second video, wherein the identification of the plurality of videos for the multimedia presentation is based on a ranking of videos that considers an attribute associated with a first user that generated the first video and an attribute associated with a second user that generated the second video.
2. The method of claim 1, wherein the ranking of videos considers an opinion of the first user.
3. The method of claim 1, wherein the ranking of videos considers a social influence level of the first user.
4. The method of claim 1, wherein the ranking of videos considers a personal attribute of the first user.
5. The method of claim 1, wherein the ranking of videos considers a physical location of the first user.
6. A non-transitory computer-readable medium having a content creation tool stored thereon for use on one or more server devices, the content creation tool including:
a database section that when executed, causes the content creation tool to store a first video recorded at a first mobile device in response to a topic video, the topic video having an associated topic identifier, the first video associated with the topic identifier such that the first video has a linked relationship with the topic video, and causes the content creation tool to store a second video recorded at a second mobile device in response to the topic video, the second video associated with the topic identifier such that the second video has a linked relationship with the topic video;
an analysis section that when executed, causes the content creation tool to rank a plurality of videos that considers an attribute associated with a first user that generated the first video and an attribute associated with a second user that generated the second video, wherein the plurality of videos includes the first video and the second video; and
a curation section that when executed, causes the content creation tool to generate a multimedia presentation based on the ranking of the plurality of videos.
7. The non-transitory computer-readable medium of claim 6, wherein the analysis section generates a ranked list of the plurality of videos, and the curation section generates the multimedia presentation based on the ranked list.
8. The non-transitory computer-readable medium of claim 6, wherein the ranking of the plurality of videos considers an opinion of the first user.
9. The non-transitory computer-readable medium of claim 6, wherein the ranking of the plurality of videos considers a social influence level of the first user.
10. The non-transitory computer-readable medium of claim 6, wherein the ranking of the plurality of videos considers a personal attribute of the first user.
11. The non-transitory computer-readable medium of claim 6, wherein the ranking of the plurality of videos considers a physical location of the first user.
12. A computer implemented method performed by a content creation tool on one or more server devices, comprising:
causing the content creation tool to store a first video recorded at a first mobile device in response to a topic video, the topic video having an associated topic identifier, the first video associated with the topic identifier such that the first video has a linked relationship with the topic video;
causing the content creation tool to store a second video recorded at a second mobile device in response to the topic video, the second video associated with the topic identifier such that the second video has a linked relationship with the topic video;
causing the content creation tool to rank a plurality of videos that considers an attribute associated with a first user that generated the first video and an attribute associated with a second user that generated the second video, wherein the plurality of videos includes the first video and the second video; and
causing the content creation tool to generate a multimedia presentation based on the ranking of the plurality of videos.
13. The computer implemented method of claim 12, wherein the ranking of videos considers an opinion of the first user.
14. The computer implemented method of claim 12, wherein the ranking of videos considers a social influence level of the first user.
15. The computer implemented method of claim 12, wherein the ranking of videos considers a personal attribute of the first user.
16. The computer implemented method of claim 12, wherein the ranking of videos considers a physical location of the first user.
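The method of claim 1 can be sketched as follows. This is an illustrative assumption, not the claimed implementation: the claims do not fix a particular scoring formula, so a hypothetical social-influence score stands in for the user attributes considered by the ranking, and all function and field names are invented for the example.

```python
def rank_videos(videos, user_attrs):
    # Rank videos by an attribute of the user who generated each one
    # (here, an assumed "influence" score; higher ranks first).
    # Python's sort is stable, so ties keep receipt order.
    return sorted(videos, key=lambda v: -user_attrs[v["user_id"]]["influence"])

def generate_presentation(topic_id, videos, user_attrs, top_n=2):
    # Keep only videos with a linked relationship to the topic video,
    # then identify the top-ranked videos for the multimedia presentation.
    linked = [v for v in videos if v["topic_id"] == topic_id]
    return [v["video_id"] for v in rank_videos(linked, user_attrs)[:top_n]]

videos = [
    {"video_id": "V1", "topic_id": "ABCD", "user_id": "1234"},  # first video
    {"video_id": "V2", "topic_id": "ABCD", "user_id": "5678"},  # second video
    {"video_id": "V3", "topic_id": "WXYZ", "user_id": "1234"},  # other topic
]
attrs = {"1234": {"influence": 10}, "5678": {"influence": 50}}
print(generate_presentation("ABCD", videos, attrs))  # ['V2', 'V1']
```

The same three steps map onto the database, analysis, and curation sections of claim 6: storage of topic-linked videos, attribute-based ranking, and presentation generation from the ranked list.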
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/216,996 US20160330398A1 (en) | 2015-05-07 | 2016-07-22 | Media Content Creation Application |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/706,934 US9402050B1 (en) | 2015-05-07 | 2015-05-07 | Media content creation application |
US15/216,996 US20160330398A1 (en) | 2015-05-07 | 2016-07-22 | Media Content Creation Application |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/706,934 Continuation US9402050B1 (en) | 2015-05-07 | 2015-05-07 | Media content creation application |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160330398A1 true US20160330398A1 (en) | 2016-11-10 |
Family
ID=56411324
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/706,934 Expired - Fee Related US9402050B1 (en) | 2015-05-07 | 2015-05-07 | Media content creation application |
US15/216,996 Abandoned US20160330398A1 (en) | 2015-05-07 | 2016-07-22 | Media Content Creation Application |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/706,934 Expired - Fee Related US9402050B1 (en) | 2015-05-07 | 2015-05-07 | Media content creation application |
Country Status (1)
Country | Link |
---|---|
US (2) | US9402050B1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10728443B1 (en) | 2019-03-27 | 2020-07-28 | On Time Staffing Inc. | Automatic camera angle switching to create combined audiovisual file |
US10963841B2 (en) | 2019-03-27 | 2021-03-30 | On Time Staffing Inc. | Employment candidate empathy scoring system |
US11023735B1 (en) | 2020-04-02 | 2021-06-01 | On Time Staffing, Inc. | Automatic versioning of video presentations |
WO2021178824A1 (en) * | 2020-03-06 | 2021-09-10 | Johnson J R | Video script generation, presentation and video recording with flexible overwriting |
US11127232B2 (en) | 2019-11-26 | 2021-09-21 | On Time Staffing Inc. | Multi-camera, multi-sensor panel data extraction system and method |
US11144882B1 (en) | 2020-09-18 | 2021-10-12 | On Time Staffing Inc. | Systems and methods for evaluating actions over a computer network and establishing live network connections |
US11423071B1 (en) | 2021-08-31 | 2022-08-23 | On Time Staffing, Inc. | Candidate data ranking method using previously selected candidate data |
US11727040B2 (en) | 2021-08-06 | 2023-08-15 | On Time Staffing, Inc. | Monitoring third-party forum contributions to improve searching through time-to-live data assignments |
US11907652B2 (en) | 2022-06-02 | 2024-02-20 | On Time Staffing, Inc. | User interface and systems for document creation |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201914377D0 (en) * | 2019-10-04 | 2019-11-20 | Sprat Ltd | Shoutout video |
CN115086730B (en) * | 2022-06-16 | 2024-04-02 | 平安国际融资租赁有限公司 | Subscription video generation method, subscription video generation system, computer equipment and subscription video generation medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080281854A1 (en) * | 2007-05-07 | 2008-11-13 | Fatdoor, Inc. | Opt-out community network based on preseeded data |
US20100093320A1 (en) * | 2006-10-11 | 2010-04-15 | Firstin Wireless Technology Inc | Methods and systems for providing a name-based communication service |
US20150000703A1 (en) * | 2011-09-09 | 2015-01-01 | Wylie Ott | Bowling ball maintenance device |
Family Cites Families (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5778181A (en) | 1996-03-08 | 1998-07-07 | Actv, Inc. | Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments |
US6715126B1 (en) | 1998-09-16 | 2004-03-30 | International Business Machines Corporation | Efficient streaming of synchronized web content from multiple sources |
US20030001880A1 (en) | 2001-04-18 | 2003-01-02 | Parkervision, Inc. | Method, system, and computer program product for producing and distributing enhanced media |
US6724403B1 (en) | 1999-10-29 | 2004-04-20 | Surfcast, Inc. | System and method for simultaneous display of multiple information sources |
US7028264B2 (en) | 1999-10-29 | 2006-04-11 | Surfcast, Inc. | System and method for simultaneous display of multiple information sources |
KR100867760B1 (en) | 2000-05-15 | 2008-11-10 | 소니 가부시끼 가이샤 | Playback device, playback method and recording medium |
US8234218B2 (en) | 2000-10-10 | 2012-07-31 | AddnClick, Inc | Method of inserting/overlaying markers, data packets and objects relative to viewable content and enabling live social networking, N-dimensional virtual environments and/or other value derivable from the content |
ES2488096T3 (en) | 2000-10-11 | 2014-08-26 | United Video Properties, Inc. | Systems and methods to complement multimedia on demand |
US20020082730A1 (en) | 2000-12-21 | 2002-06-27 | Microsoft Corporation | Universal media player |
US7073130B2 (en) | 2001-01-31 | 2006-07-04 | Microsoft Corporation | Methods and systems for creating skins |
US6903779B2 (en) | 2001-05-16 | 2005-06-07 | Yahoo! Inc. | Method and system for displaying related components of a media stream that has been transmitted over a computer network |
US7027054B1 (en) | 2002-08-14 | 2006-04-11 | Avaworks, Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US7466334B1 (en) * | 2002-09-17 | 2008-12-16 | Commfore Corporation | Method and system for recording and indexing audio and video conference calls allowing topic-based notification and navigation of recordings |
US7493646B2 (en) * | 2003-01-30 | 2009-02-17 | United Video Properties, Inc. | Interactive television systems with digital video recording and adjustable reminders |
GB0306603D0 (en) | 2003-03-21 | 2003-04-30 | First Person Invest Ltd | Method and apparatus for broadcasting communications |
US20060026638A1 (en) | 2004-04-30 | 2006-02-02 | Vulcan Inc. | Maintaining a graphical user interface state that is based on a selected type of content |
US9948882B2 (en) | 2005-08-11 | 2018-04-17 | DISH Technologies L.L.C. | Method and system for toasted video distribution |
US20070094333A1 (en) | 2005-10-20 | 2007-04-26 | C Schilling Jeffrey | Video e-mail system with prompter and subtitle text |
CA2538438A1 (en) | 2006-03-01 | 2007-09-01 | Legalview Assets, Limited | Systems and methods for media programming |
US20100198697A1 (en) * | 2006-07-21 | 2010-08-05 | Videoegg, Inc. | Fixed Position Interactive Advertising |
US20090037802A1 (en) | 2007-07-31 | 2009-02-05 | Matthias Klier | Integrated System and Method to Create a Video Application for Distribution in the Internet |
US7941092B2 (en) | 2006-11-22 | 2011-05-10 | Bindu Rama Rao | Media distribution server that presents interactive media to a mobile device |
US8756333B2 (en) | 2006-11-22 | 2014-06-17 | Myspace Music Llc | Interactive multicast media service |
US8700714B1 (en) | 2006-12-06 | 2014-04-15 | Google, Inc. | Collaborative streaming of video content |
US20080270467A1 (en) | 2007-04-24 | 2008-10-30 | S.R. Clarke, Inc. | Method and system for conducting a remote employment video interview |
CN101681194A (en) | 2007-05-02 | 2010-03-24 | 谷歌公司 | user interfaces for web-based video player |
US20090144237A1 (en) | 2007-11-30 | 2009-06-04 | Michael Branam | Methods, systems, and computer program products for providing personalized media services |
US8359303B2 (en) | 2007-12-06 | 2013-01-22 | Xiaosong Du | Method and apparatus to provide multimedia service using time-based markup language |
US20090216577A1 (en) | 2008-02-22 | 2009-08-27 | Killebrew Todd F | User-generated Review System |
EP2291763A4 (en) | 2008-06-18 | 2012-05-09 | Political Media Australia Ltd | Assessing digital content across a communications network |
WO2010002921A1 (en) | 2008-07-01 | 2010-01-07 | Yoostar Entertainment Group, Inc. | Interactive systems and methods for video compositing |
US8645599B2 (en) | 2009-01-07 | 2014-02-04 | Renesas Electronics America, Inc. | Consumer media player |
US8539359B2 (en) | 2009-02-11 | 2013-09-17 | Jeffrey A. Rapaport | Social network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic |
US20100318520A1 (en) | 2009-06-01 | 2010-12-16 | Telecordia Technologies, Inc. | System and method for processing commentary that is related to content |
US8799253B2 (en) | 2009-06-26 | 2014-08-05 | Microsoft Corporation | Presenting an assembled sequence of preview videos |
US20110035466A1 (en) | 2009-08-10 | 2011-02-10 | Sling Media Pvt Ltd | Home media aggregator system and method |
US9442690B2 (en) | 2009-10-01 | 2016-09-13 | Iheartmedia Management Services, Inc. | Graphical user interface for content management |
US8677284B2 (en) | 2009-11-04 | 2014-03-18 | Alpine Electronics, Inc. | Method and apparatus for controlling and displaying contents in a user interface |
US20110218948A1 (en) | 2009-12-15 | 2011-09-08 | Fabricio Benevenuto De Souza | Methods for detecting spammers and content promoters in online video social networks |
US20110154404A1 (en) | 2009-12-17 | 2011-06-23 | At & T Intellectual Property I, L.P. | Systems and Methods to Provide Data Services for Concurrent Display with Media Content Items |
US8341037B2 (en) | 2009-12-18 | 2012-12-25 | Apple Inc. | Mixed source media playback |
US9294526B2 (en) | 2009-12-28 | 2016-03-22 | Microsoft Technology Licensing, Llc | Managing multiple dynamic media streams |
US8732605B1 (en) | 2010-03-23 | 2014-05-20 | VoteBlast, Inc. | Various methods and apparatuses for enhancing public opinion gathering and dissemination |
US8291452B1 (en) | 2011-05-20 | 2012-10-16 | Google Inc. | Interface for watching a stream of videos |
US8713592B2 (en) | 2010-06-29 | 2014-04-29 | Google Inc. | Self-service channel marketplace |
US20120158527A1 (en) | 2010-12-21 | 2012-06-21 | Class6Ix, Llc | Systems, Methods and/or Computer Readable Storage Media Facilitating Aggregation and/or Personalized Sequencing of News Video Content |
US9876827B2 (en) | 2010-12-27 | 2018-01-23 | Google Llc | Social network collaboration space |
US20120192225A1 (en) | 2011-01-25 | 2012-07-26 | Youtoo Technologies, LLC | Administration of Content Creation and Distribution System |
AU2011202182B1 (en) * | 2011-05-11 | 2011-10-13 | Frequency Ip Holdings, Llc | Creation and presentation of selective digital content feeds |
US10127564B2 (en) * | 2011-09-15 | 2018-11-13 | Stephan HEATH | System and method for using impressions tracking and analysis, location information, 2D and 3D mapping, mobile mapping, social media, and user behavior and information for generating mobile and internet posted promotions or offers for, and/or sales of, products and/or services |
US10120877B2 (en) * | 2011-09-15 | 2018-11-06 | Stephan HEATH | Broad and alternative category clustering of the same, similar or different categories in social/geo/promo link promotional data sets for end user display of interactive ad links, coupons, mobile coupons, promotions and sale of products, goods and services integrated with 3D spatial geomapping and mobile mapping and social networking |
US8732579B2 (en) | 2011-09-23 | 2014-05-20 | Klip, Inc. | Rapid preview of remote video content |
HK1206116A1 (en) | 2011-10-10 | 2015-12-31 | Vivoom, Inc. | Network-based rendering and steering of visual effects |
US9633016B2 (en) | 2011-11-01 | 2017-04-25 | Google Inc. | Integrated social network and stream playback |
WO2013082142A1 (en) | 2011-11-28 | 2013-06-06 | Discovery Communications, Llc | Methods and apparatus for enhancing a digital content experience |
US9032020B2 (en) | 2011-12-29 | 2015-05-12 | Google Inc. | Online video enhancement |
US20130179925A1 (en) | 2012-01-06 | 2013-07-11 | United Video Properties, Inc. | Systems and methods for navigating through related content based on a profile associated with a user |
US20130201305A1 (en) | 2012-02-06 | 2013-08-08 | Research In Motion Corporation | Division of a graphical display into regions |
US8687947B2 (en) | 2012-02-20 | 2014-04-01 | Rr Donnelley & Sons Company | Systems and methods for variable video production, distribution and presentation |
US20130262585A1 (en) | 2012-03-30 | 2013-10-03 | Myspace Llc | System and method for presentation of video streams relevant to social network users |
US8682809B2 (en) | 2012-04-18 | 2014-03-25 | Scorpcast, Llc | System and methods for providing user generated video reviews |
US20130294751A1 (en) | 2012-05-07 | 2013-11-07 | Toshitsugu Maeda | Method and Computer-Readable Medium for Creating and Editing Video Package Using Mobile Communications Device |
US20140036023A1 (en) | 2012-05-31 | 2014-02-06 | Volio, Inc. | Conversational video experience |
US20140013230A1 (en) | 2012-07-06 | 2014-01-09 | Hanginout, Inc. | Interactive video response platform |
US9699485B2 (en) | 2012-08-31 | 2017-07-04 | Facebook, Inc. | Sharing television and video programming through social networking |
US20140068437A1 (en) * | 2012-09-06 | 2014-03-06 | Zazoom, Llc | Computerized system and method of communicating about digital content |
US20140096167A1 (en) | 2012-09-28 | 2014-04-03 | Vringo Labs, Inc. | Video reaction group messaging with group viewing |
US20140101548A1 (en) | 2012-10-05 | 2014-04-10 | Apple Inc. | Concurrently presenting interactive invitational content and media items within a media station through the use of bumper content |
EP2720470B1 (en) | 2012-10-12 | 2018-01-17 | Sling Media, Inc. | Aggregated control and presentation of media content from multiple sources |
US20140253727A1 (en) * | 2013-03-08 | 2014-09-11 | Evocentauri Inc. | Systems and methods for facilitating communications between a user and a public official |
US9148398B2 (en) | 2013-03-13 | 2015-09-29 | Google Inc. | Prioritized and contextual display of aggregated account notifications |
US20140280090A1 (en) | 2013-03-15 | 2014-09-18 | Call-It-Out, Inc. | Obtaining rated subject content |
WO2014190216A1 (en) * | 2013-05-22 | 2014-11-27 | Thompson David S | Fantasy sports interleaver |
US9304648B2 (en) * | 2013-06-26 | 2016-04-05 | Google Inc. | Video segments for a video related to a task |
US20150007030A1 (en) | 2013-07-01 | 2015-01-01 | Pnina Noy | System and method for associating video files |
US20150020106A1 (en) * | 2013-07-11 | 2015-01-15 | Rawllin International Inc. | Personalized video content from media sources |
US20150046812A1 (en) | 2013-08-12 | 2015-02-12 | Google Inc. | Dynamic resizable media item player |
US20150365725A1 (en) * | 2014-06-11 | 2015-12-17 | Rawllin International Inc. | Extract partition segments of personalized video channel |
US9619751B2 (en) * | 2014-06-27 | 2017-04-11 | Microsoft Technology Licensing, Llc | Intelligent delivery of actionable content |
- 2015-05-07 US US14/706,934 patent/US9402050B1/en not_active Expired - Fee Related
- 2016-07-22 US US15/216,996 patent/US20160330398A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100093320A1 (en) * | 2006-10-11 | 2010-04-15 | Firstin Wireless Technology Inc | Methods and systems for providing a name-based communication service |
US20080281854A1 (en) * | 2007-05-07 | 2008-11-13 | Fatdoor, Inc. | Opt-out community network based on preseeded data |
US20150000703A1 (en) * | 2011-09-09 | 2015-01-01 | Wylie Ott | Bowling ball maintenance device |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11457140B2 (en) | 2019-03-27 | 2022-09-27 | On Time Staffing Inc. | Automatic camera angle switching in response to low noise audio to create combined audiovisual file |
US10963841B2 (en) | 2019-03-27 | 2021-03-30 | On Time Staffing Inc. | Employment candidate empathy scoring system |
US11961044B2 (en) | 2019-03-27 | 2024-04-16 | On Time Staffing, Inc. | Behavioral data analysis and scoring system |
US10728443B1 (en) | 2019-03-27 | 2020-07-28 | On Time Staffing Inc. | Automatic camera angle switching to create combined audiovisual file |
US11863858B2 (en) | 2019-03-27 | 2024-01-02 | On Time Staffing Inc. | Automatic camera angle switching in response to low noise audio to create combined audiovisual file |
US11127232B2 (en) | 2019-11-26 | 2021-09-21 | On Time Staffing Inc. | Multi-camera, multi-sensor panel data extraction system and method |
US11783645B2 (en) | 2019-11-26 | 2023-10-10 | On Time Staffing Inc. | Multi-camera, multi-sensor panel data extraction system and method |
WO2021178824A1 (en) * | 2020-03-06 | 2021-09-10 | Johnson J R | Video script generation, presentation and video recording with flexible overwriting |
US11343463B2 (en) | 2020-03-06 | 2022-05-24 | Johnson, J.R. | Video script generation, presentation and video recording with flexible overwriting |
US11861904B2 (en) | 2020-04-02 | 2024-01-02 | On Time Staffing, Inc. | Automatic versioning of video presentations |
US11636678B2 (en) | 2020-04-02 | 2023-04-25 | On Time Staffing Inc. | Audio and video recording and streaming in a three-computer booth |
US11184578B2 (en) | 2020-04-02 | 2021-11-23 | On Time Staffing, Inc. | Audio and video recording and streaming in a three-computer booth |
US11023735B1 (en) | 2020-04-02 | 2021-06-01 | On Time Staffing, Inc. | Automatic versioning of video presentations |
US11720859B2 (en) | 2020-09-18 | 2023-08-08 | On Time Staffing Inc. | Systems and methods for evaluating actions over a computer network and establishing live network connections |
US11144882B1 (en) | 2020-09-18 | 2021-10-12 | On Time Staffing Inc. | Systems and methods for evaluating actions over a computer network and establishing live network connections |
US11727040B2 (en) | 2021-08-06 | 2023-08-15 | On Time Staffing, Inc. | Monitoring third-party forum contributions to improve searching through time-to-live data assignments |
US11966429B2 (en) | 2021-08-06 | 2024-04-23 | On Time Staffing Inc. | Monitoring third-party forum contributions to improve searching through time-to-live data assignments |
US11423071B1 (en) | 2021-08-31 | 2022-08-23 | On Time Staffing, Inc. | Candidate data ranking method using previously selected candidate data |
US11907652B2 (en) | 2022-06-02 | 2024-02-20 | On Time Staffing, Inc. | User interface and systems for document creation |
US12321694B2 (en) | 2022-06-02 | 2025-06-03 | On Time Staffing Inc. | User interface and systems for document creation |
Also Published As
Publication number | Publication date |
---|---|
US9402050B1 (en) | 2016-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9402050B1 (en) | Media content creation application | |
US10362340B2 (en) | Techniques for creation of auto-montages for media content | |
JP7293338B2 (en) | Video processing method, apparatus, device and computer program | |
US9460752B2 (en) | Multi-source journal content integration systems and methods | |
JP5981024B2 (en) | Sharing TV and video programs via social networking | |
US9886229B2 (en) | System and method for multi-angle videos | |
US20130262564A1 (en) | Interactive media distribution systems and methods | |
US20140143218A1 (en) | Method for Crowd Sourced Multimedia Captioning for Video Content | |
US20150127643A1 (en) | Digitally displaying and organizing personal multimedia content | |
US20140344661A1 (en) | Personalized Annotations | |
US20150046842A1 (en) | System for providing a social media compilation | |
US20190362053A1 (en) | Media distribution network, associated program products, and methods of using the same | |
US20170302974A1 (en) | Systems and methods for enhanced video service | |
CN112235603B (en) | Video distribution system, method, computing device, user equipment and video playing method | |
US20160063087A1 (en) | Method and system for providing location scouting information | |
CN114143592B (en) | Video processing method, video processing device and computer-readable storage medium | |
US9721321B1 (en) | Automated interactive dynamic audio/visual performance with integrated data assembly system and methods | |
CN111433767B (en) | System and method for filtering supplemental content of an electronic book | |
KR20220160025A (en) | Automatically generating enhancements to AV content | |
US20150312344A1 (en) | Intelligent Media Production Systems and Services | |
US9329748B1 (en) | Single media player simultaneously incorporating multiple different streams for linked content | |
US10264324B2 (en) | System and method for group-based media composition | |
US12101516B1 (en) | Voice content selection for video content | |
US12335584B2 (en) | Method and system for generating smart thumbnails | |
US10924441B1 (en) | Dynamically generating video context |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SNIPME, INC., VIRGINIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RECCHIA, PHILIP ANTHONY;MITCHELL, ROBERT E.;REEL/FRAME:039221/0110 Effective date: 20160119 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |