US20130101162A1 - Multimedia System with Processing of Multimedia Data Streams - Google Patents
- Publication number
- US20130101162A1 (U.S. application Ser. No. 13/628,750)
- Authority
- US
- United States
- Prior art keywords
- media
- multimedia data
- module
- data streams
- data stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
Definitions
- the present disclosure relates generally to multimedia data streams, and more specifically to recording and playing back of processed multimedia data streams.
- a conventional media capture module, such as a digital camera to provide an example, can record and/or store a scene in its field of view.
- These conventional media capture modules can include extremely wide angle lenses, such as a pin hole lens and a fish eye lens to provide some examples, to capture images of the scene within a large field of view at very high resolutions.
- conventional media playback devices, such as monitors, televisions, or mobile communications devices, such as smart phones or portable computers to provide some examples, are only capable of playing back the images of the scene at much lower resolutions. This lower resolution of the conventional media playback devices allows the high resolution images of the scene to be modified by, for example, zooming, cutting, and/or cropping, for playback without detrimentally affecting the quality of the images.
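Because the capture resolution exceeds the playback resolution, a playback-sized window can simply be cut out of each high resolution frame, with no upscaling and therefore no visible quality loss. The sketch below only illustrates this idea; the function name and the NumPy frame representation are assumptions, not anything specified in the disclosure.

```python
import numpy as np

def crop_to_playback(frame: np.ndarray, center: tuple, out_h: int, out_w: int) -> np.ndarray:
    """Cut an out_h x out_w window around `center` from a higher-resolution frame.

    Because the source resolution exceeds the playback resolution, the crop is a
    pure pixel selection (no upscaling), so playback quality is preserved.
    """
    h, w = frame.shape[:2]
    cy, cx = center
    # Clamp the window so it stays inside the captured frame.
    top = min(max(cy - out_h // 2, 0), h - out_h)
    left = min(max(cx - out_w // 2, 0), w - out_w)
    return frame[top:top + out_h, left:left + out_w]

# Example: a simulated 4000x6000 capture cropped to a 1080x1920 playback window.
capture = np.zeros((4000, 6000, 3), dtype=np.uint8)
playback_frame = crop_to_playback(capture, center=(2000, 3000), out_h=1080, out_w=1920)
print(playback_frame.shape)  # (1080, 1920, 3)
```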
- users of the conventional media capture module have to manually modify the high resolution images of the scene to select, extract, and/or merge one or more images from a multimedia data stream depicting the scene.
- participation in the scene by these users is rather limited.
- a parent at a birthday party conventionally operates the conventional media capture module to capture images, video, and/or audio of the birthday party.
- this parent's participation in the birthday party is rather limited.
- FIG. 1 illustrates a block diagram of an exemplary remote processing media system according to an exemplary embodiment of the present disclosure
- FIG. 2 illustrates a block diagram of a second exemplary remote processing media system according to an exemplary embodiment of the present disclosure
- FIG. 3 illustrates a block diagram of a third exemplary remote processing media system according to an exemplary embodiment of the present disclosure
- FIG. 4 illustrates a block diagram of an exemplary local processing media system according to an exemplary embodiment of the present disclosure
- FIG. 5 illustrates a block diagram of a first media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure
- FIG. 6 illustrates a block diagram of a second media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure.
- Embodiments of the disclosure can be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the disclosure can also be implemented as instructions stored on a machine-readable medium, which can be read and executed by one or more processors.
- a machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
- a machine-readable medium can include non-transitory machine-readable mediums such as read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others.
- the machine-readable medium can include transitory machine-readable medium such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
- firmware, software, routines, instructions can be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
- the term module shall be understood to include at least one of software, firmware, and hardware (such as one or more circuits, microchips, or devices), or any combination thereof.
- each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module.
- multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
- the following Detailed Description describes various media systems that can record and/or store a scene in their field of view as a multimedia data stream.
- these various media systems capture the scene at resolutions that exceed the resolutions at which it can be played back.
- the multimedia data streams captured by these media systems can be processed by, for example, zooming, cutting, and/or cropping, for playback without detrimentally affecting the quality of the scene.
- the various media systems can automatically process the multimedia data streams depicting the scene locally or remotely through a communications network with minimal user assistance.
- the user of these media systems can identify one or more particular objects, such as one or more particular people, one or more particular animals, one or more particular objects, one or more particular scenes, one or more particular voices, and/or one or more particular backgrounds to provide some examples, from a scene in a field of view of the media systems.
- the media systems can automatically select, extract, and/or merge portions of a multimedia data stream depicting the scene that includes the one or more particular objects to compile a new multimedia data stream as a processed multimedia data stream for playback.
- users of the media systems can identify one or more people from the scene.
- the media systems select, extract, and/or merge portions of the multimedia data stream that include these people to provide the processed multimedia data stream. This effectively allows the media systems to essentially track or to follow the people within the scene.
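One way to "follow" an identified person in this manner is to mark which frames a recognizer reports the person in and merge nearby frame ranges into continuous clips. The following sketch assumes such a per-frame recognizer exists; `contains_target` is a placeholder, not an interface from the disclosure.

```python
from typing import Callable, List, Tuple

def find_segments(num_frames: int,
                  contains_target: Callable[[int], bool],
                  max_gap: int = 30) -> List[Tuple[int, int]]:
    """Return (start, end) frame ranges in which the identified person appears.

    `contains_target(i)` stands in for whatever recognizer the system uses
    (face matching, voice matching, etc.). Gaps shorter than `max_gap` frames
    are bridged so the extracted clip follows the person smoothly rather than
    stuttering; the default gap is an illustrative choice.
    """
    segments: List[Tuple[int, int]] = []
    start = None
    last_hit = None
    for i in range(num_frames):
        if contains_target(i):
            if start is None:
                start = i
            last_hit = i
        elif start is not None and i - last_hit > max_gap:
            segments.append((start, last_hit))
            start = None
    if start is not None:
        segments.append((start, last_hit))
    return segments

# Example: the person is "detected" in three bursts of a 300-frame stream.
hits = set(range(10, 60)) | set(range(70, 120)) | set(range(250, 280))
print(find_segments(300, lambda i: i in hits))
# [(10, 119), (250, 279)]  (the short gap at frames 60-69 is bridged, the long one is not)
```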
- the media systems can store previously recorded multimedia data streams of other scenes. These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples.
- the user of these media systems can identify the one or more particular objects from these previously recorded multimedia data streams in a similar manner as identifying the one or more particular objects from the multimedia data stream.
- the media systems can automatically select, extract, and/or merge portions of the previously recorded multimedia data streams that include the one or more particular objects and compile these extracted and/or framed portions, as well as extracted and/or framed portions of the multimedia data stream, as a new multimedia data stream as a processed multimedia data stream for playback.
- users of the media systems can identify one or more people from previously captured scenes.
- the media systems select, extract, and/or merge portions of the previously recorded multimedia data streams that include these people and compile these extracted and/or framed portions to provide the processed multimedia data stream.
- FIG. 1 illustrates a block diagram of an exemplary remote processing media system according to an exemplary embodiment of the present disclosure.
- An exemplary media system 100 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the exemplary media system 100 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from images, video, and/or audio of previously recorded multimedia data streams to provide a processed multimedia data stream for playback.
- the exemplary media system 100 includes a media capture module 102 , a remote media processing system 104 , a remote storage system 106 , and a media playback device 108 that are communicatively coupled via a communication network 110 .
- the media capture module 102 records and/or stores a scene in its field of view as a multimedia data stream. Specifically, media capture module 102 records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the multimedia data stream can represent a still image of the scene at a particular instance in time, commonly referred to as an image, a series of the images over a duration in time that represents the scene in motion, commonly referred to as video, a representation of various sounds occurring within, or near, the scene, commonly referred to as audio, or any combination thereof.
- the media capture module 102 can represent a digital camera, a digital video camera, a mobile communications device, such as a smart phone or portable computer, that has an integrated camera, or any other electronic device that is capable of recording and/or storing images, video, and/or audio that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of present disclosure.
- the media capture module 102 provides the multimedia data stream to the remote media processing system 104 for processing via the communication network 110 .
- the media capture module 102 can pre-process the multimedia data stream before providing it to the remote media processing system 104 .
- This pre-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, image retouching to provide some examples.
- the media capture module 102 can communicate with the remote media processing system 104 to assist the remote media processing system 104 .
- the media capture module 102 can include an interface with the remote media processing system 104 to direct processing by the remote media processing system 104 in response to commands from the media capture module 102 .
- These commands can be automatically generated by the media capture module 102 and/or be generated in response to a user of the media capture module 102 .
- the media capture module 102 can play back the multimedia data stream using the interface.
- a user of the media capture module 102 can provide user information to assist in identifying one or more particular objects, such as one or more particular people, one or more particular animals, one or more particular objects, one or more particular scenes, one or more particular voices, one or more particular backgrounds, or any other supplemental, known objects to provide some examples, from the multimedia data stream for selecting, extracting, and/or merging.
- the user of the media capture module 102 can touch, or make a gesture around, the one or more particular objects as the multimedia data stream is being played back.
- the media capture module 102 can send the commands to cause the remote media processing system 104 to select, to extract, and/or to merge these one or more particular objects from the multimedia data stream.
- the media capture module 102 can capture multiple independent multimedia data streams that each correspond to a scene from among a series of independent scenes. For example, a first user of the media capture module 102 can capture a first scene as a first independent multimedia data stream and the first user, or a second user, of the media capture module 102 can capture a second scene as a second independent multimedia data stream.
- each scene from among the series of independent scenes is independent from each other in location, duration, and/or time to provide some examples.
- the remote media processing system 104 operates in conjunction with the remote storage system 106 to process multimedia data streams, such as the multimedia data stream and/or previously recorded multimedia data streams of other scenes that are stored within the remote storage system 106 . These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples.
- the media capture module 102 and/or the media playback device 108 provide commands to the remote media processing system 104 to identify the one or more particular objects from the multimedia data stream and/or from the previously recorded multimedia data streams. Additionally, the remote media processing system 104 can automatically identify the one or more particular objects from the multimedia data stream and/or from previously recorded multimedia data streams.
- the user of the media capture module 102 and/or of the media playback device 108 can provide user information to assist in identifying the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams for selecting, extracting, and/or merging.
- This user information can include one or more pointers, such as one or more textual inputs, that can be used to identify the one or more particular objects.
- the remote media processing system 104 can retrieve a portion of the previously recorded multimedia data streams from the remote storage system 106 that corresponds to the one or more pointers and select, extract, and/or merge the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams using the portion of the previously recorded multimedia data streams that corresponds to the one or more pointers.
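A minimal sketch of such pointer-based retrieval is shown below; the class and field names are illustrative, and the disclosure only requires that stored portions be indexable by one or more pointers such as textual inputs.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class StoredPortion:
    """A previously recorded portion plus the textual pointers that index it."""
    stream_id: str
    start_s: float
    end_s: float
    pointers: List[str]

class PointerIndexedStore:
    """Toy stand-in for the remote storage system's pointer index."""
    def __init__(self) -> None:
        self._by_pointer: Dict[str, List[StoredPortion]] = defaultdict(list)

    def add(self, portion: StoredPortion) -> None:
        # Register the portion under every pointer it is associated with.
        for pointer in portion.pointers:
            self._by_pointer[pointer.lower()].append(portion)

    def lookup(self, pointer: str) -> List[StoredPortion]:
        return self._by_pointer.get(pointer.lower(), [])

store = PointerIndexedStore()
store.add(StoredPortion("party_2012_06", 12.0, 48.5, ["alice", "birthday cake"]))
store.add(StoredPortion("zoo_2011_09", 0.0, 30.0, ["alice", "elephant"]))
print([p.stream_id for p in store.lookup("Alice")])  # ['party_2012_06', 'zoo_2011_09']
```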
- the remote media processing system 104 identifies one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various image, video, and/or audio recognition techniques.
- these image, video, and/or audio recognition techniques compare a portion of previously recorded multimedia data streams that include image, video, and/or audio of various previously recognized objects that are stored within the remote storage system 106 with the one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams to recognize the one or more particular objects.
- the previously recognized objects represent supplemental known information such as one or more previously recognized people, one or more previously recognized animals, one or more previously recognized objects, one or more previously recognized scenes, one or more previously recognized voices, and/or one or more previously recognized backgrounds to provide some examples.
- the remote media processing system 104 can identify one or more objects that are common between the multimedia data stream and/or the previously recorded multimedia data streams and recognize these one or more common objects as being the one or more particular objects using various image, video, and/or audio recognition techniques.
- the remote media processing system 104 can request assistance with recognition of one or more objects from the media capture module 102 and/or the media playback device 108 when none of the one or more objects have been previously recognized. For example, when the multimedia data stream and/or the previously recorded multimedia data streams includes one or more new objects that have not been previously recognized by the remote media processing system 104 , the remote media processing system 104 can request assistance to recognize these new objects as previously recognized objects in the future.
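The sketch below shows one plausible shape for this recognize-or-ask-for-help flow, matching detected objects against stored reference feature vectors; the feature extractor, the cosine similarity measure, and the 0.8 threshold are all assumptions rather than details from the disclosure.

```python
import numpy as np
from typing import Dict, List, Optional, Tuple

def recognize(embedding: np.ndarray,
              references: Dict[str, np.ndarray],
              threshold: float = 0.8) -> Optional[str]:
    """Match one detected object's feature vector against previously recognized objects."""
    best_label, best_score = None, -1.0
    for label, ref in references.items():
        # Cosine similarity between the detection and the stored reference vector.
        score = float(np.dot(embedding, ref) /
                      (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

def recognize_all(detections: List[np.ndarray],
                  references: Dict[str, np.ndarray]) -> Tuple[List[str], List[int]]:
    """Label what can be matched; queue the rest for user assistance."""
    labels, needs_assistance = [], []
    for i, emb in enumerate(detections):
        label = recognize(emb, references)
        if label is None:
            needs_assistance.append(i)   # ask the capture/playback device to identify it
        else:
            labels.append(label)
    return labels, needs_assistance

refs = {"alice": np.array([1.0, 0.0]), "rex": np.array([0.0, 1.0])}
dets = [np.array([0.9, 0.1]), np.array([0.5, 0.5])]
print(recognize_all(dets, refs))  # (['alice'], [1]) -- the second detection needs help
```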
- the remote media processing system 104 can select, extract, and/or merge portions of the multimedia data stream and/or the previously recorded multimedia data streams that include the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams and compile a new multimedia data stream that includes these portions as the processed multimedia data stream for playback by the media playback device 108 .
- the remote media processing system 104 can select, extract, and/or merge the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams as well as selecting, extracting, and/or merging other portions of the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams that surround the one or more particular objects.
- the remote storage system 106 stores the multimedia data stream for future retrieval by the remote media processing system 104 . Also, the remote storage system 106 stores previously recorded images, video, and/or audio of previously recorded multimedia data streams for recognition of the one or more particular objects by the remote media processing system 104 . Typically, the remote storage system 106 stores portions of previously recorded multimedia data streams that include previously recognized objects which can be indexed by one or more pointers. The remote storage system 106 can receive one or more pointers from the remote media processing system 104 and provide the portions of previously recorded multimedia data streams that correspond to the one or more pointers. The remote storage system 106 can update the previously recorded images, video, and/or audio when new objects are recognized by the remote media processing system 104 within the multimedia data stream and/or the previous multimedia data streams.
- the media playback device 108 plays back the processed multimedia data stream from the remote media processing system 104 .
- the media playback device 108 can display the images and/or the video and/or play back the audio within the processed multimedia data stream.
- the media playback device 108 can post-process the processed multimedia data stream before play back. This post-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, image retouching to provide some examples.
- the media playback device 108 can communicate with the remote media processing system 104 to assist the remote media processing system 104 .
- the media playback device 108 can include an interface with the remote media processing system 104 to direct processing by the remote media processing system 104 in response to commands from the media playback device 108 .
- the media playback device 108 can play back the processed multimedia data stream using the interface.
- a user of the media playback device 108 can provide user information regarding the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging.
- the user of the media playback device 108 can identify the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging.
- the user of the media playback device 108 can simply touch, or make a gesture around, the one or more particular objects as the processed multimedia data stream is being played back.
- the media playback device 108 can send the commands to cause the remote media processing system 104 to select, to extract, and/or to merge these one or more particular objects from the processed multimedia data.
- the media playback device 108 can include a monitor, a television, a mobile communications device, such as a smart phone or portable computer, or any other electronic device that is capable of playing back the processed multimedia data stream that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of present disclosure.
- the communication network 110 communicatively couples the media capture module 102 , the remote media processing system 104 , the remote storage system 106 , and the media playback device 108 .
- the communication network 110 can include any suitable wireless communication network, such as a cellular network to provide an example, any suitable wired communication network, such as a fiber optic network or cable network to provide some examples, or any combination of wireless and wired communication networks.
- although FIG. 1 illustrates a single media capture module 102 and a single media playback device 108 , the remote media processing system 104 and the remote storage system 106 can service multiple media capture modules 102 and/or multiple media playback devices 108 without departing from the spirit and scope of the present disclosure.
- These multiple media capture modules 102 and/or multiple media playback devices 108 can provide multimedia data streams and/or commands from multiple users of these devices to the remote media processing system 104 for processing in a substantially similar manner as described above.
- the media capture module 102 and the media playback device 108 can be integrated onto a single platform, such as a mobile communications device, a personal computer, a laptop computer, or any other integrated platform without departing from the spirit and scope of the present invention.
- aspects of the present disclosure can be integrated within conventional platforms so as to provide enhanced services.
- the remote media processing system 104 can be integrated within a web search service platform, such as those provided by Google™ or Bing™ to provide some examples.
- a first multimedia data stream, gathered by multiple media capture modules, such as multiple media capture modules 102 , of each of a first plurality of users, is uploaded to the search engine platform for secure, encrypted storage in return for a service fee.
- a second multimedia data stream, gathered by such multiple media capture modules, is stored locally, also with secure encryption.
- the remote media processing system 104 operates within the web search engine services platform by performing image, audio and video recognition and constructing a reverse indexed recognition database wherein textual descriptions are associated with recognized objects, sounds, speakers, persons, buildings, scenes, animals, and so on.
- recognition approaches can be shared with those used for image, audio and video search services. That is, search platforms currently support searching of a plurality of web located images using an uploaded image. This can easily be extended to searching amongst image frames in video and searching audio using an audio upload segment.
- using pre-recognized audio, image, and video elements, other audio, video, and image elements can be found in other media. For example, known and textually identified items such as the Eiffel Tower can be recognized currently in an uploaded image.
- various common content can be identified within the first multimedia data stream.
- popular elements can be grouped and, with user assistance, fully identified via prompting for associated text.
- Other metadata can also be gathered to support searching, for example, time of day, date, latitude, longitude, and so on.
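A reverse indexed recognition database of this kind can be sketched as a mapping from description terms to recognized elements, each carrying metadata of the kind listed above. The field names and tokenization below are illustrative assumptions, not structures from the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class RecognizedElement:
    """One recognized object/sound/scene with searchable metadata (illustrative fields)."""
    element_id: str
    kind: str                 # "person", "building", "sound", ...
    stream_id: str
    date: str
    latitude: float
    longitude: float

class ReverseIndex:
    """Textual description -> recognized elements, search-engine style."""
    def __init__(self) -> None:
        self._index: Dict[str, Set[str]] = {}
        self._elements: Dict[str, RecognizedElement] = {}

    def add(self, element: RecognizedElement, descriptions: List[str]) -> None:
        self._elements[element.element_id] = element
        for text in descriptions:
            for term in text.lower().split():
                self._index.setdefault(term, set()).add(element.element_id)

    def search(self, query: str) -> List[RecognizedElement]:
        # Intersect the posting sets of every query term.
        ids = None
        for term in query.lower().split():
            hits = self._index.get(term, set())
            ids = hits if ids is None else ids & hits
        return [self._elements[i] for i in sorted(ids or set())]

idx = ReverseIndex()
idx.add(RecognizedElement("e1", "building", "paris_trip", "2012-05-01", 48.858, 2.294),
        ["Eiffel Tower", "Paris landmark"])
print([e.element_id for e in idx.search("eiffel tower")])  # ['e1']
```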
- further media recognition processing can be added. If employed, such processing may operate as a substitute for the remote media processing system 104 . For example, all media recognition could be performed on the second multimedia data stream locally. Alternatively, local processing could operate cooperatively with the remote media processing system 104 .
- the remote media processing system 104 can identify characteristics of common elements within a given user's media while the media capture module 102 can prompt interaction from the user to gather textual and verbal descriptions of those common elements.
- the media capture module 102 can store recognition data associated with each such element, and use the stored recognition data to locally process new multimedia data streams.
- FIG. 2 illustrates a block diagram of a second exemplary remote processing media system according to an exemplary embodiment of the present disclosure.
- An exemplary media system 200 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the exemplary media system 200 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from previously recorded multimedia data streams to provide a processed multimedia data stream for playback.
- the exemplary media system 200 includes a media capture module 261 , a remote media processing system 202 , a cloud storage system 223 , and a media playback device 271 that are communicatively coupled via the communication network 110 .
- the media capture module 261 , the remote media processing system 202 , the cloud storage system 223 , and the media playback device 271 can represent an exemplary embodiment of the media capture module 102 , the remote media processing system 104 , the remote storage system 106 , and the media playback device 108 , respectively.
- the media capture module 261 records and/or stores a scene in its field of view as a multimedia data stream. Additionally, the media capture module 261 can communicate with the remote media processing system 202 to assist the remote media processing system 202 .
- the media capture module 261 includes an upload/security support module 263 , imager(s) and microphone(s) module 265 , a media pre-processing module 267 , a local storage module 268 , and a cloud interface module 269 .
- the imager(s) and microphone(s) module 265 can record a still image of the scene at a particular instance in time, commonly referred to as an image, a series of the images over a duration in time that represents the scene in motion, commonly referred to as video, a representation of various sounds occurring within, or near, the scene, commonly referred to as audio, or any combination thereof to provide the multimedia data stream.
- the imager(s) and microphone(s) module 265 includes a wide angle or panoramic lens, such as a pinhole and/or fisheye lens to provide some examples, for capturing the images and/or the video within the scene.
- the wide angle lens refers to a lens, or series of lenses, whose effective focal length is substantially smaller than a focal length of a non-wide angle lens for a given film plane.
- the local storage module 268 stores the images, the video, and/or the audio of the multimedia data stream.
- the local storage module 268 can include random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or any other suitable electrical, mechanical, or electromechanical device that is capable of storing the multimedia data stream.
- the media pre-processing module 267 pre-processes the multimedia data stream before providing it to the remote media processing system 202 .
- This pre-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, image retouching to provide some examples.
- the media pre-processing module 267 can retrieve the multimedia data stream from the local storage module 268 for pre-processing or can pre-process the multimedia data stream before it is stored by the local storage module 268 .
- the media pre-processing module 267 can cause a pre-processed multimedia data stream to be stored in the local storage module 268 along with, or in lieu of, the multimedia data stream.
- the media capture module 261 communicates with the remote media processing system 202 to assist the remote media processing system 202 .
- the cloud interface module 269 provides an interface with the remote media processing system 202 to direct processing by the remote media processing system 202 in response to commands from the media capture module 261 . These commands can be automatically generated by the media capture module 261 and/or be generated in response to a user of the media capture module 261 .
- the user of the media capture module 261 can identify one or more particular objects, such as one or more particular people, one or more particular animals, one or more particular objects, one or more particular scenes, one or more particular voices, and/or one or more particular backgrounds to provide some examples, from the multimedia data stream for selecting, extracting, and/or merging using the cloud interface module 269 .
- the user of the media capture module 261 can touch, or make a gesture around, the one or more particular objects as the multimedia data stream is being played back.
- the cloud interface module 269 can send the commands to cause the remote media processing system 202 to select, extract, and/or merge these one or more particular objects from the multimedia data stream.
- the commands can cause the remote media processing system 202 to identify and/or recognize the one or more particular objects from previously recorded multimedia data streams of other scenes that are stored within the cloud storage system 223 .
- These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples.
- the commands can also identify various processing parameters of the processing to be performed by the remote media processing system 202 .
- These various processing parameters can include a preferred output type, such as image, video, and/or audio to provide some examples, to be played back by the media playback device 271 , a length, such as in time or in bytes to provide some examples, of the multimedia data streams to be played back by the media playback device 271 , and one or more instances, or a range of instances, over which the previously recorded multimedia data stream was recorded.
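One plausible encoding of such a command, with the parameter categories named above as fields, is sketched below; the concrete field names, units, and defaults are assumptions rather than anything specified in the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ProcessingCommand:
    """Illustrative shape of a command sent from the capture module to the remote processor."""
    target_pointers: Tuple[str, ...]          # e.g. ("alice", "birthday cake")
    output_type: str = "video"                # "image", "video", and/or "audio"
    max_length_seconds: Optional[float] = None  # length limit expressed in time
    max_length_bytes: Optional[int] = None      # or expressed in bytes
    recorded_between: Optional[Tuple[str, str]] = None  # range of recording instances (ISO timestamps)

cmd = ProcessingCommand(
    target_pointers=("alice",),
    output_type="video",
    max_length_seconds=120.0,
    recorded_between=("2012-06-01T00:00:00", "2012-06-30T23:59:59"),
)
print(cmd)
```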
- the upload/security support module 263 operates in conjunction with the cloud interface module 269 to establish a secure connection, such as a secure interface through a web browser to provide an example, to the remote media processing system 202 .
- the media capture module 261 can then securely, through authentication and/or authorization to provide some examples, provide the multimedia data stream and/or the pre-processed multimedia data stream from the local storage module 268 , as well as the commands from the cloud interface module 269 and/or other commands from other modules within the exemplary media system 200 , to the remote media processing system 202 via this secure connection.
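A hedged sketch of such an authenticated upload is shown below. The endpoint URL, bearer-token scheme, and response field are placeholders; the disclosure only requires authentication and/or authorization over a secure connection, such as through a web browser interface.

```python
import requests  # third-party HTTP client; any HTTPS-capable client would do

def upload_stream(path: str, token: str,
                  endpoint: str = "https://example.invalid/api/upload") -> str:
    """Upload a locally stored multimedia data stream over an authenticated HTTPS connection.

    The endpoint, token handling, and response format are illustrative assumptions.
    """
    with open(path, "rb") as fh:
        response = requests.post(
            endpoint,
            files={"media": fh},                              # the recorded stream
            headers={"Authorization": f"Bearer {token}"},     # authentication/authorization
            timeout=60,
        )
    response.raise_for_status()
    return response.json()["stream_id"]   # assumed response field
```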
- the remote media processing system 202 operates in conjunction with the cloud storage system 223 to process multimedia data streams, such as the multimedia data stream and/or the previously recorded multimedia data streams, to provide the processed multimedia data stream.
- the remote media processing system 202 includes a cloud service processing module 251 and a cloud support processing module 241 .
- the cloud service processing module 251 provides an interface between the media capture module 261 and the cloud support processing module 241 .
- the cloud service processing module 251 directs the cloud support processing module 241 to process the multimedia data stream and/or the previously recorded multimedia data streams in response to commands from the media capture module 261 and/or the media player 273 .
- the cloud service processing module 251 includes a user upload/download services module 253 , a reference query interaction module 255 , an extracting servicing module 257 , and a sign in—account service module 259 .
- the user upload/download services module 253 operates in conjunction with the media capture module 261 to receive the multimedia data stream via the communication network 110 , commonly referred to as upload, and in conjunction with the media playback device 271 to provide the processed multimedia data stream to the communication network 110 , commonly referred to as download.
- the uploading of the multimedia data stream and/or the downloading of the processed multimedia data stream can occur in real-time or in non-real time, namely at some point in the future.
- the extracting servicing module 257 directs the cloud support processing module 241 to recognize the one or more particular objects from the multimedia data stream and/or from the previously recorded multimedia data streams in response to the commands from the media capture module 261 and to provide the processed multimedia data stream.
- the extracting servicing module 257 provides one or more pointers corresponding to the one or more particular objects to the cloud support processing module 241 in response to the commands from the media capture module 261 .
- the extracting servicing module 257 provides the various processing parameters to direct the processing to be performed on the multimedia data stream and/or the previously recorded multimedia data streams by the cloud support processing module 241 .
- the reference query interaction module 255 operates in conjunction with the media capture module 261 and/or the media playback device 271 to assist the cloud support processing module 241 in recognizing one or more newly discovered objects, such as any newly discovered images, video, and/or audio to provide some examples, within the multimedia data stream and/or the previously recorded multimedia data streams.
- the reference query interaction module 255 receives a response from the cloud support processing module 241 that one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams have not been previously recognized.
- the reference query interaction module 255 operates in conjunction with the media capture module 261 and/or the media playback device 271 to identify these not previously recognized objects and, optionally, to update the previously recognized objects stored in the cloud storage system 223 to include these previously unrecognized objects.
- the reference query interaction module 255 can receive a portion of previously recorded multimedia data streams that include image, video, and/or audio of the previously unrecognized objects. The reference query interaction module 255 can provide this portion to the media capture module 261 and/or the media player 273 for recognition.
- the sign in—account service module 259 establishes a secure connection, such as the secure interface through a web browser to provide an example, to the media capture module 261 .
- the sign in—account service module 259 can then securely, after authentication and/or authorization, receive the multimedia data stream and/or the previously recorded multimedia data stream from the local storage module 268 as well as the commands from the cloud interface module 269 and/or other commands from other modules within the exemplary media system 200 via this secure connection.
- the cloud support processing module 241 processes the multimedia data stream and/or the previously recorded multimedia data streams in response to commands from the cloud service processing module 251 to provide the processed multimedia data stream.
- the cloud support processing module 241 includes a recognition module 242 and an extraction module 243 .
- the recognition module 242 identifies one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various image, video, and/or audio recognition techniques. These image, video, and/or audio recognition techniques can include video based, audio based, and/or face, person, object, and scene based recognition.
- these image, video, and/or audio recognition techniques compare a portion of previously recorded multimedia data streams that include image, video, and/or audio of various previously recognized objects that are stored within the cloud storage system 223 with the one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams to recognize the one or more particular objects.
- the recognition module 242 can request assistance from the cloud service processing module 251 for recognition of newly discovered objects, namely those one or more objects that do not match the various previously recognized objects. For example, when the multimedia data stream and/or the previously recorded multimedia data streams includes one or more new objects that have not been previously recognized by the recognition module 242 , the recognition module 242 can request assistance from the cloud service processing module 251 to recognize these new objects as previously recognized objects in the future.
- the extraction module 243 extracts and/or frames portions of the multimedia data stream and/or the previously recorded multimedia data streams that include the one or more particular objects, as recognized by the recognition module 242 , from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams and compiles a new multimedia data stream that includes these portions as the processed multimedia data stream.
- the extraction module 243 can select, extract, and/or merge the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams as well as selecting, extracting, and/or merging other portions of the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams that surround the one or more particular objects.
- the extraction module 243 can transform the one or more particular objects from a two dimensional (2D) representation for viewing as a three dimensional (3D) representation to enable playback in 3D.
- the extraction module 243 can also perform various audio and/or video encoding, decoding, and/or transcoding on the multimedia data stream and/or the previously recorded multimedia data streams.
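The extraction step above can be sketched as padding each recognized segment with surrounding material and merging overlapping segments into the cut list for the compiled stream. The time units, padding amount, and segment representation below are illustrative assumptions.

```python
from typing import List, Tuple

def pad_and_merge(segments: List[Tuple[float, float]],
                  context_s: float,
                  stream_length_s: float) -> List[Tuple[float, float]]:
    """Grow each recognized segment by `context_s` seconds of surrounding material,
    then merge any segments that now overlap, yielding the cut list for the
    compiled (processed) stream. All times are in seconds.
    """
    padded = sorted((max(0.0, s - context_s), min(stream_length_s, e + context_s))
                    for s, e in segments)
    merged: List[Tuple[float, float]] = []
    for start, end in padded:
        if merged and start <= merged[-1][1]:
            # Overlaps the previous padded segment: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Three detections of the chosen object, each padded by 2 s of context; the first two merge.
print(pad_and_merge([(10.0, 14.0), (15.0, 20.0), (40.0, 45.0)], 2.0, 60.0))
# [(8.0, 22.0), (38.0, 47.0)]
```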
- the cloud storage system 223 stores multimedia data streams and previously recognized objects of one or more users in a corresponding user data module from among one or more user data modules 215 .
- the cloud storage system 223 stores the multimedia data stream and/or the previously recorded multimedia data streams for later retrieval by the remote media processing system 202 in a compressed/raw media module 221 .
- the compressed/raw media module 221 stores the multimedia data stream as well as previously recorded multimedia data streams for later retrieval by the remote media processing system 202 .
- the compressed/raw media module 221 can store these data streams in their raw, or uncompressed, form in a collected data module or can optionally compress these data streams before their storage in the collected data module.
- the compressed/raw media module 221 can optionally include an associated metadata module 223 to collect various metadata about the multimedia data stream as well as previously recorded data streams.
- This metadata can include an identification of the author, or user, who recorded the multimedia data stream and/or the previously recorded multimedia data streams as well as an identification of the media capture module 261 , such as the type of device, name of device, or manufacturer of device to provide some examples, that recorded these data streams.
- the metadata can include various parameters, such as the standard or protocol used to record these data streams, the resolution of these data streams, the date of recording of these data streams, and the time of recording of these data streams, to provide some examples, to assist in identification of the multimedia data stream and the previously recorded multimedia data streams for their later retrieval.
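A minimal sketch of such a metadata record, and a retrieval query over it, follows; the field names mirror the examples given above but are otherwise assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StreamMetadata:
    """Metadata kept alongside a stored stream; fields mirror the examples in the text."""
    stream_id: str
    author: str
    device_name: str
    device_manufacturer: str
    codec: str            # standard/protocol used to record
    resolution: str       # e.g. "3840x2160"
    recorded_date: str    # e.g. "2012-06-16"
    recorded_time: str    # e.g. "14:32:05"

def find_by_author(catalog: List[StreamMetadata], author: str) -> List[str]:
    """Locate previously recorded streams for later retrieval using their metadata."""
    return [m.stream_id for m in catalog if m.author == author]

catalog = [StreamMetadata("s1", "alice", "CamOne", "Acme", "H.264", "1920x1080",
                          "2012-06-16", "14:32:05")]
print(find_by_author(catalog, "alice"))  # ['s1']
```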
- the cloud storage system 223 stores previously recognized objects, such as one or more previously recognized people, one or more previously recognized animals, one or more previously recognized objects, one or more previously recognized scenes, one or more previously recognized voices, and/or one or more previously recognized backgrounds to provide some examples, in a preprocessed matched information module 233 , a reference sounds/pointers module 224 , a reference image/pointer module 227 , and a characterization data module 231 .
- the preprocessed matched information module 233 stores portions of previously recorded multimedia data streams that include previously recognized objects which were previously identified and/or recognized by the recognition module 242 .
- the preprocessed matched information module 233 can index the portions of previously recorded multimedia data streams using various pointers.
- the preprocessed matched information module 233 can provide these portions of previously recorded multimedia data streams to the remote media processing system 202 .
- the reference sounds/pointers module 224 stores various multimedia data streams, or portions thereof, that include various well-known audio.
- This well-known audio can include well-known voices, well-known backgrounds, and/or well-known animals to provide some examples.
- the reference image/pointer module 227 stores various multimedia data streams, or portions thereof, that include well-known images, and/or video, that are shared between the one or more user data modules 215 .
- These well-known images and/or video can include well-known persons, well-known objects, and/or well-known scenes to provide some examples.
- the characterization data module 231 stores various multimedia data streams, or portions thereof, that include various reference images, video, and/or audio to assist the remote media processing system 202 in recognizing the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams.
- These reference images, video, and/or audio can include various historical variations that indicate changes to the one or more particular objects that can occur over time.
- the historical variations can indicate how these reference images, video, and/or audio can appear in the past or the future.
- the media playback device 271 plays back the processed multimedia data stream from the remote media processing system 202 . Additionally, the media playback device 271 can communicate with the remote media processing system 202 to assist the remote media processing system 202 .
- the media playback device 271 includes a media player 273 , an enhanced extraction user interface 275 , and a security/access user interface 277 .
- the media player 273 can display the images and/or the video and/or play back the audio within the processed multimedia data stream.
- the media player 273 can include a monitor, a television, a mobile communications device, such as a smart phone or portable computer, or any other electronic device that is capable of playing back the processed multimedia data stream that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of present disclosure.
- the media player 273 includes a real time extracting download interface to direct processing by providing commands to the enhanced extraction user interface 275 .
- the enhanced extraction user interface 275 can communicate with the remote media processing system 202 to assist the remote media processing system 202 .
- the enhanced extraction user interface 275 includes an extraction element identification module and a recognition assist module.
- the extraction element identification module and the recognition assist module generate the commands in response to the real time extracting download interface to direct processing by the remote media processing system 202 .
- These commands can be automatically generated by the enhanced extraction user interface 275 and/or be generated in response to a user of the media player 273 .
- the media player 273 can play back the processed multimedia data stream using the real time extracting download interface.
- a user of the media playback device 271 can identify the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging by the remote media processing system 202 .
- the user of the media playback device 271 can simply touch, or make a gesture around, the one or more particular objects as the processed multimedia data stream is being played back.
- the extraction element identification module and the recognition assist module can send the commands to cause the remote media processing system 202 to select, extract, and/or merge these one or more particular objects from the processed multimedia data.
- the commands can also identify various processing parameters of the processing to be performed by the remote media processing system 202 .
- These various processing parameters can include a preferred output type, such as image, video, and/or audio to provide some examples, to be played back by the media playback device 271 , a length, such as in time or in bytes to provide some examples, of the multimedia data streams to be played back by the media playback device 271 , and one or more instances, or a range of instances, over which the previously recorded multimedia data stream was recorded.
- the security/access user interface 277 establishes a secure connection, such as a secure interface through a web browser to provide an example, to the remote media processing system 202 .
- the media player 273 can then securely, through authentication and/or authorization to provide some examples, receive the processed multimedia data stream from the remote media processing system 202 and/or provide the commands to the remote media processing system 202 via this secure connection.
- FIG. 3 illustrates a block diagram of a third exemplary remote processing media system according to an exemplary embodiment of the present disclosure.
- An exemplary media system 300 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the exemplary media system 300 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from images, video, and/or audio of previously recorded multimedia data streams to provide a processed multimedia data stream for playback.
- the exemplary media system 300 includes a media capture module 302 , a remote media processing and storage system 304 , and a media playback device 306 that are communicatively coupled via a communication network 308 .
- the exemplary media system 300 can represent an exemplary embodiment of the exemplary media system 100 and/or of the exemplary media system 200 .
- the media capture module 302 represents a mobile communications device for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the media capture module 302 performs substantially similar functions as discussed above in regard to the media capture module 102 and/or the media capture module 261 and will not be described in further detail.
- the remote media processing and storage system 304 processes multimedia data streams, such as the multimedia data stream and/or previously recorded multimedia data streams to provide some examples, to provide a processed multimedia data stream.
- the remote media processing and storage system 304 performs substantially similar functions as discussed above in regard to the remote media processing system 104 , the remote storage system 106 , the remote media processing system 202 , and/or the cloud storage system 223 and will not be described in further detail.
- the media playback device 306 plays back the processed multimedia data stream from the remote media processing and storage system 304 .
- the media playback device 306 performs substantially similar functions as discussed above in regard to the media playback device 108 and/or the media playback device 271 and will not be described in further detail.
- the communication network 308 communicatively couples the media capture module 302 , the remote media processing and storage system 304 , and the media playback device 306 .
- the communication network 308 can include a first communication pathway 350 for communications between the media capture module 302 and the remote media processing and storage system 304 , a second communication pathway 352 for communications between the remote media processing and storage system 304 and the media playback device 306 , and a third communication pathway 354 for communications between the media capture module 302 and the media playback device 306 .
- the first communication pathway 350 , the second communication pathway 352 , and/or the third communication pathway 354 can be part of any suitable wireless communication network, such as a cellular network to provide an example, any suitable wired communication network, such as a fiber optic network or cable network to provide some examples, or any combination of wireless and wired communication networks.
- FIG. 4 illustrates a block diagram of an exemplary local processing media system according to an exemplary embodiment of the present disclosure.
- An exemplary media system 400 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the exemplary media system 400 locally extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or previously recorded multimedia data streams to provide a processed multimedia data stream for playback.
- the exemplary media system 400 includes a media capture module 402 , a local media processing system 404 , a local storage module 406 , and a media playback device 408 .
- the media capture module 402 , the local media processing system 404 , the local storage module 406 , and the media playback device 408 are integrated onto a single platform.
- the media capture module 402 records and/or stores a scene in its field of view as a multimedia data stream.
- the media capture module 402 includes imager(s) and microphone(s) module 410 to record the images, the video, and/or the audio representing the scene in its field of view into the multimedia data stream.
- the imager(s) and microphone(s) module 410 provides the multimedia data stream in a substantially similar manner as the imager(s) and microphone(s) module 265 and will not be described in further detail.
- the local media processing system 404 operates in conjunction with the local storage module 406 to process multimedia data streams and/or the previously recorded multimedia data streams that are stored in the local storage module 406 .
- the local media processing system 404 includes a recognition module 410 and an extraction module 412 .
- the recognition module 410 and the extraction module 412 operate in a substantially similar manner as the recognition module 242 and the extraction module 243 , respectively, and will not be described in further detail.
- the recognition module 410 identifies one or more objects within the multimedia data stream and/or previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various audio and/or video recognition techniques.
- audio and/or video recognition techniques can include video based, audio based, and/or face, person, object, and scene based recognition.
- these audio and/or video recognition techniques compare various previously recognized objects stored in the local storage module 406 with the one or more objects within the multimedia data stream to recognize the one or more particular objects.
- the extraction module 412 extracts and/or frames the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams to provide the processed multimedia data stream.
- a 2D-3D Viewpoint module 414 can transform the one or more particular objects from a two dimensional (2D) representation for viewing as a three dimensional (3D) representation to enable playback in 3D.
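The disclosure does not detail how this 2D-to-3D transformation is performed; the sketch below only illustrates producing a 3D-viewable (red/cyan anaglyph) frame from a single 2D frame, using a constant horizontal disparity as a stand-in for real depth estimation.

```python
import numpy as np

def simple_anaglyph(frame: np.ndarray, disparity_px: int = 8) -> np.ndarray:
    """Very rough 2D-to-3D conversion: build a red/cyan anaglyph from a single frame.

    A constant horizontal disparity fakes a second viewpoint; a real system would
    estimate per-pixel depth. `frame` is an HxWx3 uint8 RGB image.
    """
    left = frame
    right = np.roll(frame, shift=disparity_px, axis=1)  # shifted copy as the "right eye" view
    out = np.empty_like(frame)
    out[..., 0] = left[..., 0]        # red channel from the left view
    out[..., 1:] = right[..., 1:]     # green/blue channels from the right view
    return out

frame = (np.random.default_rng(0).random((720, 1280, 3)) * 255).astype(np.uint8)
print(simple_anaglyph(frame).shape)  # (720, 1280, 3)
```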
- the local storage module 406 stores multimedia data streams and previously recognized objects of one or more users in a substantially similar manner as the cloud storage system 223 .
- the media playback device 408 plays back the processed multimedia data stream from the local media processing system 404 . Additionally, the media playback device 408 can communicate with the local media processing system 404 to assist the local media processing system 404 .
- the media playback device 408 includes a media player 418 and an enhanced extraction user interface 420 .
- the media player 418 and the enhanced extraction user interface 420 operate in a substantially similar manner as the media player 273 and the enhanced extraction user interface 275 , respectively, and will not be described in further detail.
- FIG. 5 illustrates a block diagram of a first media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure.
- a media capture module 500 represents a stationary media capture module for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the media capture module 500 includes a media recording module 502 and a stationary mount 504 .
- the media capture module 500 can represent an exemplary embodiment of the media capture module 102 , the media capture module 261 , and/or the exemplary media system 400 .
- the media recording module 502 includes one or more media recording devices 506.1 through 506.i and a processing module 508.
- the media recording devices 506.1 through 506.i record images, video, and/or audio representing a scene in their fields of view into the multimedia data stream.
- the media recording module 502 includes a sufficient number of the media recording devices 506.1 through 506.i to capture a wide angle or panoramic view of the scene; however, those skilled in the relevant art will recognize that any suitable number of the media recording devices 506.1 through 506.i can be used without departing from the spirit and scope of the present disclosure.
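- A back-of-the-envelope sketch of what a "sufficient number" of recording devices could mean for full panoramic coverage is shown below; the per-device field of view and overlap values are assumptions for illustration.

```python
# Illustrative calculation: how many similar recording devices are needed to cover a full
# 360-degree panorama, given each device's horizontal field of view and a desired overlap
# between adjacent devices.
import math

def devices_for_panorama(fov_per_device_deg: float, overlap_fraction: float = 0.15) -> int:
    effective_fov = fov_per_device_deg * (1.0 - overlap_fraction)
    return math.ceil(360.0 / effective_fov)

print(devices_for_panorama(120.0))  # 4 devices at 120 degrees with 15% overlap
```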
- the media recording devices 506.1 through 506.i are substantially similar to one another, but can include a different number of audio capture devices, illumination devices, and image capturing devices; therefore, only the media recording device 506.1 is described in further detail below.
- the media recording device 506.1 records images, video, and/or audio representing the scene in its field of view.
- the media recording device 506.1 includes one or more illumination devices 510.1 through 510.k, one or more audio capture devices 512.1 through 512.m, and an image capture device 514.
- the one or more illumination devices 510.1 through 510.k illuminate the scene.
- This illumination can be characterized as being a relatively short duration, typically 1/1000 to 1/200 of a second, commonly referred to as a flash, or it can be characterized as being a longer duration, and can include any suitable portion, or portions, of the electromagnetic spectrum.
- the one or more audio capture devices 512.1 through 512.m can capture a representation of various sounds occurring within, or near, the scene.
- the one or more audio capture devices 512.1 through 512.m are typically implemented using one or more microphones, although those skilled in the relevant art(s) will recognize that any electrical, mechanical, or electro-mechanical device that can convert sound into an electrical signal can be used without departing from the spirit and scope of the present disclosure.
- the one or more audio capture devices 512.1 through 512.m include at least two audio capture devices to capture the various sounds occurring within, or near, the scene as stereophonic sounds or, more commonly, stereo, although those skilled in the relevant art(s) will recognize that any suitable number of audio capture devices can be used without departing from the spirit and scope of the present disclosure.
- the image capture device 514 records a still image of the scene at a particular instance in time and/or a series of the images over a duration in time that represents the scene in motion.
- the image capture device 514 can record the scene using any suitable portion, or portions, of the electromagnetic spectrum that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. Although only one image capture device 514 is shown in FIG. 5, more than one image capture device 514 can be present within the media recording device 506.1 to record the scene in three dimensions (3D).
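- When two image capture devices record the scene, depth can in principle be recovered from the disparity between the two views using the classical pinhole stereo relation; this is a textbook illustration, not a method stated in the disclosure, and the focal length, baseline, and disparity values below are assumptions.

```python
# Classical pinhole-camera stereo relation: depth = focal_length * baseline / disparity.
# Illustrative values only; real devices would calibrate these parameters.
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 1000-pixel focal length, 6 cm baseline, 25-pixel disparity -> 2.4 m.
print(depth_from_disparity(1000.0, 0.06, 25.0))
```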
- the processing module 508 processes the images, video, and/or audio from the one or more media recording devices 506.1 through 506.i to provide the multimedia data stream to a remote processing system, such as the remote processing system 104 and/or the remote processing system 202 to provide some examples, and/or to a local processing system, such as the local media processing system 404 to provide an example.
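- A minimal sketch of one possible way such a processing module could combine the per-device recordings into a single multimedia data stream is to merge timestamped samples in time order; the sample layout and device identifiers are assumptions for illustration.

```python
# Illustrative multiplexing step: timestamped samples from each recording device are
# merged into a single, time-ordered multimedia data stream.
import heapq
from typing import Iterable, List, Tuple

Sample = Tuple[float, str, bytes]   # (timestamp_seconds, device_id, payload)

def multiplex(device_streams: List[Iterable[Sample]]) -> List[Sample]:
    """Merge per-device sample streams, each already sorted by timestamp."""
    return list(heapq.merge(*device_streams, key=lambda sample: sample[0]))

stream_a = [(0.00, "506.1", b"frame-a0"), (0.04, "506.1", b"frame-a1")]
stream_b = [(0.02, "506.2", b"frame-b0"), (0.06, "506.2", b"frame-b1")]
multiplexed = multiplex([stream_a, stream_b])  # samples interleaved in timestamp order
```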
- the stationary mount 504 is coupled to the media recording module 502 .
- the stationary mount 504 is placed within the scene to allow the media recording module 502 to record images, video, and/or audio representing the scene in its field of view.
- the stationary mount 504 represents a stationary or fixed mount to stabilize the media recording module 502 .
- the stationary mount 504 can telescope to adjust a field of view of the media recording module 502 .
- the stationary mount 504 as shown in FIG. 5 is for illustrative purposes only; those skilled in the relevant art(s) will recognize that other stationary or fixed mounts can be used without departing from the spirit and scope of the present disclosure.
- FIG. 6 illustrates a block diagram of a second media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure.
- a media capture module 600 represents a mobile media capture module for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream.
- the media capture module 600 includes media recording devices 602.1 through 602.i and a processing module 604.
- the media recording devices 602.1 through 602.i and the processing module 604 operate in a substantially similar manner as the media recording devices 506.1 through 506.i and the processing module 508, respectively, and will not be described in further detail.
- the media capture module 600 can represent an exemplary embodiment of the media capture module 102, the media capture module 261, and/or the exemplary media system 400.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automatic Focus Adjustment (AREA)
- Studio Devices (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Television Signal Processing For Recording (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
- Image Processing (AREA)
Abstract
Description
- The present application claims the benefit of U.S. Provisional Patent Appl. No. 61/549,495, filed Oct. 20, 2011, which is incorporated herein by reference in its entirety.
- 1. Field of Disclosure
- The present disclosure relates generally to multimedia data streams, and more specifically to recording and playing back of processed multimedia data streams.
- 2. Related Art
- A conventional media capture module, such as a digital camera to provide an example, can record and/or store a scene in its field of view. These conventional media capture modules can include extremely wide angle lenses, such as a pin hole lens and a fish eye lens to provide some examples, to capture images of the scene within a large field of view at very high resolutions. Often, conventional media playback devices, such as monitors, televisions, and mobile communications devices, such as smart phones or portable computers to provide some examples, are only capable of playing back the images of the scene at much lower resolutions. This lower resolution of the conventional media playback devices allows the high resolution images of the scene to be modified by, for example, zooming, cutting, and/or cropping, for play back without detrimentally affecting the quality of the images.
- Conventionally, users of the conventional media capture module have to manually modify the high resolution images of the scene to select, extract, and/or merge one or more images from a multimedia data stream depicting the scene. As a result, participation in the scene by these users is rather limited. For example, a parent at a birthday party conventionally operates the conventional media capture module to capture images, video, and/or audio of the birthday party. As a result, this parent's participation in the birthday party is rather limited.
- Embodiments of the disclosure are described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.
- FIG. 1 illustrates a block diagram of an exemplary remote processing media system according to an exemplary embodiment of the present disclosure;
- FIG. 2 illustrates a block diagram of a second exemplary remote processing media system according to an exemplary embodiment of the present disclosure;
- FIG. 3 illustrates a block diagram of a third exemplary remote processing media system according to an exemplary embodiment of the present disclosure;
- FIG. 4 illustrates a block diagram of an exemplary local processing media system according to an exemplary embodiment of the present disclosure;
- FIG. 5 illustrates a block diagram of a first media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure; and
- FIG. 6 illustrates a block diagram of a second media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure.
- The disclosure will now be described with reference to the accompanying drawings. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the reference number.
- The following Detailed Description refers to the accompanying drawings to illustrate exemplary embodiments consistent with the disclosure. References in the Detailed Description to “one exemplary embodiment,” “an exemplary embodiment,” “an example exemplary embodiment,” etc., indicate that the exemplary embodiment described can include a particular feature, structure, or characteristic, but every exemplary embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same exemplary embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an exemplary embodiment, it is within the knowledge of those skilled in the relevant art(s) to effect such feature, structure, or characteristic in connection with other exemplary embodiments whether or not explicitly described.
- The exemplary embodiments described herein are provided for illustrative purposes, and are not limiting. Other exemplary embodiments are possible, and modifications can be made to the exemplary embodiments within the spirit and scope of the disclosure. Therefore, the Detailed Description is not meant to limit the disclosure. Rather, the scope of the disclosure is defined only in accordance with the following claims and their equivalents.
- Embodiments of the disclosure can be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the disclosure can also be implemented as instructions stored on a machine-readable medium, which can be read and executed by one or more processors. A machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium can include non-transitory machine-readable mediums such as read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others. As another example, the machine-readable medium can include transitory machine-readable medium such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Further, firmware, software, routines, instructions can be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
- The following Detailed Description of the exemplary embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge of those skilled in relevant art(s), readily modify and/or adapt for various applications such exemplary embodiments, without undue experimentation, without departing from the spirit and scope of the disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and plurality of equivalents of the exemplary embodiments based upon the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by those skilled in relevant art(s) in light of the teachings herein.
- For purposes of this discussion, the term “module” shall be understood to include at least one of software, firmware, and hardware (such as one or more circuits, microchips, or devices, or any combination thereof), and any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
- Overview
- The following Detailed Description describes various media systems that can record and/or store a scene in their fields of view as a multimedia data stream. Typically, these various media systems capture the scene in resolutions that exceed the resolutions that can be played back. As a result, the captured multimedia data streams can be processed by, for example, zooming, cutting, and/or cropping for play back without detrimentally affecting the quality of the scene.
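- The headroom between capture and playback resolution can be made concrete with a small calculation, sketched below under assumed resolutions: the ratio of capture resolution to playback resolution bounds how far a crop can zoom in before the playback device would have to upscale.

```python
# Illustrative calculation: maximum "lossless" digital zoom is bounded by the ratio of
# capture resolution to playback resolution (assumed example values).
def max_lossless_zoom(capture_res: tuple, playback_res: tuple) -> float:
    cw, ch = capture_res
    pw, ph = playback_res
    return min(cw / pw, ch / ph)

# A 7680x4320 capture played back on a 1920x1080 device can be cropped/zoomed up to 4x
# while still supplying at least one captured pixel per displayed pixel.
print(max_lossless_zoom((7680, 4320), (1920, 1080)))  # 4.0
```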
- The various media systems can automatically process the multimedia data streams depicting the scene locally or remotely through a communications network with minimal user assistance. The user of these media systems can identify one or more particular objects, such as one or more particular people, one or more particular animals, one or more particular objects, one or more particular scenes, one or more particular voices, and/or one or more particular backgrounds to provide some examples, from a scene in a field of view of the media systems. Thereafter, the media systems can automatically select, extract, and/or merge portions of a multimedia data stream depicting the scene that include the one or more particular objects to compile a new multimedia data stream as a processed multimedia data stream for playback. For example, users of the media systems can identify one or more people from the scene. In this example, the media systems select, extract, and/or merge portions of the multimedia data stream that include these people to provide the processed multimedia data stream, which effectively allows the media systems to track, or follow, the people within the scene.
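- A minimal sketch of this select/extract/merge step, under the assumption that recognition has already tagged each time segment of the stream with the object labels it contains, is shown below; the segment representation and labels are illustrative.

```python
# Minimal sketch of the select/extract/merge idea: keep only the time segments that
# contain at least one of the user-identified target objects.
from typing import List, Set, Tuple

Segment = Tuple[float, float]  # (start_seconds, end_seconds)

def compile_processed_stream(tagged_segments: List[Tuple[Segment, Set[str]]],
                             targets: Set[str]) -> List[Segment]:
    """Keep only segments that contain at least one target object, in time order."""
    selected = [segment for segment, labels in tagged_segments if labels & targets]
    return sorted(selected)

tagged = [((0.0, 5.0), {"cake"}), ((5.0, 9.0), {"child", "cake"}), ((9.0, 12.0), {"background"})]
print(compile_processed_stream(tagged, {"child"}))  # [(5.0, 9.0)]
```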
- Additionally, the media systems can store previously recorded multimedia data streams of other scenes. These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples. The user of these media systems can identify the one or more particular objects from these previously recorded multimedia data streams in a similar manner as identifying the one or more particular objects from the multimedia data stream. The media systems can automatically select, extract, and/or merge portions of the previously recorded multimedia data streams that include the one or more particular objects and compile these extracted and/or framed portions, as well as extracted and/or framed portions of the multimedia data stream, into a new multimedia data stream as a processed multimedia data stream for playback. For example, users of the media systems can identify one or more people from previously captured scenes. In this example, the media systems select, extract, and/or merge portions of the previously recorded multimedia data streams that include these people and compile these extracted and/or framed portions to provide the processed multimedia data stream.
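- Extending the previous sketch across stored recordings, the snippet below gathers matching portions drawn from the current stream and from previously recorded streams into one processed stream, keeping track of which source each portion came from; the data shapes are assumptions for illustration.

```python
# Illustrative sketch: collect matching portions from several source streams (the current
# stream plus previously recorded ones) and compile them into a single processed stream.
from typing import Dict, List, Set, Tuple

Portion = Tuple[str, float, float]  # (source_stream_id, start_seconds, end_seconds)

def merge_matching_portions(sources: Dict[str, List[Tuple[Tuple[float, float], Set[str]]]],
                            targets: Set[str]) -> List[Portion]:
    compiled: List[Portion] = []
    for stream_id, tagged_segments in sources.items():
        for (start, end), labels in tagged_segments:
            if labels & targets:
                compiled.append((stream_id, start, end))
    # Order portions by source id and start time; a real system might interleave differently.
    return sorted(compiled)

sources = {
    "party-2011": [((0.0, 4.0), {"child"}), ((4.0, 8.0), {"background"})],
    "party-2010": [((2.0, 6.0), {"child", "cake"})],
}
print(merge_matching_portions(sources, {"child"}))
# [('party-2010', 2.0, 6.0), ('party-2011', 0.0, 4.0)]
```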
- First Exemplary Remote Processing Video Camera System
-
FIG. 1 illustrates a block diagram of an exemplary remote processing media system according to an exemplary embodiment of the present disclosure. An exemplary media system 100 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The exemplary media system 100 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from images, video, and/or audio of previously recorded multimedia data streams to provide a processed multimedia data stream for playback. The exemplary media system 100 includes a media capture module 102, a remote media processing system 104, a remote storage system 106, and a media playback device 108 that are communicatively coupled via a communication network 110. - The
media capture module 102 records and/or stores a scene in its field of view as a multimedia data stream. Specifically, the media capture module 102 records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The multimedia data stream can represent a still image of the scene at a particular instance in time, commonly referred to as an image, a series of the images over a duration in time that represents the scene in motion, commonly referred to as a video, a representation of various sounds occurring within, or near, the scene, commonly referred to as audio, or any combination thereof. The media capture module 102 can represent a digital camera, a digital video camera, a mobile communications device, such as a smart phone or portable computer, that has an integrated camera, or any other electronic device that is capable of recording and/or storing images, video, and/or audio that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. - Typically, the
media capture module 102 provides the multimedia data stream to the remotemedia processing system 104 for processing via thecommunication network 110. Optionally, themedia capture module 102 can pre-process the multimedia data stream before providing it to the remotemedia processing system 104. This pre-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, image retouching to provide some examples. Additionally, themedia capture module 102 can communicate with the remotemedia processing system 104 to assist the remotemedia processing system 104. Themedia capture module 102 can include an interface with the remotemedia processing system 104 to direct processing by the remotemedia processing system 104 in response to commands from themedia capture module 102. These commands can be automatically generated by themedia capture module 102 and/or be generated in response to a user of themedia capture module 102. For example, themedia capture module 102 can playback the multimedia data stream using the interface. A user of themedia capture module 102 can provide user information to assist in identifying one or more particular objects, such as one or more particular people, one or more particular animals, or one or more particular objects, one or more particular scenes, or one or more particular voices, one or more particular backgrounds, or any other supplemental, known objects to provide some examples, from the multimedia data stream for selecting, extracting, and/or merging. The user of themedia capture module 102 can touch, or make a gesture around, the one or more particular objects as the multimedia data stream is being played back. Themedia capture module 102 can send the commands to cause the remotemedia processing system 104 to select, to extract, and/or to merge these one or more particular objects from the multimedia data stream. - In some situations, the
media capture module 102 can capture multiple independent multimedia data streams that each correspond to a scene from among a series of independent scenes. For example, a first user of themedia capture module 102 can capture a first scene as a first independent multimedia data stream and the first user, or a second user, of themedia capture module 102 can capture a second scene as a second independent multimedia data stream. Typically, each scene from among the series of independent scenes is independent from each other in location, duration, and/or time to provide some examples. - The remote
media processing system 104 operates in conjunction with theremote storage system 106 to process multimedia data streams, such as the multimedia data stream and/or previously recorded multimedia data streams of other scenes that are stored within theremote storage system 106. These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples. Typically, themedia capture module 102 and/or themedia playback device 108 provide commands to the remotemedia processing system 104 to identify the one or more particular objects from the multimedia data stream and/or from the previously recorded multimedia data streams. Additionally, the remotemedia processing system 104 can automatically identify the one or more particular objects from the multimedia data stream and/or from previously recorded multimedia data streams. For example, the user of themedia capture module 102 and/or of themedia playback device 108 can provide user information to assist in identifying the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams for selecting, extracting, and/or merging. This user information can include one or more pointers, such as one or more textual inputs, that can be used to identify the one or more particular objects. Next, the remotemedia processing system 104 can retrieve a portion of the previously recorded multimedia data streams from theremote storage system 106 that corresponds to the one or more pointers and select, extract, and/or merge the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams using the portion of the previously recorded multimedia data streams that corresponds to the one or more pointers. - Generally, the remote
media processing system 104 identifies one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various image, video, and/or audio recognition techniques. Typically, these image, video, and/or audio recognition techniques compare a portion of previously recorded multimedia data streams that include image, video, and/or audio of various previously recognized objects that are stored within theremote storage system 106 with the one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams to recognize the one or more particular objects. The previously recognized objects represent supplemental known information such as one or more previously recognized people, one or more previously recognized animals, or one or more previously recognized objects, one or more previously recognized scenes, or one or more previously recognized voices, and/or or one or more previously recognized backgrounds to provide some examples. Also, the remotemedia processing system 104 can identify one or more objects that are common between the multimedia data stream and/or the previously recorded multimedia data streams and to recognizes these one or more common objects as being the one or more particular objects using various image, video, and/or audio recognition techniques. - Additionally, the remote
media processing system 104 can request assistance with recognition of one or more objects from themedia capture module 102 and/or themedia playback device 108 when none of the one or more objects have been previously recognized. For example, when the multimedia data stream and/or the previously recorded multimedia data streams includes one or more new objects that have not been previously recognized by the remotemedia processing system 104, the remotemedia processing system 104 can request assistance to recognize these new objects as previously recognized objects in the future. - After identification of the one or more objects, the remote
media processing system 104 can select, extract, and/or merge portions of the multimedia data stream and/or the previously recorded multimedia data streams that include the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams and compile a new multimedia data stream that includes these portions as the processed multimedia data stream for playback by themedia playback device 108. The remotemedia processing system 104 can select, extract, and/or merge the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams as well as selecting, extracting, and/or merging other portions of the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams that surround the one or more particular objects. - The
remote storage system 106 stores the multimedia data stream for future retrieval by the remote media processing system 104. Also, the remote storage system 106 stores previously recorded images, video, and/or audio of previously recorded multimedia data streams for recognition of the one or more particular objects by the remote media processing system 104. Typically, the remote storage system 106 stores portions of previously recorded multimedia data streams that include previously recognized objects, which can be indexed by one or more pointers. The remote storage system 106 can receive one or more pointers from the remote media processing system 104 and provide the portions of previously recorded multimedia data streams that correspond to the one or more pointers. The remote storage system 106 can update the previously recorded images, video, and/or audio when new objects are recognized by the remote media processing system 104 within the multimedia data stream and/or the previous multimedia data streams. - The
media playback device 108 plays back the processed multimedia data stream from the remote media processing system 104. The media playback device 108 can display the images and/or the video and/or play back the audio within the processed multimedia data stream. Optionally, the media playback device 108 can post-process the processed multimedia data stream before play back. This post-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, and image retouching to provide some examples. Additionally, the media playback device 108 can communicate with the remote media processing system 104 to assist the remote media processing system 104. The media playback device 108 can include an interface with the remote media processing system 104 to direct processing by the remote media processing system 104 in response to commands from the media playback device 108. For example, the media playback device 108 can play back the processed multimedia data stream using the interface. A user of the media playback device 108 can provide user information regarding the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging. For example, the user of the media playback device 108 can identify the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging. The user of the media playback device 108 can simply touch, or make a gesture around, the one or more particular objects as the processed multimedia data stream is being played back. The media playback device 108 can send the commands to cause the remote media processing system 104 to select, to extract, and/or to merge these one or more particular objects from the processed multimedia data stream. The media playback device 108 can include a monitor, a television, a mobile communications device, such as a smart phone or portable computer, or any other electronic device that is capable of playing back the processed multimedia data stream that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. - The
communication network 110 communicatively couples themedia capture module 102, the remotemedia processing system 104, theremote storage system 106, and themedia playback device 108. Thecommunication network 110 can include any suitable wireless communication network, such as a cellular network to provide an example, any suitable wired communication network, such as a fiber optic network or cable network to provide some examples, or any combination of wireless and wired communication networks. - Although the description of
FIG. 1 illustrates a singlemedia capture module 102 and/or a singlemedia playback device 108, those skilled in the relevant art(s) will recognize that the remotemedia processing system 104 and theremote storage system 106 can service multiple media capturemodules 102 and/or multiplemedia playback devices 108 without departing from the spirit and scope of the present disclosure. These multiple media capturemodules 102 and/or multiplemedia playback devices 108 can provide multimedia data streams and/or commands from multiple users of these devices to the remotemedia processing system 104 for processing in a substantially similar manner as described above. Additionally, those skilled in the relevant art(s) will recognize that themedia capture module 102 and themedia playback device 108 can be integrated onto a single platform, such as mobile communications device, a personal computer, a laptop computer, or any other integrated platform without departing from the spirit and scope of the present invention. - Moreover, aspects of the present disclosure can be integrated within conventional platforms so as to provide enhanced services. For example, the remote
media processing system 104 can be integrated within a web search service platform, such as those provided by Google™ or Bing™ to provide some examples. A first multimedia data stream, gathered by multiple media capture modules, such as multiple media capturemodules 102, of each of a first plurality of users, is uploaded to search engine platform for secure, encrypted storage in return for a service fee. A second multimedia data stream, gathered by such multiple media capture modules, is stored locally, also with secure encryption. The remotemedia processing system 104 operates within the web search engine services platform by performing image, audio and video recognition and constructing a reverse indexed recognition database wherein textual descriptions are associated with recognized objects, sounds, speakers, persons, buildings, scenes, animals, and so on. Such recognition approaches can be shared with those used for image, audio and video search services. That is, search platforms currently support searching of a plurality of web located images using an uploaded image. This can easily be extended to searching amongst image frames in video and searching audio using an audio upload segment. Thus, by using pre-recognized audio, image and video elements, other audio, video and image elements can be found in other media. For example, known and textually identified items such as the Eiffel Tower can be recognized currently in an uploaded image. By processing all uploaded media in such a manner, various common content can be identified within the first multimedia data stream. Similarly, by recognizing similarities within the uploaded first multimedia data stream itself, popular elements can be grouped and, with user assistance, fully identified via prompting for associated text. Other meta data can also be gathered to support searching. For example, time of day, date, latitude, longitude, and so on. - Locally, perhaps within the multiple media capture modules, further media recognition processing, although not needed, can be added. If employed, such processing may operate as a substitute for the remote
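- A minimal sketch of the reverse-indexed recognition database described above is shown below: textual descriptions are associated with recognized media elements so that text queries can find them, and simple metadata (time, latitude, longitude) rides along. The record layout and fields are assumptions for illustration only.

```python
# Illustrative reverse (inverted) index: map description terms to recognized media
# elements, keeping simple metadata alongside each element.
from collections import defaultdict
from typing import Dict, List, Set

class ReverseIndex:
    def __init__(self) -> None:
        self._index: Dict[str, Set[str]] = defaultdict(set)
        self._metadata: Dict[str, Dict[str, str]] = {}

    def add(self, media_id: str, description: str, metadata: Dict[str, str]) -> None:
        for term in description.lower().split():
            self._index[term].add(media_id)
        self._metadata[media_id] = metadata

    def search(self, query: str) -> List[str]:
        """Return media ids whose descriptions contain every query term."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self._index.get(t, set()) for t in terms))
        return sorted(hits)

index = ReverseIndex()
index.add("img-001", "Eiffel Tower at night",
          {"date": "2011-07-14", "lat": "48.858", "lon": "2.294"})
print(index.search("eiffel tower"))  # ['img-001']
```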
media processing system 104. For example, all media recognition could be performed on the second multimedia data stream locally. Alternatively, local processing could operate cooperatively with the remotemedia processing system 104. For example, the remotemedia processing system 104 can identify characteristics of common elements within a given user's media while themedia capture module 102 can prompt interaction from the user to gather textual and verbal descriptions of those common elements. In addition, themedia capture module 102 can store recognition data associated with each such element, and use the stored recognition data to locally process new multimedia data streams. - Second Exemplary Remote Processing Video Camera System
-
FIG. 2 illustrates a block diagram of a second exemplary remote processing media system according to an exemplary embodiment of the present disclosure. Anexemplary media system 200 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. Theexemplary media system 200 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from previously recorded multimedia data streams to provide a processed multimedia data stream for playback. Theexemplary media system 200 includes amedia capture module 261, a remotemedia processing system 202, acloud storage system 223, and amedia playback device 271 that are communicatively coupled via thecommunication network 110. Themedia capture module 261, thecloud storage system 223, the remotemedia processing system 202 and themedia playback device 271 can represent an exemplary embodiment of themedia capture module 102, the remotemedia processing system 104, theremote storage system 106, and themedia playback device 108, respectively. - The
media capture module 261 records and/or stores a scene in its field of view as a multimedia data stream. Additionally, themedia capture module 261 can communicate with the remotemedia processing system 104 to assist the remotemedia processing system 104. Themedia capture module 261 includes an upload/security support module 263, imager(s) and microphone(s)module 265, amedia pre-processing module 267, alocal storage module 268, and acloud interface module 269. - The imager(s) and microphone(s)
module 265 can record a still image of the scene at a particular instance in time, commonly referred to as an image, a series of the images over a duration in time that represents the scene in motion, commonly referred to a video, a representation of various sounds occurring within, or near, the scene, commonly referred to an audio, or any combination thereof to provide the multimedia data stream. In some embodiments, the imager(s) and microphone(s)module 265 includes a wide angle or panoramic lens, such as pin hole and/or fish eye to provide some examples, for capturing the images and/or the video within the scene. The wide angle lens refers to a lens, or series of lens, whose effective focal length is substantially smaller than a focal length of a non-wide angle lens for a given film plane. - The
local storage module 268 stores the images, the video, and/or the audio of the multimedia data stream. Thelocal storage module 268 can include random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or any other suitable electrical, mechanical, electromechanical device that is capable of storing the multimedia data stream. - The
media pre-processing module 267 pre-processes the multimedia data stream before providing it to the remotemedia processing system 202. This pre-processing can include various imaging processing techniques such as cropping, image straightening, red-eye effect removal, contrast adjustment, color adjustment, image retouching to provide some examples. Typically, the media pre-processing module 26 can retrieve the multimedia data stream from thelocal storage module 268 for pre-processing or can pre-processes the multimedia data stream before being stored by thelocal storage module 268. Themedia pre-processing module 267 can cause a pre-processed multimedia data stream to be stored in thelocal storage module 268 along with, or in lieu, of the multimedia data stream. - Additionally, the
media capture module 261 communicates with the remotemedia processing system 202 to assist the remotemedia processing system 202. Thecloud interface module 269 provides an interface with the remotemedia processing system 202 to direct processing by the remotemedia processing system 202 in response to commands from themedia capture module 261. These commands can be automatically generated by themedia capture module 261 and/or be generated in response to a user of themedia capture module 261. For example, the user of themedia capture module 261 can identify one or more particular objects, such as one or more particular people, one or more particular animals, or one or more particular objects, one or more particular scenes, or one or more particular voices, and/or or one or more particular backgrounds to provide some examples, from the multimedia data stream for selecting, extracting, and/or merging using thecloud interface module 269. The user of themedia capture module 261 can touch, or make a gesture around, the one or more particular objects as the multimedia data stream is being played back. Thecloud interface module 269 can send the commands to cause the remotemedia processing system 202 to select, extract, and/or merge these one or more particular objects from the multimedia data stream. The commands can cause the remotemedia processing system 202 to identify and/or recognize the one or more particular objects from previously recorded multimedia data streams of other scenes that are stored within thecloud storage system 223. These other scenes can be different scenes than depicted in the multimedia data stream or similar scenes as depicted in the multimedia data stream at different times to provide some examples. - The commands can also identify various processing parameters of the processing to be performed by the remote
media processing system 202. These various processing parameters can include preferred output type, such as image, video, and/or audio to provide some examples, to be played back by themedia playback device 271, a length, such as in time or in bytes to provide some examples, of the multimedia data streams to be played back by themedia playback device 271, one or more instances, or a range of instances, for which the previously recorded multimedia data stream was recorded. - The upload/
security support module 263 operates in conjunction with thecloud interface module 269 to establish a secure connection, such as a secure interface through a web browser to provide an example, to the remotemedia processing system 202. Themedia capture module 261 can then securely, through authentication and/or authorization to provide some examples, provide the multimedia data stream and/or the pre-processed multimedia data stream from thelocal storage module 268 as well as the commands from thecloud interface module 269 and/or other commands from other modules within the remotemedia processing system 202 to the remotemedia processing system 202 via this secure connection. - The remote
media processing system 202 operates in conjunction with thecloud storage system 223 to process multimedia data streams, such as the multimedia data stream and/or the previously recorded multimedia data streams, to provide the processed multimedia data stream. The remotemedia processing system 202 includes a cloudservice processing module 251 and a cloudsupport processing module 241. The cloudservice processing module 251 provides an interface between themedia capture module 261 and the cloudsupport processing module 241. The cloudservice processing module 251 directs the cloudsupport processing module 241 to process the multimedia data stream and/or the previously recorded multimedia data streams in response to commands from themedia capture module 261 and/or themedia player 273. The cloudservice processing module 251 includes a user upload/download services module 253, a referencequery interaction module 255, an extractingservicing module 257, and a sign in—account service module 259. - The user upload/
download services module 253 operates in conjunction with themedia capture module 261 to receive the multimedia data stream via thecommunication network 110, commonly referred to as upload, and in conjunction with themedia playback device 271 to provide the processed multimedia data stream to thecommunication network 110, commonly referred to as download. The uploading of the multimedia data stream and/or the downloading of the processed multimedia data stream can occur in real-time or in non-real time, namely at some point in the future. - The extracting
servicing module 257 directs the cloudsupport processing module 241 to recognize the one or more particular objects from the multimedia data stream and/or from the previously recorded multimedia data streams in response to the commands from themedia capture module 261 and to provide the processed multimedia data stream. Typically, the extractingservicing module 257 provides one or more pointers corresponding to the one or more particular objects to the cloudsupport processing module 241 in response to the commands from themedia capture module 261. Additionally, the extractingservicing module 257 provides the various processing parameters to direct the processing to be performed on the multimedia data stream and/or the previously recorded multimedia data streams by the cloudsupport processing module 241. - The reference
query interaction module 255 operates in conjunction with themedia capture module 261 and/or themedia playback device 271 to assist the cloudsupport processing module 241 in recognizing one or more newly discovered objects, such as any newly discovered images, video, and/or audio to provide some examples, within the multimedia data stream and/or the previously recorded multimedia data streams. Typically, the referencequery interaction module 255 receives a response from the cloudsupport processing module 241 that one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams have not been previously recognized. In this situation, the referencequery interaction module 255 operates in conjunction with themedia capture module 261 and/or themedia playback device 271 to identify these not previously recognized objects and, optionally, to update the previously recognized objects stored in thecloud storage system 223 to include these previously unrecognized objects. Typically, the referencequery interaction module 255 can receive a portion of previously recorded multimedia data streams that include image, video, and/or audio of the previously unrecognized objects. The referencequery interaction module 255 can provide this portion to themedia capture module 261 and/or themedia player 273 for recognition. - The sign in—
account service module 259 establishes a secure connection, such as the secure interface through a web browser to provide an example, to themedia capture module 261. Typically, the sign in—account service module 259 can then securely, after authentication and/or authorization, receive the multimedia data stream and/or the previously recorded multimedia data stream from thelocal storage module 268 as well as the commands from thecloud interface module 269 and/or other commands from other modules within theexemplary media system 200 via this secure connection. - The cloud
support processing module 241 processes the multimedia data stream and/or the previously recorded multimedia data streams in response to commands from the cloudservice processing module 251 to provide the processed multimedia data stream. The cloudsupport processing module 241 includes arecognition module 242 and anextraction module 243. Therecognition module 242 identifies one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various image, video, and/or audio recognition techniques. These image, video, and/or audio recognition techniques can include video based, audio based, and/or face, person, object, and scene based recognition. Typically, these image, video, and/or audio recognition techniques compare a portion of previously recorded multimedia data streams that include image, video, and/or audio of various previously recognized objects that are stored within thecloud storage system 223 with the one or more objects within the multimedia data stream and/or the previously recorded multimedia data streams to recognize the one or more particular objects. Therecognition module 242 can request assistance from the cloudservice processing module 251 for recognition of newly discovered objects, namely those one or more objects that do not match the various previously recognized objects. For example, when the multimedia data stream and/or the previously recorded multimedia data streams includes one or more new objects that have not been previously recognized by therecognition module 242, therecognition module 242 can request assistance from the cloudservice processing module 251 to recognize these new objects as previously recognized objects in the future. - The
extraction module 243 extracts and/or frame portions of the multimedia data stream and/or the previously recorded multimedia data streams that include the one or more particular objects, as recognized by therecognition module 242, from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams and compile a new multimedia data stream that includes these portions as the processed multimedia data stream. Theextraction module 243 can select, extract, and/or merge the one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams as well as selecting, extracting, and/or merging other portions of the images, video, and/or audio of the multimedia data stream and/or the previously recorded multimedia data streams that surround the one or more particular objects. Optionally, theextraction module 243 can transform the one or more particular objects from a two dimensional (2D) representation for viewing as a three dimensional (3D) representation to enable playback in 3D. Theextraction module 243 can also perform various audio and/or video encoding, decoding, and/or transcoding on the multimedia data stream and/or the previously recorded multimedia data streams. - The
cloud storage system 223 stores multimedia data streams and previously recognized objects of one or more users in a corresponding user data module from among one or moreuser data modules 215. Thecloud storage system 223 stores the multimedia data stream and/or the previously recorded multimedia data streams for later retrieval by the remotemedia processing system 104 in a compressed/raw media module 221. The compressed/raw media module 221 stores the multimedia data stream as well as previously recorded multimedia data streams for later retrieval by the remotemedia processing system 104. The compressed/raw media module 221 can store these data stream in their raw, or uncompressed, form in a collected data module or can optionally compress these data streams before their storage in the collected data module. The compressed/raw media module 221 can optionally include an associatedmeta data module 223 to collect various metadata about the multimedia data stream as well as previously recorded data streams. This metadata can include an identification of the author, or user, whom recorded the multimedia data stream and/or the previously recorded multimedia data streams as well as an identification ofmedia capture module 261, such as type of device, name of device, manufacturer of device to provide some examples, that recorded these data streams. Additionally, the metadata can includes various parameters, such as standard or protocol used to record these data streams, resolution of these data streams, date of recording of these data streams, time of record of these data streams, to provide some examples, to assist in identification of the multimedia data stream and the previously recorded multimedia data streams for their later retrieval. - Also, the
cloud storage system 223 stores previously recognized objects such as one or more previously recognized people, one or more previously recognized animals, or one or more previously recognized objects, one or more previously recognized scenes, or one or more previously recognized voices, and/or or one or more previously recognized backgrounds to provide some examples, in a preprocessed matchedinformation module 233, a reference sounds/pointers module 224, a reference image/pointer module 227, and acharacterization data module 231. - The preprocessed matched
information module 233 stores portions of previously recorded multimedia data streams that include previously recognized objects which were previously identified and/or recognized by therecognition module 242. The preprocessed matchedinformation module 233 can index the portions of previously recorded multimedia data streams using various pointers. The preprocessed matchedinformation module 233 can provide these portions of previously recorded multimedia data streams to the remotemedia processing system 202. - The reference sounds/pointers module 224 stores various multimedia data streams, or portions thereof, that include various well-known audio. These well-known audio can include well-known voices, well-known backgrounds, and/or well-known animals to provide some examples.
- The reference image/
pointer module 227 stores various multimedia data streams, or portions thereof, that include well-known images, and/or video, that are shared between the one or moreuser data modules 215. These well-known images, and/or video can include well-known persons, well-known objects, and/or well-known scenes to provide some examples. - The
characterization data module 231 stores various multimedia data streams, or portions thereof, that include various reference images, video, and/or audio to assist the remotemedia processing system 202 in recognizing the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams. These reference images, video, and/or audio can include various historical variations that indicate changes to the one or more particular objects that can occur over time. For example, the historical variations can indicate how these reference images, video, and/or audio can appear in the past or the future. - The
media playback device 271 plays back the processed multimedia data stream from the remotemedia processing system 104. Additionally, themedia playback device 271 can communicate with the remotemedia processing system 202 to assist the remotemedia processing system 202. Themedia playback device 271 includes amedia player 273, an enhancedextraction user interface 275, and a security/access user interface 277. - The
media player 273 can display the images and/or the video and/or play back the audio within the processed multimedia data stream. Themedia player 273 can include a monitor, a television, a mobile communications device, such as a smart phone or portable computer, or any other electronic device that is capable of playing back the processed multimedia data stream that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of present disclosure. Themedia player 273 includes a real time extracting download interface to direct processing provide commands to the enhancedextraction user interface 275. - The enhanced
extraction user interface 275 can communicate with the remotemedia processing system 202 to assist the remotemedia processing system 202. The enhancedextraction user interface 275 includes an extraction element identification module and recognition assist module. The extraction element identification module and the recognition assist module generate the commands in response to the real time extracting download interface to direct processing by the remotemedia processing system 202. These commands can be automatically generated by the enhancedextraction user interface 275 and/or be generated in response to a user of themedia player 273. For example, themedia player 273 can playback the processed multimedia data stream using the interface. A user of the media playbackmedia playback device 271 can identify the one or more particular objects from the processed multimedia data stream for selecting, extracting, and/or merging by the remotemedia processing system 202. The user of themedia playback device 271 can simply touch, or make a gesture around, the one or more particular objects as the processed multimedia data stream is being played back. The extraction element identification module and the recognition assist module can send the commands to cause the remotemedia processing system 202 to select, extract, and/or merge these one or more particular objects from the processed multimedia data. The commands can also identify various processing parameters of the processing to be performed by the remotemedia processing system 202. These various processing parameters can include preferred output type, such as image, video, and/or audio to provide some examples, to be played back by themedia playback device 271, a length, such as in time or in bytes to provide some examples, of the multimedia data streams to be played back by themedia playback device 271, one or more instances, or a range of instances, for which the previously recorded multimedia data stream was recorded. - The security/
access user interface 277 establishes a secure connection, such as a secure interface through a web browser to provide an example, to the remotemedia processing system 202. Themedia player 273 can then securely, through authentication and/or authorization to provide some examples, receive the processed multimedia data stream from the remotemedia processing system 202 and/or provide the commands to the remotemedia processing system 202 via this secure connection. - Third Exemplary Remote Processing Video Camera System
Third Exemplary Remote Processing Video Camera System
- FIG. 3 illustrates a block diagram of a third exemplary remote processing media system according to an exemplary embodiment of the present disclosure. An exemplary media system 300 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The exemplary media system 300 remotely extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or from images, video, and/or audio of previously recorded multimedia data streams to provide a processed multimedia data stream for playback. The exemplary media system 300 includes a media capture module 302, a remote media processing and storage system 304, and a media playback device 306 that are communicatively coupled via a communication network 308. The exemplary media system 300 can represent an exemplary embodiment of the exemplary media system 100 and/or of the exemplary media system 200.
- The media capture module 302 represents a mobile communications device for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The media capture module 302 performs substantially similar functions as discussed above in regard to the media capture module 102 and/or the media capture module 261 and will not be described in further detail.
- The remote media processing and storage system 304 processes multimedia data streams, such as the multimedia data stream and/or previously recorded multimedia data streams to provide some examples, to provide a processed multimedia data stream. The remote media processing and storage system 304 performs substantially similar functions as discussed above in regard to the remote media processing system 104, the remote storage system 106, the remote media processing system 202, and/or the cloud storage system 223 and will not be described in further detail.
- The media playback device 306 plays back the processed multimedia data stream from the remote media processing and storage system 304. The media playback device 306 performs substantially similar functions as discussed above in regard to the media playback device 108 and/or the media playback device 271 and will not be described in further detail.
- The communication network 308 communicatively couples the media capture module 302, the remote media processing and storage system 304, and the media playback device 306. The communication network 308 can include a first communication pathway 350 for communications between the media capture module 302 and the remote media processing and storage system 304, a second communication pathway 352 for communications between the remote media processing and storage system 304 and the media playback device 306, and a third communication pathway 354 for communications between the media capture module 302 and the media playback device 306. The first communication pathway 350, the second communication pathway 352, and/or the third communication pathway 354 can be part of any suitable wireless communication network, such as a cellular network to provide an example, any suitable wired communication network, such as a fiber optic network or cable network to provide some examples, or any combination of wireless and wired communication networks.
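The pathway assignments could be captured in a simple configuration; the sketch below is illustrative only, and the particular transports shown (cellular, fiber, Wi-Fi) are assumptions standing in for any suitable wireless or wired network, or combination thereof.

```python
# Illustrative mapping of the three communication pathways of network 308.
COMMUNICATION_PATHWAYS = {
    "pathway_350": {  # media capture module 302 <-> remote processing and storage system 304
        "endpoints": ("media_capture_module_302", "remote_processing_storage_304"),
        "transport": "cellular",
    },
    "pathway_352": {  # remote processing and storage system 304 <-> media playback device 306
        "endpoints": ("remote_processing_storage_304", "media_playback_device_306"),
        "transport": "fiber",
    },
    "pathway_354": {  # media capture module 302 <-> media playback device 306
        "endpoints": ("media_capture_module_302", "media_playback_device_306"),
        "transport": "wifi",
    },
}
```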
First Exemplary Local Processing Video Camera System
- FIG. 4 illustrates a block diagram of an exemplary local processing media system according to an exemplary embodiment of the present disclosure. An exemplary media system 400 locally records and/or stores images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The exemplary media system 400 locally extracts and/or frames one or more particular objects from the images, video, and/or audio of the multimedia data stream and/or previously recorded multimedia data streams to provide a processed multimedia data stream for playback. The exemplary media system 400 includes a media capture module 402, a local media processing system 404, a local storage module 406, and a media playback device 408. Typically, the media capture module 402, the local media processing system 404, the local storage module 406, and the media playback device 408 are integrated onto a single platform.
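A minimal sketch of how these integrated components could be wired together on a single platform is shown below; the class and method names are placeholders invented for illustration and do not appear in the disclosure.

```python
# Hypothetical wiring of the components of the exemplary media system 400.
class LocalMediaSystem:
    def __init__(self, capture, processor, storage, playback):
        self.capture = capture      # stands in for media capture module 402
        self.processor = processor  # stands in for local media processing system 404
        self.storage = storage      # stands in for local storage module 406
        self.playback = playback    # stands in for media playback device 408

    def run_once(self):
        stream = self.capture.record()          # multimedia data stream
        self.storage.save(stream)               # keep the raw recording
        processed = self.processor.process(     # select/extract/merge particular objects
            stream, previous=self.storage.previously_recorded()
        )
        self.playback.play(processed)           # processed multimedia data stream
```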
- The media capture module 402 records and/or stores a scene in its field of view as a multimedia data stream. The media capture module 402 includes an imager(s) and microphone(s) module 410 to record the images, the video, and/or the audio representing the scene in its field of view into the multimedia data stream. The imager(s) and microphone(s) module 410 provides the multimedia data stream in a substantially similar manner as the imager(s) and microphone(s) module 265 and will not be described in further detail.
- The local media processing system 404 operates in conjunction with the local storage module 406 to process multimedia data streams and/or the previously recorded multimedia data streams that are stored in the local storage module 406. The local media processing system 404 includes a recognition module 410 and an extraction module 412. The recognition module 410 and the extraction module 412 operate in a substantially similar manner as the recognition module 242 and the extraction module 243, respectively, and will not be described in further detail.
- The recognition module 410 identifies one or more objects within the multimedia data stream and/or previously recorded multimedia data streams and recognizes the one or more objects as being the one or more particular objects using various audio and/or video recognition techniques. These audio and/or video recognition techniques can include video based, audio based, and/or face, person, object, and scene based recognition. Typically, these audio and/or video recognition techniques compare various previously recognized objects stored in the local storage module 406 with the one or more objects within the multimedia data stream to recognize the one or more particular objects.
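One possible form of this comparison is sketched below, assuming each object is summarized by a feature vector and matched by cosine similarity against previously recognized objects; the vectors, the threshold, and the function names are assumptions for illustration, not the disclosed recognition technique.

```python
# Hedged sketch of matching detected objects against stored, previously
# recognized objects (e.g., kept in the local storage module 406).
import numpy as np


def recognize(detected_features, stored_objects, threshold=0.8):
    """Return labels of stored objects that match the detected features.

    detected_features: dict mapping a detection id to a 1-D feature vector.
    stored_objects:    dict mapping a label (e.g., a person's name) to a vector.
    """
    matches = {}
    for det_id, feat in detected_features.items():
        for label, ref in stored_objects.items():
            similarity = float(
                np.dot(feat, ref) / (np.linalg.norm(feat) * np.linalg.norm(ref))
            )
            if similarity >= threshold:
                matches[det_id] = label  # detection recognized as a particular object
                break
    return matches
```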
- The extraction module 412 extracts and/or frames the one or more particular objects from the multimedia data stream and/or the previously recorded multimedia data streams to provide the processed multimedia data stream. Optionally, a 2D-3D Viewpoint module 414 can transform the one or more particular objects from a two dimensional (2D) representation into a three dimensional (3D) representation for viewing, to enable playback in 3D.
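A hedged sketch of the framing step follows, assuming video frames are NumPy arrays and the recognizer supplies one bounding box per frame; the cropping and nearest-neighbor rescale shown are illustrative simplifications, not the disclosed extraction technique.

```python
# Illustrative framing: crop each frame around a recognized object and rescale.
import numpy as np


def _resize_nearest(img, out_w, out_h):
    """Nearest-neighbor resize using index lookup tables."""
    h, w = img.shape[:2]
    ys = (np.arange(out_h) * h) // out_h
    xs = (np.arange(out_w) * w) // out_w
    return img[ys][:, xs]


def frame_object(frames, boxes, out_size=(480, 270)):
    """Crop each frame to its (x, y, w, h) box; boxes come from the recognizer."""
    out_w, out_h = out_size
    framed = []
    for frame, (x, y, w, h) in zip(frames, boxes):
        crop = frame[y:y + h, x:x + w]
        framed.append(_resize_nearest(crop, out_w, out_h))
    return framed
```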
- The local storage module 406 stores multimedia data streams and previously recognized objects of one or more users in a substantially similar manner as the cloud storage system 223.
- The media playback device 408 plays back the processed multimedia data stream from the local media processing system 404. Additionally, the media playback device 408 can communicate with the local media processing system 404 to assist the local media processing system 404. The media playback device 408 includes a media player 418 and an enhanced extraction user interface 420. The media player 418 and the enhanced extraction user interface 420 operate in a substantially similar manner as the media player 273 and the enhanced extraction user interface 275, respectively, and will not be described in further detail.
Exemplary Media Capture Modules
- FIG. 5 illustrates a block diagram of a first media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure. A media capture module 500 represents a stationary media capture module for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The media capture module 500 includes a media recording module 502 and a stationary mount 504. The media capture module 500 can represent an exemplary embodiment of the media capture module 102, the media capture module 261, and/or the exemplary media system 400.
- The media recording module 502 includes one or more media recording devices 506.1 through 506.i and a processing module 508. The media recording devices 506.1 through 506.i record images, video, and/or audio representing a scene in their fields of view into the multimedia data stream. Typically, the media recording module 502 includes a sufficient number of the media recording devices 506.1 through 506.i to capture a wide angle or panoramic view of the scene; however, those skilled in the relevant art will recognize that any suitable number of the media recording devices 506.1 through 506.i can be used without departing from the spirit and scope of the present disclosure. The media recording devices 506.1 through 506.i are substantially similar to one another, but can include a different number of audio capture devices, illumination devices, and image capturing devices; therefore, only the media recording device 506.1 is described in further detail below.
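For a rough sense of what a "sufficient number" of devices might mean, the following back-of-the-envelope sketch estimates a device count from an assumed per-device field of view and stitching overlap; the 90-degree and 10-degree figures are assumptions, not values from the disclosure.

```python
# Illustrative estimate of i, the number of media recording devices 506.1
# through 506.i needed to cover a panoramic view of the scene.
import math


def devices_for_panorama(device_fov_deg=90.0, overlap_deg=10.0, coverage_deg=360.0):
    effective = device_fov_deg - overlap_deg  # usable view per device after overlap
    return math.ceil(coverage_deg / effective)


# e.g., 90-degree imagers with 10 degrees of overlap -> 5 devices for 360 degrees
print(devices_for_panorama())
```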
- The media recording device 506.1 records images, video, and/or audio representing the scene in its field of view. The media recording device 506.1 includes one or more illumination devices 510.1 through 510.k, one or more audio capture devices 512.1 through 512.m, and an image capture device 514. The one or more illumination devices 510.1 through 510.k illuminate the scene. This illumination can be of a relatively short duration, typically 1/1000 to 1/200 of a second, commonly referred to as a flash, or of a longer duration, and can include any suitable portion, or portions, of the electromagnetic spectrum.
- The one or more audio capture devices 512.1 through 512.m can capture a representation of various sounds occurring within, or near, the scene. The one or more audio capture devices 512.1 through 512.m are typically implemented using one or more microphones, although those skilled in the relevant art(s) will recognize that any electrical, mechanical, or electro-mechanical device that can convert sound into an electrical signal can be used without departing from the spirit and scope of the present disclosure. Typically, the one or more audio capture devices 512.1 through 512.m include at least two audio capture devices to capture the various sounds occurring within, or near, the scene as stereophonic sound or, more commonly, stereo, although those skilled in the relevant art(s) will recognize that any suitable number of audio capture devices can be used without departing from the spirit and scope of the present disclosure.
- The image capture device 514 records a still image of the scene at a particular instance in time and/or a series of the images over a duration in time that represents the scene in motion. The image capture device 514 can record the scene using any suitable portion, or portions, of the electromagnetic spectrum that will be apparent to those skilled in the relevant art(s) without departing from the spirit and scope of the present disclosure. Although only one image capture device 514 is shown in FIG. 5, more than one image capture device 514 can be present within the media recording device 506.1 to record the scene in three dimensions (3D).
- The processing module 508 processes the images, video, and/or audio from the one or more media recording devices 506.1 through 506.i to provide the multimedia data stream to a remote processing system, such as the remote processing system 104 and/or the remote processing system 202 to provide some examples, and/or to a local processing system, such as the local media processing system 404 to provide an example.
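Illustratively, and only as an assumption about one possible implementation, the processing module could package the captures from each recording device into a single timestamped record before handing it off, as sketched below; the record layout and the device method names are hypothetical.

```python
# Hypothetical sketch: combine per-device captures into one stream record and
# forward it to a remote or local processing system.
import time


def multiplex(recording_devices, deliver):
    """Pull one capture from each device and deliver a combined stream record.

    recording_devices: iterable of objects exposing .device_id, .grab_video(),
                       and .grab_audio() (placeholders for devices 506.1-506.i).
    deliver:           callable that forwards the record to the processing system.
    """
    record = {"timestamp": time.time(), "tracks": []}
    for device in recording_devices:
        record["tracks"].append({
            "device_id": device.device_id,
            "video": device.grab_video(),  # encoded frame bytes
            "audio": device.grab_audio(),  # interleaved audio samples
        })
    deliver(record)
```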
- The stationary mount 504 is coupled to the media recording module 502. Typically, the stationary mount 504 is placed within the scene to allow the media recording module 502 to record images, video, and/or audio representing the scene in its field of view. The stationary mount 504 represents a stationary or fixed mount to stabilize the media recording module 502. As shown in FIG. 5, the stationary mount 504 can telescope to adjust a field of view of the media recording module 502. The stationary mount 504 as shown in FIG. 5 is for illustrative purposes only; those skilled in the relevant art(s) will recognize that other stationary or fixed mounts can be used without departing from the spirit and scope of the present disclosure.
- FIG. 6 illustrates a block diagram of a second media capture module that can be used within the exemplary video camera system according to an exemplary embodiment of the present disclosure. A media capture module 600 represents a mobile media capture module for recording and/or storing images, video, and/or audio representing a scene in its field of view into a multimedia data stream. The media capture module 600 includes media recording devices 602.1 through 602.i and a processing module 604. The media recording devices 602.1 through 602.i and the processing module 604 operate in a substantially similar manner as the media recording devices 506.1 through 506.i and the processing module 508, respectively, and will not be described in further detail. The media capture module 600 can represent an exemplary embodiment of the media capture module 102, the media capture module 261, and/or the exemplary media system 400.
- It is to be appreciated that the Detailed Description section, and not the Abstract section, is intended to be used to interpret the claims. The Abstract section can set forth one or more, but not all, exemplary embodiments of the disclosure and thus is not intended to limit the disclosure and the appended claims in any way.
- The disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
- It will be apparent to those skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/628,750 US20130101162A1 (en) | 2011-10-20 | 2012-09-27 | Multimedia System with Processing of Multimedia Data Streams |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161549495P | 2011-10-20 | 2011-10-20 | |
US13/628,750 US20130101162A1 (en) | 2011-10-20 | 2012-09-27 | Multimedia System with Processing of Multimedia Data Streams |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130101162A1 true US20130101162A1 (en) | 2013-04-25 |
Family
ID=48135540
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/361,579 Abandoned US20130100334A1 (en) | 2011-10-20 | 2012-01-30 | Method and System for an Adaptive Auto-Focus Algorithm |
US13/397,240 Active 2032-08-01 US8749607B2 (en) | 2011-10-20 | 2012-02-15 | Face equalization in video conferencing |
US13/435,909 Abandoned US20130101275A1 (en) | 2011-10-20 | 2012-03-30 | Video Memory Having Internal Programmable Scanning Element |
US13/628,750 Abandoned US20130101162A1 (en) | 2011-10-20 | 2012-09-27 | Multimedia System with Processing of Multimedia Data Streams |
US13/655,910 Abandoned US20130100026A1 (en) | 2011-10-20 | 2012-10-19 | Proximity Screen Display and User Interface |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/361,579 Abandoned US20130100334A1 (en) | 2011-10-20 | 2012-01-30 | Method and System for an Adaptive Auto-Focus Algorithm |
US13/397,240 Active 2032-08-01 US8749607B2 (en) | 2011-10-20 | 2012-02-15 | Face equalization in video conferencing |
US13/435,909 Abandoned US20130101275A1 (en) | 2011-10-20 | 2012-03-30 | Video Memory Having Internal Programmable Scanning Element |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/655,910 Abandoned US20130100026A1 (en) | 2011-10-20 | 2012-10-19 | Proximity Screen Display and User Interface |
Country Status (3)
Country | Link |
---|---|
US (5) | US20130100334A1 (en) |
CN (1) | CN103226279A (en) |
TW (1) | TW201329554A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9246898B2 (en) * | 2012-11-20 | 2016-01-26 | Utility Associates, Inc. | System and method for securely distributing legal evidence |
US10321009B2 (en) * | 2012-09-05 | 2019-06-11 | Intel Corporation | Protocol for communications between platforms and image devices |
US20240312487A1 (en) * | 2023-02-08 | 2024-09-19 | Lenovo (Beijing) Limited | Multimedia data recording method and device |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101417387B1 (en) * | 2012-11-01 | 2014-07-09 | 주식회사 팬택 | Portable Device and Method for providing User Interface thereof |
KR101956073B1 (en) * | 2012-12-20 | 2019-03-08 | 삼성전자주식회사 | 3d volumetric display device for providing user interface using visual indicator and method thereof |
US9483118B2 (en) * | 2013-12-27 | 2016-11-01 | Rovi Guides, Inc. | Methods and systems for selecting media guidance functions based on tactile attributes of a user input |
US10136061B2 (en) | 2015-01-30 | 2018-11-20 | Microsoft Technology Licensing, Llc | Automatic processing of automatic image capture parameter adjustment |
CN105138962A (en) * | 2015-07-28 | 2015-12-09 | 小米科技有限责任公司 | Image display method and image display device |
TWI579826B (en) * | 2016-03-30 | 2017-04-21 | 佳世達科技股份有限公司 | Display device and oreration method thereof |
WO2018004001A1 (en) * | 2016-06-30 | 2018-01-04 | 株式会社ニコン | Camera |
US9881194B1 (en) * | 2016-09-19 | 2018-01-30 | Hand Held Products, Inc. | Dot peen mark image acquisition |
US10049625B1 (en) * | 2016-12-21 | 2018-08-14 | Amazon Technologies, Inc. | Context-based rendering |
CN107086027A (en) * | 2017-06-23 | 2017-08-22 | 青岛海信移动通信技术股份有限公司 | Character displaying method and device, mobile terminal and storage medium |
KR102646750B1 (en) | 2018-03-21 | 2024-03-13 | 삼성전자주식회사 | Method for adjusting focus based on spread-level of display object and electronic device implementing the same |
CN109521547B (en) * | 2018-12-21 | 2021-03-26 | 广州医软智能科技有限公司 | Variable-step-length automatic focusing method and system |
CN113269211B (en) * | 2020-02-14 | 2024-09-06 | 神盾股份有限公司 | Integration method and system of processing unit in sensor and computing unit in memory |
TWI724788B (en) * | 2020-02-14 | 2021-04-11 | 國立清華大學 | Method for integrating processing-in-sensor and in-memory computing and system thereof |
CN113938599B (en) * | 2020-07-14 | 2024-03-08 | 浙江宇视科技有限公司 | Electric lens focusing method and device, electronic equipment and storage medium |
JP7556233B2 (en) * | 2020-08-25 | 2024-09-26 | 富士フイルムビジネスイノベーション株式会社 | Display control device, display device and program |
USD1070830S1 (en) * | 2023-01-27 | 2025-04-15 | Gn Audio A/S | Video bar |
USD1070800S1 (en) * | 2023-01-27 | 2025-04-15 | Gn Audio A/S | Video bar |
USD1070798S1 (en) * | 2023-01-27 | 2025-04-15 | Gn Audio A/S | Video bar |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070097268A1 (en) * | 2005-10-31 | 2007-05-03 | Broadcom Corporation | Video background subtractor system |
US20100104146A1 (en) * | 2008-10-23 | 2010-04-29 | Kabushiki Kaisha Toshiba | Electronic apparatus and video processing method |
US20100247061A1 (en) * | 2009-03-31 | 2010-09-30 | Broadcom Corporation | Collection and concurrent integration of supplemental information related to currently playing media |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5260736A (en) * | 1991-09-04 | 1993-11-09 | Fuji Photo Film Co., Ltd. | Auto focus control device |
US6236431B1 (en) * | 1993-05-27 | 2001-05-22 | Canon Kabushiki Kaisha | Video camera apparatus with distance measurement area adjusted based on electronic magnification |
US6496277B1 (en) * | 1999-07-23 | 2002-12-17 | Xerox Corporation | Data flow control and storage facility for an image reproduction system |
US7079289B2 (en) * | 2001-10-01 | 2006-07-18 | Xerox Corporation | Rank-order error diffusion image processing |
US7538815B1 (en) * | 2002-01-23 | 2009-05-26 | Marena Systems Corporation | Autofocus system and method using focus measure gradient |
US7187413B2 (en) * | 2002-07-25 | 2007-03-06 | Lockheed Martin Corporation | Method and system for using an image based autofocus algorithm |
EP1528419B1 (en) * | 2002-08-07 | 2018-01-24 | Panasonic Intellectual Property Management Co., Ltd. | Focusing device |
JP2005055618A (en) * | 2003-08-01 | 2005-03-03 | Seiko Precision Inc | Projector, focusing method and focusing program for the same |
JP2005156971A (en) * | 2003-11-26 | 2005-06-16 | Tamron Co Ltd | Autofocusing device, camera provided with the same, and camera body |
WO2005091067A2 (en) * | 2004-03-15 | 2005-09-29 | 1... Limited | Camera autofocus |
US7515201B2 (en) * | 2004-06-16 | 2009-04-07 | Hoya Corporation | Focus detection method and focus detection apparatus |
US8432582B2 (en) * | 2004-08-20 | 2013-04-30 | Xerox Corporation | Uniformity compensation in halftoned images |
US8154769B2 (en) * | 2005-02-15 | 2012-04-10 | Ricoh Co. Ltd | Systems and methods for generating and processing evolutionary documents |
ATE430345T1 (en) * | 2005-08-08 | 2009-05-15 | Mep Imaging Technologies Ltd | ADAPTIVE EXPOSURE CONTROL |
TW200723077A (en) * | 2005-12-14 | 2007-06-16 | Elan Microelectronics Corp | Movement detection method for multiple objects on a capacitive touchpad |
EP1976268A1 (en) * | 2005-12-28 | 2008-10-01 | Olympus Corporation | Imaging system and image processing program |
EP2033066A4 (en) * | 2006-05-31 | 2012-08-15 | Ibm | Method and system for transformation of logical data objects for storage |
KR100780957B1 (en) * | 2006-08-21 | 2007-12-03 | 삼성전자주식회사 | Image Selection Device and Method |
US8144186B2 (en) * | 2007-03-09 | 2012-03-27 | Polycom, Inc. | Appearance matching for videoconferencing |
KR100897768B1 (en) * | 2007-05-01 | 2009-05-15 | 삼성전자주식회사 | Auto focusing method and devices that can use the method |
JP5302322B2 (en) * | 2007-10-19 | 2013-10-02 | クォルコム・メムズ・テクノロジーズ・インコーポレーテッド | Display with integrated photovoltaic |
US8432372B2 (en) * | 2007-11-30 | 2013-04-30 | Microsoft Corporation | User input using proximity sensing |
WO2010036249A1 (en) * | 2008-09-24 | 2010-04-01 | Nikon Corporation | Autofocus technique utilizing gradient histogram distribution characteristics |
US8289286B2 (en) * | 2008-12-19 | 2012-10-16 | Verizon Patent And Licensing Inc. | Zooming keyboard/keypad |
US8294858B2 (en) * | 2009-03-31 | 2012-10-23 | Intel Corporation | Integrated photovoltaic cell for display device |
TW201118823A (en) * | 2009-11-27 | 2011-06-01 | Univ Nat Taiwan | Transflective display device |
US8120838B2 (en) * | 2010-05-19 | 2012-02-21 | Au Optronics Corporation | Electrophoretic display device |
JP5569329B2 (en) * | 2010-10-15 | 2014-08-13 | 大日本印刷株式会社 | Conference system, monitoring system, image processing apparatus, image processing method, image processing program, etc. |
US20120306767A1 (en) * | 2011-06-02 | 2012-12-06 | Alan Stirling Campbell | Method for editing an electronic image on a touch screen display |
-
2012
- 2012-01-30 US US13/361,579 patent/US20130100334A1/en not_active Abandoned
- 2012-02-15 US US13/397,240 patent/US8749607B2/en active Active
- 2012-03-30 US US13/435,909 patent/US20130101275A1/en not_active Abandoned
- 2012-09-10 TW TW101133027A patent/TW201329554A/en unknown
- 2012-09-27 US US13/628,750 patent/US20130101162A1/en not_active Abandoned
- 2012-10-19 US US13/655,910 patent/US20130100026A1/en not_active Abandoned
- 2012-12-26 CN CN2012105761813A patent/CN103226279A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20130100026A1 (en) | 2013-04-25 |
TW201329554A (en) | 2013-07-16 |
US20130101275A1 (en) | 2013-04-25 |
US20130100334A1 (en) | 2013-04-25 |
US8749607B2 (en) | 2014-06-10 |
CN103226279A (en) | 2013-07-31 |
US20130100235A1 (en) | 2013-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130101162A1 (en) | Multimedia System with Processing of Multimedia Data Streams | |
JP5092000B2 (en) | Video processing apparatus, method, and video processing system | |
KR101531783B1 (en) | Video summary including a particular person | |
US9779775B2 (en) | Automatic generation of compilation videos from an original video based on metadata associated with the original video | |
US9940969B2 (en) | Audio/video methods and systems | |
US9754159B2 (en) | Automatic generation of video from spherical content using location-based metadata | |
US8665345B2 (en) | Video summary including a feature of interest | |
EP2619761B1 (en) | Enriching digital photographs | |
US20160099023A1 (en) | Automatic generation of compilation videos | |
US20150222815A1 (en) | Aligning videos representing different viewpoints | |
CN102906818A (en) | Storing video summary as metadata | |
EP2816564B1 (en) | Method and apparatus for smart video rendering | |
KR20150083355A (en) | Augmented media service providing method, apparatus thereof, and system thereof | |
CN105979246A (en) | Method and device for photographing panoramic contents | |
US20130071088A1 (en) | Method and apparatus for displaying summary video | |
JP7428763B2 (en) | Information acquisition system | |
JP2014045259A (en) | Terminal device, server, and program | |
JP2010021638A (en) | Device and method for adding tag information, and computer program | |
CN111698522A (en) | Live system based on mixed reality | |
KR20150083491A (en) | Methed and system for synchronizing usage information between device and server | |
CN111524518A (en) | Augmented reality processing method and device, storage medium and electronic equipment | |
WO2013187796A1 (en) | Method for automatically editing digital video files | |
WO2017045068A1 (en) | Methods and apparatus for information capture and presentation | |
CN111223220B (en) | Image correlation method and device and server | |
JP2025509198A (en) | Camera Auto Exposure for Use in Wearable Multimedia Devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VITSNUDEL, ILIA;BENNETT, JAMES;SIGNING DATES FROM 20120927 TO 20121018;REEL/FRAME:029165/0044 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001 Effective date: 20160201
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001 Effective date: 20170120
|
AS | Assignment |
Owner name: BROADCOM CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001 Effective date: 20170119 |