CN108632541B - Multi-video-clip merging method and device
- Publication number: CN108632541B
- Application number: CN201710166502.5A
- Authority
- CN
- China
- Prior art keywords
- video
- segment
- merged
- data segment
- information
- Prior art date
- Legal status: Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/262—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
- H04N21/26258—Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Databases & Information Systems (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The embodiment of the invention discloses a method and a device for merging multiple video segments. In the method, a first preset number of video segments to be merged and the attribute information of each video segment are obtained; a file information segment for the first preset number of video segments is generated according to the attribute information, the file information segment at least comprising a start position corresponding to each video segment; the video segments are merged into a merged data segment according to the start positions corresponding to the video segments; and the file information segment and the merged data segment are combined and stored in one merged file. Therefore, when the scheme provided by the embodiment of the invention is applied to merge multiple video clips, the multiple video clips can be merged and stored in one file, so that they do not need to be opened one by one, which increases the speed of opening multiple video clips and further reduces the difficulty of managing them.
Description
Technical Field
The invention relates to the technical field of video monitoring, in particular to a method and a device for merging multiple video clips.
Background
In recent years, with people's growing safety awareness, video surveillance systems have become ubiquitous in production and daily life. A video surveillance system can effectively deter and combat crimes, traffic violations and other uncivil behavior, and is therefore widely used in places such as chain stores, kindergartens, homes, factories, public security and fire fighting, banks, military facilities, highways, shopping malls, hotels, tourist attractions, communities, hospitals, prisons and ports.
A video surveillance system often contains a plurality of cameras, usually installed at different positions, so the video clips captured by different cameras differ, and even the video clips captured by the same camera in different time periods differ. A video surveillance system therefore produces many video clips, referred to here as multiple video clips. In order to manage them, the multiple video clips generally need to be stored. At present, the storage process is roughly as follows: the multiple video clips are stored in the same folder, forming a folder that contains the multiple video clips, and when a user views the multiple video clips, they generally have to be opened one by one for viewing at the same time.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for merging multiple video segments, so as to increase the speed of opening multiple video segments, thereby reducing the complexity of managing multiple video segments.
In order to achieve the above object, an embodiment of the present invention discloses a method for merging multiple video segments, including:
obtaining a first preset number of video clips to be merged;
obtaining attribute information of each video clip, wherein the attribute information at least comprises the playing duration of the video clip;
generating a file information segment for the first preset number of video segments according to the attribute information, wherein the file information segment at least comprises a start position corresponding to each video segment;
merging the video segments into a merged data segment according to the start positions corresponding to the video segments in the file information segment;
and combining the file information segment and the merged data segment and storing the combination into one merged file.
Optionally, before the step of combining the file information segment and the merged data segment and storing the combined file information segment and the merged data segment in one merged file, the method further comprises:
generating index data corresponding to each video clip in the merged data segment;
generating an index data segment aiming at the merged data segment according to the index data corresponding to each video segment;
the combining the file information segment and the merged data segment and storing the combined file information segment and the merged data segment into a merged file includes:
and combining the file information segment, the merged data segment and the index data segment and storing the combined file information segment, the merged data segment and the index data segment into a merged file.
Optionally, the step of generating index data corresponding to each video segment in the merged data segment includes:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises: at least one of frame number, timestamp information and playing time information;
and generating index data aiming at the video clips according to the characteristic information corresponding to each video clip.
Optionally, the step of generating an index data segment for the merged data segment according to the index data corresponding to each video segment includes:
according to the arrangement sequence of the video clips in the merged data segment, arranging the index data corresponding to the video clips to obtain an index data sequence;
and generating an index starting code, adding the index starting code in front of the index data sequence, and generating an index data segment aiming at the merged data segment.
Optionally, the method further comprises:
receiving an opening instruction aiming at the merged file;
reading the merged file, and determining the total number of the video clips and the initial positions corresponding to the video clips according to the file information segment;
establishing decoding playing channels with the same number as the total number, wherein each video clip corresponds to one decoding playing channel;
and leading the video clips into corresponding decoding playing channels for decoding playing according to the initial positions corresponding to the video clips.
Optionally, the method further comprises:
in the process of decoding and playing the video clips, receiving play positioning information which is input by a user and aims at one or more video clips in the merged file, wherein the play positioning information comprises: at least one of frame number, timestamp information and playing time information;
obtaining an index data segment aiming at the merged data segment according to the merged file;
determining a first key frame of a video clip corresponding to the playing positioning information according to the index data segment and the playing positioning information;
and skipping the video clip corresponding to the playing positioning information to the corresponding first key frame position for playing.
Optionally, the step of obtaining an index data segment for the merged data segment according to the merged file includes:
and detecting whether an index start code exists in the merged file, and if so, determining data with a preset length behind the index start code as an index data segment.
In order to achieve the above object, an embodiment of the present invention further discloses a multi-video segment merging device, including:
a first obtaining module, configured to obtain a first preset number of video segments to be merged;
a second obtaining module, configured to obtain attribute information of each video segment, where the attribute information at least includes a playing duration of the video segment;
a first generating module, configured to generate, according to the attribute information, file information segments for the first preset number of video segments, where the file information segments at least include start positions corresponding to the video segments;
the merging module is used for merging the video clips into a merged data segment according to the initial positions corresponding to the video clips in the file information segment;
and a combination module, configured to combine the file information segment and the merged data segment and store the combination into one merged file.
Optionally, the apparatus further comprises:
the second generation module is used for generating index data corresponding to each video clip in the merged data segment;
a third generating module, configured to generate an index data segment for the merged data segment according to index data corresponding to each video segment;
the combination module is specifically used for:
and combining the file information segment, the merged data segment and the index data segment and storing the combined file information segment, the merged data segment and the index data segment into a merged file.
Optionally, the second generating module is specifically configured to:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises: at least one of frame number, timestamp information and playing time information;
and generating index data aiming at the video clips according to the characteristic information corresponding to each video clip.
Optionally, the third generating module is specifically configured to:
according to the arrangement sequence of the video clips in the merged data segment, arranging the index data corresponding to the video clips to obtain an index data sequence;
and generating an index starting code, adding the index starting code in front of the index data sequence, and generating an index data segment aiming at the merged data segment.
Optionally, the apparatus further comprises:
the first receiving module is used for receiving an opening instruction aiming at the merged file;
the reading module is used for reading the merged file and determining the total number of the video clips and the initial positions corresponding to the video clips according to the file information segment;
the establishing module is used for establishing decoding playing channels with the same number as the total number, and each video clip corresponds to one decoding playing channel;
and the decoding module is used for guiding the video clips into corresponding decoding playing channels for decoding playing according to the initial positions corresponding to the video clips.
Optionally, the apparatus further comprises:
a second receiving module, configured to receive, during a process of decoding and playing the video segments, play positioning information that is input by a user and is for one or more video segments in the merged file, where the play positioning information includes: at least one of frame number, timestamp information and playing time information;
a third obtaining module, configured to obtain, according to the merged file, an index data segment for the merged data segment;
the determining module is used for determining a first key frame of a video clip corresponding to the playing positioning information according to the index data segment and the playing positioning information;
and the playing module is used for skipping the video clip corresponding to the playing positioning information to the corresponding first key frame position for playing.
Optionally, the third obtaining module is specifically configured to:
and detecting whether an index start code exists in the merged file, and if so, determining data with a preset length behind the index start code as an index data segment.
In summary, in the solution provided in the embodiment of the present invention, a first preset number of video segments to be merged and the attribute information of each video segment are obtained, and a file information segment for the first preset number of video segments is generated according to the attribute information, the file information segment at least including a start position corresponding to each video segment; the video segments are merged into a merged data segment according to the start positions corresponding to the video segments; and the file information segment and the merged data segment are combined and stored in one merged file. Therefore, when the scheme provided by the embodiment of the present invention is applied to merge multiple video segments, the multiple video segments can be merged and stored in one file, so that only that one file needs to be opened rather than opening the video segments one by one, which increases the speed of opening multiple video segments and further reduces the difficulty of managing them.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a multi-video segment merging method according to an embodiment of the present invention;
Fig. 2 is a diagram illustrating the specific contents of a file information segment according to an embodiment of the present invention;
Fig. 3 is a diagram illustrating the specific contents of a merged data segment according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of another multi-video segment merging method according to an embodiment of the present invention;
Fig. 5 is a diagram illustrating the specific contents of an index data segment according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a multi-video segment merging apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another multi-video segment merging apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present invention will be described in detail below with reference to specific examples.
Fig. 1 is a schematic flowchart of a multi-video segment merging method according to an embodiment of the present invention, where the method includes the steps of:
s101: obtaining a first preset number of video clips to be merged;
as can be understood by those skilled in the art, a video clip is a broad concept, and a video with a certain playing duration is generally considered to be a video clip, for example, a small video recorded by some shooting devices such as a mobile phone, a computer, a video camera, a monitoring device, etc. is a video clip.
In a video surveillance system, a plurality of cameras are often arranged and are usually installed at different positions. For example, suppose a video surveillance system is deployed for the road "Changning Avenue" and contains 8 cameras, each installed at a road intersection: camera 1 at intersection 1, camera 2 at intersection 2, ..., and camera 8 at intersection 8. When the 8 cameras work normally, each captures the surveillance video of its own intersection. The 8 cameras are independent of one another, so the surveillance videos they capture are also independent. To prevent the captured surveillance videos from being lost, they are usually stored at intervals, each stored video being one video clip, and to facilitate management and viewing, multiple video clips are merged into one file for storage. Assuming that cameras 1-8 respectively captured surveillance videos 1-8 (i.e., video clips 1-8) on 2016-10-10, surveillance videos 1-8 can be regarded as 8 video clips to be merged, and thus 8 video clips to be merged can be obtained.
In a specific implementation manner provided by the embodiment of the present invention, a video clip list selection interface may be provided, where a plurality of video clips are provided in the video clip list, a user may select a video clip that is to be merged from the video clip list, and the video clip selected by the user is a video clip to be merged.
For the solution provided by the embodiment of the present invention, the specific form and content of a video clip are not limited. For example, suppose 5 music videos (MVs), 6 movies and 8 videos shot by the user are stored locally; all of these are video clips. The video clips to be merged may then be any of the locally stored video clips, for example: 3 MVs and 2 movies, or 5 MVs and 4 self-shot videos, or 6 self-shot videos, or other combinations. The embodiment of the present invention does not limit the form, content or number of the video clips.
S102: obtaining attribute information of each video clip, wherein the attribute information at least comprises the playing duration of the video clip;
It is understood that each video clip has corresponding attribute information, and the attribute information may include the playing duration of the video clip, the name of the video clip, the size of the video clip, the format of the video clip, and the like. It should be noted that, in order to merge video segments accurately, in the solution provided in the embodiment of the present invention the attribute information at least needs to include the playing duration of the video segment. After the video segments to be merged are obtained, the playing duration corresponding to each video segment is determinate and known, so the attribute information of each video segment can be obtained.
For example, assuming that the 8 video segments to be merged are surveillance videos 1 to 8, and that the playing durations corresponding to surveillance videos 1 to 8 are 100 min (minutes), 120 min, 80 min, 100 min, 90 min, 60 min, 80 min and 80 min respectively, the obtained attribute information of surveillance video 1 is: a playing duration of 100 min; the obtained attribute information of surveillance video 2 is: a playing duration of 120 min; ...; and the obtained attribute information of surveillance video 8 is: a playing duration of 80 min.
It should be noted that the above takes as an example attribute information that includes the playing duration of the video segment. In addition to the playing duration of the corresponding video segment, the attribute information of a video segment may also include the name of the video segment, the size of the video segment, the format of the video segment, and the like; the embodiment of the present invention therefore does not further limit the specific content of the attribute information of a video segment.
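To make the attribute information described above concrete, the following is a minimal, purely illustrative Python sketch: it models a clip's attribute information as a small record whose playing duration is assumed to be supplied by whatever media probe the implementation uses, while the name, size and format are read from the file itself. All names in the sketch are hypothetical, not defined by the patent.

```python
import os
from dataclasses import dataclass

# Minimal sketch: attribute information of one video clip. The playing duration is
# assumed to come from an external media probe (not shown); the remaining fields
# are derived from the file path. All names here are illustrative.
@dataclass
class ClipAttributes:
    name: str
    size_bytes: int
    container_format: str
    playing_duration_min: float   # the one field the scheme requires at minimum

def get_clip_attributes(path, playing_duration_min):
    return ClipAttributes(
        name=os.path.basename(path),
        size_bytes=os.path.getsize(path),
        container_format=os.path.splitext(path)[1].lstrip("."),
        playing_duration_min=playing_duration_min,
    )
```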
S103: generating file information segments aiming at the first preset number of video segments according to the attribute information, wherein the file information segments at least comprise initial positions corresponding to the video segments;
As can be seen from the above, the attribute information of the video segments to be merged at least includes the playing duration of each video segment, so the start position of each video segment can be determined from the obtained attribute information. In the default case, the order in which the video segments were obtained can be assumed to be: surveillance video 1 → surveillance video 2 → surveillance video 3 → surveillance video 4 → surveillance video 5 → surveillance video 6 → surveillance video 7 → surveillance video 8. If there is no special setting, merging can be performed in the order from surveillance video 1 to surveillance video 8; simply put, surveillance video 2 immediately follows surveillance video 1, i.e. the end position of surveillance video 1 is the start position of surveillance video 2, the start position of surveillance video 3 is the end position of surveillance video 2, and, following this rule, the start position of surveillance video 8 is the end position of surveillance video 7.
For example, assuming that the start position corresponding to surveillance video 1 is time 0, in the default case the end position corresponding to surveillance video 1 can be determined from the above order to be 100 min; the start position and end position of surveillance video 2 are 100 min and 220 min respectively; the start position and end position of surveillance video 3 are 220 min and 300 min respectively; ...; and the start position and end position of surveillance video 8 are 630 min and 710 min. The start and end position of every video segment can thus be determined. For the embodiment of the present invention, however, only the start position of each video segment is necessary, because the end position can be calculated from the playing duration of the segment, so it is sufficient to determine only the start position corresponding to each video segment. The start positions corresponding to the video segments together form the file information segment corresponding to the 8 surveillance videos.
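As an illustration of how the start positions can be derived from the playing durations, here is a minimal Python sketch; it simply accumulates the durations of the example above in merge order. The clip names and values come from the example, not from any requirement of the patent.

```python
# Minimal sketch: derive each clip's (start, end) position by accumulating playing
# durations in merge order (example values from the surveillance-video scenario).
durations_min = {
    "surveillance_video_1": 100, "surveillance_video_2": 120,
    "surveillance_video_3": 80,  "surveillance_video_4": 100,
    "surveillance_video_5": 90,  "surveillance_video_6": 60,
    "surveillance_video_7": 80,  "surveillance_video_8": 80,
}

def compute_positions(durations):
    positions, cursor = {}, 0
    for name, duration in durations.items():   # insertion order = merge order
        positions[name] = (cursor, cursor + duration)
        cursor += duration
    return positions

print(compute_positions(durations_min))
# surveillance_video_1 -> (0, 100), surveillance_video_2 -> (100, 220), ..., surveillance_video_8 -> (630, 710)
```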
Since each video segment is peer-to-peer in form, the above-mentioned sorting manner is only a specific implementation manner, and the embodiment of the present invention is not particularly limited to the sequence between the video segments.
In consideration of the convenience of later operations on the merged file, the generated file information segment may further include other information, for example identification information of the merged file (usually indicated by a start code), the length of the file information segment, the total number of video segments, the version number of the file information segment, and so on. Fig. 2 is a schematic diagram of the specific contents of a file information segment in an embodiment of the present invention; the file information segment shown in fig. 2 may include: a start code, the length of the file information segment, a version number, the total number of video segments, the start position of video segment 1, the end position of video segment 1, ..., the start position of video segment 8, and the end position of video segment 8. Wherein:
The start code indicates the beginning of the file information segment and is an identifier of the file information segment, from which it can be determined that the file is a merged file; it can be represented by 4 bytes.
The length of the file information segment refers to the length of bytes occupied by the file information segment, and can be represented by 4 bytes.
The version number represents the current state of the file information segment; if any component of the file information segment is changed, the version number changes correspondingly, for example by being incremented.
The total number of video clips, i.e. the total number of video clips in the file information section.
The start position and the end position are the start and the end of the corresponding video segment, respectively.
It should be noted that the content shown in fig. 2 is only one example of the specific content of the file information segment; the embodiment of the present invention does not limit the position of each piece of information shown in fig. 2, and besides the above information the file information segment may also include a file name, the encoding format of each video clip, and the like.
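The byte layout of fig. 2 is not fully specified in the text (only the start code and the length field are said to occupy 4 bytes), so the sketch below assumes 4-byte little-endian integers for every field and an arbitrary start-code value; it is only one possible serialization of the file information segment.

```python
import struct

FILE_INFO_START_CODE = b"MVSF"   # hypothetical 4-byte identifier of a merged file

def build_file_info_segment(version, clip_positions):
    """clip_positions: list of (start, end) pairs, one per video clip, in merge order.
    Assumed layout: start code | total length | version | clip count | (start, end)*"""
    body = struct.pack("<II", version, len(clip_positions))
    for start, end in clip_positions:
        body += struct.pack("<II", start, end)
    total_length = 4 + 4 + len(body)             # start code + length field + body
    return FILE_INFO_START_CODE + struct.pack("<I", total_length) + body

segment = build_file_info_segment(version=1,
                                  clip_positions=[(0, 100), (100, 220), (220, 300)])
```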
S104: merging the video clips into a merged data segment according to the initial positions corresponding to the video clips in the file information segment;
It is understood that the obtained file information segment includes the start position of each video segment, so the video segments only need to be joined at the start positions determined in the file information segment to be combined into one merged data segment. It should be emphasized that, in the process of merging the video segments according to their corresponding start positions, the video segments remain independent of each other and are simply placed end to end, and each can be decoded independently; therefore, the encoding formats of the video segments to be merged may be the same or different, which is not limited in the embodiment of the present invention.
For example, taking the video segments 1 to 8 as the surveillance videos 1 to 8 as an example, it is understood that the start positions corresponding to the surveillance videos 1 to 8 determined in the file information segment are 0, 100min, 220min, 300min, 400min, 490min, 550min, and 630min, respectively, and then the surveillance videos 1 to 8 may be merged at the corresponding start positions, as shown in fig. 3, which is a schematic diagram of the specific contents of the merged data segment in the embodiment of the present invention.
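The following sketch illustrates S104 in the simplest possible way: the clips' raw bytes are concatenated end to end, which keeps each clip independently decodable. It records byte offsets inside the merged data segment, whereas the example in the text expresses positions as playing times; either convention is an assumption of the sketch rather than a requirement of the patent.

```python
# Minimal sketch: merge independent clips into one merged data segment by
# concatenating their bytes end to end; clips remain independently decodable,
# so their encoding formats may differ.
def build_merged_data_segment(clip_paths):
    merged = bytearray()
    offsets = []                    # byte offset of each clip inside the merged data
    for path in clip_paths:
        with open(path, "rb") as f:
            data = f.read()
        offsets.append(len(merged))
        merged += data
    return bytes(merged), offsets
```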
S105: and combining the file information segment and the merged data segment and storing the combined file information segment and the merged data segment into a merged file.
After the merged data segment is obtained, the merged data segment is merged with the generated file information segment and stored in a file. In general, the file information segment is located at the front end of the merged file, that is, the file information segment is located at the front end of the merged data segment, but the sequence of the file information segment and the merged data segment is not explicitly limited in the embodiment of the present invention.
In this way, the position of each video segment in the merged file can be determined from the file information segment, and a decoding playing channel can be provided for each video segment, so that multiple video segments can be opened simultaneously without having to open them one by one.
It should be noted that, the above embodiment only takes the video segments to be merged as the surveillance videos 1 to 8 as an example for description, the embodiment of the present invention does not limit the number of the video segments to be merged, and since the video segments are completely equivalent in form, the embodiment of the present invention also does not limit the encoding format of each video segment.
In summary, in the scheme provided in the embodiment of the present invention, a first preset number of video segments to be merged and the attribute information of each video segment are obtained, and a file information segment for the first preset number of video segments is generated according to the attribute information, the file information segment at least including a start position corresponding to each video segment; the video segments are merged into a merged data segment according to the start positions corresponding to the video segments; and the file information segment and the merged data segment are combined and stored in one merged file. Therefore, by applying the scheme provided by the embodiment of fig. 1 to merge multiple video segments, the multiple video segments can be merged and stored in one file, so that only that one file needs to be opened rather than opening the video segments one by one, which increases the speed of opening multiple video segments and further reduces the difficulty of managing them.
Fig. 4 is another schematic flow chart of a multi-video segment merging method according to an embodiment of the present invention, which includes steps S101 to S105 in the embodiment of fig. 1, and after step S104 and before step S105, the method further includes the steps of:
s106: generating index data corresponding to each video clip in the merged data segment;
in order to further manage the multiple video segments and conveniently and quickly search and locate the corresponding video segments, in an implementation manner provided in an embodiment of the present invention, the generating index data corresponding to each video segment in the merged data segment may include:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises: at least one of frame number, timestamp information and playing time information;
and generating index data aiming at the video clips according to the characteristic information corresponding to each video clip.
Optionally, for each video segment in the merged data segment, the index data for the video segment may be generated according to the feature information of all key frames included in the video segment.
As will be understood by those skilled in the art, a video segment contains a plurality of key frames, generally referred to as I frames. An I frame is an independent frame that can be encoded without reference to any other video frame, so I frames can be used as key frames to mark the content information of a video segment. Each I frame has corresponding feature information, which may include the frame number, timestamp information, playing time information, and so on; of course, the feature information need not include all of these items and may be any combination of them or all of them, and the above is not an exhaustive list of what the feature information may contain.
For each video clip, the feature information of the key frames it contains can be quickly searched, retrieved or obtained directly. For example, surveillance video 1 contains 200 key frames (I frames): the 1st I frame has frame number I0, timestamp 2016-10-10-10:10:10 and playing time 00:00; the 2nd I frame has frame number I16, timestamp 2016-10-10-10:10:11 and playing time 00:01; ...; the 100th I frame has frame number I1600, timestamp 2016-10-10-11:00:09 and playing time 49:59; ...; and the 200th I frame has frame number I3200, timestamp 2016-10-10-11:50:09 and playing time 99:59. It can be seen that the feature information of these 200 I frames makes it possible to locate any time period of surveillance video 1 quickly, so rapid positioning can be achieved and the difficulty of locating video segments is greatly reduced. Since the other surveillance videos have similar structures, they are not listed one by one.
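As a sketch of S106, the snippet below builds index data for one clip from the feature information of its key frames; the three fields follow the text (frame number, timestamp, playing time), while how the I frames are actually extracted from a real container is outside the scope of this illustration.

```python
from dataclasses import dataclass

@dataclass
class KeyFrameInfo:
    frame_number: str    # e.g. "I1600"
    timestamp: str       # e.g. "2016-10-10-11:00:09"
    playing_time: str    # e.g. "49:59"

def build_index_data(key_frames):
    """Index data for one clip: one (frame number, timestamp, playing time) entry per key frame."""
    return [(kf.frame_number, kf.timestamp, kf.playing_time) for kf in key_frames]

index_clip1 = build_index_data([
    KeyFrameInfo("I0",    "2016-10-10-10:10:10", "00:00"),
    KeyFrameInfo("I16",   "2016-10-10-10:10:11", "00:01"),
    KeyFrameInfo("I1600", "2016-10-10-11:00:09", "49:59"),
])
```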
S107: generating an index data segment aiming at the merged data segment according to the index data corresponding to each video segment;
As can be seen from the above, after each video clip in the merged data segment is analyzed, the index data corresponding to each video clip can be obtained; in order to manage and search this index information, the index data may be arranged in a certain order to generate an index data segment for the merged data segment.
In a specific implementation manner provided by the embodiment of the present invention, the generating an index data segment for the merged data segment according to the index data corresponding to each video segment may include:
according to the arrangement sequence of the video clips in the merged data segment, arranging the index data corresponding to the video clips to obtain an index data sequence;
and generating an index starting code, adding the index starting code in front of the index data sequence, and generating an index data segment aiming at the merged data segment.
In order to correspond to the sequence of each video segment in the merged data segment, the index data corresponding to each video segment may be arranged together according to the arrangement sequence of each video segment in the merged data segment to form an index data sequence; in order to conveniently search the index data sequence and further implement an accurate positioning function, the scheme provided by the embodiment of the present invention may generate an index start code for the index data sequence, and merge the generated index start code at the front end of the index data sequence to generate an index data segment for the merged data segment. As shown in fig. 5, which is a schematic diagram of specific contents of an index data segment in an embodiment of the present invention, the index data segment shown in fig. 5 may include: index start code and index data sequence, wherein the index data sequence includes index data corresponding to video segments 1-8, and the index data of each video segment may include: the length of the index data and the index data are specifically as follows:
an index start code, which indicates the start of the index information segment, by which the position of the index data segment can be identified.
The index data length refers to the byte occupied by the corresponding index data.
Index data, i.e., the index data itself.
Of course, the content shown in fig. 5 is only one example of the specific contents of the index data segment; besides the above information, the index data segment may further include a file name, the length of the index data segment, and the like.
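To picture the structure around fig. 5, the sketch below prefixes the per-clip index data (kept in merge order, each entry preceded by its byte length) with an index start code. The start-code value, the 4-byte length field and the use of JSON to give the index data a byte form are all assumptions of this sketch.

```python
import json
import struct

INDEX_START_CODE = b"MVSI"   # hypothetical 4-byte marker of the index data segment

def build_index_data_segment(per_clip_index_data):
    """per_clip_index_data: list of index data, one per clip, already in merge order."""
    sequence = b""
    for index_data in per_clip_index_data:
        payload = json.dumps(index_data).encode("utf-8")
        sequence += struct.pack("<I", len(payload)) + payload   # index data length + index data
    return INDEX_START_CODE + sequence
```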
It should be noted that, after the generating the corresponding index data segment for the merged data segment, the combining the file information segment and the merged data segment and storing the combined file information segment and the merged data segment into one merged file may include:
and combining the file information segment, the merged data segment and the index data segment and storing the combined file information segment, the merged data segment and the index data segment into a merged file.
In summary, by applying the scheme provided in the embodiment of fig. 4, through parsing each video clip, feature information of a key frame included in each video clip can be obtained, index data for the video clip is generated according to the feature information, the index data is sorted to obtain an index data segment, the index data segment is merged with the merged data segment and the file information segment, and is stored in one merged file, and through the index data segment, the video clip can be accurately located, management work on multiple video files is facilitated, and user experience is improved.
Based on the embodiment provided in fig. 4, in another implementation manner provided in the embodiment of the present invention, after the merged file is stored, the method may further include the steps of opening the merged file and playing the video clip, specifically including:
step A: receiving an opening instruction aiming at the merged file;
and B: reading the merged file, and determining the total number of the video clips and the initial positions corresponding to the video clips according to the file information segment;
It can be understood that, after the above merging of multiple video segments, a merged file is obtained. The merged file can be stored in a local storage space for later management and will inevitably need to be opened again; when it is opened, a device or a client receives an open instruction for the merged file. For example, the user issues an instruction to open the merged file by clicking a mouse, sliding a screen, or the like, and the merged file starts being read after the open instruction is received. As can be seen from the above, the merged file includes three parts: the file information segment, the merged data segment and the index data segment. The total number of video segments can be determined from the file information segment, as can the start position corresponding to each video segment.
Referring to the file information segment shown in fig. 2, it may be determined that the file is a merged file according to the start code of the file information segment, and it is determined that the total number of video segments included in the merged file is 8 according to the total number of video segments in the file information segment, and it may at least determine the start position corresponding to each video segment according to the start position and the end position of the video segment, where the start positions corresponding to the determined surveillance videos 1-8 are 0, 100min, 220min, 300min, 400min, 490min, 550min, and 630min, respectively, and these start positions are positions where the corresponding video segments start to be played, so that the corresponding video segments can be found according to these start positions.
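A sketch of how steps A and B might read the file information segment back when the merged file is opened; it mirrors the hypothetical 4-byte little-endian layout assumed in the writer sketch above and is not the only possible format.

```python
import struct

def parse_file_info_segment(data):
    """Assumed layout: start code | total length | version | clip count | (start, end)*"""
    if data[:4] != b"MVSF":
        raise ValueError("not a merged file")
    total_length, version, total = struct.unpack_from("<III", data, 4)
    positions, offset = [], 16
    for _ in range(total):
        start, end = struct.unpack_from("<II", data, offset)
        positions.append((start, end))
        offset += 8
    return version, total, positions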
And C: establishing decoding playing channels with the same number as the total number, wherein each video clip corresponds to one decoding playing channel;
Because the video segments in the merged data segment are independent of each other, each can be decoded and played independently. On this basis, one decoding playing channel can be established for each video segment, i.e. as many decoding playing channels as the total number of segments. For example, taking the case where the merged data segment contains 8 video segments, i.e. surveillance videos 1-8, decoding playing channels 1-8 can be established, one for each surveillance video, 8 decoding playing channels in total, and decoding playing channels 1-8 are used to decode and play surveillance videos 1-8 respectively.
Step D: and leading the video clips into corresponding decoding playing channels for decoding playing according to the initial positions corresponding to the video clips.
After a decoding playing channel has been established for each video clip, each video clip can be decoded and played through its corresponding decoding playing channel and shown in a corresponding display window, where the user can view the specific content of the video clip. It is understood that the display may contain a plurality of display windows, each display window corresponding to one video segment to be played, i.e. connected to one decoding playing channel.
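Steps C and D can be pictured with the sketch below, which creates one decoding playing channel per clip and feeds each clip's data into its channel; the DecodingChannel class is a purely illustrative stand-in for a real decoder and display window, and the byte ranges reuse the offsets from the earlier merging sketch.

```python
# Minimal sketch: one decoding playing channel per clip, all clips opened together.
class DecodingChannel:
    def __init__(self, channel_id):
        self.channel_id = channel_id

    def play(self, clip_bytes):
        # Stand-in for decoding and rendering into a display window.
        print(f"channel {self.channel_id}: decoding {len(clip_bytes)} bytes")

def open_merged_clips(merged_data, clip_byte_ranges):
    """clip_byte_ranges: (start, end) byte offsets of each clip in the merged data segment."""
    channels = [DecodingChannel(i) for i in range(len(clip_byte_ranges))]   # step C
    for channel, (start, end) in zip(channels, clip_byte_ranges):           # step D
        channel.play(merged_data[start:end])
    return channels
```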
In summary, with the scheme provided by the embodiment of the present invention, a corresponding decoding playing channel is established for each video segment, and then the video segments are guided into the corresponding decoding playing channel for decoding playing, where the video segments are independent from each other, and the multiple video segments can be decoded and played at the same time.
In another implementation manner provided by the embodiment of the present invention, on the basis of the previous embodiment, the method further includes:
step E: in the process of decoding and playing the video clips, receiving play positioning information which is input by a user and aims at one or more video clips in the merged file, wherein the play positioning information comprises: at least one of frame number, timestamp information and playing time information;
In a video surveillance system, the merged surveillance videos usually have a certain correlation in time or space; for example, surveillance videos 1-8 captured by cameras 1-8 located on "Changning Avenue" are correlated in space. The surveillance videos are independent of each other, and the decoding and playing progress of each surveillance video may not be aligned, so if an offender is spotted in one surveillance video while the videos are being watched, the other related surveillance videos cannot be quickly and reliably positioned to the same time period for playing. For example, during a certain time period of 2016-10-10 (e.g. 11:00:00-11:20:00), a troublemaker passing through the road section appears in the picture of surveillance video 1 at playing time 49:59, while the other surveillance videos may be playing other time periods, making it impossible to follow the troublemaker's whereabouts on the "Changning Avenue" section in all videos at the same time.
In order to quickly and accurately locate where the troublemaker appears in surveillance videos 1-8, if the troublemaker appears in one surveillance video while the surveillance videos are being viewed, the user may manually input, for one or more of surveillance videos 1-8, a timestamp (e.g. 2016-10-10-11:00:09), or a frame number corresponding to the key frame (e.g. I1600), or a playing time (e.g. 49:59), or several of these at the same time; the information input by the user is the play positioning information referred to above.
Step F: obtaining an index data segment aiming at the merged data segment according to the merged file;
after obtaining the above playing positioning information, it is necessary to search in the index data segment to find the index data corresponding to the playing positioning information, and in order to quickly find the position of the index data segment, obtaining the index data segment for the merged data segment according to the merged file may include:
and detecting whether an index start code exists in the merged file, and if so, determining data with a preset length behind the index start code as an index data segment.
It is to be understood that, the index data segment for each video segment in the merged data segment may have been previously generated in the merged file, and in order to facilitate searching for the index data segment, a position of the index data segment in the merged file may be generally marked with an identification information, where the identification information is an index start code, so that, in order to obtain the index data segment quickly, it may be detected whether an index start code exists in the merged file, and if so, data with a preset length following the index start code may be determined as the index data segment.
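A minimal sketch of step F under the same assumptions as before: scan the merged file for the index start code and treat the data of a preset length that follows it as the index data segment.

```python
def extract_index_data_segment(merged_file_bytes, preset_length, start_code=b"MVSI"):
    """Return the index data segment, or None if no index start code is present."""
    pos = merged_file_bytes.find(start_code)
    if pos < 0:
        return None
    begin = pos + len(start_code)
    return merged_file_bytes[begin:begin + preset_length]
```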
Step G: determining a first key frame of a video clip corresponding to the playing positioning information according to the index data segment and the playing positioning information;
step H: and skipping the video clip corresponding to the playing positioning information to the corresponding first key frame position for playing.
Through the steps, the index data segment can be obtained, wherein the index data segment comprises the index data of each video clip, and each index data comprises the characteristic information of each key frame in the corresponding video clip. Therefore, after receiving the play positioning information input by the user, according to the positioning information, the index data of the video segment corresponding to the positioning information may be determined first, and a key frame matching the positioning information is found from the determined index data, which is called as a first key frame.
The whole play positioning process will now be described by taking as an example play positioning information input by the user for surveillance videos 1-8, namely the timestamp information 2016-10-10-11:00:09.
When play positioning information input by the user for surveillance videos 1-8 is received, the index data segment is first determined from the merged file according to the index start code; then, according to the play positioning information input by the user, the index data of the surveillance videos corresponding to that information are determined from the index data segment. Since the play positioning information input by the user is for surveillance videos 1-8, the index data corresponding to surveillance videos 1-8 can be determined. The play positioning information, i.e. the timestamp 2016-10-10-11:00:09, is then used as a search keyword and searched for in each of the determined index data, so the corresponding key frame (first key frame) in each of surveillance videos 1-8 can be found; finally, surveillance videos 1-8 all jump to their key frame positions and play from there. Synchronized playback is thus achieved, which greatly facilitates tracking the troublemaker's whereabouts on the "Changning Avenue" section, allows effective evidence to be obtained quickly, and improves search efficiency.
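The positioning in steps G and H can be sketched as follows: for each targeted clip, its index data is searched for the key frame matching the play positioning information (here an exact timestamp match; a real player might instead take the nearest earlier key frame), and playback jumps there. The data shapes reuse the earlier illustrative sketches.

```python
def locate_first_key_frames(per_clip_index_data, target_timestamp):
    """per_clip_index_data: {clip_id: [(frame_number, timestamp, playing_time), ...]}."""
    hits = {}
    for clip_id, index_data in per_clip_index_data.items():
        for frame_number, timestamp, playing_time in index_data:
            if timestamp == target_timestamp:
                hits[clip_id] = frame_number        # first key frame for this clip
                break
    return hits   # e.g. {"surveillance_video_1": "I1600", ...}

def seek_all(hits):
    for clip_id, frame_number in hits.items():
        print(f"{clip_id}: jump to key frame {frame_number} and resume playing")
```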
It should be noted that the play positioning information input by the user may be for one or more video clips, the embodiment for the surveillance videos 1 to 8 is only a specific example, and the play positioning information may also be specifically: the frame number or the playing time information, and therefore, the embodiment of the present invention does not limit the specific form of the positioning information and the number of the video segments.
Therefore, the scheme provided by the embodiment of the invention can achieve the purpose of synchronously playing the plurality of video clips, and can accurately and quickly locate the key position through playing the locating information, thereby greatly facilitating the searching work of the user on the target, quickly and effectively acquiring the evidence and improving the searching efficiency.
Fig. 6 is a schematic structural diagram of a multi-video segment merging apparatus according to an embodiment of the present invention, including: a first obtaining module 201, a second obtaining module 202, a first generating module 203, a merging module 204 and a combining module 205.
A first obtaining module 201, configured to obtain a first preset number of video segments to be merged;
a second obtaining module 202, configured to obtain attribute information of each video segment, where the attribute information at least includes a playing duration of the video segment;
a first generating module 203, configured to generate, according to the attribute information, file information segments for the first preset number of video segments, where the file information segments at least include start positions corresponding to the video segments;
a merging module 204, configured to merge the video segments into a merged data segment according to the starting positions corresponding to the video segments in the file information segment;
and the combining module 205 is configured to combine the file information segment and the merged data segment, and store the combined file information segment and the merged data segment in a merged file.
By applying the scheme provided by the embodiment of fig. 6 to merge multiple video clips, multiple video clips can be merged and stored in one file, so that when the multiple video clips are opened, only one file needs to be opened, the speed of opening the multiple video clips is increased, and the difficulty in managing the multiple video clips is further reduced.
Fig. 7 is a schematic structural diagram of another multi-video segment merging apparatus according to an embodiment of the present invention. In addition to the first obtaining module 201, the second obtaining module 202, the first generating module 203, the merging module 204 and the combining module 205 provided in the embodiment of fig. 6, the apparatus further includes: a second generating module 206 and a third generating module 207.
A second generating module 206, configured to generate index data corresponding to each video segment in the merged data segment;
a third generating module 207, configured to generate an index data segment for the merged data segment according to the index data corresponding to each video segment;
in the case that the embodiment of the present invention includes the second generating module 206 and the third generating module 207, the combining module 205 is specifically configured to:
and combining the file information segment, the merged data segment and the index data segment and storing the combined file information segment, the merged data segment and the index data segment into a merged file.
Optionally, the second generating module 206 is specifically configured to:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises: at least one of frame number, timestamp information and playing time information;
and generating index data aiming at the video clips according to the characteristic information corresponding to each video clip.
Optionally, the third generating module is specifically configured to:
according to the arrangement sequence of the video clips in the merged data segment, arranging the index data corresponding to the video clips to obtain an index data sequence;
and generating an index starting code, adding the index starting code in front of the index data sequence, and generating an index data segment aiming at the merged data segment.
By applying the scheme provided by the embodiment of fig. 7, through analyzing each video clip, the feature information of the key frame included in each video clip can be obtained, the index data for the video clip is generated according to the feature information, the index data is sequenced to obtain the index data segment, the index data segment is merged with the merged data segment and the file information segment, and is stored in a merged file, through the index data segment, the video clip can be accurately positioned, the management work of multiple video files is facilitated, and the user experience is improved.
Based on the embodiment provided in fig. 7, in a specific implementation manner provided in the embodiment of the present invention, the apparatus may further include: a first receiving module, a reading module, a building module and a decoding module (not shown in the figure), specifically:
the first receiving module is used for receiving an opening instruction aiming at the merged file;
a reading module, configured to read the merged file, and determine, according to the file information segment, a total number of video segments and an initial position corresponding to each video segment, where the total number is equal to the first preset number;
the establishing module is used for establishing decoding playing channels with the same number as the total number, and each video clip corresponds to one decoding playing channel;
and the decoding module is used for importing the video segments into the corresponding decoding playing channels for decoding and playing according to the start positions corresponding to the video segments.
In summary, with the scheme provided by the embodiment of the present invention, a corresponding decoding playing channel is established for each video segment, and then the video segments are guided into the corresponding decoding playing channel for decoding playing, where the video segments are independent from each other, and the multiple video segments can be decoded and played at the same time.
In another implementation manner provided in the embodiment of the present invention, on the basis of the previous embodiment, the apparatus further includes: a second receiving module, a third obtaining module, a determining module and a playing module (not shown in the figure), specifically:
a second receiving module, configured to receive, during decoding and playing of the video segments, play positioning information input by a user for one or more video segments in the merged file, where the play positioning information includes at least one of a frame number, timestamp information and playing time information;
a third obtaining module, configured to obtain, according to the merged file, an index data segment for the merged data segment;
the determining module is configured to determine, according to the index data segment and the play positioning information, a first key frame of the video segment corresponding to the play positioning information;
and the playing module is configured to jump the video segment corresponding to the play positioning information to the position of the corresponding first key frame and play from there.
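The two steps above can be sketched as follows: locate the index data segment by scanning for the index start code, then pick the first key frame that satisfies the play positioning information. The start-code value, the preset index length and the record layout are the same illustrative assumptions used in the earlier sketches.

```python
INDEX_START_CODE = b"\x00\x00\x01\xB9"  # same assumed 4-byte marker as in the earlier sketch

def locate_index_data_segment(merged_bytes, index_length):
    """Return the index data of a preset length that follows the index start code, or None."""
    pos = merged_bytes.find(INDEX_START_CODE)
    if pos < 0:
        return None
    start = pos + len(INDEX_START_CODE)
    return merged_bytes[start:start + index_length]

def first_key_frame_at_or_after(index_records, clip_index, target_timestamp_ms):
    """Pick the first key frame of a clip that matches the play positioning information.

    `index_records` is a list of (clip_index, frame_number, timestamp_ms,
    play_time_ms) tuples decoded from the index data segment; the player can
    then seek the clip's channel to the returned key frame and resume playback.
    """
    candidates = [record for record in index_records
                  if record[0] == clip_index and record[2] >= target_timestamp_ms]
    return min(candidates, key=lambda record: record[2]) if candidates else None
```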
Therefore, the scheme provided by the embodiment of the present invention makes it possible to play multiple video clips synchronously and to locate key positions accurately and quickly through the play positioning information, which greatly facilitates the user's search for a target, allows evidence to be obtained quickly and effectively, and improves search efficiency.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Those skilled in the art will appreciate that all or part of the steps in the above method embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (12)
1. A method for merging multiple video segments, the method comprising:
obtaining a first preset number of video clips to be merged;
obtaining attribute information of each video clip, wherein the attribute information at least comprises the playing duration of the video clip;
generating a file information segment for the first preset number of video clips according to the attribute information, wherein the file information segment at least comprises initial positions corresponding to the video clips;
merging the video clips into a merged data segment according to the initial positions corresponding to the video clips in the file information segment, wherein the video clips in the merged data segment are independent of each other and each video clip in the merged data segment can be independently decoded;
combining the file information segment and the merged data segment and storing the combination into a merged file;
the method further comprises the following steps:
receiving an opening instruction for the merged file;
reading the merged file, and determining the total number of the video clips and the initial positions corresponding to the video clips according to the file information segment;
establishing decoding playing channels equal in number to the total number, wherein each video clip corresponds to one decoding playing channel;
and importing the video clips into the corresponding decoding playing channels for decoding and playing according to the initial positions corresponding to the video clips.
2. The method of claim 1, wherein prior to said step of storing said combined file information segment and said merged data segment in a merged file, said method further comprises:
generating index data corresponding to each video clip in the merged data segment;
generating an index data segment for the merged data segment according to the index data corresponding to each video segment;
the combining the file information segment and the merged data segment and storing the combined file information segment and the merged data segment into a merged file includes:
and combining the file information segment, the merged data segment and the index data segment and storing the combination into a merged file.
3. The method of claim 2, wherein the step of generating index data corresponding to each video clip in the merged data segment comprises:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises at least one of a frame number, timestamp information and playing time information;
and generating index data for each video clip according to the feature information corresponding to that video clip.
4. The method according to claim 3, wherein the step of generating the index data segment for the merged data segment according to the index data corresponding to each video segment comprises:
arranging the index data corresponding to each video clip according to the order of the video clips in the merged data segment, to obtain an index data sequence;
and generating an index start code, adding the index start code in front of the index data sequence, and thereby generating an index data segment for the merged data segment.
5. The method of claim 1, further comprising:
in the process of decoding and playing the video clips, receiving play positioning information input by a user for one or more video clips in the merged file, wherein the play positioning information comprises at least one of a frame number, timestamp information and playing time information;
obtaining an index data segment for the merged data segment according to the merged file;
determining, according to the index data segment and the play positioning information, a first key frame of the video clip corresponding to the play positioning information;
and jumping the video clip corresponding to the play positioning information to the position of the corresponding first key frame for playing.
6. The method of claim 5, wherein the step of obtaining the index data segment for the merged data segment from the merged file comprises:
and detecting whether an index start code exists in the merged file, and if so, determining data of a preset length following the index start code as the index data segment.
7. A multi-video segment merging apparatus, the apparatus comprising:
a first obtaining module, configured to obtain a first preset number of video clips to be merged;
a second obtaining module, configured to obtain attribute information of each video segment, where the attribute information at least includes a playing duration of the video segment;
a first generating module, configured to generate, according to the attribute information, a file information segment for the first preset number of video segments, where the file information segment at least includes initial positions corresponding to the video segments;
a merging module, configured to merge the video clips into a merged data segment according to the initial positions corresponding to the video clips in the file information segment, where the video clips in the merged data segment are independent of each other and each video clip in the merged data segment can be independently decoded;
a combination module, configured to combine the file information segment and the merged data segment and store the combination into a merged file;
the device further comprises:
a first receiving module, configured to receive an opening instruction for the merged file;
a reading module, configured to read the merged file and determine, according to the file information segment, the total number of the video clips and the initial positions corresponding to the video clips;
an establishing module, configured to establish decoding playing channels equal in number to the total number, where each video clip corresponds to one decoding playing channel;
and a decoding module, configured to import each video clip into its corresponding decoding playing channel for decoding and playing according to the initial position corresponding to that video clip.
8. The apparatus of claim 7, further comprising:
a second generating module, configured to generate index data corresponding to each video clip in the merged data segment;
a third generating module, configured to generate an index data segment for the merged data segment according to index data corresponding to each video segment;
the combination module is specifically configured to:
combine the file information segment, the merged data segment and the index data segment, and store the combination into a merged file.
9. The apparatus of claim 8, wherein the second generating module is specifically configured to:
acquiring feature information of key frames contained in each video clip, wherein the feature information comprises at least one of a frame number, timestamp information and playing time information;
and generating index data for each video clip according to the feature information corresponding to that video clip.
10. The apparatus of claim 9, wherein the third generating module is specifically configured to:
arranging the index data corresponding to each video clip according to the order of the video clips in the merged data segment, to obtain an index data sequence;
and generating an index start code, adding the index start code in front of the index data sequence, and thereby generating an index data segment for the merged data segment.
11. The apparatus of claim 7, further comprising:
a second receiving module, configured to receive, during decoding and playing of the video segments, play positioning information input by a user for one or more video segments in the merged file, where the play positioning information includes at least one of a frame number, timestamp information and playing time information;
a third obtaining module, configured to obtain, according to the merged file, an index data segment for the merged data segment;
a determining module, configured to determine, according to the index data segment and the play positioning information, a first key frame of the video segment corresponding to the play positioning information;
and a playing module, configured to jump the video segment corresponding to the play positioning information to the position of the corresponding first key frame for playing.
12. The apparatus according to claim 11, wherein the third obtaining module is specifically configured to:
and detecting whether an index start code exists in the merged file, and if so, determining data of a preset length following the index start code as the index data segment.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710166502.5A CN108632541B (en) | 2017-03-20 | 2017-03-20 | Multi-video-clip merging method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710166502.5A CN108632541B (en) | 2017-03-20 | 2017-03-20 | Multi-video-clip merging method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN108632541A CN108632541A (en) | 2018-10-09 |
| CN108632541B true CN108632541B (en) | 2021-07-20 |
Family
ID=63687158
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710166502.5A Active CN108632541B (en) | Multi-video-clip merging method and device | 2017-03-20 | 2017-03-20 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108632541B (en) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109933569B (en) * | 2019-01-28 | 2024-03-19 | 平安科技(深圳)有限公司 | File merging method, file opening method and related equipment |
| CN109922356B (en) * | 2019-03-01 | 2021-07-09 | 广州酷狗计算机科技有限公司 | Video recommendation method and device and computer-readable storage medium |
| CN109905749B (en) * | 2019-04-11 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Video playing method and device, storage medium and electronic device |
| CN111405358A (en) * | 2020-03-24 | 2020-07-10 | 上海依图网络科技有限公司 | Cache-based video frame extraction method, apparatus, medium, and system |
| CN111711861B (en) * | 2020-05-15 | 2022-04-12 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
| CN111654749B (en) * | 2020-06-24 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video data production method and device, electronic equipment and computer readable medium |
| CN111966845B (en) * | 2020-08-31 | 2023-11-17 | 重庆紫光华山智安科技有限公司 | Picture management method, device, storage node and storage medium |
| US11570496B2 (en) | 2020-12-03 | 2023-01-31 | Hulu, LLC | Concurrent downloading of video |
| CN114679621B (en) * | 2021-05-07 | 2024-07-09 | 腾讯云计算(北京)有限责任公司 | Video display method and device and terminal equipment |
| CN116233355A (en) * | 2021-12-03 | 2023-06-06 | 北京疯景科技有限公司 | Method, system and background server for acquiring monitoring video by multi-machine-position combination |
| CN117714814B (en) * | 2023-12-16 | 2024-05-17 | 浙江鼎世科技有限公司 | Video storage access system based on intelligent strategy |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101539845A (en) * | 2009-04-24 | 2009-09-23 | 无锡天脉聚源传媒科技有限公司 | Software video wall method for rich media interactive display |
| CN102413358A (en) * | 2011-08-12 | 2012-04-11 | 青岛海信传媒网络技术有限公司 | Storage and playing method, device and system of streaming media file |
| CN105141973A (en) * | 2015-09-01 | 2015-12-09 | 北京暴风科技股份有限公司 | Multi-segment media file mosaicing method and system |
| CN105519095A (en) * | 2014-12-14 | 2016-04-20 | 深圳市大疆创新科技有限公司 | Video processing processing method, apparatus and playing device |
| CN106162022A (en) * | 2015-04-08 | 2016-11-23 | 深圳市尼得科技有限公司 | Method, system and the mobile terminal of a kind of quick broadcasting video |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7127736B2 (en) * | 2000-11-17 | 2006-10-24 | Sony Corporation | Content processing apparatus and content processing method for digest information based on input of a content user |
| CN103165156B (en) * | 2011-12-08 | 2016-03-02 | 北京同步科技有限公司 | Audio video synchronization Play System and video broadcasting method, CD |
| CN105323652B (en) * | 2014-07-31 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Method and device for playing multimedia file |
| CN104284216B (en) * | 2014-10-23 | 2018-07-13 | Tcl集团股份有限公司 | A kind of method and its system generating video essence editing |
| CN104394380A (en) * | 2014-12-09 | 2015-03-04 | 浙江省公众信息产业有限公司 | Video monitoring management system and playback method of video monitoring record |
| CN104869477A (en) * | 2015-05-14 | 2015-08-26 | 无锡天脉聚源传媒科技有限公司 | Method and device for segmented playing of video |
| CN105302883B (en) * | 2015-10-13 | 2018-12-21 | 深圳市乐唯科技开发有限公司 | A kind of management method and system of time-based media file |
| CN105681683A (en) * | 2016-02-24 | 2016-06-15 | 北京金山安全软件有限公司 | Video and picture mixed playing method and device |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101539845A (en) * | 2009-04-24 | 2009-09-23 | 无锡天脉聚源传媒科技有限公司 | Software video wall method for rich media interactive display |
| CN102413358A (en) * | 2011-08-12 | 2012-04-11 | 青岛海信传媒网络技术有限公司 | Storage and playing method, device and system of streaming media file |
| CN105519095A (en) * | 2014-12-14 | 2016-04-20 | 深圳市大疆创新科技有限公司 | Video processing processing method, apparatus and playing device |
| CN106162022A (en) * | 2015-04-08 | 2016-11-23 | 深圳市尼得科技有限公司 | Method, system and the mobile terminal of a kind of quick broadcasting video |
| CN105141973A (en) * | 2015-09-01 | 2015-12-09 | 北京暴风科技股份有限公司 | Multi-segment media file mosaicing method and system |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108632541A (en) | 2018-10-09 |
Similar Documents
| Publication | Title |
|---|---|
| CN108632541B (en) | Multi-video-clip merging method and device |
| CN110119711A (en) | A kind of method, apparatus and electronic equipment obtaining video data personage segment |
| CN107483879B (en) | Video marking method and device and video monitoring method and system |
| CN106411927B (en) | A kind of monitoring video recording method and device |
| US8879788B2 (en) | Video processing apparatus, method and system |
| CN109189987A (en) | Video searching method and device |
| CN110019880A (en) | Video clipping method and device |
| US9754630B2 (en) | System to distinguish between visually identical objects |
| US11037604B2 (en) | Method for video investigation |
| CN101616264A (en) | News Video Cataloging Method and System |
| KR101887400B1 (en) | Method for providing c0ntents editing service using synchronization in media production enviroment |
| US20120059914A1 (en) | Systems and methods for determining attributes of media items accessed via a personal media broadcaster |
| WO2017015112A1 (en) | Media production system with location-based feature |
| JP2006155384A (en) | Video comment input / display method, apparatus, program, and storage medium storing program |
| BR112016006860B1 (en) | APPARATUS AND METHOD FOR CREATING A SINGLE DATA FLOW OF COMBINED INFORMATION FOR RENDERING ON A CUSTOMER COMPUTING DEVICE |
| CN103780973A (en) | Video label adding method and video label adding device |
| CN104335594A (en) | Automatic digital curation and tagging of action videos |
| CN102595206B (en) | Data synchronization method and device based on sport event video |
| US20150110461A1 (en) | Dynamic media recording |
| US10448063B2 (en) | System and method for perspective switching during video access |
| KR102561308B1 (en) | Method and apparatus of providing traffic information, and computer program for executing the method. |
| KR20160005552A (en) | Imaging apparatus providing video summary and method for providing video summary thereof |
| CN110876090B (en) | Video abstract playback method and device, electronic equipment and readable storage medium |
| CN112383751A (en) | Monitoring video data processing method and device, terminal equipment and storage medium |
| CN106899829A (en) | A kind of method for processing video frequency and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |