
CN116528020A - Method and device for generating marked video and method and device for detecting video marks - Google Patents


Info

Publication number
CN116528020A
Authority
CN
China
Prior art keywords
subtitle
offset
video
detected
identifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210080149.XA
Other languages
Chinese (zh)
Inventor
刘绍腾
杨天舒
常勤伟
黄磊超
刘华罗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210080149.XA priority Critical patent/CN116528020A/en
Publication of CN116528020A publication Critical patent/CN116528020A/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8358 Generation of protective data, e.g. certificates involving watermark
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to a method, an apparatus, a computer device, and a storage medium for generating a marked video. The method comprises the following steps: acquiring an object identifier, and determining the original subtitle file of the target video requested under the object identifier; mapping the object identifier into an identifier sequence, and determining the subtitle offset type corresponding to each identifier in the identifier sequence; determining the identifier corresponding to each subtitle to be offset in the original subtitle file; acquiring offset subtitles obtained by applying, to each subtitle to be offset, the subtitle offset of the type to which its corresponding identifier belongs; and determining a mark subtitle file from the plurality of offset subtitles. The mark subtitle file, together with the target video, forms the marked video corresponding to the object identifier. The method achieves video watermark embedding with strong robustness in the video field.

Description

Method and device for generating marked video and method and device for detecting video marks
Technical Field
The present invention relates to the field of network media technologies, and in particular, to a method and apparatus for generating a marked video, a computer device, a storage medium, and a computer program product, and a method and apparatus for detecting a video mark, a computer device, a storage medium, and a computer program product.
Background
In recent years, with growing public awareness of copyright, the importance of copyright protection for film and television works has become increasingly apparent. Because digital watermarking offers good concealment, convenient tracing, and easy operation, it is increasingly introduced into video copyright protection scenarios.
Existing video digital watermarking technology mainly adds an image watermark to the video picture to mark the source of the video. However, under strong attacks such as scaling, cropping, degradation, and clipping, an image watermark added in this way is easily destroyed, making it difficult to effectively protect video copyright.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, a storage medium, and a computer program product for generating a marked video, and a method, an apparatus, a computer device, a storage medium, and a computer program product for detecting a video mark, which can improve robustness of a digital watermark.
In one aspect, the present application provides a method for generating a marked video. The method comprises the following steps:
acquiring an object identifier, and determining an original subtitle file of a target video requested by the object identifier;
mapping the object identifier into an identifier sequence, and determining subtitle offset types corresponding to identifiers in the identifier sequence respectively;
determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
acquiring an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
On the other hand, the application also provides a device for generating the marked video. The device comprises:
the acquisition module is used for acquiring the object identifier and determining an original subtitle file of the target video requested by the object identifier;
the mapping module is used for mapping the object identifier into an identifier sequence and determining subtitle offset types corresponding to the identifiers in the identifier sequence respectively;
the determining module is used for determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
the determining module is further configured to obtain an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to a subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
the determining module is further used for determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming the marked video corresponding to the object identifier together with the target video.
In another aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring an object identifier, and determining an original subtitle file of a target video requested by the object identifier;
mapping the object identifier into an identifier sequence, and determining subtitle offset types corresponding to identifiers in the identifier sequence respectively;
determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
acquiring an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
In another aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an object identifier, and determining an original subtitle file of a target video requested by the object identifier;
mapping the object identifier into an identifier sequence, and determining subtitle offset types corresponding to identifiers in the identifier sequence respectively;
determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
acquiring an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
In another aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring an object identifier, and determining an original subtitle file of a target video requested by the object identifier;
mapping the object identifier into an identifier sequence, and determining subtitle offset types corresponding to identifiers in the identifier sequence respectively;
determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
acquiring an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
According to the above method, apparatus, computer device, storage medium, and computer program product for generating a marked video, the object identifier of a video player is mapped, by a defined rule, into a sequence of identifiers. Based on the subtitle offset type corresponding to each identifier in the sequence, the offset subtitles corresponding to the subtitles to be offset in the original subtitle file are acquired, and the offset subtitles are synthesized into a mark subtitle file, which forms the marked video. By modifying the position of each subtitle to be offset in the original subtitle file, the original subtitle file is converted into a mark subtitle file with the object identifier embedded; this file serves as marking information or watermark information covertly embedded in the video. Compared with schemes that embed in the picture dimension, it is more difficult to destroy by attacks such as scaling, cropping, degradation, and clipping, thereby ensuring the detection rate and accuracy during source tracing.
On the other hand, the application also provides a detection method of the video mark. The method comprises the following steps:
acquiring a video to be detected, and determining a subtitle offset type corresponding to each video frame in the video to be detected;
for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
based on the subtitle offset type corresponding to each subtitle to be detected, determining identifiers corresponding to the subtitles to be detected respectively;
and determining an identifier sequence based on identifiers corresponding to the subtitles to be detected respectively, and determining the object identification marked in the video to be detected based on the identifier sequence.
On the other hand, the application also provides a detection device for the video mark. The device comprises:
the acquisition module is used for acquiring the video to be detected and determining the subtitle offset type corresponding to each video frame in the video to be detected;
the determining module is used for determining, for each subtitle to be detected in the video to be detected, a subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
the determining module is further configured to determine identifiers corresponding to the subtitles to be detected respectively based on subtitle offset types corresponding to the subtitles to be detected respectively;
the determining module is further configured to determine an identifier sequence based on identifiers corresponding to the subtitles to be detected, and determine an object identifier marked in the video to be detected based on the identifier sequence.
In another aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a video to be detected, and determining a subtitle offset type corresponding to each video frame in the video to be detected;
for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
based on the subtitle offset type corresponding to each subtitle to be detected, determining identifiers corresponding to the subtitles to be detected respectively;
and determining an identifier sequence based on identifiers corresponding to the subtitles to be detected respectively, and determining the object identification marked in the video to be detected based on the identifier sequence.
In another aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a video to be detected, and determining a subtitle offset type corresponding to each video frame in the video to be detected;
for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
based on the subtitle offset type corresponding to each subtitle to be detected, determining identifiers corresponding to the subtitles to be detected respectively;
and determining an identifier sequence based on identifiers corresponding to the subtitles to be detected respectively, and determining the object identification marked in the video to be detected based on the identifier sequence.
In another aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a video to be detected, and determining a subtitle offset type corresponding to each video frame in the video to be detected;
for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
based on the subtitle offset type corresponding to each subtitle to be detected, determining identifiers corresponding to the subtitles to be detected respectively;
and determining an identifier sequence based on identifiers corresponding to the subtitles to be detected respectively, and determining the object identification marked in the video to be detected based on the identifier sequence.
According to the above method, apparatus, computer device, storage medium, and computer program product for detecting a video mark, the subtitle offset type corresponding to each video frame in the video to be detected is determined first, and from it the subtitle offset type corresponding to each subtitle to be detected. The identifier corresponding to each subtitle to be detected is then determined from its subtitle offset type, an identifier sequence is recovered from these identifiers, and the object identifier marked in the video to be detected is determined from the identifier sequence. This realizes detection of watermark information indirectly embedded in a marked video by way of subtitle offsets, so that the playing party or leakage source of the video can be determined from the marked object identifier, ensuring the detection rate and accuracy during source tracing.
Drawings
FIG. 1 is an application environment diagram of a method of generating a marked video in one embodiment;
FIG. 2 is a flow diagram of a method of generating a marked video in one embodiment;
FIG. 3 is a flow chart of a method for detecting a video marker according to an embodiment;
FIG. 4A is a flow chart of a watermark embedding process according to one embodiment;
FIG. 4B is a flow chart of a watermark embedding process according to another embodiment;
FIG. 4C is a flow chart of a watermark detection process according to one embodiment;
FIG. 5 is a flow diagram of video alignment in one embodiment;
FIG. 6 is a block diagram showing the structure of a tag video generating apparatus according to an embodiment;
FIG. 7 is a block diagram of a video mark detection device in one embodiment;
FIG. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In recent years, with growing public awareness of copyright, the importance of copyright protection for film and television works has become increasingly apparent. Because digital watermarking offers good concealment, convenient tracing, and easy operation, it is increasingly introduced into video copyright protection scenarios.
Digital watermarking is an information hiding technology that exploits the limitations of human perception to tightly combine and hide digital signals (marking information such as images, characters, symbols, or numbers) within original data (such as image, audio, or video data). Digital watermarking can provide complete and reliable evidence of ownership for copyrighted information products.
In conventional video digital watermarking technology, the digital watermark is mainly superimposed in the dimension of the video picture: the image undergoes a domain transform, and the watermark is embedded there. However, watermark superposition in the picture dimension requires heavy computation, affects picture quality, and has unstable robustness; the watermark is easily destroyed under stronger attacks such as scaling, cropping, degradation, and clipping.
In view of this, the embodiments of the present application provide a method for generating a marked video and a corresponding method for detecting a video mark, which perform hidden watermark embedding in the subtitle dimension. Using video encoding/decoding and computer vision techniques, the embodiments of the present application provide a way to embed hidden watermarks during subtitle burn-in and playback, and to detect the watermark in a burned-in video. When a video is leaked or stolen, the leakage source can be traced from the hidden watermark, thereby protecting the video copyright. Compared with existing schemes that embed in the picture dimension, embedding the hidden watermark in the subtitle dimension improves watermark embedding efficiency and offers strong attack resistance and robustness.
In order to facilitate better understanding of the technical content of the present application, the following description will explain related technical terms related to the embodiments of the present application.
Subtitles can generally be divided into hard subtitles, soft subtitles, and external subtitles. A hard subtitle is embedded in the video picture and becomes part of the image; as long as the video can be played, the subtitle is visible. A soft subtitle packs the subtitle and the video picture into one container, where they remain separate; the subtitle can be selectively displayed during playback or separated out. An external subtitle is a file independent of the video and separate from the video container; it can be loaded into the video container for playback by a playing tool. The subtitles referred to in the embodiments of the present application may be, but are not limited to, soft subtitles or external subtitles.
Subtitle formats include, but are not limited to, SRT (SubRip Text), SSA (SubStation Alpha), ASS (Advanced SubStation Alpha), and so on. Taking the SRT format as an example, each entry consists of: a subtitle sequence number line, a time code line, and a subtitle text line.
For example:
45
00:02:52,184 --> 00:02:53,617
A
This denotes the 45th subtitle, displayed from 2 minutes 52.184 seconds to 2 minutes 53.617 seconds after the start of the video, with subtitle content "A".
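As an illustration, the following minimal Python sketch parses one such SRT entry; the helper name and the millisecond representation are assumptions for illustration, not part of the application.

```python
import re

# Minimal sketch of parsing one SRT entry: a sequence-number line,
# a time-code line, and one or more lines of subtitle text.
SRT_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3})\s*-->\s*(\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def to_ms(h: str, m: str, s: str, ms: str) -> int:
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def parse_srt_entry(block: str):
    lines = block.strip().splitlines()
    index = int(lines[0])                 # subtitle sequence number, e.g. 45
    g = SRT_TIME.match(lines[1]).groups()
    start, end = to_ms(*g[:4]), to_ms(*g[4:])
    text = "\n".join(lines[2:])           # subtitle content, e.g. "A"
    return index, start, end, text

print(parse_srt_entry("45\n00:02:52,184 --> 00:02:53,617\nA"))
# (45, 172184, 173617, 'A')
```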
the following sets forth the detailed description of the solution of the present application. The method for generating the marked video, provided by the embodiment of the application, can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a communication network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 requests the server 104 to play the target video, the server 104 acquires the object identifier corresponding to the terminal 102, and acquires the subtitle file corresponding to the object identifier based on the identifier sequence mapped by the object identifier. The server 104 issues the subtitle file with the target video to the terminal 102 for playback by the terminal 102.
The terminal 102 may be, but is not limited to, one or more of various desktop computers, notebook computers, smart phones, tablets, intelligent voice interaction devices, intelligent home appliances, vehicle terminals, aircraft, etc. For example, terminal 102 may be a smart device capable of providing OTT (Over-The-Top) services, including but not limited to a smart television, a set-Top box, and The like. OTT refers to providing various application services through the internet, and typical OTT services include internet television services, application stores, and the like. The terminal 102 may be loaded with applications, such as a video playing application, a browser, a mailbox application, an instant messaging application, and the like, but not limited thereto. The application program can be an application program which is installed independently through an installation package, or an applet application which can be used without downloading and installing. The terminal may play the video through the loaded application.
The server 104 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), basic cloud computing services such as big data and artificial intelligence platforms, and the like. In some embodiments, as shown in fig. 2, a method for generating a marked video is provided, which may be performed by a server or a terminal, or may be performed by both the server and the terminal. The embodiment of the application is illustrated by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
step S202, obtaining an object identifier, and determining an original subtitle file of a target video requested by the object identifier.
The object identifier is a unique identifier used to distinguish different objects or the terminals they use, and may be, but is not limited to, one or more of the object's account information (including but not limited to account name, object ID, etc.), unique code information of the video playing application, or the IP (Internet Protocol) address or MAC (Media Access Control) address of the terminal on which the video playing application is installed. For example, if an object's registered account name on the video platform is "abc", the corresponding object identifier is "abc". By acquiring the object identifier, it can be mapped and then embedded into the subtitles as watermark information of the video, so that a leaked video can be traced later.
Specifically, the terminal may send a video playing request to the server through an application program that runs for playing video, so as to obtain a target video for playing. The video play request includes an object identification, and video information (indicating which video is the target video requested to be played by the object identification, including but not limited to one or more of video name, video number, etc.). The server responds to the video playing request of the terminal, determines a corresponding target video and extracts an object identifier carried in the video playing request. According to the determined target video, the server finds the corresponding original subtitle file. Wherein the original subtitle file may be stored in a database in association with the target video.
Step S204, mapping the object identification into an identifier sequence, and determining the subtitle offset type corresponding to each identifier in the identifier sequence.
Because object identifiers vary, they are converted into a uniform identifier sequence through a preset mapping rule, so that the computer device can recognize the object identifier more quickly and accurately, improving the efficiency of generating the marked video. Typically, the identifier sequence has a preset fixed length and is made up of a preset number of identifiers. An identifier specifies a subtitle offset type, and each identifier uniquely corresponds to one subtitle offset type. An identifier may be a character such as a letter or a digit. For example, with 2 preset subtitle offset types, the two types may be represented by the binary digits "0" and "1", or by two different letters "A" and "B". With more than 2 subtitle offset types, the different types may be represented by the digits 0 to 9, the letters A to Z, mixed letter-and-digit characters, and so on.
The subtitle offset type refers to the manner of offsetting a subtitle, and may include, but is not limited to, one or more of translating the position of the subtitle, changing the word spacing of the subtitle, changing the font, color, or size of the subtitle, and not offsetting the subtitle (i.e., keeping the style of the original subtitle). To avoid affecting the look and feel of the video as much as possible, the offset should be subtle, so that the change in the subtitles cannot, or can hardly, be perceived by the object when watching the video. For example, of two subtitle offset types, one may shift the subtitle up by 1 pixel and the other may shift it down by 1 pixel.
Specifically, the server converts the obtained object identifier into an identifier sequence according to a preset mapping rule. Illustratively, if the object identifier is "abc", the server may map it to "AAAAAAABAABA", where AAAA stands for "a", AAAB for "b", and AABA for "c". Alternatively, the server may map it to "000110", with 00 representing "a", 01 representing "b", and 10 representing "c". When the identifier sequence obtained by mapping the object identifier does not equal the preset length, the server may process it according to preset logic, for example by truncating or padding it. Because the correspondence between identifiers and subtitle offset types is pre-stored, the server can determine the subtitle offset type corresponding to each identifier in the identifier sequence, so that the positions of the subtitles to be offset can later be modified according to the different subtitle offset types.
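By way of illustration, a minimal sketch of such a mapping rule, using the binary lookup table from the example above; the table and function name are illustrative, not prescribed by the application.

```python
# Hypothetical mapping rule: each character of the object identifier is
# replaced by a fixed group of binary identifiers ("a"->"00", "b"->"01",
# "c"->"10", as in the example above).
CHAR_TO_IDENTIFIERS = {"a": "00", "b": "01", "c": "10"}

def map_object_id(object_id: str) -> str:
    return "".join(CHAR_TO_IDENTIFIERS[ch] for ch in object_id)

assert map_object_id("abc") == "000110"
```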
Step S206, determining identifiers corresponding to the subtitles to be offset in the original subtitle file.
Specifically, after the server acquires the original subtitle file of the target video, splitting the original subtitle file according to the number of subtitle strips to obtain a plurality of subtitles to be offset. For each split subtitle to be offset, the server respectively determines identifiers corresponding to the subtitles to be offset.
The server may assign identifiers sequentially according to the order of the subtitles to be offset in the original subtitle file, or assign them randomly. For example, it determines that the identifier corresponding to the first subtitle to be offset is "0", the identifier corresponding to the second is "1", …, the identifier corresponding to the nth is "1", and so on. In this way, by modifying the position information of the subtitles to be offset, the server converts the original subtitle file into a watermark subtitle file embedded with the watermark information of the object identifier, namely the mark subtitle file.
Step S208, obtaining the offset caption obtained by carrying out corresponding caption offset on the caption to be offset according to the caption offset type to which the identifier corresponding to the caption to be offset belongs.
Subtitle offset refers to performing offset processing on a subtitle according to its subtitle offset type, including but not limited to one or more of translating the position of the subtitle, changing the word spacing of the subtitle, changing the font, color, or size of the subtitle, and not offsetting the subtitle (i.e., keeping the style of the original subtitle). The offset processing (or subtitle offset processing) mentioned in the embodiments of the present application may add offset subtitles to the original video (or original video clip), or may offset the original subtitles in the original video.
Specifically, the server determines the subtitle offset type of each subtitle to be offset from its corresponding identifier. Illustratively, the server stores in advance the correspondence between identifiers and subtitle offset types: for example, identifier "0" corresponds to a subtitle offset type that shifts the subtitle upward and identifier "1" to one that shifts it downward; likewise, identifier "A" may correspond to a type that increases the subtitle word spacing and identifier "B" to one that decreases it, and so on. The server performs the corresponding subtitle offset on each subtitle to be offset according to its subtitle offset type, obtaining the offset subtitles. For example, if the identifier corresponding to a subtitle to be offset is "0", the server translates it upward by N pixels; if the identifier is "1", the server translates it downward by N pixels; and so on.
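For illustration, the following minimal sketch implements this dispatch. Plain SRT carries no position field, so the `y` attribute of the hypothetical `Cue` structure is an assumed vertical position (as would exist in, e.g., ASS-style positioned subtitles), and N = 1 is an assumed offset magnitude.

```python
from dataclasses import dataclass, replace

@dataclass
class Cue:
    start_ms: int
    end_ms: int
    text: str
    y: int = 0    # assumed vertical position of the rendered subtitle, in pixels

N = 1  # assumed offset magnitude, small enough to be imperceptible

def apply_offset(cue: Cue, identifier: str) -> Cue:
    if identifier == "0":                    # type A: shift up by N pixels
        return replace(cue, y=cue.y - N)
    if identifier == "1":                    # type B: shift down by N pixels
        return replace(cue, y=cue.y + N)
    return cue                               # e.g. a padding identifier: unchanged

print(apply_offset(Cue(1000, 2000, "Hello", y=700), "0"))  # y becomes 699
```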
It should be noted that, after determining the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs, the server may perform corresponding subtitle offset on the subtitle to be offset, so as to obtain the offset subtitle. In some embodiments, the server may also perform subtitle offset of various subtitle offset types on each subtitle to be offset in the original subtitle file in advance, so as to obtain offset subtitles under each type and store the offset subtitles in the storage medium; then in the above step, the server extracts the offset subtitle of the specific type from the storage medium to obtain the offset subtitle under the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs.
Step S210, determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used to form, together with the target video, the marked video corresponding to the object identifier.
Specifically, for each subtitle to be offset in the original subtitle file, after acquiring the corresponding offset subtitle, the server re-splices the offset subtitles to generate the mark subtitle file. The server may embed the mark subtitle file into the picture of the target video to generate the marked video; when the marked video is later sent to the terminal for playback, the terminal plays it directly. Alternatively, the server may package the mark subtitle file and the target video in one container as the marked video corresponding to the object identifier, so that the offset subtitles in the mark subtitle file can be selectively displayed during playback. Or the server may store the mark subtitle file separately from the target video; when they are later sent to the terminal for playback, the terminal can load the offset subtitles from the mark subtitle file into the target video through a playing tool.
In the above method for generating a marked video, the object identifier of the video player is mapped, by a defined rule, into a sequence of identifiers. Based on the subtitle offset type corresponding to each identifier in the sequence, the offset subtitles corresponding to the subtitles to be offset in the original subtitle file are acquired, and these offset subtitles are synthesized into the mark subtitle file, which forms the marked video. By modifying the positions of the subtitles to be offset in the original subtitle file, the original subtitle file is converted into a mark subtitle file embedded with the object identifier, and the mark subtitle file serves as marking information or watermark information covertly embedded in the video.
Generally, the object identifier is made up of a plurality of characters, including but not limited to one or more of digits, letters, special symbols, and the like. The server is preset with the mapping relation between each character and an identifier; for example, the character "a" corresponds to the identifier "0000", the character "×" corresponds to the identifier "AAAB", and so on. In some embodiments, mapping the object identifier to an identifier sequence includes: sequentially determining the identifier corresponding to each character, starting from the first of the plurality of characters; and arranging the identifiers corresponding to the characters according to a preset format to obtain an identifier sequence of a preset length.
Specifically, the server looks up, one by one and in a preset reading order, the identifier corresponding to each of the characters constituting the object identifier, thereby determining which identifier each character maps to. The reading order is not limited: it may run from the first character to the last or vice versa, and is generally set from first to last for computer processing efficiency. The server then arranges the identifiers according to the preset format, obtaining an identifier sequence of a certain length. The identifiers are typically arranged in the order of their corresponding characters to form the final identifier sequence.
In the above embodiment, the object identifier is mapped into the identifier sequence with a fixed length, so that different types of subtitle offset fragments can be determined according to the identifier sequence, and further, the marked video indirectly embedded with the object identifier can be obtained, the embedding of the subtitle watermark of the video can be realized, and the follow-up tracing can be facilitated.
In some embodiments, arranging the identifiers corresponding to each character according to a preset format to obtain an identifier sequence of a preset length includes: arranging the identifiers corresponding to the characters according to the preset format, and, if the number of identifiers corresponding to all the characters in the object identifier is smaller than the preset number, appending padding identifiers at the tail end of the arrangement, so as to obtain an identifier sequence of the preset length.
The padding identifier is a preset identifier, usually set to a character different from the other identifiers so as to distinguish it from the identifiers corresponding to subtitle offset types. For example, the identifiers corresponding to subtitle offset types may be letters or digits, while the padding identifier is a special symbol, such as an underscore "_", or an unused letter or digit. Thus, when the server reads a padding identifier, it can determine that this identifier does not correspond to a subtitle offset type and may leave the subtitle unoffset. Of course, the padding identifier may also correspond to a subtitle offset type: for example, a correspondence between the padding identifier and a certain subtitle offset type is preset, and when the server reads the padding identifier, it offsets the subtitle according to that subtitle offset type.
Specifically, after arranging the identifiers corresponding to each character according to the preset format, if the resulting identifier sequence is shorter than the preset length (in other words, if the number of identifiers corresponding to all characters in the object identifier is smaller than the preset number), the server appends the required number of padding identifiers at the tail end of the arrangement so that the sequence reaches the preset length.
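By way of example, a minimal sketch of the padding step, assuming a preset length of 12 and an underscore as the padding identifier (both are illustrative values).

```python
# Sketch of padding an identifier sequence to a preset length with a
# padding identifier; the length 12 and the underscore are assumed values.
PRESET_LENGTH = 12
PADDING = "_"

def pad_sequence(seq: str) -> str:
    if len(seq) < PRESET_LENGTH:
        seq += PADDING * (PRESET_LENGTH - len(seq))
    return seq[:PRESET_LENGTH]  # truncate if longer, per the preset logic

assert pad_sequence("000110") == "000110______"
```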
In the embodiment, the object identification is mapped into the identifier sequence with the fixed length, so that the server can conveniently extract and determine the object identification in the follow-up tracing, and the tracing accuracy is improved.
It should be appreciated that subtitles have a temporal attribute. Generally, the subtitles in a subtitle file are arranged sequentially in time order; for example, the subtitle between the 1st and 2nd seconds is the first subtitle, the subtitle between the 3rd and 4th seconds is the second subtitle, and so on. Therefore, in some embodiments, determining the identifiers corresponding to the subtitles to be offset in the original subtitle file includes: parsing the original subtitle file and splitting its subtitles by the number of subtitle entries to obtain a plurality of subtitles to be offset; and determining the identifier corresponding to each subtitle to be offset based on the time order of the subtitles to be offset in the original subtitle file and the order of the identifiers in the identifier sequence.
Specifically, the server parses the original subtitle file according to the format of the original subtitle file, and extracts all subtitles in the original subtitle file. The server splits all the subtitles in the original subtitle file according to the number of the subtitles to obtain the subtitle to be offset for subsequent subtitle offset processing.
For each subtitle to be offset, the server matches the subtitles to be offset with the identifiers one by one, according to the time order of the subtitles and the order of the identifiers in the identifier sequence converted from the object identifier, thereby determining the identifier corresponding to each subtitle to be offset in the original subtitle file. For example, assume the identifier sequence converted from the object identifier "abc" is the binary sequence "0000 0001 0010". The server parses the original subtitle file and splits it by the number of subtitle entries. For each split subtitle to be offset, the server matches the corresponding binary identifier according to the order of the subtitles and the order of the identifiers in the sequence: the 1st subtitle to be offset matches identifier "0", the 2nd matches "0", …, the 8th matches "1", …, the 11th matches "1", and the 12th matches "0". If the number of subtitles is greater than the length of the identifier sequence, matching restarts from the first identifier in the sequence, until all subtitles to be offset in the original subtitle file have been matched.
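As a sketch of this cyclic matching, using the 12-identifier sequence from the example above:

```python
# Sketch of matching subtitles to identifiers in time order; when there are
# more subtitles than identifiers, matching wraps around to the start of the
# identifier sequence, as described above.
def assign_identifiers(num_subtitles: int, id_sequence: str) -> list[str]:
    return [id_sequence[i % len(id_sequence)] for i in range(num_subtitles)]

# 14 subtitles against the 12-identifier sequence for "abc":
ids = assign_identifiers(14, "000000010010")
print(ids[7], ids[10], ids[11])  # the 8th, 11th, and 12th subtitles: 1 1 0
print(ids[12], ids[13])          # the 13th and 14th wrap around: 0 0
```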
In the above embodiment, the efficiency of the subtitle offset process is improved by allocating corresponding identifiers to each subtitle to be offset according to the timing sequence of the subtitle in the original subtitle file.
In the foregoing description, the server may perform corresponding subtitle offset on the subtitle to be offset after determining the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs, so as to obtain the offset subtitle. For this reason, in some embodiments, obtaining an offset subtitle obtained by performing corresponding subtitle offset on a subtitle to be offset according to a subtitle offset type to which an identifier corresponding to the subtitle to be offset belongs includes: for each subtitle to be offset, determining the subtitle offset type of the subtitle to be offset according to the identifier corresponding to the subtitle to be offset; and carrying out corresponding subtitle shifting on the subtitle to be shifted according to the subtitle shifting type of the subtitle to be shifted, so as to obtain the shifting subtitle belonging to the subtitle shifting type.
Specifically, for each subtitle to be offset, the server determines its subtitle offset type from its corresponding identifier, according to the pre-stored correspondence between identifiers and subtitle offset types. For example, assume 2 preset subtitle offset types: identifier "0" corresponds to type A, indicating the subtitle is shifted upward by N pixels; identifier "1" corresponds to type B, indicating the subtitle is shifted downward by N pixels. For the subtitle to be offset at the 1st second, the server determines that it corresponds to the first identifier "0" in the identifier sequence, and therefore that its subtitle offset type is type A. The server performs the corresponding subtitle offset on the subtitle to be offset according to this type, obtaining an offset subtitle belonging to that type. For example, the server performs a type-A offset on the subtitle to be offset, i.e., shifts it upward by N pixels, obtaining an offset subtitle belonging to type A.
In the above embodiment, by performing subtitle offset on the subtitle to be offset according to the identifier in the identifier sequence obtained by mapping, the subtitle embedding efficiency is higher compared with that of the picture dimension; meanwhile, the server does not need to store the subtitles to be offset of each type in advance, so that storage resources are saved.
Of course, the server may also perform subtitle shifting of various subtitle shifting types on each subtitle to be shifted in the original subtitle file in advance, so as to obtain shifted subtitles under various types and store the shifted subtitles in the storage medium; and then extracting the specific type of offset subtitle from the storage medium.
For this reason, in some embodiments, obtaining an offset subtitle obtained by performing corresponding subtitle offset on a subtitle to be offset according to a subtitle offset type to which an identifier corresponding to the subtitle to be offset belongs includes: determining the subtitle offset type corresponding to the identifier according to the identifier corresponding to the subtitle to be offset; and extracting the offset subtitles which correspond to the identifiers and belong to the corresponding subtitle offset types from the preprocessed plurality of offset subtitles.
Specifically, the server may perform subtitle offset processing in advance on each subtitle in the original subtitle file according to the plurality of preset subtitle offset types, obtaining a plurality of preprocessed offset subtitles. For example, assume 3 preset subtitle offset types: identifier "A" corresponds to type X, shifting the subtitle upward by N pixels; identifier "B" corresponds to type Y, shifting the subtitle downward by N pixels; and identifier "C" corresponds to type Z, shifting the subtitle to the left by N pixels. The server first parses the original subtitle file and splits it by the number of subtitle entries into a plurality of subtitles. For each subtitle, the server performs all three types of subtitle offset processing, i.e., one subtitle to be offset yields 3 offset subtitles, and the server stores the offset subtitles of each type in a storage medium. Thus, in the above step, for a subtitle to be offset whose content is "XXX" and whose corresponding identifier is "B", the server determines from identifier "B" that the corresponding subtitle offset type is type Y, and extracts from the preprocessed offset subtitles the type-Y offset subtitle corresponding to identifier "B". The subtitle content of the offset subtitle is identical to that of the subtitle to be offset, differing only in position.
In the above embodiment, offset subtitles corresponding to each subtitle offset type are obtained through preprocessing, and then the corresponding offset subtitles are directly extracted, so that the efficiency of obtaining the offset subtitles is improved.
As stated earlier, subtitles follow a format. In some embodiments, therefore, determining the mark subtitle file from the plurality of offset subtitles includes: based on a preset subtitle format, splicing the offset subtitles according to their corresponding time sequence to generate the mark subtitle file corresponding to the object identifier.
Specifically, the server splices the offset subtitles one by one according to the preset subtitle format and the corresponding time sequence of the offset subtitles and the sequence, so as to generate a mark subtitle file corresponding to the object identifier. For example, the caption format of the SRT is a line of caption serial numbers, a line of time codes, and a line of caption data. Accordingly, the server converts the offset subtitle into a subtitle format of the SRT. And then the server splices the offset subtitles after the format conversion, for example, the offset subtitles are connected together according to the time sequence, and then a subtitle generating process opposite to the subtitle parsing process is executed through a subtitle generating tool, so that the mark subtitle file in the SRT format is obtained. Wherein, object identification is hidden embedded in the mark caption file in a caption offset way. It should be noted that, the format of the markup subtitle file generated after the server splices each offset subtitle may be the same as or different from the format of the original subtitle file.
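For illustration, a minimal sketch of this splicing step, assuming each offset subtitle is represented as a hypothetical (start_ms, end_ms, text) triple:

```python
# Sketch of splicing offset subtitles back into SRT text in time order;
# each cue is a hypothetical (start_ms, end_ms, text) triple.
def ms_to_srt(ms: int) -> str:
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def cues_to_srt(cues) -> str:
    blocks = []
    for i, (start, end, text) in enumerate(sorted(cues), start=1):
        blocks.append(f"{i}\n{ms_to_srt(start)} --> {ms_to_srt(end)}\n{text}\n")
    return "\n".join(blocks)

print(cues_to_srt([(172184, 173617, "A")]))
```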
In the above embodiment, by splicing the offset subtitles according to the time sequence corresponding to the offset subtitles to generate the mark subtitle file corresponding to the object identifier, the object identifier is conveniently extracted from the leaked video by detecting the subtitle in the future, so that the leakage party of the video can be traced.
After the subtitle file is generated, the server can embed the subtitle file into the picture of the target video to generate the mark video, and can also pack the subtitle file and the target video as the mark video to be sent to the terminal for playing. To this end, in some embodiments, the above method further comprises: each offset subtitle in the mark subtitle file is added to a video frame of the target video respectively; and generating marked video corresponding to the object identification according to the video frame added with the offset subtitle.
Specifically, the server splits the target video into a plurality of video frames. Illustratively, the server may perform video encoding and decoding with the FFMPEG tool (Fast Forward MPEG, a multimedia video processing tool). FFMPEG is free, open-source software that can record, convert, and stream audio and video in various formats. The server adds each offset subtitle in the mark subtitle file to the corresponding video frames of the target video. For example, if the 1st offset subtitle appears in the 1st to 20th frames, the server adds that offset subtitle to each of the 1st to 20th video frames. The server then re-encodes the video frames with the offset subtitles added, generating the marked video corresponding to the object identifier.
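As an illustration, burn-in of this kind can be driven from Python via FFMPEG's standard "subtitles" filter (which requires a build with libass); this is a sketch of one common approach, not the application's prescribed pipeline, and the file names are placeholders.

```python
import subprocess

# Burn the mark subtitle file into the target video with FFMPEG's
# "subtitles" filter; input and output names are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "target_video.mp4",
     "-vf", "subtitles=marked.srt",
     "marked_video.mp4"],
    check=True,
)
```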
In the above embodiment, the offset subtitles in the subtitle file are added to the target video to generate the tagged video, so that the leaked video can be traced by detecting the subtitle to extract the object identifier.
The server may also package the markup file and the target video in a container for transmission as the markup video to the terminal. To this end, in some embodiments, the above method further comprises: and sending the mark subtitle file and the target video to a terminal corresponding to the object identifier so as to enable the terminal to play the mark video carrying the offset subtitle.
Specifically, the server packages the subtitle file and the target video in one file and sends the subtitle file and the target video to a terminal corresponding to the object identifier; after receiving the file, the terminal analyzes the file to obtain a mark subtitle file and a target video, and displays the offset subtitle when playing the target video, namely playing the mark video with the offset subtitle.
In the above embodiment, the mark subtitle file and the target video are packaged and sent to the terminal together, making it easy to extract the complete mark subtitle file during later detection and to accurately extract the object identifier covertly embedded by way of subtitle offsets, thereby tracing the leaking party of the source video.
The following illustrates an application scenario of the method for generating a marked video. In this scenario, the method is applied as follows: when a target object browses the video list on a video platform, it selects the video to watch through the terminal. In response to the selection, the terminal sends a video playing request to the server, and the server finds the target video corresponding to the request and the associated original subtitle file in the database by extracting video information, such as the video name, contained in the request. Meanwhile, the server extracts the object identifier carried in the request and offsets the subtitles in the original subtitle file according to the identifier sequence converted from the object identifier, obtaining the mark subtitle file. The server packs the mark subtitle file and the target video as the marked video and returns it to the terminal for playback. The video may be any pre-stored complete video, such as an entertainment video, a teaching video, a television drama, or a short video, without limitation.
Based on the same inventive concept, an embodiment of the present application also provides a method for detecting a video mark. In some embodiments, as shown in fig. 3, a method for detecting a video mark is provided. The method is applied to a computer device, which may specifically be, for example, the terminal or the server in fig. 1, and includes the following steps:
Step S302, a video to be detected is obtained, and the subtitle offset type corresponding to each video frame in the video to be detected is determined.
Specifically, the computer device may retrieve the video to be detected from the Internet. The computer device detects the subtitle in each video frame of the video to be detected, thereby determining the subtitle offset type corresponding to each video frame. Illustratively, the computer device may decode the video to be detected using a tool such as FFmpeg to obtain all of its video frames, and then determine, frame by frame, the subtitle offset type corresponding to each frame.
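Illustratively, the frame-by-frame decoding loop may look like the following Python sketch using OpenCV; the classifier invoked at the end is a hypothetical placeholder for the position comparison described later.

```python
import cv2

def iter_video_frames(video_path: str):
    """Yield (frame_index, frame) pairs decoded from the video to be detected."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:          # end of stream or read error
            break
        yield idx, frame
        idx += 1
    cap.release()

# for idx, frame in iter_video_frames("suspect.mp4"):      # hypothetical file
#     offset_type = classify_subtitle_offset(frame)        # hypothetical helper
```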
Step S304, for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected.
Specifically, for each subtitle in the video to be detected (referred to here as a subtitle to be detected), since one subtitle may appear in multiple video frames (for example, one subtitle may appear in the 40 video frames corresponding to the 1st to 3rd seconds), the computer device determines the subtitle offset type corresponding to each subtitle to be detected based on the subtitle offset types corresponding to the at least one video frame in which that subtitle appears.
Step S306, determining, based on the subtitle offset type corresponding to each subtitle to be detected, the identifier corresponding to each subtitle to be detected.
Specifically, based on the subtitle offset type corresponding to each subtitle to be detected, the computer device determines the identifier corresponding to each subtitle to be detected according to a pre-stored correspondence between subtitle offset types and identifiers. For example, for a given subtitle to be detected whose offset type is type A, if the computer device has pre-stored that type A corresponds to the identifier "0", it determines that the identifier corresponding to that subtitle is "0". The computer device processes every subtitle to be detected in this way, obtaining the identifier corresponding to each of them.
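Illustratively, the pre-stored correspondence can be a simple lookup table; the type names and identifier values below mirror the example in the text and are otherwise assumptions.

```python
# Hypothetical correspondence between subtitle offset types and identifiers,
# mirroring the "type A -> '0'" example above.
OFFSET_TYPE_TO_IDENTIFIER = {
    "A": "0",  # subtitle shifted up by N pixels
    "B": "1",  # subtitle shifted down by N pixels
}

def identifiers_for_subtitles(offset_types: list[str]) -> list[str]:
    """Map the offset type of each subtitle to be detected to its identifier."""
    return [OFFSET_TYPE_TO_IDENTIFIER[t] for t in offset_types]
```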
Step S308, an identifier sequence is determined based on identifiers corresponding to the subtitles to be detected respectively, and the object identification marked in the video to be detected is determined based on the identifier sequence.
Specifically, the computer device assembles an identifier sequence from the identifiers corresponding to the respective subtitles to be detected. Based on the extracted identifier sequence, the computer device performs inverse mapping according to the preset mapping rule to obtain the object identifier, thereby determining the object identifier marked in the video to be detected.
For example, suppose the computer device determines that the identifiers corresponding to the respective subtitles to be detected form the stream "100101010010101001010……". Because the identifier sequence has a preset length and contains a preset number of identifiers, the computer device extracts the fixed-length, cyclically repeated group "1001010" from the stream, takes it as the identifier sequence, and performs inverse mapping according to the preset mapping rule, thereby obtaining the object identifier corresponding to the identifier sequence.
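Illustratively, because the identifier sequence repeats cyclically through the subtitles, it can be recovered robustly with a per-position majority vote across the repetitions. A minimal sketch, assuming the preset sequence length is known and the stream covers at least one full repetition:

```python
from collections import Counter

def recover_identifier_sequence(bits: str, length: int) -> str:
    """Recover the fixed-length identifier sequence from a cyclically
    repeated identifier stream by majority vote at each position."""
    votes = [Counter() for _ in range(length)]
    for i, b in enumerate(bits):
        votes[i % length][b] += 1
    return "".join(v.most_common(1)[0][0] for v in votes)

# With a preset length of 7, the stream "100101010010101001010"
# collapses back to the repeated group "1001010".
```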
According to the above method for detecting a video mark, the subtitle offset type corresponding to each video frame in the video to be detected is determined; from these frame-level results, the subtitle offset type corresponding to each subtitle to be detected is determined; the identifier corresponding to each subtitle to be detected is then determined from its offset type, and an identifier sequence is assembled, so that the object identifier marked in the video to be detected is determined from the identifier sequence. This realizes the detection of watermark information indirectly embedded in the marked video by way of subtitle offsets, allows the playing party or the leakage source of the video to be determined from the marked object identifier, and ensures the detection rate and accuracy of tracing.
In some embodiments, determining a subtitle offset type corresponding to each video frame in the video to be detected includes: acquiring an original video corresponding to a video to be detected; and under the same video frame dimension, determining the subtitle offset type corresponding to each video frame in the video to be detected based on the position relationship between the subtitle to be detected in each video frame in the video to be detected and the original subtitle in the corresponding video frame in the original video.
Specifically, the computer device may obtain the original video corresponding to the video to be detected from a copyright library. Illustratively, the computer device may search the copyrighted video library using video fingerprint technology to find the copyrighted original corresponding to the video to be detected, i.e., the original video. Video fingerprint technology reduces video content to vectors through techniques such as computer vision and audio processing, and can be used in scenarios such as video retrieval, video deduplication, and video recommendation.
Because the circulated video to be detected may have undergone processing such as editing, scaling, or stretching, in order to ensure detection accuracy the computer device compares, under the same video frame dimension, the position of the subtitle to be detected in each video frame of the video to be detected with the position of the original subtitle in the corresponding frame of the original video, and determines the subtitle offset type corresponding to each video frame of the video to be detected according to the positional relationship between the two.
Illustratively, the computer device detects the subtitles in the video frames by OCR and determines the specific offset of the subtitle to be detected relative to the original subtitle. For example, if the computer device detects that the subtitle to be detected has moved up by 1 pixel compared with the original subtitle, it determines that the subtitle offset type corresponding to that video frame is type A; if the subtitle to be detected has moved down by 1 pixel, it determines that the subtitle offset type is type B. OCR (Optical Character Recognition) refers to the process of analyzing and recognizing an image of text material to obtain its text and layout information. In this way, the subtitle offset type corresponding to each video frame can be obtained by detecting the subtitle position in that frame.
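Illustratively, once an OCR engine has returned the bounding box of the subtitle in each frame, the per-frame classification reduces to comparing vertical coordinates. A sketch under stated assumptions: the type names mirror the example above, the OCR step itself is outside the snippet, and image coordinates grow downward.

```python
from typing import Optional

def classify_offset(y_detected: int, y_original: int, n_pixels: int = 1) -> Optional[str]:
    """Classify a frame's subtitle offset type from the top y coordinates of
    the OCR-detected subtitle boxes in the frame to be detected and in the
    aligned original frame."""
    delta = y_detected - y_original
    if delta <= -n_pixels:
        return "A"   # subtitle moved up relative to the original
    if delta >= n_pixels:
        return "B"   # subtitle moved down relative to the original
    return None      # no offset detected in this frame
```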
The video frame dimension includes a time dimension and a space dimension. Accordingly, in some embodiments, after the original video corresponding to the video to be detected is acquired, the method further includes: aligning the video to be detected with the original video in the time dimension and the space dimension respectively. Specifically, the computer device determines, against the same time axis, the video frame of the video to be detected and the video frame of the original video that correspond to the same instant, thereby performing alignment in the time dimension. The computer device aligns the video frames of the video to be detected and of the original video in the space dimension according to the same pixel coordinate system, for example with the upper-left corner as the origin.
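Illustratively, a minimal alignment sketch is shown below. It assumes the two videos already start at the same instant (real deployments may need fingerprint-based offset estimation first) and only handles uniform rescaling in space.

```python
import cv2

def align_spatially(frame, ref_frame):
    """Scale a frame of the video to be detected to the resolution of the
    corresponding original frame, so both share the same pixel coordinate
    system with the origin at the top-left corner."""
    h, w = ref_frame.shape[:2]
    return cv2.resize(frame, (w, h))  # cv2.resize takes (width, height)

def frame_index_at(t_seconds: float, fps: float) -> int:
    """Map a position on the shared time axis to a frame index (time alignment)."""
    return int(round(t_seconds * fps))
```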
In the above embodiment, the video to be detected and the original video are aligned, ensuring that they are compared under the same video frame dimension and preventing errors in the detected subtitle positions.
As stated earlier, a subtitle may appear in multiple video frames. For this purpose, in some embodiments, determining, for each subtitle to be detected in the video to be detected, the subtitle offset type corresponding to each subtitle to be detected based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected includes: for each subtitle to be detected, determining at least one video frame containing the same subtitle content; and determining the number of frames corresponding to different subtitle offset types in the at least one video frame, and taking the subtitle offset type corresponding to the largest number of frames as the subtitle offset type corresponding to the current subtitle to be detected.
Specifically, the computer device first determines how many video frames each subtitle to be detected corresponds to; that is, it determines the at least one video frame containing the same subtitle content. For example, if the same subtitle to be detected appears between the 3rd and 5th seconds of the video time axis, and that interval contains 60 frames, the computer device determines that the subtitle corresponds to 60 video frames. Among these video frames, the computer device determines the number of frames corresponding to each subtitle offset type, that is, it counts the frames under each type, and takes the subtitle offset type with the largest frame count as the subtitle offset type of the current subtitle to be detected. For example, if type A has the largest count, the subtitle offset type of this video segment is determined to be type A, and the identifier corresponding to this segment is accordingly the identifier corresponding to type A. In this way, the computer device can extract the identifier sequence from the identifiers corresponding to the respective subtitles to be detected and convert it into the object identifier, thereby determining the leakage source of the video.
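Illustratively, this per-subtitle result fusion is a majority vote over the frame-level classifications; a minimal sketch, reusing the hypothetical type labels from above:

```python
from collections import Counter
from typing import Optional

def fuse_offset_types(frame_types: list[Optional[str]]) -> Optional[str]:
    """Fuse the per-frame results for one subtitle to be detected: the
    offset type observed in the largest number of frames wins the vote."""
    counts = Counter(t for t in frame_types if t is not None)
    if not counts:
        return None  # no offset observed in any frame of this subtitle
    return counts.most_common(1)[0][0]

# e.g. 60 frames for one subtitle, 52 classified as type A and 8 as type B:
# the subtitle's offset type is taken to be A.
```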
In the above embodiment, the identifier corresponding to an offset subtitle is determined by counting the frames of each subtitle offset type, which gives the frame-level position detection a certain fault tolerance and improves the accuracy of detecting the video mark.
The method for detecting a video mark described above can be applied in the following scenario. When the terminal plays the marked video, the target object may record, cache, download, or forward the video through the terminal. Because the object identifier is indirectly embedded in the marked video by way of subtitle offsets, the video that the object records, caches, downloads, or forwards also contains the object identifier.
Therefore, when the computer device acquires a leaked video, for example by searching the network for videos that exist in the copyright library, it detects the retrieved video, extracts the identifier sequence from the positional relationship between its subtitles and the original subtitles in the copyright library, and converts the sequence into the object identifier, thereby identifying the source from which the video leaked onto the network.
For a better understanding of the present application, a specific product example is described below. As shown in fig. 4A to fig. 4C, in a specific example, the method for generating a marked video and the method for detecting a video mark of the embodiments of the present application may be integrated into one system comprising a watermark subtitle generation module, a playing module, and a watermark detection module. The modules may be deployed on one device or distributed across different devices. For example, the playing module is deployed on the terminal or the server, and the watermark subtitle generation module may be deployed on the terminal together with the playing module or separately on the server. The watermark detection module may likewise be deployed on a terminal or a server.
The playing module receives the video playing request sent by the object through the terminal and obtains the object identifier, which is passed to the subsequent watermark subtitle generation module to be mapped and embedded as watermark information. The playing module is also used to deliver the watermarked subtitle file and the video to the terminal for playback. Specifically, as shown in fig. 4A, the object requests the target video and the terminal sends the video playing request to the playing module. According to the received request, the playing module uses the player to request the playlist service, for example applying for an M3U8 playlist, and pulls the original subtitle file through the playlist service so as to request the subsequent watermark subtitle generation module to generate the watermark subtitles. An M3U8 file is essentially a playlist, which may be a Media Playlist or a Master Playlist; in either case its internal text is encoded in UTF-8. When used as a media playlist, an M3U8 file records a series of media segment resources which, played in order, present the complete multimedia resource.
The watermark subtitle generation module converts the original subtitle file into a watermark subtitle file embedded with the watermark information of the object identifier by modifying the subtitle position information. Specifically, as shown in fig. 4B, the watermark subtitle generation module converts the object identifier into an identifier sequence through the preset mapping rule, for example mapping the object identifier "abc" into the identifier sequence "0000 0001 0010". The module parses the original subtitle file, splits it by the number of subtitles, and matches each piece of split subtitle information with the corresponding binary identifier: the first subtitle matches identifier "0", the second subtitle matches identifier "0", ..., the 8th subtitle matches identifier "1", ..., and the 12th subtitle matches identifier "0". If the number of subtitles is greater than the length of the binary sequence, matching restarts from the first bit of the identifier sequence. According to the identifier of each subtitle, the module adjusts the subtitle position: for identifier "0" it shifts the subtitle up by N pixels, and conversely, for identifier "1", it shifts the subtitle down by N pixels. Finally, the watermark subtitle generation module splices the adjusted subtitle entries piece by piece, regenerating a watermarked subtitle file, i.e., the marked subtitle file.
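Illustratively, the following Python sketch reproduces this mapping and offset logic. The 4-bit-per-character mapping rule (chosen to be consistent with the "abc" example above), the cue representation with a "y" position field, and the value of N are illustrative assumptions rather than the module's actual implementation.

```python
N_PIXELS = 1  # assumed offset magnitude; small enough to be visually unobtrusive

def object_id_to_bits(object_id: str, bits_per_char: int = 4) -> str:
    """Map each character of the object identifier to a fixed-width binary
    group; with alphabet positions, "abc" -> "0000" "0001" "0010"."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return "".join(format(alphabet.index(c), f"0{bits_per_char}b") for c in object_id)

def offset_subtitles(cues: list[dict], bits: str, n: int = N_PIXELS) -> list[dict]:
    """Shift each cue's vertical position according to its matched bit:
    "0" moves the subtitle up by n pixels, "1" moves it down by n pixels.
    If there are more cues than bits, matching restarts from the first bit."""
    marked = []
    for i, cue in enumerate(cues):
        bit = bits[i % len(bits)]        # cyclic matching over the sequence
        delta = -n if bit == "0" else n  # image y coordinates grow downward
        marked.append({**cue, "y": cue["y"] + delta})
    return marked
```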
The watermark subtitle generation module delivers the generated marked subtitle file to the playing module. The playing module uses the playlist service to integrate the watermarked subtitle file (i.e., the marked subtitle file) into the M3U8 playlist, the playlist is issued to the player, and the player sends the marked video to the terminal for playback.
When watermark detection is performed later, as shown in fig. 4C, the watermark detection module searches the copyright library for the original video corresponding to the video to be detected using video fingerprint technology. The original video can be understood as the copyrighted original, i.e., the video with no subtitle watermark added and with the subtitles pressed in normally. The watermark detection module aligns the video to be detected with the original video in the time dimension and the space dimension. For example, as shown in fig. 5, the module first retrieves the original video corresponding to the video to be detected from the video copyright library through video fingerprint technology and aligns the two on the time axis (i.e., in the time dimension). Once the video to be detected and the original video are in the same time dimension, the module aligns them in the space dimension, thereby obtaining the alignment result.
After alignment is completed, the watermark detection module sends each frame of the video to be detected and the corresponding frame of the original video to the OCR module for subtitle detection, and then compares the positional relationship of the two detected boxes (up / down / unchanged), from which the positional relationship between the subtitle to be detected in that frame and the original subtitle of the original video can be judged. From this positional relationship, the watermark detection module can judge whether a subtitle watermark is embedded in the frame and which subtitle offset type the subtitle belongs to.
With continued reference to fig. 4C, according to the alignment result, the watermark detection module splits the video to be detected and the original into matched image pairs, i.e., a frame of the video to be detected and the original frame corresponding to the same position on the time axis. It sends each image pair to the OCR module for subtitle position detection, compares the detected subtitle position information (i.e., the subtitle position in the frame to be detected and that in the original frame), and determines from the compared positional relationship whether the subtitle offset type of the video frame is type A or type B. The watermark detection module then fuses the results for the same subtitle: for the multiple video frames corresponding to one subtitle, it determines by voting which type corresponds to the largest number of frames and takes that type as the subtitle offset type of the subtitle. Furthermore, the watermark detection module determines the identifier corresponding to the subtitle according to the pre-stored correspondence between subtitle offset types and identifiers. From the identifiers corresponding to the respective subtitles, the watermark detection module detects the identifier sequence, and through inverse mapping determines the object identifier embedded in the video to be detected, i.e., obtains the detection result.
In this way, by detecting subtitle positions and extracting the object identifier from the marked video, the playing, spreading, and leaking of the marked video can be traced, which facilitates the copyright protection of the video.
It should be understood that, although the steps in the flowcharts involved in the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts involved in the above embodiments may include multiple sub-steps or stages; these are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential, as they may be performed in turn or alternately with at least part of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiment of the application also provides a device for generating the marked video. The implementation of the solution provided by the apparatus is similar to the implementation described in the above method, so the specific limitation in the embodiment of the generating apparatus of one or more marked videos provided below may refer to the limitation of the generating method of the marked video described above, and will not be repeated here.
In some embodiments, as shown in fig. 6, there is provided a generating apparatus of a marked video, which may use a software module or a hardware module, or a combination of both, as a part of a computer device, and the apparatus specifically includes: an acquisition module 601, a mapping module 602, and a determination module 603, wherein:
the acquiring module 601 is configured to acquire an object identifier, and determine an original subtitle file of a target video requested by the object identifier.
The mapping module 602 is configured to map the object identifier to an identifier sequence, and determine a subtitle offset type corresponding to each identifier in the identifier sequence.
The determining module 603 is configured to determine identifiers corresponding to the subtitles to be offset in the original subtitle file.
The determining module 603 is further configured to obtain an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs.
The determining module 603 is further configured to determine a marked subtitle file according to the plurality of offset subtitles; the marked subtitle file is used, together with the target video, to form the marked video corresponding to the object identifier.
In some embodiments, the object identifier is composed of a plurality of characters, and the mapping module is further configured to sequentially determine, starting from a first digit in the plurality of characters, an identifier corresponding to each character; and arranging identifiers corresponding to each character according to a preset format to obtain an identifier sequence with a preset length.
In some embodiments, the mapping module is further configured to arrange the identifiers corresponding to each character according to a preset format and, if the number of identifiers corresponding to all the characters in the object identifier is smaller than the preset number, pad the tail end of the arrangement with preset padding identifiers, so as to obtain an identifier sequence of the preset length.
In some embodiments, the determining module is further configured to parse an original subtitle file, split the subtitles in the original subtitle file according to the number of the subtitles, and obtain a plurality of subtitles to be offset; and determining the identifiers corresponding to each subtitle to be offset in the original subtitle file based on the time sequence of each subtitle to be offset in the original subtitle file and the sequence among the identifiers in the identifier sequence.
In some embodiments, the determining module is further configured to determine, for each subtitle to be offset, a subtitle offset type of the subtitle to be offset according to an identifier corresponding to the subtitle to be offset; and carrying out corresponding subtitle shifting on the subtitle to be shifted according to the subtitle shifting type of the subtitle to be shifted, so as to obtain the shifting subtitle belonging to the subtitle shifting type.
In some embodiments, the determining module is further configured to determine, according to an identifier corresponding to the subtitle to be offset, a subtitle offset type corresponding to the identifier; and extracting the offset subtitles which correspond to the identifiers and belong to the corresponding subtitle offset types from the preprocessed plurality of offset subtitles.
In some embodiments, the determining module is further configured to splice the offset subtitles according to the timing corresponding to each offset subtitle, based on a preset subtitle format, to generate the marked subtitle file corresponding to the object identifier.
In some embodiments, the apparatus further includes a first sending module, configured to add each offset subtitle in the marked subtitle file to the corresponding video frames of the target video, and to generate the marked video corresponding to the object identifier according to the video frames to which the offset subtitles have been added.
In some embodiments, the apparatus further includes a second sending module, configured to send the marked subtitle file and the target video together to the terminal corresponding to the object identifier, so that the terminal plays the marked video carrying the offset subtitles.
For specific limitations on the generation apparatus of the marked video, reference may be made to the above limitations on the generation method of the marked video, and no further description is given here. The respective modules in the above generation apparatus of the marked video may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
Based on the same inventive concept, the embodiment of the application also provides a detection device of the video mark. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitations in the embodiments of the video mark detection device provided below may refer to the limitations of the video mark detection method above, and will not be repeated here.
In some embodiments, as shown in fig. 7, a video mark detection apparatus is provided, where the apparatus may use a software module or a hardware module, or a combination of both, as a part of a computer device, and specifically includes: an acquisition module 701 and a determination module 702, wherein:
the acquiring module 701 is configured to acquire a video to be detected, and determine a subtitle offset type corresponding to each video frame in the video to be detected.
The determining module 702 is configured to determine, for each subtitle to be detected in the video to be detected, a subtitle offset type corresponding to each subtitle to be detected based on a subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected.
The determining module 702 is further configured to determine identifiers corresponding to the subtitles to be detected respectively based on the subtitle offset types corresponding to the subtitles to be detected respectively.
The determining module 702 is further configured to determine an identifier sequence based on identifiers corresponding to the subtitles to be detected, and determine an object identifier of the mark in the video to be detected based on the identifier sequence.
In some embodiments, the determining module is further configured to obtain an original video corresponding to the video to be detected; and under the same video frame dimension, determining the subtitle offset type corresponding to each video frame in the video to be detected based on the position relationship between the subtitle to be detected in each video frame in the video to be detected and the original subtitle in the corresponding video frame in the original video.
In some embodiments, the video frame dimension includes a time dimension and a space dimension, and the apparatus further includes an alignment module configured to align the video to be detected with the original video in the time dimension and the space dimension, respectively.
In some embodiments, the determining module is further configured to determine, for each subtitle to be detected, at least one video frame containing the same subtitle content; and to determine the number of frames corresponding to different subtitle offset types in the at least one video frame, taking the subtitle offset type corresponding to the largest number of frames as the subtitle offset type corresponding to the current subtitle to be detected.
For specific limitations of the video mark detection device, reference may be made to the above limitations of the video mark detection method, and no further description is given here. The above-described modules in the video mark detection device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored as software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In some embodiments, a computer device is provided, which may be a terminal or a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a method of generating a marked video, or implements a method of detecting a video mark.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In some embodiments, there is also provided a computer device, including a memory and a processor, where the memory stores a computer program, and the processor executes the computer program to implement the steps in the method embodiments corresponding to the method for generating a marked video.
In some embodiments, there is also provided a computer device including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in each method embodiment corresponding to the above-mentioned method for detecting a video mark when the computer program is executed.
In some embodiments, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments corresponding to the method for generating a marked video described above.
In some embodiments, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments corresponding to the method for detecting a video mark described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments corresponding to the method of generating a marked video described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the respective method embodiments corresponding to the above-described method for detecting a video mark.
The object information (including but not limited to account information, ID, code information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the object or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded nonvolatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like, without being limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples represent only a few embodiments of the present application, which are described in detail but are not thereby to be construed as limiting the scope of the application. It should be noted that those skilled in the art may make various modifications and improvements without departing from the spirit of the present application, and these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (18)

1. A method of generating a marked video, the method comprising:
acquiring an object identifier, and determining an original subtitle file of a target video requested by the object identifier;
mapping the object identifier into an identifier sequence, and determining subtitle offset types corresponding to identifiers in the identifier sequence respectively;
determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
Acquiring an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
2. The method of claim 1, wherein the object identification is made up of a plurality of characters, the mapping the object identification to an identifier sequence comprising:
sequentially determining identifiers corresponding to each character from the first digit in the plurality of characters;
and arranging the identifiers corresponding to the characters according to a preset format to obtain an identifier sequence with a preset length.
3. The method according to claim 2, wherein the arranging the identifiers corresponding to each character according to a preset format to obtain the identifier sequence with the preset length includes:
arranging the identifiers corresponding to all the characters according to the preset format, and if the number of identifiers corresponding to all the characters in the object identifier is smaller than the preset number, padding the tail end of the arrangement with preset padding identifiers so as to obtain the identifier sequence with the preset length.
4. The method according to claim 1, wherein the determining identifiers corresponding to the subtitles to be shifted in the original subtitle file respectively includes:
analyzing the original subtitle file, and splitting the subtitles in the original subtitle file according to the number of the subtitles to obtain a plurality of subtitles to be offset;
and determining identifiers corresponding to each subtitle to be offset in the original subtitle file based on the time sequence of each subtitle to be offset in the original subtitle file and the sequence among the identifiers in the identifier sequence.
5. The method of claim 1, wherein the obtaining the offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs includes:
for each subtitle to be offset, determining the subtitle offset type of the subtitle to be offset according to the identifier corresponding to the subtitle to be offset;
and performing corresponding subtitle shifting on the subtitle to be shifted according to the subtitle shifting type of the subtitle to be shifted, so as to obtain the shifted subtitle belonging to the subtitle shifting type.
6. The method of claim 1, wherein the obtaining the offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to the subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs includes:
determining the subtitle offset type corresponding to the identifier according to the identifier corresponding to the subtitle to be offset;
and extracting the offset subtitles which correspond to the identifier and belong to the corresponding subtitle offset type from the preprocessed plurality of offset subtitles.
7. The method of claim 1, wherein said determining a mark subtitle file according to the plurality of offset subtitles comprises:
and splicing the offset subtitles according to the time sequence corresponding to the offset subtitles based on a preset subtitle format, and generating a mark subtitle file corresponding to the object identifier.
8. The method according to any one of claims 1 to 7, further comprising:
adding each offset subtitle in the mark subtitle file to the video frames of the target video respectively;
and generating marked video corresponding to the object identification according to the video frame added with the offset subtitle.
9. The method according to any one of claims 1 to 7, further comprising:
and sending the mark subtitle file and the target video to a terminal corresponding to the object identifier together so that the terminal can play the mark video carrying the offset subtitle.
10. A method of detecting a video marker, the method comprising:
acquiring a video to be detected, and determining a subtitle offset type corresponding to each video frame in the video to be detected;
for each subtitle to be detected in the video to be detected, determining the subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
based on the subtitle offset type corresponding to each subtitle to be detected, determining identifiers corresponding to the subtitles to be detected respectively;
and determining an identifier sequence based on identifiers corresponding to the subtitles to be detected respectively, and determining the object identification marked in the video to be detected based on the identifier sequence.
11. The method of claim 10, wherein determining the subtitle offset type for each video frame in the video to be detected comprises:
Acquiring an original video corresponding to the video to be detected;
and under the same video frame dimension, determining the subtitle offset type corresponding to each video frame in the video to be detected based on the position relation between the subtitle to be detected in each video frame in the video to be detected and the original subtitle in the corresponding video frame in the original video.
12. The method of claim 11, wherein the video frame dimensions include a temporal dimension and a spatial dimension, and wherein after the obtaining the original video corresponding to the video to be detected, the method further comprises: and respectively aligning the video to be detected with the original video in a time dimension and a space dimension.
13. The method according to claim 10, wherein for each subtitle to be detected in the video to be detected, determining a subtitle offset type corresponding to each subtitle to be detected based on a subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected includes:
for each subtitle to be detected, determining at least one video frame containing the same subtitle content;
and determining the number of frames of the video frames corresponding to different subtitle offset types in the at least one video frame, and taking the subtitle offset type corresponding to the maximum number of frames as the subtitle offset type corresponding to the current subtitle to be detected.
14. A marked video generating apparatus, the apparatus comprising:
the acquisition module is used for acquiring the object identifier and determining an original subtitle file of the target video requested by the object identifier;
the mapping module is used for mapping the object identifier into an identifier sequence and determining subtitle offset types corresponding to the identifiers in the identifier sequence respectively;
the determining module is used for determining identifiers corresponding to the subtitles to be offset in the original subtitle file respectively;
the determining module is further configured to obtain an offset subtitle obtained by performing corresponding subtitle offset on the subtitle to be offset according to a subtitle offset type to which the identifier corresponding to the subtitle to be offset belongs;
the determining module is further used for determining a mark subtitle file according to the plurality of offset subtitles; the mark subtitle file is used for forming mark video corresponding to the object identifier together with the target video.
15. A device for detecting a video marker, the device comprising:
the acquisition module is used for acquiring the video to be detected and determining the subtitle offset type corresponding to each video frame in the video to be detected;
The determining module is used for determining, for each subtitle to be detected in the video to be detected, a subtitle offset type corresponding to each subtitle to be detected respectively based on the subtitle offset type corresponding to at least one video frame corresponding to the subtitle to be detected;
the determining module is further configured to determine identifiers corresponding to the subtitles to be detected respectively based on subtitle offset types corresponding to the subtitles to be detected respectively;
the determining module is further configured to determine an identifier sequence based on identifiers corresponding to the subtitles to be detected, and determine an object identifier marked in the video to be detected based on the identifier sequence.
16. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 9 or the steps of the method of any one of claims 10 to 13.
17. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 9 or the steps of the method of any one of claims 10 to 13.
18. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, realizes the steps of the method of any one of claims 1 to 9 or the steps of the method of any one of claims 10 to 13.