
CN112330522A - Watermark removal model training method and device, computer equipment and storage medium

Watermark removal model training method and device, computer equipment and storage medium

Info

Publication number
CN112330522A
Authority
CN
China
Prior art keywords
image
watermark
loss value
trained
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011238056.2A
Other languages
Chinese (zh)
Other versions
CN112330522B (en)
Inventor
张少林
宁欣
曾庆亮
许少辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wave Kingdom Co ltd
Original Assignee
Shenzhen Wave Kingdom Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wave Kingdom Co ltd filed Critical Shenzhen Wave Kingdom Co ltd
Priority to CN202011238056.2A priority Critical patent/CN112330522B/en
Publication of CN112330522A publication Critical patent/CN112330522A/en
Application granted granted Critical
Publication of CN112330522B publication Critical patent/CN112330522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/0021 - Image watermarking
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 - Image coding
    • G06T 9/002 - Image coding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The application relates to a watermark removal model training method and apparatus, a computer device, and a storage medium. The method comprises the following steps: extracting a watermark image and a corresponding clean image from a sample image data set; inputting the extracted images into a watermark removal model to be trained, performing three rounds of style migration to obtain a style migration result, and identifying a target image in the style migration result against the extracted images to obtain an image recognition result; calculating an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image; calculating a target loss value corresponding to the watermark removal model to be trained according to these loss values; and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, then stopping model training to obtain the trained watermark removal model. The method can improve the quality of images after watermark removal.

Description

Watermark removal model training method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a watermark removal model training method and apparatus, a computer device, and a storage medium.
Background
With the development of digital media and computer technology, images and other digital media spread over the internet, where people can download and use them. To protect the copyright of an image, a watermark is often added to it. Because the watermark interferes with or partially destroys the intrinsic information of the image, the watermark needs to be removed before the image can be put to full use.
At present, watermark removal can be performed on a watermark image with a generative adversarial model to obtain a corresponding clean image. However, during watermark removal with a conventional generative adversarial model, original information of the watermark image may be lost, resulting in a lower-quality clean image.
Disclosure of Invention
In view of the above, it is desirable to provide a watermark removal model training method and apparatus, a computer device, and a storage medium that can improve the quality of images after watermark removal.
A watermark removal model training method, the method comprising:
obtaining a sample image dataset;
extracting a watermark image and a clean image corresponding to the watermark image in the sample image data set;
inputting the watermark image and the clean image into a watermark removal model to be trained, performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and identifying a target image in the style migration result against the watermark image and the clean image to obtain an image recognition result;
calculating an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image;
calculating a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value;
and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, then stopping model training to obtain the trained watermark removal model.
In one embodiment, the inputting the watermark image and the clean image into a watermark removal model to be trained, and performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result includes:
inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration, to obtain a non-watermark image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark adding image corresponding to the non-watermark image, a first watermark removing image corresponding to the watermarked image, a second watermark adding image corresponding to the watermark image, and a second watermark removing image corresponding to the clean image;
and generating a style migration result according to the non-watermark image, the watermarked image, the first watermark adding image, the first watermark removing image, the second watermark adding image and the second watermark removing image.
In one embodiment, the generator of the watermark removal model to be trained includes a first generator and a second generator, and the inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration, to obtain a non-watermark image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark adding image corresponding to the non-watermark image, a first watermark removing image corresponding to the watermarked image, a second watermark adding image corresponding to the watermark image, and a second watermark removing image corresponding to the clean image, includes:
inputting the watermark image into a first generator of the watermark removal model to be trained for the first style migration, inputting the clean image into a second generator of the watermark removal model to be trained for the first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermarked image corresponding to the clean image through the second generator;
inputting the watermark-free image into the second generator for the second style migration, inputting the watermarked image into the first generator for the second style migration, outputting a first watermark adding image corresponding to the watermark-free image through the second generator, and outputting a first watermark removing image corresponding to the watermarked image through the first generator;
inputting the clean image into the first generator for the third style migration, inputting the watermark image into the second generator for the third style migration, outputting a second watermark removing image corresponding to the clean image through the first generator, and outputting a second watermark adding image corresponding to the watermark image through the second generator.
In one embodiment, the style migration result includes a non-watermark image, a first watermark adding image, a first watermark removing image, a second watermark adding image and a second watermark removing image, and the calculating the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image includes:
calculating an adversarial loss value corresponding to the watermark removal model to be trained according to the image recognition result;
calculating a cycle consistency loss value corresponding to the watermark removal model to be trained according to a first watermark removal image and a first watermark adding image in the style migration result, the watermark image and the clean image;
and calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to a second watermark adding image and a second watermark removing image in the style migration result, the watermark image and the clean image.
In one embodiment, the watermark removal model to be trained includes two sub-networks, each sub-network includes a generator and a discriminator, and the generator and the discriminator each include an encoder and share one encoder; the performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached and stopping the model training to obtain the trained watermark removal model includes:
fixing the generation parameters of the decoder in each generator of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminator in each sub-network according to the target loss value to obtain an adjusted loss value;
fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters include the encoding parameters corresponding to the encoder;
and repeating the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until the preset condition is reached, stopping model training, determining a model generator in the watermark removal model to be trained, and storing the model generator and the current generation parameters corresponding to the model generator to obtain the trained watermark removal model, wherein the current generation parameters include the current encoder parameters corresponding to the encoder in the model generator.
An image watermark removal method, the method comprising:
acquiring an image to be processed;
inputting the image to be processed into a generator of a trained watermark removal model, encoding the image to be processed through an encoder in the generator, and outputting image features, wherein the trained watermark removal model is obtained by adversarial training on a sample image data set, and during the adversarial training the target loss value corresponding to the watermark removal model is calculated from an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value; the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value are calculated by performing style migration on the sample image data set;
and performing style migration on the image features through the generator to obtain a clean image corresponding to the image to be processed.
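For illustration only, a minimal inference sketch under the above description is given below, assuming a PyTorch implementation in which the trained first generator (shared encoder plus residual decoder) has been saved to disk. The module, class and file names (model, Generator, dewatermark_generator.pth) and the normalization constants are hypothetical and not taken from the application.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical import: Generator is assumed to be the trained first generator
# (shared encoder + residual-block decoder) described in this application.
from model import Generator

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

generator = Generator().to(device)
generator.load_state_dict(torch.load("dewatermark_generator.pth", map_location=device))
generator.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

image = Image.open("watermarked.jpg").convert("RGB")
x = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    # The generator encodes the image into features and decodes them into
    # a watermark-free image, i.e. one style migration.
    clean = generator(x)

# Map the output from [-1, 1] back to [0, 1] and save it.
clean = (clean.squeeze(0).cpu() * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(clean).save("clean.png")
```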
A watermark removal model training apparatus, the apparatus comprising:
the style migration module is used for acquiring a sample image data set, extracting a watermark image and a clean image corresponding to the watermark image from the sample image data set, inputting the watermark image and the clean image into a watermark removal model to be trained, and performing three rounds of style migration on them through the watermark removal model to obtain a style migration result;
the loss calculation module is used for calculating an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the watermark image and the clean image, and calculating a target loss value corresponding to the watermark removal model according to the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value;
and the adversarial training module is used for performing adversarial training on the watermark removal model according to the target loss value until a preset condition is reached, then stopping the model training to obtain the trained watermark removal model.
An image watermark removal apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed;
the image encoding module is used for inputting the image to be processed into an encoder of a trained watermark removal model for encoding and outputting image features, the trained watermark removal model being obtained by adversarial training on a sample image data set, wherein, during the adversarial training, a target loss value corresponding to the watermark removal model is calculated from an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value;
and the watermark removal module is used for inputting the image features into the generator of the trained watermark removal model to remove the watermark and obtain a clean image corresponding to the image to be processed.
A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, the processor implementing the steps in the various method embodiments described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the respective method embodiment described above.
According to the watermark removal model training method and apparatus, the computer device and the storage medium, a sample image data set is obtained, the watermark image and the clean image corresponding to it are extracted from the sample image data set, three rounds of style migration are performed on them through the watermark removal model to be trained, and the target image in the style migration result is identified against the watermark image and the clean image to obtain an image recognition result. The sample data set used for model training only needs a batch of watermark images and the clean images corresponding to them, without any additional labeling, which reduces the time consumed by manual annotation. An adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained are calculated from the style migration result, the image recognition result, the watermark image and the clean image; a target loss value is then calculated from these loss values, and adversarial training is performed on the watermark removal model to be trained according to the target loss value until a preset condition is reached, at which point model training stops and the trained watermark removal model is obtained. The adversarial loss value removes the modal difference between the generated watermark-free and watermarked images and their corresponding input images, the cycle consistency loss value ensures that the image content is unchanged before and after watermark removal, and the identity reconstruction loss value ensures that the color composition of the image is the same before and after watermark removal. The trained watermark removal model therefore removes the watermark effectively while preserving the original information of the input image, avoiding information loss and improving the quality of the clean images output by the model.
Drawings
FIG. 1 is a diagram of an embodiment of an application environment of a watermark removal model training method;
FIG. 2 is a schematic flowchart of a method for training a watermark removal model according to an embodiment;
FIG. 3 is a schematic flowchart of the step of inputting a watermark image and a clean image into a watermark removal model to be trained and performing three rounds of style migration on them to obtain a style migration result, in one embodiment;
FIG. 4 is a diagram illustrating the network structure of any sub-network of a watermark removal model to be trained according to an embodiment;
FIG. 5 is a schematic flowchart of the step of inputting a watermark image and a clean image into the corresponding generators of a watermark removal model to be trained for three rounds of style migration, in one embodiment;
FIG. 6 is a flowchart illustrating the step of calculating an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to a watermark removal model to be trained according to a style migration result, an image recognition result, a watermark image and a clean image, in one embodiment;
FIG. 7 is a schematic flowchart of the step of performing adversarial training on a watermark removal model to be trained according to a target loss value until a preset condition is reached and stopping the model training to obtain a trained watermark removal model, in one embodiment;
FIG. 8 is a flowchart illustrating an image watermark removal method according to an embodiment;
FIG. 9 is a block diagram showing the structure of a watermark removal model training apparatus according to an embodiment;
FIG. 10 is a block diagram showing the configuration of an image watermark removal apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The watermark removal model training method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 and the server 104 communicate via a network. When the watermark removal model needs to be trained, the terminal 102 sends an initial image data set to the server 104, and the server 104 preprocesses the initial image data set to obtain a sample image data set. After acquiring the sample image data set, the server 104 extracts a watermark image and the clean image corresponding to the watermark image from the sample image data set, inputs them into the watermark removal model to be trained, performs three rounds of style migration on them through the watermark removal model to be trained to obtain a style migration result, and identifies a target image in the style migration result against the watermark image and the clean image to obtain an image recognition result. The server 104 calculates an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the watermark image and the clean image, calculates a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value, and performs adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, then stops model training to obtain the trained watermark removal model. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a watermark removal model training method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
at step 202, a sample image dataset is acquired.
The sample image data set is the training set used for training the watermark removal model and may include a plurality of sample images, where a plurality means two or more. The image categories of the sample images may include a watermark image category and a clean image category, and the watermark images in the watermark image category correspond one-to-one to the clean images in the clean image category. The watermark in a watermark image may be a digital watermark; a clean image is an image that contains no watermark.
In one embodiment, the sample image data set may be obtained from the terminal, may be obtained by preprocessing a pre-stored initial image data set, or may be obtained by preprocessing an initial image data set after obtaining the initial image data set sent by the terminal. The initial image data set refers to images subjected to watermarking processing, and includes a clean image and a watermark image corresponding to the clean image.
When the server obtains the sample image data set by preprocessing a pre-stored initial image data set, the initial image data set may be pre-constructed by the server and stored on it. Specifically, the server may obtain a plurality of clean images and add random watermarks to each clean image to obtain the watermark image corresponding to each clean image; it then marks the clean images as the clean image category and the watermarked results as the watermark image category, obtains the initial image data set, and stores it. A random watermark means that the size, position, color, transparency, number and other properties of the added watermarks are chosen at random, so the watermarks within one watermark image can differ from each other, and the watermarks across different watermark images can also differ. Because the watermarks are added randomly, their number and positions are uncertain, which yields richer image data and improves the accuracy of the trained watermark removal model; in practical use, the trained model can then remove watermarks cleanly even when the watermarks in an image are densely distributed and highly random.
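The application does not give code for building this initial image data set. The sketch below shows one plausible way to overlay semi-transparent text watermarks at random positions, colors, opacities and counts using Pillow; the function name, the watermark text, and all parameter ranges are illustrative assumptions.

```python
import random
from PIL import Image, ImageDraw, ImageFont

def add_random_watermarks(clean: Image.Image, text: str = "SAMPLE") -> Image.Image:
    """Overlay a random number of semi-transparent text watermarks on a copy
    of the clean image, with random position, color and transparency, as
    described for the initial image data set."""
    watermarked = clean.convert("RGBA")
    overlay = Image.new("RGBA", watermarked.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    for _ in range(random.randint(1, 5)):              # random number of watermarks
        # A TrueType font would also allow a random size; the default bitmap
        # font is used here so the sketch has no external file dependency.
        font = ImageFont.load_default()
        x = random.randint(0, max(1, watermarked.width - 80))   # random position
        y = random.randint(0, max(1, watermarked.height - 20))
        color = tuple(random.randint(0, 255) for _ in range(3))  # random color
        alpha = random.randint(60, 180)                           # random transparency
        draw.text((x, y), text, font=font, fill=color + (alpha,))
    return Image.alpha_composite(watermarked, overlay).convert("RGB")
```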
Further, after obtaining the watermark image corresponding to each clean image, the server may associate the clean image with its watermark image. Specifically, the terminal may add an association identifier to the clean image or to any one of the watermark images corresponding to it, so that the associated image can be looked up quickly from the identifier. The association identifier marks images that have an association relationship; it may be, for example, an image serial number or an image name. For example, the association identifier of a clean image may be the image name of the corresponding watermark image, and the association identifier of a watermark image may be the image name of the corresponding clean image. The server then obtains the initial image data set from the associated images and stores it.
When the server obtains the sample image data set by preprocessing a pre-stored initial image data set or by preprocessing an initial image data set sent by the terminal, the server needs to preprocess the initial image data, wherein the preprocessing may include resizing, cropping, normalization, and the like.
In one embodiment, acquiring the sample image data set comprises: acquiring an initial image data set; resizing the initial image data set to obtain an adjusted initial image data set; randomly cropping the adjusted initial image data set to obtain a cropped initial image data set; and normalizing the cropped initial image data set to obtain the sample image data set. The initial image data set may be pre-stored or obtained from the terminal. The server resizes the clean images and the watermark images in the initial image data set, scaling all images to the same size, for example 286 x 286; the resizing may use any interpolation method such as nearest-neighbor, linear or area interpolation. The server then crops the resized images to a fixed size, for example 256 x 256, and the cropping may be random. Cropping the resized initial image data set expands the data and weakens data noise, which improves the accuracy and stability of the watermark removal model. The server may further normalize the cropped initial image data set, for example with z-score (zero-mean) normalization or min-max (dispersion) normalization. Normalizing the cropped initial image data set improves the accuracy of the subsequent watermark removal model and speeds up its convergence.
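The resize, random-crop and normalize pipeline described above maps directly onto standard torchvision transforms. The sketch below is an assumed implementation: the 286 x 286 and 256 x 256 sizes come from the examples in this paragraph, while the simple affine normalization to [-1, 1] stands in for the z-score or min-max options mentioned above.

```python
from torchvision import transforms

# Resize every sample to 286 x 286, randomly crop a 256 x 256 patch
# (data expansion, noise reduction), then normalize to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((286, 286)),                  # e.g. bilinear interpolation
    transforms.RandomCrop(256),
    transforms.ToTensor(),                          # [0, 255] -> [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],      # [0, 1] -> [-1, 1]
                         std=[0.5, 0.5, 0.5]),
])
```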
Step 204, extracting the watermark image and the clean image corresponding to the watermark image in the sample image data set.
Step 206, inputting the watermark image and the clean image into a watermark removal model to be trained, performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image recognition result.
The image categories to which the sample images in the sample image dataset correspond may include a watermark image category and a clean image category. The watermark images in the watermark image category are in one-to-one correspondence with the clean images in the clean image category. The server may extract the watermark image in a watermark image category and extract a clean image corresponding to the watermark image in a clean image category.
The watermark removal model to be trained is the watermark removal model that still needs to be trained; it performs style migration on an input image to obtain a style-migrated image. For example, the watermark removal model to be trained may be obtained by improving the network structure of a generative adversarial network model. Specifically, a second pair of generator and discriminator is added on top of the generative adversarial network, so the watermark removal model to be trained may include two sub-networks, each containing one generator and one discriminator, and both the generator and the discriminator include an encoder. The generators in the two sub-networks may play different roles: one generates images without watermarks, and the other generates images carrying watermarks. The goal of each generator is to make its generated images realistic enough to fool the discriminator, while the goal of each discriminator is to distinguish as correctly as possible whether the image output by the generator is a real image or a generated one.
The target image is the image in the style migration result that needs to be recognized. The server can call the watermark removal model to be trained, input the extracted watermark image and clean image into it, and perform three rounds of style migration on them through the model to obtain a style migration result. Style migration means converting the image style of an image; for example, performing style migration only once on a watermark image yields the clean image corresponding to it, that is, the image obtained by removing the watermark from the watermark image. After the three rounds of style migration on the watermark image and the clean image, the style migration result can include the images obtained after each round; the three rounds correspond to three style migration passes carried out inside the watermark removal model to be trained.
The images obtained after the first round of style migration may include a watermark-free image corresponding to the watermark image and a watermarked image corresponding to the clean image; the images obtained after the second round may include a first watermark adding image corresponding to the watermark-free image and a first watermark removing image corresponding to the watermarked image; and the images obtained after the third round may include a second watermark adding image corresponding to the watermark image and a second watermark removing image corresponding to the clean image. The target image in the style migration result is recognized through the watermark removal model to be trained to obtain an image recognition result. The target image can include the watermark-free image and the watermarked image obtained after the style migration. Recognition means judging whether the target image is a real image, that is, whether the watermark-free image is the clean image input to the watermark removal model to be trained and whether the watermarked image is the watermark image input to it. The image recognition result may be the probability that the target image is a real image.
And 208, calculating an adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image.
The style migration result may include the non-watermark image corresponding to the watermark image, the watermarked image corresponding to the clean image, the first watermark adding image corresponding to the non-watermark image, the first watermark removing image corresponding to the watermarked image, the second watermark adding image corresponding to the watermark image, and the second watermark removing image corresponding to the clean image. The server can calculate the cycle consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the watermark image and the clean image, and calculate the adversarial loss value corresponding to the watermark removal model to be trained according to the image recognition result. The cycle consistency loss value is the cycle consistency loss of the generators in the watermark removal model to be trained and can represent the difference between the first watermark adding image and the watermark image and the difference between the first watermark removing image and the clean image. The identity reconstruction loss value is the identity reconstruction loss of the generators; it can represent the difference between the second watermark adding image and the watermark image and the difference between the second watermark removing image and the clean image, and adversarially training the watermark removal model with it also ensures that the color composition of the image is the same before and after the watermark is removed. The adversarial loss value comprises the adversarial loss of the generators and the adversarial loss of the discriminators; it can represent the difference between the watermark-free image and the clean image and the difference between the watermarked image and the watermark image, removes the modal difference between the generated watermark-free and watermarked images and their corresponding input images, and also makes it possible to train on an unannotated sample image data set, reducing the time and effort of manual labeling. The larger the difference between the output and the input of the watermark removal model to be trained, the larger each loss value.
And step 210, calculating a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value.
After the server calculates the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value, it can divide them into a generator loss and a discriminator loss and calculate the target loss value corresponding to the watermark removal model to be trained from the loss values in the generator loss and the loss value in the discriminator loss. The target loss value is the final loss of the watermark removal model to be trained and may include a generator loss value and a discriminator loss value.
Specifically, the server may divide the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value into a generator loss and a discriminator loss: the generator loss may include the generator part of the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value, while the discriminator loss may include the discriminator part of the adversarial loss value. The server performs a weighted operation on the loss values in the generator loss to obtain the generator loss value, and takes the generator loss value and the discriminator loss value as the target loss value corresponding to the watermark removal model to be trained.
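Under the description above, the target loss can be assembled as in the sketch below: the generator side is a weighted sum of the generator adversarial, cycle consistency and identity reconstruction terms, and the discriminator side is kept separate. The weights lambda_cycle and lambda_identity are hypothetical; the application only states that a weighted operation is used.

```python
def target_loss(loss_G_A, loss_G_B,            # generator adversarial losses
                loss_D_A, loss_D_B,            # discriminator adversarial losses
                loss_cycle, loss_identity,
                lambda_cycle=10.0, lambda_identity=5.0):
    """Split the losses into a generator part and a discriminator part and
    weight the generator terms; the two values together form the target loss."""
    generator_loss = (loss_G_A + loss_G_B
                      + lambda_cycle * loss_cycle
                      + lambda_identity * loss_identity)
    discriminator_loss = loss_D_A + loss_D_B
    return generator_loss, discriminator_loss
```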
And 212, performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, then stopping model training to obtain the trained watermark removal model.
The server performs adversarial training on the two pairs of generators and discriminators in the watermark removal model to be trained according to the target loss value. Adversarial training means training the generators and the discriminators against each other. Specifically, because each generator and discriminator includes an encoder and the encoder is shared, the server can apply decoupled training to the watermark removal model to be trained, where decoupled training means that the encoding parameters of the encoder are adjusted only when the discriminator is trained. After each pass of training, the model parameters are adjusted once, and the iterative training is repeated until a preset condition is reached. The preset condition may be that the loss value of the model no longer decreases, or that the loss value of the model is below a threshold. At that point, the server may stop the model training and store the current target generator and the generation parameters corresponding to it, obtaining the trained watermark removal model. The target generator is the generator that produces images without watermarks; it includes an encoder, and its generation parameters include the encoding parameters of that encoder.
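A sketch of this alternating, decoupled update is given below. The decoupling detail follows the description above (the shared encoders are updated only in the discriminator step, and the generator step updates only the decoders with the loss recomputed after that step). The module names, the compute_target_loss helper, the optimizer choice, the learning rate and the epoch count are all assumptions, not values from the application.

```python
import itertools
import torch

def train_watermark_removal(enc_X, enc_Y, dec_G, dec_F, dec_DY, dec_DX,
                            compute_target_loss, loader, num_epochs=200, lr=2e-4):
    """Alternating (decoupled) adversarial training. compute_target_loss is
    expected to return (generator_loss, discriminator_loss) for one batch;
    all module and argument names here are hypothetical."""
    # Discriminator parameters include the shared encoders, so the encoders
    # are only updated in the discriminator step.
    opt_D = torch.optim.Adam(
        itertools.chain(enc_X.parameters(), enc_Y.parameters(),
                        dec_DY.parameters(), dec_DX.parameters()), lr=lr)
    # The generator step fixes the discriminators and updates only the decoders.
    opt_G = torch.optim.Adam(
        itertools.chain(dec_G.parameters(), dec_F.parameters()), lr=lr)

    for _ in range(num_epochs):
        for watermark_img, clean_img in loader:
            # 1) Fix the generator decoders, adjust the discriminators.
            _, disc_loss = compute_target_loss(watermark_img, clean_img)
            opt_D.zero_grad()
            disc_loss.backward()
            opt_D.step()

            # 2) Fix the discriminator parameters (including the encoders),
            #    adjust the generator decoders with the recomputed loss.
            gen_loss, _ = compute_target_loss(watermark_img, clean_img)
            opt_G.zero_grad()
            gen_loss.backward()
            opt_G.step()
        # A preset stopping condition (loss no longer decreasing, or below a
        # threshold) would break out of the loop here.
```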
In this embodiment, a sample image data set is obtained, the watermark image and the clean image corresponding to it are extracted from the sample image data set, three rounds of style migration are performed on them through the watermark removal model to be trained, and the target image in the style migration result is identified against the watermark image and the clean image to obtain an image recognition result. The sample data set used for model training only needs a batch of watermark images and the clean images corresponding to them, without any additional labeling, which reduces the time consumed by manual annotation. An adversarial loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained are calculated from the style migration result, the image recognition result, the watermark image and the clean image; a target loss value is calculated from these loss values; and adversarial training is performed on the watermark removal model to be trained according to the target loss value until a preset condition is reached, at which point model training stops and the trained watermark removal model is obtained. The adversarial loss value removes the modal difference between the generated watermark-free and watermarked images and their corresponding input images, the cycle consistency loss value ensures that the image content is unchanged before and after watermark removal, and the identity reconstruction loss value ensures that the color composition of the image is the same before and after watermark removal. The trained watermark removal model therefore removes the watermark effectively while preserving the original information of the input image, avoiding information loss and improving the quality of the clean images output by the model.
In an embodiment, as shown in fig. 3, inputting the watermark image and the clean image into a watermark removal model to be trained and performing three rounds of style migration on them through the watermark removal model to be trained to obtain a style migration result includes:
Step 302, inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration, obtaining a non-watermark image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark adding image corresponding to the non-watermark image, a first watermark removing image corresponding to the watermarked image, a second watermark adding image corresponding to the watermark image, and a second watermark removing image corresponding to the clean image.
And 304, generating a style migration result according to the non-watermark image, the watermarked image, the first watermark adding image, the first watermark removing image, the second watermark adding image and the second watermark removing image.
The watermark removal model to be trained includes two sub-networks, each of which includes a generator and a discriminator; the generator and the discriminator each include an encoder and share one encoder. Compared with a conventional generative adversarial network model, in which the generator and the discriminator each have their own encoder, this reduces the model parameters and gives a simpler network structure.
The two sub-networks in the watermark removal model to be trained are independent of each other and have the same network structure. Fig. 4 is a schematic diagram of the network structure of either sub-network of the watermark removal model to be trained. The encoder may be a convolutional neural network comprising one convolutional layer and two down-sampling layers. The generator may combine a convolutional neural network with a residual block network: the convolutional neural network is the encoder structure, and the residual block structure comprises six residual blocks, two up-sampling layers and one convolutional layer. The residual block network may be called the first decoder, i.e. the generator network comprises the encoder and a first decoder. The discriminator may be a convolutional neural network comprising two convolutional layers connected to the encoder; these two convolutional layers may be called a decoder, i.e. the discriminator comprises the encoder and a second decoder.
In one embodiment, the residual blocks in the generator may use short connections. A short connection means that, for each residual block, one or more layers of the block can be skipped so that the input is added directly to the output. Designing the residual blocks with short connections requires no extra parameters or computational complexity, alleviates the network degradation and vanishing-gradient problems of conventional deep networks, lets a deeper neural network learn features effectively, and thus lets the watermark removal model to be trained learn image features effectively.
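To make the shared-encoder structure concrete, the sketch below follows the layer counts given above (one convolution plus two down-sampling layers in the encoder; six residual blocks, two up-sampling layers and one convolution in the generator decoder; two convolutions in the discriminator decoder). Kernel sizes, channel widths, normalization and activation choices are illustrative assumptions, not values from the application.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block with a short (skip) connection: the input is added
    directly to the output of the convolutional branch."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)          # short connection

class Encoder(nn.Module):
    """One convolution followed by two down-sampling layers."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 7, padding=3), nn.ReLU(True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(True),      # down-sample 1
            nn.Conv2d(ch * 2, ch * 4, 3, stride=2, padding=1), nn.ReLU(True),  # down-sample 2
        )

    def forward(self, x):
        return self.net(x)

class GeneratorDecoder(nn.Module):
    """Six residual blocks, two up-sampling layers and a final convolution."""
    def __init__(self, ch=256, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            *[ResidualBlock(ch) for _ in range(6)],
            nn.ConvTranspose2d(ch, ch // 2, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(True),
            nn.ConvTranspose2d(ch // 2, ch // 4, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(True),
            nn.Conv2d(ch // 4, out_ch, 7, padding=3), nn.Tanh(),
        )

    def forward(self, feat):
        return self.net(feat)

class DiscriminatorDecoder(nn.Module):
    """Two convolutional layers attached to the shared encoder features."""
    def __init__(self, ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(ch, 1, 3, padding=1),      # patch-wise realness score
        )

    def forward(self, feat):
        return self.net(feat)

# One sub-network: generator and discriminator share the same encoder instance.
encoder = Encoder()
generator = nn.Sequential(encoder, GeneratorDecoder())
discriminator = nn.Sequential(encoder, DiscriminatorDecoder())
```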
The server calls the watermark removal model to be trained, which includes two generators and two discriminators. The two generators can have different functions: one generates images without watermarks and the other generates images carrying watermarks. The server can input the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained, and the generators perform three rounds of style migration on them. Style migration means converting the image style of an image; for example, one round of style migration on a watermark image yields the clean image corresponding to it. Each pass through a generator performs one style migration, and the three rounds can be achieved through sequential operations and cross operations between the two generators.
Because each generator includes an encoder and a decoder, in each generator feature extraction is first performed by the encoder to obtain a corresponding feature vector, which is then decoded by the decoder to obtain the corresponding output image, completing one style migration. For example, in the first round of style migration, the watermark image and the clean image may be input into the encoders of the generators, and each encoder extracts features from its input image to obtain the feature vector it outputs. The feature vectors may include a first feature vector corresponding to the watermark image and a second feature vector corresponding to the clean image. The watermark removal model to be trained takes the first feature vector and the second feature vector as the inputs of the decoders in the corresponding generators to obtain a non-watermark image corresponding to the watermark image and a watermarked image corresponding to the clean image. Through the sequential and cross operations between the two generators, the watermark removal model to be trained completes the three rounds of style migration and obtains the non-watermark image, the watermarked image, the first watermark adding image, the first watermark removing image, the second watermark adding image and the second watermark removing image, which together form the style migration result.
In this embodiment, inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration makes it possible to subsequently calculate the adversarial loss value, the cycle consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained.
In one embodiment, as shown in fig. 5, the step of inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration includes:
step 502, inputting the watermark image into a first generator of a watermark removal model to be trained for carrying out first style migration, inputting the clean image into a second generator of the watermark removal model to be trained for carrying out first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermark image corresponding to the clean image through the second generator.
Step 504, inputting the watermark-free image into a second generator for second style migration, inputting the watermarked image into the second generator for second style migration, outputting a first watermark adding image corresponding to the watermark-free image through the second generator, and outputting a first watermark removing image corresponding to the watermarked image through the first generator.
Step 506, inputting the clean image into the first generator for the third-time style migration, inputting the watermark image into the second generator for the third-time style migration, outputting a second watermark-removed image corresponding to the clean image through the first generator, and outputting a second watermark-added image corresponding to the watermark image through the second generator.
The watermark removal model to be trained comprises two generators, a first generator and a second generator. A first generator may be used to generate an image without a watermark and a second generator may be used to generate an image carrying a watermark.
The server can input the watermark image into the first generator of the watermark removal model to be trained and input the clean image into the second generator; the first generator performs the first style migration on the watermark image to obtain a watermark-free image corresponding to the watermark image, and the second generator performs the first style migration on the clean image to obtain a watermarked image corresponding to the clean image. The watermark-free image and the watermarked image can be used to calculate the adversarial loss value corresponding to the watermark removal model to be trained.
The watermark removal model to be trained then inputs the watermark-free image into the second generator and the watermarked image into the first generator; the second generator performs the second style migration on the watermark-free image to obtain a first watermark adding image corresponding to it, and the first generator performs the second style migration on the watermarked image to obtain a first watermark removing image corresponding to it. The first watermark adding image and the first watermark removing image can be used to calculate the cycle consistency loss value corresponding to the watermark removal model to be trained.
The watermark removal model to be trained can also take the clean image as the input of the first generator and the watermark image as the input of the second generator; the first generator performs the third style migration on the clean image to obtain a second watermark removing image corresponding to the clean image, and the second generator performs the third style migration on the watermark image to obtain a second watermark adding image corresponding to the watermark image. The second watermark removing image and the second watermark adding image can be used to calculate the identity reconstruction loss value corresponding to the watermark removal model to be trained. In this way, the watermark removal model to be trained can calculate the cycle consistency loss from the differences between the first watermark adding image and the first watermark removing image and their corresponding input images, and can calculate the identity reconstruction loss from the differences between the second watermark removing image and the second watermark adding image and their corresponding input images.
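Written out with the two generators, the three passes above amount to the following sketch, in which G stands for the first generator (removes watermarks) and F for the second generator (adds watermarks), matching the notation used later in the text. The function and variable names are placeholders.

```python
def three_pass_style_migration(G, F, watermark_img, clean_img):
    """G: first generator (produces watermark-free images).
    F: second generator (produces watermarked images)."""
    # Pass 1: basic style migration, used for the adversarial loss.
    no_watermark = G(watermark_img)          # watermark-free image
    with_watermark = F(clean_img)            # watermarked image

    # Pass 2: cross operation, used for the cycle consistency loss.
    first_added = F(no_watermark)            # first watermark adding image vs. watermark_img
    first_removed = G(with_watermark)        # first watermark removing image vs. clean_img

    # Pass 3: identity mapping, used for the identity reconstruction loss.
    second_removed = G(clean_img)            # second watermark removing image vs. clean_img
    second_added = F(watermark_img)          # second watermark adding image vs. watermark_img

    return (no_watermark, with_watermark, first_added,
            first_removed, second_added, second_removed)
```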
In one embodiment, the generators of the watermark removal model to be trained include a first generator and a second generator, and the step of identifying the target image in the style migration result according to the watermark image and the clean image to obtain an image recognition result includes: taking the watermark-free image and the watermarked image in the style migration result as target images; and inputting the target images into the corresponding discriminators of the watermark removal model to be trained for recognition to obtain the image recognition result. The server inputs the watermark image and the clean image into the watermark removal model to be trained and performs three rounds of style migration on them through the generators; after the style migration result is obtained, the watermark-free image and the watermarked image in the style migration result can be extracted and used as the target images. The watermark-free image is the image output by the first generator, and the watermarked image is the image output by the second generator. The watermark removal model to be trained includes two discriminators, a first discriminator and a second discriminator; the first discriminator can be connected to the first generator, and the second discriminator to the second generator. The watermark-free image among the target images is taken as the input of the first discriminator, and the watermarked image as the input of the second discriminator; the first discriminator recognizes the difference between the watermark-free image and the clean image and outputs the probability that the watermark-free image is a real image, and the output of the first discriminator is taken as the first recognition result corresponding to the watermark-free image. The second discriminator recognizes the difference between the watermarked image and the watermark image and outputs the probability that the watermarked image is a real image, and its output is taken as the second recognition result corresponding to the watermarked image. The image recognition result can then be obtained from the first recognition result and the second recognition result.
In this embodiment, the discriminators of the watermark removal model to be trained are used to recognize the watermark-free image and the watermarked image in the style migration result, giving the probability that the watermark-free image is a real image and the probability that the watermarked image is a real image. This allows the server to calculate the adversarial loss value from the discriminator outputs and to remove the modal difference between the generated watermark-free and watermarked images and their corresponding input images.
In an embodiment, as shown in fig. 6, the step of calculating a countermeasure loss value, a cycle consistency loss value, and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image, and the clean image includes:
step 602, calculating a countermeasure loss value corresponding to the watermark removal model to be trained according to the image recognition result.
Step 604, calculating a cycle consistency loss value corresponding to the watermark removal model to be trained according to the first watermark removal image, the first watermark adding image, the watermark image and the clean image in the style migration result.
Step 606, calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark adding image, the second watermark removal image, the watermark image and the clean image in the style migration result.
The style migration result includes the watermark-free image, the watermarked image, the first watermark adding image, the first watermark removal image, the second watermark adding image and the second watermark removal image. The image recognition result includes the probability that the watermark-free image is a real image and the probability that the watermarked image is a real image.
After obtaining the style migration result and the image recognition result, the server may use an MSE_loss (mean square error) loss function to calculate the countermeasure loss value corresponding to the first generator according to the probability, in the image recognition result, that the watermark-free image is a real image; the first generator may be denoted by G and its countermeasure loss value by loss_G_A. The MSE_loss function is likewise used to calculate the countermeasure loss value corresponding to the second generator according to the probability that the watermarked image is a real image; the second generator may be denoted by F and its countermeasure loss value by loss_G_B. The server may also use the mean absolute error L1_loss function to calculate the countermeasure loss value corresponding to the first discriminator according to the probability that the watermark-free image is a real image; the first discriminator may be denoted by D_Y and its countermeasure loss value by loss_D_A. The L1_loss function is likewise used to calculate the countermeasure loss value corresponding to the second discriminator according to the probability that the watermarked image is a real image; the second discriminator may be denoted by D_X and its countermeasure loss value by loss_D_B. The server can take the sum of loss_G_A and loss_G_B as the countermeasure loss value of the generators in the watermark removal model to be trained, and the sum of loss_D_A and loss_D_B as the countermeasure loss value of the discriminators in the watermark removal model to be trained.
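By way of illustration and not limitation, these four countermeasure loss terms may be sketched as follows; the inclusion of real-sample terms in the discriminator losses is a common completion assumed here rather than something this paragraph states explicitly, and all names are hypothetical:

import torch
from torch import nn

mse, l1 = nn.MSELoss(), nn.L1Loss()

def countermeasure_losses(d_Y_fake, d_X_fake, d_Y_real, d_X_real):
    # Generator terms: MSE between the "is real" probability of the generated image and 1.
    loss_G_A = mse(d_Y_fake, torch.ones_like(d_Y_fake))     # first generator G
    loss_G_B = mse(d_X_fake, torch.ones_like(d_X_fake))     # second generator F
    # Discriminator terms: mean absolute error (L1) against real/fake targets (assumed targets).
    loss_D_A = l1(d_Y_real, torch.ones_like(d_Y_real)) + l1(d_Y_fake, torch.zeros_like(d_Y_fake))  # D_Y
    loss_D_B = l1(d_X_real, torch.ones_like(d_X_real)) + l1(d_X_fake, torch.zeros_like(d_X_fake))  # D_X
    return loss_G_A + loss_G_B, loss_D_A + loss_D_B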
The server extracts the first watermark adding image from the style migration result and calculates the difference between the first watermark adding image and the watermark image by using the mean absolute error L1_loss function, so as to obtain a first cycle consistency loss value loss_cycle_A of the generators in the watermark removal model to be trained. The server also extracts the first watermark removal image from the style migration result and calculates the difference between the first watermark removal image and the clean image by using the L1_loss function, so as to obtain a second cycle consistency loss value loss_cycle_B. The server can add loss_cycle_A and loss_cycle_B to obtain the cycle consistency loss value corresponding to the watermark removal model to be trained.
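For illustration only, the cycle consistency loss of this paragraph may be sketched as follows (parameter names are hypothetical):

import torch
from torch import nn

l1 = nn.L1Loss()

def cycle_consistency_loss(first_adding_img, watermark_img, first_removal_img, clean_img):
    loss_cycle_A = l1(first_adding_img, watermark_img)   # first watermark adding image vs. watermark image
    loss_cycle_B = l1(first_removal_img, clean_img)      # first watermark removal image vs. clean image
    return loss_cycle_A + loss_cycle_B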
The server extracts the second watermark removal image from the style migration result and calculates the difference between the second watermark removal image and the clean image by using the mean absolute error L1_loss function, so as to obtain a first identity reconstruction loss value idt_A corresponding to the watermark removal model to be trained. The server also extracts the second watermark adding image from the style migration result and calculates the difference between the second watermark adding image and the watermark image by using the L1_loss function, so as to obtain a second identity reconstruction loss value idt_B. The server can add idt_A and idt_B to obtain the identity reconstruction loss value corresponding to the watermark removal model to be trained.
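Likewise, a non-limiting sketch of the identity reconstruction loss of this paragraph (parameter names are hypothetical):

import torch
from torch import nn

l1 = nn.L1Loss()

def identity_reconstruction_loss(second_removal_img, clean_img, second_adding_img, watermark_img):
    idt_A = l1(second_removal_img, clean_img)      # second watermark removal image vs. clean image
    idt_B = l1(second_adding_img, watermark_img)   # second watermark adding image vs. watermark image
    return idt_A + idt_B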
In this embodiment, the server calculates a countermeasure loss value corresponding to the watermark removal model to be trained according to the image recognition result, calculates a cycle consistency loss value corresponding to the watermark removal model to be trained according to the first watermark removal image and the first watermark addition image, the watermark image, and the clean image in the style migration result, and calculates an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark addition image and the second watermark removal image, the watermark image, and the clean image in the style migration result. The loss of the watermark removal model to be trained can be rapidly and comprehensively calculated.
In one embodiment, the countermeasure loss value includes a generator countermeasure loss value and a discriminator countermeasure loss value, and calculating the target loss value of the watermark removal model to be trained based on the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value includes: acquiring a countermeasure weight corresponding to the generator countermeasure loss value, a cycle weight corresponding to the cycle consistency loss value and an identity weight corresponding to the identity reconstruction loss value, wherein the countermeasure weight, the cycle weight and the identity weight have a preset relationship; calculating a generator loss value corresponding to the watermark removal model to be trained according to the generator countermeasure loss value and the countermeasure weight, the cycle consistency loss value and the cycle weight, and the identity reconstruction loss value and the identity weight; and taking the generator loss value and the discriminator countermeasure loss value as the target loss value corresponding to the watermark removal model to be trained.
Because the countermeasure loss value includes both the countermeasure loss value of the generators in the watermark removal model to be trained and the countermeasure loss value of the discriminators, the server can acquire the countermeasure weight corresponding to the generator countermeasure loss value, the cycle weight corresponding to the cycle consistency loss value and the identity weight corresponding to the identity reconstruction loss value. The countermeasure weight, the cycle weight and the identity weight have a preset relationship; for example, the three weights may sum to one. The weights may be assigned when the model is trained. The server performs a weighted calculation over the generator countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value to obtain the generator loss value corresponding to the watermark removal model to be trained. The calculation formula for the generator loss value may be as follows:
loss_G=w1(loss_G_A+loss_G_B)+w2(loss_cycle_A+loss_cycle_B)+w3(idt_A+idt_B)
where loss_G represents the generator loss value, w1 represents the countermeasure weight, w2 represents the cycle weight and w3 represents the identity weight; loss_G_A represents the countermeasure loss value corresponding to the first generator, loss_G_B represents the countermeasure loss value corresponding to the second generator, loss_cycle_A represents the first cycle consistency loss value, loss_cycle_B represents the second cycle consistency loss value, idt_A represents the first identity reconstruction loss value and idt_B represents the second identity reconstruction loss value.
The discriminator opposition loss value among the opposition loss values can be calculated by the following formula:
loss_D=loss_D_A+loss_D_B
where loss_D represents the discriminator countermeasure loss value, loss_D_A represents the countermeasure loss value corresponding to the first discriminator and loss_D_B represents the countermeasure loss value corresponding to the second discriminator.
The server can take the generator loss value and the discriminator countermeasure loss value as the target loss value corresponding to the watermark removal model to be trained. In this embodiment, the server performs a weighted calculation over the individual generator losses to obtain the generator loss value, and uses the generator loss value together with the discriminator countermeasure loss value as the target loss value, so that the target loss value can be calculated according to the importance of each loss in the overall loss calculation. This improves the accuracy of the model loss calculation and thus the training efficiency and accuracy of the watermark removal model to be trained.
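Putting the two formulas above into code form as a non-limiting sketch; the default weight values are placeholders that merely sum to one in line with the preset relationship mentioned earlier, and are not values specified in this description:

def target_loss(loss_G_A, loss_G_B, loss_cycle_A, loss_cycle_B, idt_A, idt_B,
                loss_D_A, loss_D_B, w1=0.2, w2=0.5, w3=0.3):
    loss_G = w1 * (loss_G_A + loss_G_B) + w2 * (loss_cycle_A + loss_cycle_B) + w3 * (idt_A + idt_B)
    loss_D = loss_D_A + loss_D_B
    return loss_G, loss_D    # together these form the target loss value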
In an embodiment, as shown in fig. 7, performing countermeasure training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, and stopping the model training to obtain the trained watermark removal model includes:
and step 702, fixing the generation parameters of the decoder in each generator of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminators in each subnetwork according to the target loss value to obtain the adjusted loss value.
Step 704, fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters include the coding parameters corresponding to the encoder.
Step 706, repeating the step of performing countermeasure training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping the model training, determining the model generator in the watermark removal model to be trained, and storing the model generator and the current generation parameters corresponding to the model generator to obtain the trained watermark removal model, wherein the current generation parameters include the current encoder parameters corresponding to the encoder in the model generator.
The watermark removal model to be trained includes two sub-network structures, and each sub-network includes a generator and a discriminator. The generator consists of an encoder and a first decoder, the discriminator consists of the same encoder and a second decoder, and the generator and the discriminator of a sub-network therefore share one encoder.
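By way of illustration and not limitation, one possible sub-network structure with a shared encoder may be sketched as follows; the layer configuration, channel counts and class names are assumptions, since this description does not specify them:

import torch
from torch import nn

class SharedEncoder(nn.Module):
    """Shared coding layers used by both the generator and the discriminator of one sub-network."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Generator = shared encoder + first decoder (outputs an image)."""
    def __init__(self, encoder: SharedEncoder, ch: int = 64):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """Discriminator = the same shared encoder + second decoder (outputs a real-image probability map)."""
    def __init__(self, encoder: SharedEncoder, ch: int = 64):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Sequential(nn.Conv2d(ch * 2, 1, 3, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return self.decoder(self.encoder(x))

shared_1 = SharedEncoder()                                  # first sub-network
gen_G, disc_Y = Generator(shared_1), Discriminator(shared_1)
shared_2 = SharedEncoder()                                  # second sub-network
gen_F, disc_X = Generator(shared_2), Discriminator(shared_2)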
The watermark removal model to be trained can be trained in a decoupled manner. Decoupled training means that the coding parameters of the encoder are adjusted only when the discriminators in the model are trained. Therefore, when the discriminators are trained, the generation parameters of the decoders in the generators of the watermark removal model to be trained are fixed, and the discrimination parameters of the discriminators in the sub-networks, which include the coding parameters of the encoders, are adjusted according to the target loss value to obtain the adjusted loss value. When the generators are trained, the discrimination parameters of each discriminator are fixed and the generation parameters of the decoders are adjusted according to the adjusted loss value, thereby realizing decoupled training. Through the adversarial learning of the two pairs of generators and discriminators in the watermark removal model to be trained, the model parameters are gradually optimized, so that the first generator turns a watermarked image into a cleaner watermark-removed image and the second generator turns a watermark-free image into a watermarked image close to the existing watermark style, the two branches assisting each other in this adversarial process. The model is trained iteratively until a preset condition is reached, and the model training is then stopped. The preset condition may be that the loss value of the model no longer decreases or is smaller than a threshold value. The server stores the model generator and its current generation parameters, thereby obtaining the trained watermark removal model.
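Continuing the structural sketch above and only by way of illustration, one decoupled training step might look like the following; the optimizer choice, learning rate, weight values and the construction of the discriminator loss targets are assumptions rather than details given in this description:

import torch
from torch import nn

# Two separate optimizers over the two parameter groups: the discrimination parameters
# (second decoders plus the shared encoders) and the decoder generation parameters, so
# each update leaves the other group fixed.
opt_D = torch.optim.Adam(list(disc_Y.parameters()) + list(disc_X.parameters()), lr=2e-4)
opt_G = torch.optim.Adam(list(gen_G.decoder.parameters()) + list(gen_F.decoder.parameters()), lr=2e-4)

def decoupled_training_step(real_A, real_B, w1=0.2, w2=0.5, w3=0.3):
    """real_A: watermark image batch, real_B: clean image batch."""
    mse, l1 = nn.MSELoss(), nn.L1Loss()

    # Step 1: decoder generation parameters stay fixed; adjust the discrimination parameters.
    with torch.no_grad():                                   # generated images are treated as fixed inputs here
        fake_B, fake_A = gen_G(real_A), gen_F(real_B)
    p_rB, p_fB = disc_Y(real_B), disc_Y(fake_B)
    p_rA, p_fA = disc_X(real_A), disc_X(fake_A)
    loss_D = (l1(p_rB, torch.ones_like(p_rB)) + l1(p_fB, torch.zeros_like(p_fB)) +
              l1(p_rA, torch.ones_like(p_rA)) + l1(p_fA, torch.zeros_like(p_fA)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Step 2: discrimination parameters stay fixed; adjust the decoders with the adjusted loss.
    fake_B, fake_A = gen_G(real_A), gen_F(real_B)           # fresh forward pass after the discriminator update
    p_fB, p_fA = disc_Y(fake_B), disc_X(fake_A)
    loss_adv = mse(p_fB, torch.ones_like(p_fB)) + mse(p_fA, torch.ones_like(p_fA))
    loss_cyc = l1(gen_F(fake_B), real_A) + l1(gen_G(fake_A), real_B)
    loss_idt = l1(gen_G(real_B), real_B) + l1(gen_F(real_A), real_A)
    loss_G = w1 * loss_adv + w2 * loss_cyc + w3 * loss_idt
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()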
In this embodiment, by performing decoupling training on the watermark removal model to be trained, the quality of a clean image output by the trained watermark removal model can be improved.
In one embodiment, as shown in fig. 8, an image watermark removing method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 802, acquiring an image to be processed.
Step 804, inputting the image to be processed into a generator of the trained watermark removal model, encoding the image to be processed through an encoder in the generator, and outputting image characteristics, wherein the trained watermark removal model is obtained by performing countermeasure training according to a sample image data set, and in the process of countermeasure training, a target loss value corresponding to the watermark removal model is obtained by calculation according to the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value; the confrontation loss value, the cycle consistency loss value and the identity reconstruction loss value are calculated by performing style migration on the sample image dataset.
And 806, performing style migration on the image features through the generator to obtain a clean image corresponding to the image to be processed.
The image to be processed refers to an image that needs to be subjected to watermark removal processing. The watermark contained in the image to be processed may be a digital watermark, and the number, position, color, transparency and the like of the digital watermark may be random. When the watermark needs to be removed, the terminal can send the image to be processed to the server. After acquiring the image to be processed, the server calls the trained watermark removal model. The trained watermark removal model is obtained by performing countermeasure training according to the sample image data set, and the countermeasure training may take the form of decoupled training. Decoupled training means that the coding parameters of the encoder are adjusted only when the discriminators in the model are trained. In the countermeasure training process, watermark images and clean images corresponding to the watermark images are extracted from the sample image data set, the watermark images and the clean images are input into the watermark removal model to be trained, three style migrations are performed on them through the watermark removal model to be trained to obtain a style migration result, and the target images in the style migration result are identified according to the watermark images and the clean images to obtain an image recognition result. A countermeasure loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained are then calculated according to the style migration result, the image recognition result, the watermark images and the clean images, a target loss value corresponding to the watermark removal model to be trained is calculated according to the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value, the model parameters of the watermark removal model to be trained are adjusted through the target loss value, and the model generator and its current generation parameters are finally retained to obtain the trained watermark removal model. The model generator refers to the generator used to generate images without a watermark, and it includes an encoder and a corresponding decoder.
The server inputs the image to be processed into a trained watermark removal model, the watermark removal model comprises a generator, the generator comprises an encoder and a corresponding decoder, and the image to be processed is encoded through the encoder to obtain image characteristics. And then, taking the image characteristics as the input of a decoder, and carrying out style migration on the image characteristics through the decoder to obtain a clean image corresponding to the image to be processed.
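By way of illustration and not limitation, the inference step may be sketched as follows, assuming the retained model generator exposes its encoder and decoder as in the structural sketch above; the attribute names are hypothetical:

import torch

@torch.no_grad()
def remove_watermark(model_generator, image_to_process: torch.Tensor) -> torch.Tensor:
    features = model_generator.encoder(image_to_process)   # encode the image to be processed into image features
    clean_image = model_generator.decoder(features)        # style migration of the features to a clean image
    return clean_image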
In this embodiment, since the trained watermark removal model is obtained by performing countermeasure training based on the sample image data set, the quality of the output clean image can be improved. In the countermeasure training process, the target loss value corresponding to the watermark removal model is calculated according to the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value, and these three loss values are obtained by performing style migration on the sample image data set. The countermeasure loss value reduces the mode difference between the generated watermark-free and watermarked images and the real images of their corresponding domains, the cycle consistency loss value ensures that the content of the image is not changed before and after the watermark is removed, and the identity reconstruction loss value ensures that the color composition of the image is the same before and after the watermark is removed. The trained watermark removal model therefore preserves the original information of the input image while effectively removing the watermark, avoids the loss of original information and effectively improves the quality of the output clean image.
It should be understood that, although the steps in the flowcharts of figs. 2 to 3 and 5 to 8 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated otherwise herein, the steps are not strictly limited to that order and may be performed in other orders. Moreover, at least some of the steps in figs. 2 to 3 and 5 to 8 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a watermark removal model training apparatus including: a sample acquisition module 902, a style migration module 904, a loss calculation module 906 and a countermeasure training module 908, wherein:
a sample acquisition module 902 for acquiring a sample image dataset.
A style migration module 904, configured to extract a watermark image and a clean image corresponding to the watermark image in the sample image dataset; inputting the watermark image and the clean image into a watermark removing model to be trained, and carrying out three times of style migration on the watermark image and the clean image through the watermark removing model to obtain a style migration result.
A loss calculation module 906, configured to calculate a countermeasure loss value, a cycle consistency loss value, and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the watermark image, and the clean image; and calculating a target loss value corresponding to the watermark removal model according to the confrontation loss value, the cycle consistency loss value and the identity reconstruction loss value.
And the countermeasure training module 908 is configured to perform countermeasure training on the watermark removal model according to the target loss value until a preset condition is reached, and stop the model training to obtain a trained watermark removal model.
In an embodiment, the style migration module 904 is further configured to input the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for performing three times of style migration, so as to obtain a non-watermark image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark adding image corresponding to the non-watermark image, a first watermark removing image corresponding to the watermarked image, a second watermark adding image corresponding to the watermark image and a second watermark removing image corresponding to the clean image; and generate the style migration result according to the non-watermark image, the watermarked image, the first watermark adding image, the first watermark removing image, the second watermark adding image and the second watermark removing image.
In one embodiment, the generators of the watermark removal model to be trained include a first generator and a second generator, and the style migration module 904 is further configured to input the watermark image into the first generator of the watermark removal model to be trained for the first style migration and input the clean image into the second generator for the first style migration, output a watermark-free image corresponding to the watermark image through the first generator and output a watermarked image corresponding to the clean image through the second generator; input the watermark-free image into the second generator for the second style migration and input the watermarked image into the first generator for the second style migration, output a first watermark adding image corresponding to the watermark-free image through the second generator and output a first watermark removing image corresponding to the watermarked image through the first generator; and input the clean image into the first generator for the third style migration and input the watermark image into the second generator for the third style migration, output a second watermark removing image corresponding to the clean image through the first generator and output a second watermark adding image corresponding to the watermark image through the second generator.
In one embodiment, the style migration module 904 is further configured to take the watermark-free image and the watermark-containing image in the style migration result as the target images; and inputting the target image into a corresponding discriminator of the watermark removal model to be trained for recognition to obtain an image recognition result, wherein the image recognition result comprises a first recognition result corresponding to the image without the watermark and a second recognition result corresponding to the image with the watermark.
In one embodiment, the style migration result includes a watermark-free image, a watermark-containing image, a first watermark adding image, a first watermark removing image, a second watermark adding image, and a second watermark removing image, and the loss calculating module 906 is further configured to calculate a countermeasure loss value corresponding to the watermark removing model to be trained according to the image recognition result; calculating a cycle consistency loss value corresponding to a watermark removal model to be trained according to a first watermark removal image, a first watermark adding image, a watermark image and a clean image in the style migration result; and calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark adding image, the second watermark removing image, the watermark image and the clean image in the style migration result.
In one embodiment, the countermeasure loss value includes a generator countermeasure loss value and a discriminator countermeasure loss value, and the loss calculation module 906 is further configured to acquire a countermeasure weight corresponding to the generator countermeasure loss value, a cycle weight corresponding to the cycle consistency loss value and an identity weight corresponding to the identity reconstruction loss value, wherein the countermeasure weight, the cycle weight and the identity weight have a preset relationship; calculate a generator loss value corresponding to the watermark removal model to be trained according to the generator countermeasure loss value and the countermeasure weight, the cycle consistency loss value and the cycle weight, and the identity reconstruction loss value and the identity weight; and take the generator loss value and the discriminator countermeasure loss value as the target loss value corresponding to the watermark removal model to be trained.
In one embodiment, the sample acquisition module 902 is further configured to acquire an initial image data set; perform size adjustment on the initial image data set to obtain an adjusted initial image data set; crop the adjusted initial image data set to obtain a cropped initial image data set; and normalize the cropped initial image data set to obtain the sample image data set.
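A hedged sketch of such a preprocessing pipeline using torchvision is given below; the concrete sizes and normalization statistics are illustrative assumptions and are not values specified in this description:

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(286),                                               # size adjustment
    transforms.RandomCrop(256),                                           # cropping
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),      # normalization
])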
In one embodiment, the to-be-trained watermark removal model includes two sub-networks, each sub-network includes a generator and a discriminator, the generator and the discriminator both include an encoder and share one encoder, and the countermeasure training module 908 is further configured to fix the generation parameters of the decoders in each generator of the to-be-trained watermark removal model and adjust the discrimination parameters of the discriminator in each sub-network according to the target loss value to obtain an adjusted loss value; fix the discrimination parameters of each discriminator of the watermark removal model to be trained and adjust the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters include the coding parameters corresponding to the encoder; and repeat the step of performing countermeasure training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stop the model training, determine the model generator in the watermark removal model to be trained, and store the model generator and the current generation parameters corresponding to the model generator to obtain the trained watermark removal model, wherein the current generation parameters include the current encoder parameters corresponding to the encoder in the model generator.
In one embodiment, as shown in fig. 10, there is provided an image watermark removal apparatus including: an image acquisition module 1002, an image encoding module 1004, and a watermark removal module 1006, wherein:
an image obtaining module 1002, configured to obtain an image to be processed.
The image encoding module 1004 is configured to input an image to be processed into an encoder of a trained watermark removal model for encoding, and output image characteristics, where the trained watermark removal model is obtained by performing countermeasure training according to a sample image data set, and in the countermeasure training process, a target loss value corresponding to the watermark removal model is calculated according to a countermeasure loss value, a cycle consistency loss value, and an identity reconstruction loss value.
The watermark removing module 1006 is configured to input the image features into a generator of the trained watermark removing model to perform watermark removal, so as to obtain a clean image corresponding to the image to be processed.
For specific limitations of the watermark removal model training apparatus, reference may be made to the above limitations of the watermark removal model training method, which are not described herein again. For specific limitations of the image watermark removing device, reference may be made to the above limitations of the image watermark removing method, which are not described herein again. All or part of the modules in the watermark removing model training device and the image watermark removing device can be realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data of a watermark removal model training method or data of an image watermark removal method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a watermark removal model training method or an image watermark removal method.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the various embodiments described above when the processor executes the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the respective embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should all be considered to be within the scope of this specification.
The above-mentioned embodiments only express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A watermark removal model training method, the method comprising:
obtaining a sample image dataset;
extracting a watermark image and a clean image corresponding to the watermark image in the sample image data set;
inputting the watermark image and the clean image into a watermark removing model to be trained, carrying out three times of style migration on the watermark image and the clean image through the watermark removing model to be trained to obtain a style migration result, and identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image identification result;
calculating a countermeasure loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image;
calculating a target loss value corresponding to the watermark removal model to be trained according to the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value;
and performing countermeasure training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, and stopping model training to obtain the trained watermark removal model.
2. The method according to claim 1, wherein the inputting the watermark image and the clean image into a watermark removal model to be trained, and performing three times of style migration on the watermark image and the clean image by using the watermark removal model to be trained to obtain a style migration result comprises:
inputting the watermark image and the clean image into a corresponding generator of a watermark removal model to be trained for carrying out three times of style migration to obtain a non-watermark image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark adding image corresponding to the non-watermark image, a first watermark removing image corresponding to the watermarked image, a second watermark adding image corresponding to the watermark image and a second watermark removing image corresponding to the clean image;
and generating a style migration result according to the non-watermark image, the watermarked image, the first watermark adding image, the first watermark removing image, the second watermark adding image and the second watermark removing image.
3. The method according to claim 2, wherein the generator of the to-be-trained watermark removal model includes a first generator and a second generator, and the inputting the watermark image and the clean image into the corresponding generator of the to-be-trained watermark removal model for performing the third-time style migration to obtain the non-watermark image corresponding to the watermark image, the watermarked image corresponding to the clean image, the first watermark adding image corresponding to the non-watermark image, and the first watermark removing image corresponding to the watermarked image, the second watermark adding image corresponding to the watermark image, and the second watermark removing image corresponding to the clean image includes:
inputting the watermark image into a first generator of the to-be-trained watermark removal model for first style migration, inputting the clean image into a second generator of the to-be-trained watermark removal model for first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermarked image corresponding to the clean image through the second generator;
inputting the watermark-free image into the second generator for second style migration, inputting the watermarked image into the first generator for second style migration, outputting a first watermark adding image corresponding to the watermark-free image through the second generator, and outputting a first watermark removing image corresponding to the watermarked image through the first generator;
inputting the clean image into the first generator to perform third style migration, inputting the watermark image into the second generator to perform third style migration, outputting a second watermark removal image corresponding to the clean image through the first generator, and outputting a second watermark addition image corresponding to the watermark image through the second generator.
4. The method of claim 1, wherein the style migration result comprises a non-watermark image, a first watermark adding image, a first watermark removing image, a second watermark adding image and a second watermark removing image, and the calculating the countermeasure loss value, the cycle consistency loss value and the identity reconstruction loss value corresponding to the watermark removing model to be trained according to the style migration result, the image recognition result, the watermark image and the clean image comprises:
calculating a countermeasure loss value corresponding to the watermark removal model to be trained according to the image recognition result;
calculating a cycle consistency loss value corresponding to the watermark removal model to be trained according to a first watermark removal image and a first watermark adding image in the style migration result, the watermark image and the clean image;
and calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to a second watermark adding image and a second watermark removing image in the style migration result, the watermark image and the clean image.
5. The method according to claim 1, wherein the watermark removal model to be trained comprises two sub-networks, each sub-network comprises a generator and a discriminator, the generator and the discriminator both comprise encoders and share one encoder, the countermeasure training of the watermark removal model to be trained is performed according to the target loss value until a preset condition is reached, the model training is stopped, and obtaining the trained watermark removal model comprises:
fixing the generation parameters of the decoder in each generator of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminators in each sub-network according to the target loss value to obtain an adjusted loss value;
fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoder according to the adjusted loss values, wherein the discrimination parameters comprise coding parameters corresponding to a coder;
and repeating the step of performing countermeasure training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, determining a model generator in the watermark removal model to be trained, and storing current generation parameters corresponding to the model generator and the model generator to obtain the trained watermark removal model, wherein the current generator parameters comprise current encoder parameters corresponding to an encoder in the model generator.
6. An image watermark removal method, characterized in that the method comprises:
acquiring an image to be processed;
inputting the image to be processed into a generator of a trained watermark removal model, encoding the image to be processed through an encoder in the generator, and outputting image characteristics, wherein the trained watermark removal model is obtained by performing countermeasure training according to a sample image data set, and in the process of countermeasure training, a target loss value corresponding to the watermark removal model is obtained by calculation according to a countermeasure loss value, a cycle consistency loss value and an identity reconstruction loss value; the confrontation loss value, the cycle consistency loss value, and the identity reconstruction loss value are calculated by performing style migration on the sample image dataset;
and carrying out style migration on the image characteristics through the generator to obtain a clean image corresponding to the image to be processed.
7. A watermark removal model training apparatus, characterized in that the apparatus comprises:
a sample acquisition module for acquiring a sample image dataset;
the style migration module extracts a watermark image and a clean image corresponding to the watermark image in the sample image data set; inputting the watermark image and the clean image into a watermark removing model to be trained, and carrying out three times of style migration on the watermark image and the clean image through the watermark removing model to obtain a style migration result;
the loss calculation module is used for calculating a confrontation loss value, a cycle consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the watermark image and the clean image; calculating a target loss value corresponding to the watermark removal model according to the confrontation loss value, the cycle consistency loss value and the identity reconstruction loss value;
and the countermeasure training module is used for carrying out countermeasure training on the watermark removal model according to the target loss value until a preset condition is reached, and stopping the model training to obtain the trained watermark removal model.
8. An image watermark removal apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be processed;
the image coding module is used for inputting the image to be processed into a coder of a trained watermark removal model for coding and outputting image characteristics, the trained watermark removal model is obtained by carrying out countermeasure training according to a sample image data set, and in the countermeasure training process, a target loss value corresponding to the watermark removal model is obtained by calculation according to an countermeasure loss value, a cycle consistency loss value and an identity reconstruction loss value;
and the watermark removing module is used for inputting the image characteristics into the generator of the trained watermark removing model to remove the watermark so as to obtain a clean image corresponding to the image to be processed.
9. A computer device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202011238056.2A 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium Active CN112330522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011238056.2A CN112330522B (en) 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011238056.2A CN112330522B (en) 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112330522A true CN112330522A (en) 2021-02-05
CN112330522B CN112330522B (en) 2024-06-04

Family

ID=74316929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011238056.2A Active CN112330522B (en) 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112330522B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030131237A1 (en) * 2002-01-04 2003-07-10 Ameline Ian R. Method for applying a digital watermark to an output image from a computer program
CN107993190A (en) * 2017-11-14 2018-05-04 中国科学院自动化研究所 Image watermark removal device
CN111696046A (en) * 2019-03-13 2020-09-22 北京奇虎科技有限公司 Watermark removing method and device based on generating type countermeasure network
CN110599387A (en) * 2019-08-08 2019-12-20 北京邮电大学 Method and device for automatically removing image watermark
CN110796583A (en) * 2019-10-25 2020-02-14 南京航空航天大学 Stylized visible watermark adding method
CN111105336A (en) * 2019-12-04 2020-05-05 山东浪潮人工智能研究院有限公司 Image watermarking removing method based on countermeasure network
CN111753908A (en) * 2020-06-24 2020-10-09 北京百度网讯科技有限公司 Image classification method and device, and style transfer model training method and device
CN111862274A (en) * 2020-07-21 2020-10-30 有半岛(北京)信息科技有限公司 Generative adversarial network training method, image style transfer method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Zhehao; CHEN Wei: "Image style transfer based on generative adversarial networks", Ruanjian Daokan (Software Guide), no. 06 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950458A (en) * 2021-03-19 2021-06-11 润联软件系统(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment
CN113052068A (en) * 2021-03-24 2021-06-29 深圳威富云数科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113052068B (en) * 2021-03-24 2024-04-30 深圳威富云数科技有限公司 Image processing method, device, computer equipment and storage medium
CN113822976B (en) * 2021-06-08 2024-11-12 腾讯科技(深圳)有限公司 Generator training method and device, storage medium and electronic device
CN113822976A (en) * 2021-06-08 2021-12-21 腾讯科技(深圳)有限公司 Training method and device of generator, storage medium and electronic device
CN113379585B (en) * 2021-06-23 2022-05-27 景德镇陶瓷大学 Ceramic watermarking model training method and embedding method without border positioning
CN113379585A (en) * 2021-06-23 2021-09-10 景德镇陶瓷大学 Ceramic watermark model training method and embedding method for frameless positioning
CN113591856A (en) * 2021-08-23 2021-11-02 中国银行股份有限公司 Bill picture processing method and device
CN113781352A (en) * 2021-09-16 2021-12-10 科大讯飞股份有限公司 Light removal method and device, electronic equipment and storage medium
CN113793258A (en) * 2021-09-18 2021-12-14 超级视线科技有限公司 Privacy protection method and device for monitoring video image
CN117156152A (en) * 2022-05-18 2023-12-01 腾讯科技(深圳)有限公司 Model training methods, encoding methods, decoding methods and equipment
CN117156152B (en) * 2022-05-18 2025-02-25 腾讯科技(深圳)有限公司 Model training method, encoding method, decoding method and device
CN115171023A (en) * 2022-07-20 2022-10-11 广州虎牙科技有限公司 Style migration model training method, video processing method and related device
CN117034219A (en) * 2022-09-09 2023-11-10 腾讯科技(深圳)有限公司 Data processing method, device, equipment and readable storage medium
CN116228896A (en) * 2023-03-10 2023-06-06 北京百度网讯科技有限公司 Image desensitization method, model training method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112330522B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN112330522A (en) Watermark removal model training method and device, computer equipment and storage medium
CN111179177B (en) Image reconstruction model training method, image reconstruction method, device and medium
CN110378844B (en) A Blind Image Deblurring Method Based on Recurrent Multiscale Generative Adversarial Networks
CN112598579B (en) Monitoring scene-oriented image super-resolution method, device and storage medium
CN112819689B (en) Training method of human face attribute editing model, human face attribute editing method and human face attribute editing equipment
CN113077379B (en) Feature latent code extraction method and device, equipment and storage medium
CN107545277A (en) Model training, auth method, device, storage medium and computer equipment
Wei et al. A robust image watermarking approach using cycle variational autoencoder
CN117710251A (en) One-stage image restoration method, device and storage medium
Qin et al. CADW: CGAN-based attack on deep robust image watermarking
CN110084766A (en) A kind of image processing method, device and electronic equipment
Niu et al. An image steganography method based on texture perception
CN118736122B (en) Three-dimensional CT imaging method and device based on X-ray double projection
CN111199193A (en) Image classification method and device based on digital slicing and computer equipment
CN119205478A (en) Robust image watermarking method, system and terminal based on conditional diffusion model
CN117156152A (en) Model training methods, encoding methods, decoding methods and equipment
Wang et al. Multi-feature fusion based image steganography using GAN
CN112614199A (en) Semantic segmentation image conversion method and device, computer equipment and storage medium
CN117830078A (en) Image generation method, device, equipment and storage medium
CN117495648A (en) A three-dimensional model watermark authentication method and system based on latent space
CN109345440B (en) Digital media watermark processing method, computer device and storage medium
EP4414940A1 (en) Caricaturization model construction method and apparatus, and device, storage medium and program product
CN117649574A (en) Training method, training device and storage medium for image generation model
CN119006253A (en) Watermark extraction method, watermark extraction device, computer equipment and storage medium
CN113537484B (en) Network training, encoding and decoding method, device and medium for digital watermarking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant