
CN110533585B - Image face changing method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN110533585B
CN110533585B (application CN201910833438.0A)
Authority
CN
China
Prior art keywords
face
person
image
changing
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910833438.0A
Other languages
Chinese (zh)
Other versions
CN110533585A (en)
Inventor
王云
尹淳骥
杨城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201910833438.0A priority Critical patent/CN110533585B/en
Publication of CN110533585A publication Critical patent/CN110533585A/en
Priority to PCT/CN2020/112777 priority patent/WO2021043121A1/en
Application granted granted Critical
Publication of CN110533585B publication Critical patent/CN110533585B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image face changing method, apparatus, system, device and storage medium, belonging to the field of computer technology. The method comprises the following steps: receiving a first face changing request corresponding to a second account and sent by a first terminal logged in with a first account; receiving a second face changing request corresponding to the first account and sent by a second terminal logged in with the second account; performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model; and sending the trained first face changing model to the first terminal and the trained second face changing model to the second terminal. With the method and apparatus, image distortion can be reduced.

Description

Image face changing method, device, system, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a system, a device, and a storage medium for face changing of an image.
Background
With the rapid development of network technology, online videos, whether live streams or recorded videos, have become widely used. While recording, many anchors try new interactive effects, such as face changing: replacing the anchor's face image in the video with the face image of another person before uploading the video to a server for playback.
A face change scheme in the related art recognizes the face image in a video frame using image recognition technology, replaces it with the face image of a target person, and then uploads the face-changed video to a server.
In the process of implementing the present application, the inventors found that the prior art has at least the following problems:
in the related art, face change processing performs a local image replacement within the video frame. The shape of the target person's face image often fails to match that of the original person's face image, so the target face image must be deformed to fit, which distorts the resulting image.
Disclosure of Invention
The embodiments of the application provide an image face changing method, apparatus, system, device and storage medium, which make the image obtained after face change processing more lifelike. The technical solution is as follows:
in one aspect, a method for face changing of an image is provided, and the method is used for a server and includes:
receiving a first face changing request corresponding to a second account and sent by a first terminal logged in with a first account, wherein the first face changing request carries a facial image set of a first person;
receiving a second face changing request corresponding to the first account and sent by a second terminal logged in with the second account, wherein the second face changing request carries a facial image set of a second person;
performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the face image of the first person into a face image of the second person, and the trained second face changing model is used for changing the face image of the second person into a face image of the first person;
and sending the trained first face changing model to the first terminal, and sending the trained second face changing model to the second terminal.
Optionally, performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face-changing model and a trained second face-changing model, including:
alternately acquiring the facial images in the facial image set of the first person and the facial image set of the second person;
each time a face image of the first person is obtained, distorting it to obtain a distorted face image of the first person, inputting the distorted image into a feature extraction model to obtain a first feature image, inputting the first feature image into a second restoration model to obtain a first output image, and updating the parameters of the feature extraction model and the second restoration model based on the currently obtained face image of the first person and the first output image;
each time a face image of the second person is obtained, distorting it to obtain a distorted face image of the second person, inputting the distorted image into the feature extraction model to obtain a second feature image, inputting the second feature image into a first restoration model to obtain a second output image, and updating the parameters of the feature extraction model and the first restoration model based on the currently obtained face image of the second person and the second output image;
after the parameters of the feature extraction model, the first restoration model and the second restoration model have been updated based on all face images of the first person and all face images of the second person, determining the trained first face changing model from the parameter-updated feature extraction model and first restoration model, and determining the trained second face changing model from the parameter-updated feature extraction model and second restoration model.
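This alternating scheme matches the shared-encoder / dual-decoder autoencoder design used in face-swap systems: one feature extraction model is trained on both persons' distorted images, while each restoration model learns to render one person's undistorted face. A minimal numerical sketch in Python — the linear "models", the distortion noise, and all names are illustrative stand-ins for the networks, which the patent leaves unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, FEAT_DIM, LR = 16, 8, 0.01  # illustrative sizes and learning rate

def distort(face):
    """Stand-in for the distortion step: a small random perturbation."""
    return face + 0.05 * rng.standard_normal(face.shape)

class LinearModel:
    """Toy stand-in for the feature extraction / restoration networks."""
    def __init__(self, n_out, n_in):
        self.W = 0.1 * rng.standard_normal((n_out, n_in))
    def __call__(self, x):
        return self.W @ x

def update(extractor, restorer, face):
    """One parameter update: restore the undistorted face from its distorted copy."""
    d = distort(face)
    feat = extractor(d)                          # feature image
    err = restorer(feat) - face                  # restoration error (output image - target)
    grad_restorer = np.outer(err, feat)          # gradients of 0.5 * ||err||^2
    grad_extractor = np.outer(restorer.W.T @ err, d)
    restorer.W -= LR * grad_restorer
    extractor.W -= LR * grad_extractor
    return float(err @ err)

def train(faces_first, faces_second, epochs=300):
    extractor = LinearModel(FEAT_DIM, IMG_DIM)   # shared feature extraction model
    restorer_1 = LinearModel(IMG_DIM, FEAT_DIM)  # first restoration model (renders the second person)
    restorer_2 = LinearModel(IMG_DIM, FEAT_DIM)  # second restoration model (renders the first person)
    losses = []
    for _ in range(epochs):
        # alternately take face images from the two facial image sets
        for f1, f2 in zip(faces_first, faces_second):
            losses.append(update(extractor, restorer_2, f1))  # first person's image
            losses.append(update(extractor, restorer_1, f2))  # second person's image
    # trained face changing models: shared extractor + the other person's restorer
    swap_first_to_second = lambda x: restorer_1(extractor(x))
    swap_second_to_first = lambda x: restorer_2(extractor(x))
    return swap_first_to_second, swap_second_to_first, losses
```

The trained first face changing model is then simply the composition of the shared, parameter-updated extractor with the first restoration model, exactly as in the determination step above.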
Optionally, the sending the trained first face-changing model to the first terminal includes:
if the first account is currently online, sending the trained first face changing model to the first terminal;
if the first account is currently offline, storing the trained first face changing model, and sending it to the first terminal when the first account is detected to switch to an online state;
the sending the trained second face changing model to the second terminal includes:
if the second account is currently online, sending the trained second face changing model to the second terminal;
and if the second account is currently offline, storing the trained second face changing model, and sending it to the second terminal when the second account is detected to switch to an online state.
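The online/offline delivery rule above can be sketched as a small dispatcher; the class and method names are illustrative assumptions, and the network send is replaced by a dictionary write:

```python
class ModelDispatcher:
    """Send a trained model immediately if the account is online,
    otherwise store it and send it on the next login."""
    def __init__(self):
        self.online = set()    # accounts currently in an online state
        self.pending = {}      # account -> stored trained model awaiting login
        self.delivered = {}    # account -> model sent (stands in for a network send)

    def dispatch(self, account, model):
        if account in self.online:
            self.delivered[account] = model
        else:
            self.pending[account] = model   # store until the account comes online

    def on_login(self, account):
        """Account detected switching to an online state."""
        self.online.add(account)
        if account in self.pending:
            self.delivered[account] = self.pending.pop(account)
```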
Optionally, after the trained first face change model is sent to the first terminal, the method further includes:
and when a face change termination request which is sent by the second terminal and corresponds to the first account is received, sending a deletion notification which corresponds to the trained first face change model to the first terminal.
In another aspect, an image face changing method is provided, where the method is used for a terminal, and the method includes:
sending a face changing request corresponding to a second account to a server, wherein the face changing request carries a face image set of a first person;
receiving a trained first face changing model sent by the server, wherein the trained first face changing model is used for changing the face image of the first person into the face image of the second person;
and when a face changing instruction corresponding to the second account is received, inputting a first image of a face to be changed into the trained first face changing model to obtain a second image of the face to be changed.
Optionally, before sending the face changing request corresponding to the second account to the server, the method further includes:
playing guidance information and/or displaying guidance information, wherein the guidance information is used for indicating the first person to do different actions;
and shooting the facial image set of the first person in the process of playing the guidance information and/or displaying the guidance information.
Optionally, the capturing the set of facial images of the first person includes:
and shooting the facial image set of the first person in a state of closing an image adjusting function.
In another aspect, an apparatus for changing faces of images is provided, the apparatus being applied to a server, and the apparatus including:
the system comprises a receiving module, a first face changing module and a second face changing module, wherein the receiving module is used for receiving a first face changing request which is sent by a first terminal logged in by a first account and corresponds to a second account, and the face changing request carries a face image set of a first person; receiving a second face changing request which is sent by a second terminal logged in by a second account and corresponds to the first account, wherein the face changing request carries a face image set of a second person;
the training module is used for performing model training on the basis of the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the facial image of the first person into the facial image of the second person, and the trained second face changing model is used for changing the facial image of the second person into the facial image of the first person;
and the sending module is used for sending the trained first face changing model to the first terminal and sending the trained second face changing model to the second terminal.
Optionally, the training module is configured to:
alternately acquiring the facial images in the facial image set of the first person and the facial image set of the second person;
each time a face image of the first person is obtained, distort it to obtain a distorted face image of the first person, input the distorted image into a feature extraction model to obtain a first feature image, input the first feature image into a second restoration model to obtain a first output image, and update the parameters of the feature extraction model and the second restoration model based on the currently obtained face image of the first person and the first output image;
each time a face image of the second person is obtained, distort it to obtain a distorted face image of the second person, input the distorted image into the feature extraction model to obtain a second feature image, input the second feature image into a first restoration model to obtain a second output image, and update the parameters of the feature extraction model and the first restoration model based on the currently obtained face image of the second person and the second output image;
after the parameters of the feature extraction model, the first restoration model and the second restoration model have been updated based on all face images of the first person and all face images of the second person, determine the trained first face changing model from the parameter-updated feature extraction model and first restoration model, and determine the trained second face changing model from the parameter-updated feature extraction model and second restoration model.
Optionally, the sending module is configured to:
if the first account is currently online, send the trained first face changing model to the first terminal;
if the first account is currently offline, store the trained first face changing model, and send it to the first terminal when the first account is detected to switch to an online state;
if the second account is currently online, send the trained second face changing model to the second terminal;
and if the second account is currently offline, store the trained second face changing model, and send it to the second terminal when the second account is detected to switch to an online state.
Optionally, the apparatus further comprises:
and the deleting module is used for sending a deleting notice corresponding to the trained first face changing model to the first terminal when receiving a face changing termination request corresponding to the first account sent by the second terminal.
In still another aspect, an apparatus for changing a face of an image is provided, where the apparatus is applied to a terminal, and the apparatus includes:
the sending module is used for sending a face changing request corresponding to a second account to the server, wherein the face changing request carries a face image set of a first person;
the receiving module is used for receiving a trained first face changing model sent by the server, wherein the trained first face changing model is used for changing the face image of the first person into the face image of the second person;
and a face changing module, configured to input a first image of a face to be changed into the trained first face changing model when a face changing instruction corresponding to the second account is received, to obtain a second image after face changing.
Optionally, the apparatus further comprises:
a guiding module, configured to play guidance information and/or display guidance information, wherein the guidance information is used for instructing the first person to make different actions;
and the shooting module is used for shooting the facial image set of the first person in the process of playing the guide information and/or displaying the guide information.
Optionally, the shooting module is configured to:
and shooting the facial image set of the first person in a state of closing an image adjusting function.
In another aspect, a system for changing faces of images is provided, where the system includes a first terminal, a second terminal, and a server, where:
the server receives a first face changing request which is sent by the first terminal and is corresponding to a second account and is logged in by a first account, wherein the face changing request carries a face image set of a first person; receiving a second face changing request corresponding to the first account and sent by the second terminal logged in by a second account, wherein the face changing request carries a face image set of a second person; performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the facial image of the first person into the facial image of the second person, and the trained second face changing model is used for changing the facial image of the second person into the facial image of the first person; and sending the trained first face changing model to the first terminal, and sending the trained second face changing model to the second terminal.
The first terminal sends a face changing request corresponding to a second account to the server; receiving a trained first face changing model sent by the server; and when a face changing instruction corresponding to the second account is received, inputting a first image of a face to be changed into the trained first face changing model to obtain a second image of the face to be changed.
The second terminal sends a face changing request corresponding to the first account to the server; receiving a trained second face changing model sent by the server; and when a face changing instruction corresponding to the first account is received, inputting a second image to be changed into the trained second face changing model to obtain a first image after face changing.
In yet another aspect, a computer device is provided that includes one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to implement the operations performed by the image face changing method.
In yet another aspect, a computer-readable storage medium having at least one instruction stored therein is provided, which is loaded and executed by a processor to implement the operations performed by the image face-changing method.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the face changing model is generated and used for face changing processing, the second image after face changing can be obtained after the first image is input into the face changing model, and local image replacement is carried out on the facial image of a target person without using the pre-stored facial image of an original person, so that deformation processing on the facial image is not involved, and image distortion can be reduced.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 3 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 4 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
fig. 5 is a flowchart of an image face changing method provided in an embodiment of the present application;
fig. 6 is a flowchart of an image face changing method according to an embodiment of the present application;
fig. 7 is a flowchart of an image face changing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of one implementation provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image face changing device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image face changing device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1, fig. 2, fig. 3 and fig. 4 show implementation environments of the image face changing method according to an embodiment of the present invention; the method may be implemented jointly by a terminal and a server. The terminal runs an application program with an image recording function, such as a live streaming application or a short video application, and may be equipped with a microphone, a camera, a speaker and other components. The terminal has a communication function, can access the Internet, and may be a mobile phone, a tablet computer, a smart wearable device, a desktop computer, a notebook computer, or the like. The server may be a background server of the application program and can communicate with the terminal. The server may be a single server or a server group. If it is a single server, it is responsible for all server-side processing in the following scheme; if it is a server group, different servers in the group may each be responsible for different parts of the processing, with the specific allocation set by technicians according to actual needs, which is not described again here.
The image face changing method provided in the embodiment of the application can replace the face image in the video image with the face image of another person, and the function can be called a face changing function. In the embodiment of the present application, a live application is taken as an example to perform detailed description of the scheme, and other situations are similar and will not be described again. And the terminal is provided with a live broadcast application program. The live application program can record videos and upload the videos to a network, and various special effects can be added to the videos or some filter processing can be performed in the recording process.
When an anchor uses the live application, an account can be registered in it; the created account can follow other accounts and be followed by them. The live application has several pages, such as a live page and a live list page. The anchor's own profile page, shown in fig. 1, displays personal information such as the account nickname, and also contains a jump control to the face change application page and a jump control to the face change termination page. In the face change application page, shown in fig. 2, the user may input the target account to change faces with, either by filling it in manually or by selecting it from the followed-accounts list or the friends list. After filling in or selecting the target of the face change application, the user enters the facial image shooting page, shown in fig. 3, where the anchor can shoot facial images according to the prompts in the live application. Before face-changed live streaming begins, the two anchors who want to change faces communicate with each other; once both agree, each sends the facial image set they shot to the server. The server trains face changing models based on the two uploaded facial image sets and sends a model to each anchor, after which the anchors can start face-changed live streaming.
When an anchor wants to do face-changed live streaming, the anchor can click the open-live control and, as shown in fig. 4, click the face change control on the live page to trigger face-changed streaming. Every viewer in the live room then sees the live video with the anchor's face changed.
Fig. 5 is a flowchart of the server side in an image face changing method according to an embodiment of the present application. Referring to fig. 5, the process includes:
step 501, receiving a first face changing request corresponding to a second account sent by a first terminal logged in by a first account, wherein the face changing request carries a face image set of a first person.
Step 502, receiving a second face change request corresponding to the first account and sent by a second terminal logged in by a second account, wherein the face change request carries a face image set of a second person.
Step 503, performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face-changing model and a trained second face-changing model, wherein the trained first face-changing model is used for changing a face of the first person into a facial image of the second person, and the trained second face-changing model is used for changing a face of the second person into a facial image of the first person.
Step 504, sending the trained first face change model to the first terminal, and sending the trained second face change model to the second terminal.
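Steps 501 to 504 amount to pairing two mutual face changing requests and then training and dispatching the models; a structural sketch with the training and sending steps stubbed out (all class, function and account names are illustrative assumptions):

```python
class FaceSwapServer:
    """Sketch of steps 501-504: record requests, pair them, train, dispatch."""
    def __init__(self, train_fn, send_fn):
        self.requests = {}        # (requester, target) -> facial image set
        self.train_fn = train_fn  # stands in for step 503 (model training)
        self.send_fn = send_fn    # stands in for step 504 (model delivery)

    def receive_request(self, from_acct, target_acct, faces):
        """Steps 501/502: record a face changing request and its image set."""
        self.requests[(from_acct, target_acct)] = faces
        reverse = (target_acct, from_acct)
        if reverse in self.requests:
            # Both accounts have requested each other: train and dispatch.
            # The earlier requester plays the role of the "first account".
            first_faces = self.requests[reverse]
            model_1, model_2 = self.train_fn(first_faces, faces)
            self.send_fn(target_acct, model_1)  # first face changing model -> first terminal
            self.send_fn(from_acct, model_2)    # second face changing model -> second terminal
```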
Fig. 6 is a flowchart of the terminal side in an image face changing method according to an embodiment of the present application. Referring to fig. 6, the process includes:
step 601, sending a face changing request corresponding to the second account to the server, wherein the face changing request carries a face image set of the first person.
Step 602, receiving a trained first face-changing model sent by a server, wherein the trained first face-changing model is used for changing a face image of a first person into a face image of a second person.
Step 603, when a face changing instruction corresponding to the second account is received, inputting the first image to be changed into the trained first face changing model to obtain a second image after face changing.
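The terminal side (steps 601 to 603) is a thin wrapper around the received model; a sketch with the server channel and the trained model stubbed out (all names are illustrative assumptions):

```python
class FaceSwapTerminal:
    """Sketch of steps 601-603 on the first terminal."""
    def __init__(self, send_to_server):
        self.send_to_server = send_to_server  # stands in for the network channel
        self.model = None

    def request_face_swap(self, target_account, facial_images):
        """Step 601: send the face changing request with the facial image set."""
        self.send_to_server(target_account, facial_images)

    def on_model_received(self, trained_model):
        """Step 602: keep the trained first face changing model."""
        self.model = trained_model

    def on_face_swap_instruction(self, first_image):
        """Step 603: run the frame through the model; pass through if none yet."""
        return self.model(first_image) if self.model else first_image
```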
Fig. 7 is a flowchart illustrating interaction between a server and a terminal in a method for changing a face of an image according to an embodiment of the present application. Referring to fig. 7, the embodiment includes:
step 701, the first terminal sends a first face changing request corresponding to the second account to the server.
The first face changing request carries a facial image set of the first person. The facial image set may include a plurality of face pictures of the first person, obtained either by taking photos or by extracting video frames from a video.
In implementation, when a first anchor (i.e., the first person) wants to do face-changed live streaming, the anchor runs the live application on the terminal, logs in with the first account, enters the profile page in the application, clicks the jump control of the face change application page, and applies for face changing there. First, the first anchor inputs the account identifiers of one or more accounts (i.e., second accounts) to change faces with, either filling them in manually or selecting them from the followed-accounts list or the friends list. After clicking a confirmation control, the anchor jumps to the facial image shooting page to shoot facial images. On that page, the first anchor can take photos or record a video with the terminal; the photos, or video frames extracted from the video, form the facial image set. After shooting, the terminal generates a first face changing request, adds the shot facial image set and the account identifier of the second account to it, and sends it to the server. While the facial image set is being shot on the first terminal, image adjustment functions such as whitening, skin smoothing and filters are turned off.
The server may have a plurality of processing modes after receiving the first face changing request from the first terminal.
In one possible processing mode, the server can, according to the account identifier of the second account carried in the first face changing request, send a notification to the second terminal logged in with the second account. The notification carries the account identifier of the first account and informs the second account that the first account is requesting face-changed live streaming. After receiving the notification, the second terminal can display prompt information in the live application to tell the user that another user is requesting face-changed streaming. After the user clicks to confirm, the second terminal sends the server a second face changing request corresponding to the first account; this request carries the facial image set of the second person and can serve as the confirmation message for the notification. The server then performs the subsequent processing.
In another possible processing mode, the server does not send any notification to the second terminal, but waits for a second face changing request corresponding to the first account to be sent by the second terminal, and performs subsequent processing once the second face changing request is received. In this mode, two users can first agree privately and then each send a request to enable a face-changing live broadcast. Alternatively, without any prior communication, each user can send a face changing request naming the user with whom face changing is desired, and if two users happen to send face changing requests naming each other, the face-changing live broadcast can likewise be enabled.
Optionally, before sending the face change request corresponding to the second account to the server, the terminal may play guidance information and/or display guidance information, and capture the facial image set of the first person in the process of playing guidance information and/or displaying guidance information.
Wherein the guidance information is used for instructing the first person to make different actions.
In implementation, when the facial image set is captured, the live broadcast application guides the capture. For example, a piece of text may be displayed at the top of the screen to instruct the first person to make different actions, such as nodding, shaking the head, opening the mouth, closing the mouth, and smiling; the terminal may also give these instructions by voice broadcast.
And step 702, the second terminal sends a second face changing request corresponding to the first account to the server.
The face changing request carries a face image set of a second person.
In implementation, before the second terminal sends the face changing request corresponding to the first account to the server, if the two anchors have agreed through negotiation to change faces, the anchor of the second terminal opens the live broadcast application, first logs in to the second account, enters the application, then enters the personal page, clicks the jump link of the face change application page, jumps to the face change application page, and applies for face changing on that page. The anchor of the second terminal inputs, in the face change application page, the account identifier of the first account, which may be filled in manually or selected from a followed-account list or a friend-account list, clicks a confirmation control after the above operations, and jumps to the facial image shooting page to shoot facial images. After the facial images are shot, the account identifier of the first account input or selected by the anchor of the second terminal and the facial image set of the second person shot by the anchor of the second terminal are carried in the second face changing request and sent to the server. During the shooting of the facial image set on the second terminal, the image adjustment functions, such as whitening, skin smoothing, and filters, are turned off.
Optionally, before sending the face change request corresponding to the first account to the server, the terminal may play guidance information and/or display guidance information, and capture a facial image set of the second person in a process of playing the guidance information and/or displaying the guidance information.
Wherein the guidance information is used for instructing the second person to make different actions.
In implementation, when the facial image set is captured, the live broadcast application guides the capture. For example, a piece of text may be displayed at the top of the screen to instruct the second person to make different actions, such as nodding, shaking the head, opening the mouth, closing the mouth, and smiling; the terminal may also give these instructions by voice broadcast.
In step 703, the server performs model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face-changing model and a trained second face-changing model.
The trained first face changing model is used for changing the face image of the first person into the face image of the second person, and the trained second face changing model is used for changing the face image of the second person into the face image of the first person.
In implementation, the server stores the received first face changing request and second face changing request, searches for the corresponding second account according to the account identifier contained in the first face changing request, and checks the second face changing request. If the second face changing request contains the account identifier of the first account, the server determines that the first account and the second account are successfully paired, and inputs the facial image sets of the two accounts into the face changing models for training.
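The pairing check described above can be sketched as follows. This is a minimal illustration only, assuming an in-memory request store keyed by (sender, target) account identifiers; the class and field names here are hypothetical and not part of the embodiment.

```python
# Hypothetical sketch: two face changing requests pair successfully exactly
# when each request names the other request's sender account.

class FaceChangeRequest:
    def __init__(self, sender_account, target_account, face_images):
        self.sender_account = sender_account   # account identifier of the sender
        self.target_account = target_account   # account the sender wants to swap with
        self.face_images = face_images         # the sender's facial image set

class PairingServer:
    def __init__(self):
        self.pending = {}   # (sender, target) -> stored request

    def submit(self, request):
        """Store the request; if the reverse request already exists, the two
        accounts are successfully paired and both requests are returned."""
        reverse = (request.target_account, request.sender_account)
        if reverse in self.pending:
            partner = self.pending.pop(reverse)
            return (partner, request)           # pairing succeeded -> start model training
        self.pending[(request.sender_account, request.target_account)] = request
        return None

server = PairingServer()
assert server.submit(FaceChangeRequest("A", "B", ["a1.jpg"])) is None   # waits for partner
pair = server.submit(FaceChangeRequest("B", "A", ["b1.jpg"]))           # reverse request pairs
assert pair is not None and pair[0].sender_account == "A"
```

Because the store is symmetric, the same logic covers both processing modes above: whether the second request arrives as a confirmation of a notification or independently, pairing succeeds as soon as the reverse request is present.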
Optionally, model training is performed based on the facial image set of the first person and the facial image set of the second person, so as to obtain a trained first face-changing model and a trained second face-changing model, including:
the method comprises the following steps of firstly, alternately acquiring facial images in a first person facial image set and a second person facial image set.
And secondly, each time a face image of a first person is obtained, the face image of the first person is distorted to obtain the distorted face image of the first person, the distorted face image of the first person is input into a feature extraction model to obtain a first feature image, the first feature image is input into a second reduction model to obtain a first output image, and the feature extraction model and the second reduction model are subjected to parameter updating based on the currently obtained face image of the first person and the first output image.
And thirdly, each time a face image of a second person is obtained, the face image of the second person is distorted to obtain the distorted face image of the second person, the distorted face image of the second person is input into the feature extraction model to obtain a second feature image, the second feature image is input into the first reduction model to obtain a second output image, and the feature extraction model and the first reduction model are updated according to the currently obtained face image of the second person and the second output image.
And step four, after parameter updating is carried out on the feature extraction model, the first reduction model and the second reduction model based on the face images of all the first persons and the face images of all the second persons, the trained first face changing model is determined based on the feature extraction model after the parameter updating and the first reduction model after the parameter updating, and the trained second face changing model is determined based on the feature extraction model after the parameter updating and the second reduction model after the parameter updating.
The distortion processing is performed on the facial image of the first person and the facial image of the second person in order to train the second restoration model and the first restoration model: the distorted facial images are input so that restored facial images are output, the restoration models are trained with the original facial images as reference, and the feature extraction capability of the feature extraction model is trained at the same time.
In implementation, first, a facial image of the first person and a facial image of the second person with the same angle and the same expression are matched. Then, distortion processing is performed on the matched facial image of the first person to obtain a distorted facial image of the first person. The distorted facial image of the first person is input into the feature extraction model, which performs feature extraction on it to obtain a first feature image. The first feature image is input into the second restoration model, which restores the first feature image to obtain a first output image. Parameter updating is then performed on the feature extraction model and the second restoration model based on the currently acquired facial image of the first person and the first output image.

After one round of training of the feature extraction model and the second restoration model is completed, distortion processing is performed on the matched facial image of the second person to obtain a distorted facial image of the second person. The distorted facial image of the second person is input into the feature extraction model, which performs feature extraction on it to obtain a second feature image. The second feature image is input into the first restoration model, which restores the second feature image to obtain a second output image. Parameter updating is then performed on the feature extraction model and the first restoration model based on the currently acquired facial image of the second person and the second output image.

The above training loop is repeated until all images in the facial image sets have been input, and the loop then stops. The trained feature extraction model together with the trained first restoration model forms the first face changing model, and the trained feature extraction model together with the trained second restoration model forms the second face changing model.
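The alternating procedure of steps one to four is essentially a shared encoder with one per-person decoder each, as in the well-known faceswap/deepfakes training scheme. The sketch below is a deliberately simplified numerical illustration under strong assumptions not taken from the embodiment: flattened face vectors, single linear layers standing in for the feature extraction and restoration networks, Gaussian noise standing in for the distortion processing, and synthetic "faces" that cluster around a base face per person.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, FEAT, LR = 16, 8, 0.01          # flattened-face size, feature size, step size

E  = rng.normal(scale=0.1, size=(FEAT, DIM))  # shared feature extraction model
D1 = rng.normal(scale=0.1, size=(DIM, FEAT))  # restores person-1 faces ("second restoration model")
D2 = rng.normal(scale=0.1, size=(DIM, FEAT))  # restores person-2 faces ("first restoration model")

def warp(x):
    """Stand-in for the distortion processing applied before feature extraction."""
    return x + rng.normal(scale=0.05, size=x.shape)

def train_step(x, E, D):
    """Restore the original face x from its warped version, then update the
    feature extraction model E and the restoration model D in place."""
    xw = warp(x)
    z = E @ xw                        # feature image
    out = D @ z                       # restored face image
    g = 2.0 * (out - x)               # gradient of squared-error loss w.r.t. out
    grad_E = np.outer(D.T @ g, xw)
    D -= LR * np.outer(g, z)
    E -= LR * grad_E
    return float(np.sum((out - x) ** 2))

# Synthetic "faces": each person's images cluster around a base face.
base1, base2 = rng.normal(size=DIM), rng.normal(size=DIM)
faces_1 = [base1 + 0.1 * rng.normal(size=DIM) for _ in range(200)]
faces_2 = [base2 + 0.1 * rng.normal(size=DIM) for _ in range(200)]

# Steps one to three: alternate between the two facial image sets.
losses_1 = []
for x1, x2 in zip(faces_1, faces_2):
    losses_1.append(train_step(x1, E, D1))  # person-1 image trains E and D1
    train_step(x2, E, D2)                   # person-2 image trains E and D2

assert sum(losses_1[-20:]) < sum(losses_1[:20])  # restoration error dropped

# Step four: the trained first face changing model corresponds to (E, D2) —
# a person-1 face is encoded by E and restored as a person-2 face by D2.
swapped = D2 @ (E @ faces_1[0])
assert swapped.shape == (DIM,)
```

The key design point the sketch preserves is that the feature extraction model is updated on both persons' images while each restoration model sees only one person's images, so the shared features become person-independent and the swap is obtained simply by decoding with the other person's restoration model.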
Step 704, the server sends the trained first face-changing model to the first terminal and sends the trained second face-changing model to the second terminal.
In implementation, when training of the face changing models is completed, the face changing models are stored and storage addresses are generated. According to the account identifiers carried in the face changing requests sent by the first terminal and the second terminal, the server sends the storage address of the trained second face changing model and a training-completed message to the second terminal. The process of sending the trained first face changing model to the first terminal is similar and is not described in detail again.
Optionally, when the trained first face changing model is sent to the first terminal: if the first account is currently in an online state, the trained first face changing model is sent to the first terminal; if the first account is currently in an offline state, the trained first face changing model is stored, and when it is detected that the first account has switched to the online state, the trained first face changing model is sent to the first terminal.
In implementation, when the first account is currently in an online state, the server sends a message that the face changing request has passed and the storage address of the face changing model to the network storage space dedicated to the first account. The first terminal checks the dedicated network storage space of the first account at a certain period, and when it detects the message that the face changing request has passed, it automatically downloads the model according to the storage address of the face changing model.

When the first account is currently in an offline state, the server stores the message that the face changing request has passed and the storage address of the face changing model in the network storage space dedicated to the first account. When the first account is switched to the online state, the first terminal checks the dedicated network storage space of the first account at a certain period, and when it detects the message, it automatically downloads the model according to the storage address of the face changing model.
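The terminal-side polling of the dedicated network storage space can be sketched as follows. The message types, fields, and storage interface are illustrative assumptions, not part of the embodiment.

```python
import time

def poll_storage(storage, account_id, download, period_s=60.0, max_polls=10):
    """Check the account's dedicated storage space once per period; download
    the model when a training-completed message appears, and stop when a
    deletion notification appears. Returns what happened."""
    for _ in range(max_polls):
        msg = storage.get(account_id)
        if msg and msg["type"] == "train_complete":
            download(msg["model_address"])   # fetch via the stored model address
            return "downloaded"
        if msg and msg["type"] == "delete_model":
            return "deleted"
        time.sleep(period_s)                 # wait for the next polling period
    return "timeout"

# Usage: the server has left a training-completed message in the storage space.
downloaded = []
storage = {"acct1": {"type": "train_complete", "model_address": "model-store/acct1"}}
assert poll_storage(storage, "acct1", downloaded.append, period_s=0) == "downloaded"
assert downloaded == ["model-store/acct1"]
```

The same loop also covers the deletion notification described further below: the terminal reacts to whichever message it finds in the storage space on its next polling period.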
Optionally, after the trained first face changing model is sent to the first terminal, when a face changing termination request corresponding to the first account sent by the second terminal is received, a deletion notification corresponding to the trained first face changing model is sent to the first terminal.
In implementation, after the face changing model has been downloaded to the first terminal, the anchor of the second terminal can enter a face change termination application page, fill in or select from the followed-account list the anchor that is the object of the face change termination request, and click to confirm, after which the second terminal sends the face change termination request to the server. When receiving the face change termination request from the second terminal, the server sends a deletion notification for the face changing model to the network storage space dedicated to the first account according to the stored account identifier of the first account. The first terminal checks the dedicated network storage space of the first account at a certain period, and when it detects the deletion notification for the face changing model, it directly deletes the first face changing model stored in the first terminal.
Step 705, when a face change instruction corresponding to a second account is received, inputting a first image to be changed into the trained first face change model to obtain a second image after face change.
In implementation, after the first terminal receives the trained first face changing model, when the person at the first terminal is in a live broadcast, the person can click the face changing effect, a selection page of face changing models appears, and the face changing button corresponding to the face changing model of the second account is selected. The camera continuously acquires first images of the first person; to improve the accuracy of the face changing processing, the image adjustment functions can be turned off during acquisition. Each first image is input into the trained first face changing model: the first image first passes through the feature extraction model, the facial image information of the first person is replaced with the facial image information of the second person, and the result is input into the first restoration model to be restored, so that a second image is obtained. The face changing model continuously outputs face-changed second images, and the image adjustment functions are then applied to the second images for beautification. A viewer watching the live broadcast of the first person therefore sees the face of the second person.
Step 706, when a face changing instruction corresponding to the first account is received, inputting the second image to be changed into the trained second face changing model to obtain the first image after face changing.
In implementation, after the second terminal receives the trained second face changing model, when the person at the second terminal is in a live broadcast, the person can click the face changing effect, a selection page of face changing models appears, and the face changing button corresponding to the face changing model of the first account is selected. The camera continuously acquires second images of the second person; to improve the accuracy of the face changing processing, the image adjustment functions can be turned off during acquisition. Each second image is input into the trained second face changing model: through the feature extraction model's recognition of the facial image of the second person, the facial image information of the second person is replaced with the facial image information of the first person, and the result is input into the second restoration model to be restored, so that a first image is obtained. The face changing model continuously outputs face-changed first images, and the image adjustment functions are then applied to the first images for beautification. A viewer watching the live broadcast of the second person therefore sees the face of the first person.
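Steps 705 and 706 follow the same per-frame pattern: with beautification off, the face region of each raw frame is passed through the trained model, the output replaces the face region locally, and the image adjustment functions are applied last. The sketch below illustrates that ordering; all the callables (face detection, the face changing model, beautification) are hypothetical stand-ins.

```python
import numpy as np

def render_live_frame(frame, face_model, detect_face, beautify):
    """Swap the face region of one raw (un-beautified) frame, then apply the
    image adjustment functions to the swapped result."""
    region, face = detect_face(frame)   # locate and crop the face region
    swapped = face_model(face)          # trained face changing model
    out = frame.copy()
    out[region] = swapped               # local replacement of the face region
    return beautify(out)                # whitening / smoothing / filters last

# Toy usage: a 4x4 "frame" whose central 2x2 block is the "face".
frame = np.zeros((4, 4))
detect = lambda f: ((slice(1, 3), slice(1, 3)), f[1:3, 1:3])
model = lambda face: face + 1.0         # stand-in for the face changing model
beautify = lambda img: img * 2.0        # stand-in for image adjustment
out = render_live_frame(frame, model, detect, beautify)
assert out[1, 1] == 2.0 and out[0, 0] == 0.0
```

Running beautification after the swap, rather than before, matches the embodiment's requirement that the model see unmodified facial images, since filters applied before the swap would shift the input away from the training distribution.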
Fig. 8 is a schematic diagram of a specific implementation provided in this embodiment of the application. The users at the two terminals record materials and send applications to the server; after receiving the applications, the server matches and pairs them, adds the face changing task to a task queue, and then performs model training. The two terminals may need to wait several days while the server performs these operations. After the model training is completed, the server sends a training-completed notification to the two terminals; the two terminals download the trained models after receiving the notification; and after the download is completed, a face changing control, that is, a face changing function entry, appears in the live broadcast interface of each user.
In this way, a face changing model is generated and used for the face changing processing, and the face-changed second image can be obtained by inputting the first image into the face changing model, rather than by performing local image replacement on the facial image of the original person using a pre-stored facial image of a target person. Deformation processing of the facial image is therefore not involved, and image distortion can be reduced.
An embodiment of the present application provides an apparatus for changing a face of an image, where the apparatus may be a server in the foregoing embodiment, and as shown in fig. 9, the apparatus includes:
The receiving module 910 is configured to receive a first face changing request, corresponding to a second account, sent by a first terminal logged in with a first account, where the first face changing request carries a facial image set of a first person; and receive a second face changing request, corresponding to the first account, sent by a second terminal logged in with a second account, where the second face changing request carries a facial image set of a second person.

The training module 920 is configured to perform model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, where the trained first face changing model is used for changing a facial image of the first person into a facial image of the second person, and the trained second face changing model is used for changing a facial image of the second person into a facial image of the first person.
A sending module 930, configured to send the trained first face changing model to the first terminal, and send the trained second face changing model to the second terminal.
Optionally, in performing the model training based on the facial image set of the first person and the facial image set of the second person to obtain the trained first face changing model and the trained second face changing model, the training module 920 is configured to:

alternately acquire the facial images in the facial image set of the first person and the facial image set of the second person;

each time a facial image of the first person is acquired, perform distortion processing on the facial image of the first person to obtain a distorted facial image of the first person, input the distorted facial image of the first person into a feature extraction model to obtain a first feature image, input the first feature image into a second restoration model to obtain a first output image, and perform parameter updating on the feature extraction model and the second restoration model based on the currently acquired facial image of the first person and the first output image;

each time a facial image of the second person is acquired, perform distortion processing on the facial image of the second person to obtain a distorted facial image of the second person, input the distorted facial image of the second person into the feature extraction model to obtain a second feature image, input the second feature image into a first restoration model to obtain a second output image, and perform parameter updating on the feature extraction model and the first restoration model based on the currently acquired facial image of the second person and the second output image;

and after parameter updating has been performed on the feature extraction model, the first restoration model, and the second restoration model based on all the facial images of the first person and all the facial images of the second person, determine the trained first face changing model based on the parameter-updated feature extraction model and the parameter-updated first restoration model, and determine the trained second face changing model based on the parameter-updated feature extraction model and the parameter-updated second restoration model.
Optionally, in sending the trained first face changing model to the first terminal and sending the trained second face changing model to the second terminal, the sending module 930 is configured to:

if the first account is currently in an online state, send the trained first face changing model to the first terminal;

if the first account is currently in an offline state, store the trained first face changing model, and send the trained first face changing model to the first terminal when it is detected that the first account has switched to the online state;

if the second account is currently in an online state, send the trained second face changing model to the second terminal;

and if the second account is currently in an offline state, store the trained second face changing model, and send the trained second face changing model to the second terminal when it is detected that the second account has switched to the online state.
Optionally, after the sending the trained first face-changing model to the first terminal, the apparatus further includes:
and the deleting module is used for sending a deleting notice corresponding to the trained first face changing model to the first terminal when receiving a face changing termination request corresponding to the first account sent by the second terminal.
An embodiment of the present application provides an apparatus for changing a face of an image, where the apparatus may be a terminal in the foregoing embodiment, and as shown in fig. 10, the apparatus includes:
the sending module 1010 sends a face changing request corresponding to the second account to the server, where the face changing request carries a face image set of the first person.
The receiving module 1020 receives the trained first face change model sent by the server, where the trained first face change model is used to change a face of the first person into a face of the second person.
And the face changing module 1030 is used for inputting a first image to be changed into the trained first face changing model when receiving a face changing instruction corresponding to the second account, so as to obtain a second image after face changing.
Optionally, before sending the face change request corresponding to the second account to the server, the apparatus further includes:
and the guiding module plays guiding information and/or displays the guiding information, wherein the guiding information is used for indicating the first person to do different actions.
And the shooting module is used for shooting the facial image set of the first person in the process of playing the guide information and/or displaying the guide information.
Optionally, the capturing the set of facial images of the first person, the capturing module is configured to:
and shooting the facial image set of the first person in a state that the image adjusting function is closed.
In this way, a face changing model is generated and used for the face changing processing, and the face-changed second image can be obtained by inputting the first image into the face changing model, rather than by performing local image replacement on the facial image of the original person using a pre-stored facial image of a target person. Deformation processing of the facial image is therefore not involved, and image distortion can be reduced.
It should be noted that: when the image face changing apparatus provided in the above embodiment changes the face of an image, the division of the functional modules is merely used as an example for description. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the image face changing apparatus provided in the above embodiment and the image face changing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
The embodiment of the present application further provides a system for changing faces of images, where the system includes a first terminal, a second terminal, and a server, where:
the server receives a first face changing request corresponding to a second account and sent by a first terminal logged in by a first account, wherein the face changing request carries a face image set of a first person; receiving a second face changing request corresponding to the first account and sent by a second terminal logged in by a second account, wherein the face changing request carries a face image set of a second person; performing model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the facial image of the first person into the facial image of the second person, and the trained second face changing model is used for changing the facial image of the second person into the facial image of the first person; and sending the trained first face changing model to the first terminal, and sending the trained second face changing model to the second terminal.
The first terminal sends a face changing request corresponding to a second account to the server; receiving a trained first face changing model sent by the server; and when a face changing instruction corresponding to the second account is received, inputting a first image to be changed into the trained first face changing model to obtain a second image after face changing.
The second terminal sends a face changing request corresponding to the first account to the server; receiving a trained second face changing model sent by the server; and when a face changing instruction corresponding to the first account is received, inputting a second image to be changed into the trained second face changing model to obtain a first image after face changing.
In this way, a face changing model is generated and used for the face changing processing, and the face-changed second image can be obtained by inputting the first image into the face changing model, rather than by performing local image replacement on the facial image of the original person using a pre-stored facial image of a target person. Deformation processing of the facial image is therefore not involved, and image distortion can be reduced.
Fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal may be the first terminal or the second terminal in the above embodiments. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, and desktop terminal.
In general, terminal 1100 includes: one or more processors 1101 and one or more memories 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1102 is used to store at least one instruction, and the at least one instruction is executed by processor 1101 to implement the image face changing method provided in the method embodiments of the present application.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, display screen 1105, camera 1106, audio circuitry 1107, positioning component 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 can be implemented on separate chips or circuit boards, which is not limited by the present embodiment.
The Radio Frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1104 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1104 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1105 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 can also capture touch signals on or above its surface. Such a touch signal may be input to the processor 1101 as a control signal for processing, in which case the display screen 1105 may also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, disposed on the front panel of the terminal 1100; in other embodiments, there may be at least two display screens 1105, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display disposed on a curved or folded surface of the terminal 1100. The display screen 1105 may even be an irregularly shaped, non-rectangular screen. The display screen 1105 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1106 is used to capture images or video. Optionally, the camera assembly 1106 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1106 may also include a flash, which can be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1107 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electric signals that are input to the processor 1101 for processing, or to the radio frequency circuit 1104 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1100. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electric signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert electric signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1107 may also include a headphone jack.
The positioning component 1108 is used to locate the current geographic position of the terminal 1100 for navigation or LBS (Location Based Service). The positioning component 1108 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1109 is configured to supply power to the various components in the terminal 1100. The power supply 1109 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
The acceleration sensor 1111 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 1100. For example, the acceleration sensor 1111 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1101 may control the display screen 1105 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used to collect motion data for games or user activity.
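The landscape/portrait decision described above can be sketched as follows. This is an illustrative assumption rather than the patent's implementation; the axis convention and the 0.3 g "device lying flat" threshold are invented for the example.

```python
import math

def choose_orientation(ax: float, ay: float, az: float) -> str:
    """Pick a UI orientation from gravity components along the
    device's x (short side) and y (long side) axes. Thresholds
    and axis convention are illustrative assumptions."""
    # Ignore the reading when the device lies nearly flat:
    # gravity is then mostly along z, and x/y are unreliable.
    if math.hypot(ax, ay) < 0.3 * 9.81:
        return "unchanged"
    # Gravity dominating the y axis means the long side is vertical.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```

For example, a device held upright reports gravity mostly along y, so `choose_orientation(0.0, -9.81, 0.0)` yields `"portrait"`.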
The gyroscope sensor 1112 may detect the body direction and rotation angle of the terminal 1100, and may cooperate with the acceleration sensor 1111 to capture the user's 3D motion of the terminal 1100. Based on the data collected by the gyroscope sensor 1112, the processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1113 may be disposed on a side frame of the terminal 1100 and/or at a lower layer of the display screen 1105. When the pressure sensor 1113 is disposed on the side frame of the terminal 1100, it can detect the user's grip signal on the terminal 1100, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed at the lower layer of the display screen 1105, the processor 1101 controls an operability control on the UI according to the user's pressure operation on the display screen 1105. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1114 is configured to collect the user's fingerprint, and the processor 1101 identifies the user from the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as a trusted identity, the processor 1101 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1114 may be disposed on the front, back, or side of the terminal 1100. When a physical button or a vendor logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with the physical button or vendor logo.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the display screen 1105 according to the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the display screen 1105 is reduced. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
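The brightness control described in this paragraph amounts to a monotonic mapping from ambient light intensity to display brightness. A minimal sketch, assuming a 0–1000 lux working range and 10–255 brightness levels (both ranges are illustrative, not values from the patent):

```python
def display_brightness(ambient_lux: float,
                       min_level: int = 10,
                       max_level: int = 255) -> int:
    """Map ambient light intensity to a display brightness level:
    brighter surroundings -> brighter screen. Lux range and level
    bounds are assumptions for illustration."""
    # Clamp the sensor reading to the assumed working range.
    lux = max(0.0, min(ambient_lux, 1000.0))
    # Linear interpolation between the minimum and maximum levels.
    return round(min_level + (max_level - min_level) * lux / 1000.0)
```

A real driver would typically smooth the sensor signal over time before applying it, to avoid visible flicker when the reading fluctuates.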
The proximity sensor 1116, also called a distance sensor, is typically disposed on the front panel of the terminal 1100 and is used to capture the distance between the user and the front face of the terminal 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually decreases, the processor 1101 controls the display screen 1105 to switch from a bright-screen state to a dark-screen state; when the proximity sensor 1116 detects that the distance between the user and the front face of the terminal 1100 gradually increases, the processor 1101 controls the display screen 1105 to switch from the dark-screen state to the bright-screen state.
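The proximity-driven screen switching above can be sketched as a small state transition. The 5 cm threshold and the state names are assumptions for illustration:

```python
def next_screen_state(current: str, distance_cm: float,
                      threshold_cm: float = 5.0) -> str:
    """Switch between bright- and dark-screen states based on the
    user-to-front-face distance reported by the proximity sensor.
    The 5 cm threshold is an illustrative assumption."""
    if current == "bright" and distance_cm < threshold_cm:
        return "dark"    # user moved close, e.g. phone raised to the ear
    if current == "dark" and distance_cm >= threshold_cm:
        return "bright"  # user moved away again
    return current       # no transition
```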
Those skilled in the art will appreciate that the configuration shown in fig. 11 is not limiting of terminal 1100, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 12 is a schematic structural diagram of a server 1200 according to an embodiment of the present application. The server 1200 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the one or more memories 1202 store at least one instruction that is loaded and executed by the one or more processors 1201 to implement the methods provided by the foregoing method embodiments. Certainly, the server 1200 may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, comprising instructions executable by a processor to perform the image face-changing method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method of face changing an image, the method comprising:
a server receives a first face changing request corresponding to a second account and sent by a first terminal logged in by a first account, wherein the first face changing request carries a face image set of a first person, and the face image set of the first person comprises a plurality of face images of the first person;
the server receives a second face changing request corresponding to the first account and sent by a second terminal logged in by the second account, wherein the second face changing request carries a face image set of a second person, and the face image set of the second person comprises a plurality of face images of the second person;
the server performs model training based on the facial image set of the first person and the facial image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the facial image of the first person into the facial image of the second person, and the trained second face changing model is used for changing the facial image of the second person into the facial image of the first person;
and the server sends the trained first face changing model to the first terminal and sends the trained second face changing model to the second terminal.
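The server-side flow of claim 1 — pairing the two accounts' requests, training, and returning one model per side — can be sketched as follows. The pairing data structure and the `train_pair` stub are assumptions made for illustration; claim 2 specifies the actual training.

```python
from dataclasses import dataclass, field

@dataclass
class FaceSwapServer:
    """Minimal sketch of the claim 1 server flow: collect the two
    accounts' face-change requests, train once both have arrived,
    and return one face-changing model per terminal."""
    pending: dict = field(default_factory=dict)  # (account, peer) -> image set

    def handle_request(self, account: str, peer: str, images: list):
        self.pending[(account, peer)] = images
        # Proceed only once both sides have submitted their image sets.
        if (peer, account) in self.pending:
            imgs_mine = self.pending[(account, peer)]
            imgs_peer = self.pending[(peer, account)]
            model_mine, model_peer = self.train_pair(imgs_mine, imgs_peer)
            # Each terminal receives the model that swaps its own user's face.
            return {account: model_mine, peer: model_peer}
        return None  # still waiting for the other account's request

    def train_pair(self, imgs_a, imgs_b):
        # Placeholder: a real server would run the training of claim 2.
        return (f"model[{len(imgs_a)} imgs]", f"model[{len(imgs_b)} imgs]")
```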
2. The method of claim 1, wherein performing model training based on the face image set of the first person and the face image set of the second person to obtain a trained first face changing model and a trained second face changing model comprises:
alternately acquiring the facial images in the facial image set of the first person and the facial image set of the second person;
each time a face image of the first person is obtained, distorting the face image of the first person to obtain a distorted face image of the first person, inputting the distorted face image of the first person into a feature extraction model to obtain a first feature image, inputting the first feature image into a second restoration model to obtain a first output image, and updating the parameters of the feature extraction model and the second restoration model based on the currently obtained face image of the first person and the first output image;
each time a face image of the second person is obtained, distorting the face image of the second person to obtain a distorted face image of the second person, inputting the distorted face image of the second person into the feature extraction model to obtain a second feature image, inputting the second feature image into a first restoration model to obtain a second output image, and updating the parameters of the feature extraction model and the first restoration model based on the currently obtained face image of the second person and the second output image;
after the parameters of the feature extraction model, the first restoration model, and the second restoration model have been updated based on all the face images of the first person and all the face images of the second person, determining the trained first face changing model based on the parameter-updated feature extraction model and the parameter-updated first restoration model, and determining the trained second face changing model based on the parameter-updated feature extraction model and the parameter-updated second restoration model.
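Structurally, claim 2 describes an alternating training loop over the two image sets, with a shared feature extraction model (an encoder) and two restoration models (decoders), where each face changing model pairs the encoder with the restoration model trained on the other person's faces. The sketch below stubs the parameter updates out as counters; the shared-encoder/two-decoder reading and all names are assumptions consistent with the claim, not code from the patent.

```python
import itertools

def train_face_swap(images_a, images_b, epochs=2, warp=None):
    """Structural sketch of claim 2's alternating training loop.
    Parameter updates are stubbed as counters; a real implementation
    would train a shared-encoder, two-decoder autoencoder."""
    warp = warp or (lambda img: img)        # the distortion step of the claim
    updates = {"encoder": 0, "decoder_a": 0, "decoder_b": 0}

    for _ in range(epochs):
        # Alternate between the two image sets, as the claim specifies.
        for img_a, img_b in itertools.zip_longest(images_a, images_b):
            if img_a is not None:
                _ = warp(img_a)             # distorted first-person face
                # encode -> restore with person A's decoder -> update both
                updates["encoder"] += 1
                updates["decoder_a"] += 1
            if img_b is not None:
                _ = warp(img_b)             # distorted second-person face
                updates["encoder"] += 1
                updates["decoder_b"] += 1

    # Each face changing model pairs the shared encoder with the decoder
    # trained on the *other* person's faces, so encoding person A and
    # decoding with B's decoder renders A's expression on B's face.
    model_a_to_b = ("encoder", "decoder_b")
    model_b_to_a = ("encoder", "decoder_a")
    return model_a_to_b, model_b_to_a, updates
```

The key design point is that the encoder sees both people's (distorted) faces and thus learns person-independent features such as pose and expression, while each decoder learns one person's appearance.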
3. The method of claim 1, wherein after sending the trained first face changing model to the first terminal, the method further comprises:
and when a face change termination request which is sent by the second terminal and corresponds to the first account is received, sending a deletion notification which corresponds to the trained first face change model to the first terminal.
4. A method of face changing an image, the method comprising:
a first terminal sends a face changing request corresponding to a second account to a server, wherein the first terminal is a terminal for logging in a first account, the face changing request carries a face image set of a first person, and the face image set of the first person comprises a plurality of face images of the first person;
the first terminal receives a trained first face changing model sent by the server, wherein the trained first face changing model is obtained by the server through training based on the face image set of the first person and a face image set of a second person, the face image set of the second person is sent to the server through a face changing request corresponding to the first account by a second terminal logged in by the second account, the face image set of the second person comprises a plurality of face images of the second person, and the trained first face changing model is used for changing the face image of the first person into the face image of the second person;
and when receiving a face changing instruction corresponding to the second account, the first terminal inputs a first image to be changed into the trained first face changing model to obtain a second image after face changing.
5. The method of claim 4, wherein before sending the face changing request corresponding to the second account to the server, the method further comprises:
playing guidance information and/or displaying guidance information, wherein the guidance information is used for instructing the first person to perform different actions;
and shooting the face image set of the first person while the guidance information is being played and/or displayed.
6. The method of claim 5, wherein shooting the face image set of the first person comprises:
and shooting the face image set of the first person in a state in which the image adjustment function is turned off.
7. An image face changing device, which is applied to a server, the device comprising:
a receiving module, configured to receive a first face changing request corresponding to a second account and sent by a first terminal logged in by a first account, wherein the first face changing request carries a face image set of a first person, and the face image set of the first person comprises a plurality of face images of the first person; and receive a second face changing request corresponding to the first account and sent by a second terminal logged in by the second account, wherein the second face changing request carries a face image set of a second person;
a training module, configured to perform model training based on the face image set of the first person and the face image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the face image of the first person into the face image of the second person, the trained second face changing model is used for changing the face image of the second person into the face image of the first person, and the face image set of the second person comprises a plurality of face images of the second person;
and the sending module is used for sending the trained first face changing model to the first terminal and sending the trained second face changing model to the second terminal.
8. An apparatus for changing a face of an image, the apparatus being applied to a first terminal, the apparatus comprising:
a sending module, configured to send a face changing request corresponding to a second account to a server, wherein the first terminal is a terminal logged in by a first account, the face changing request carries a face image set of a first person, and the face image set of the first person comprises a plurality of face images of the first person;
a receiving module, configured to receive a trained first face changing model sent by the server, wherein the trained first face changing model is obtained by the server through training based on the face image set of the first person and a face image set of a second person, the face image set of the second person is sent to the server through a face changing request corresponding to the first account by a second terminal logged in by the second account, the face image set of the second person comprises a plurality of face images of the second person, and the trained first face changing model is used for changing the face image of the first person into the face image of the second person;
and the face changing module is used for inputting a first image to be changed into the trained first face changing model when receiving a face changing instruction corresponding to the second account, so as to obtain a second image after face changing.
9. A system for changing faces of images, the system comprising a first terminal, a second terminal and a server, wherein:
the server receives a first face changing request corresponding to a second account and sent by the first terminal logged in by a first account, wherein the first face changing request carries a face image set of a first person, and the face image set of the first person comprises a plurality of face images of the first person; receives a second face changing request corresponding to the first account and sent by the second terminal logged in by the second account, wherein the second face changing request carries a face image set of a second person, and the face image set of the second person comprises a plurality of face images of the second person; performs model training based on the face image set of the first person and the face image set of the second person to obtain a trained first face changing model and a trained second face changing model, wherein the trained first face changing model is used for changing the face image of the first person into the face image of the second person, and the trained second face changing model is used for changing the face image of the second person into the face image of the first person; and sends the trained first face changing model to the first terminal and the trained second face changing model to the second terminal;
the first terminal sends the face changing request corresponding to the second account to the server; receives the trained first face changing model sent by the server; and, when receiving a face changing instruction corresponding to the second account, inputs a first image to be face-changed into the trained first face changing model to obtain a second image after face changing;
the second terminal sends the face changing request corresponding to the first account to the server; receives the trained second face changing model sent by the server; and, when receiving a face changing instruction corresponding to the first account, inputs a second image to be face-changed into the trained second face changing model to obtain a first image after face changing.
10. A computer device comprising one or more processors and one or more memories, the one or more memories having stored therein at least one instruction, the instruction being loaded and executed by the one or more processors to perform the operations performed by the image face changing method according to any one of claims 1 to 6.
11. A computer-readable storage medium having stored therein at least one instruction, the instruction being loaded and executed by a processor to perform the operations performed by the image face changing method according to any one of claims 1 to 6.
CN201910833438.0A 2019-09-04 2019-09-04 Image face changing method, device, system, equipment and storage medium Active CN110533585B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910833438.0A CN110533585B (en) 2019-09-04 2019-09-04 Image face changing method, device, system, equipment and storage medium
PCT/CN2020/112777 WO2021043121A1 (en) 2019-09-04 2020-09-01 Image face changing method, apparatus, system, and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910833438.0A CN110533585B (en) 2019-09-04 2019-09-04 Image face changing method, device, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110533585A CN110533585A (en) 2019-12-03
CN110533585B true CN110533585B (en) 2022-09-27

Family

ID=68666849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910833438.0A Active CN110533585B (en) 2019-09-04 2019-09-04 Image face changing method, device, system, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110533585B (en)
WO (1) WO2021043121A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533585B (en) * 2019-09-04 2022-09-27 广州方硅信息技术有限公司 Image face changing method, device, system, equipment and storage medium
CN111263226B (en) * 2020-01-17 2021-10-22 中国科学技术大学 Video processing method, device, electronic device and medium
CN111986301B (en) * 2020-09-04 2024-06-28 网易(杭州)网络有限公司 Method and device for processing data in live broadcast, electronic equipment and storage medium
CN112752147A (en) * 2020-09-04 2021-05-04 腾讯科技(深圳)有限公司 Video processing method, device and storage medium
CN113487745A (en) * 2021-07-16 2021-10-08 思享智汇(海南)科技有限责任公司 Method, device and system for enhancing reality
CN114494002B (en) * 2022-03-30 2022-07-01 广州公评科技有限公司 AI face changing video-based original face image intelligent restoration method and system
CN115294423B (en) * 2022-08-15 2025-09-26 网易(杭州)网络有限公司 Model determination method, image processing method, device, equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9165182B2 (en) * 2013-08-19 2015-10-20 Cisco Technology, Inc. Method and apparatus for using face detection information to improve speaker segmentation
US9697266B1 (en) * 2013-09-27 2017-07-04 EMC IP Holding Company LLC Management of computing system element migration
CN104244022B (en) * 2014-08-29 2018-03-09 形山科技(深圳)有限公司 A kind of image processing method and system
CN106331569B (en) * 2016-08-23 2019-08-30 广州华多网络科技有限公司 Character facial transform method and system in instant video picture
CN106534757B (en) * 2016-11-22 2020-02-28 香港乐蜜有限公司 Face exchange method, device, host terminal and viewer terminal
CN108347578B (en) * 2017-01-23 2020-05-08 腾讯科技(深圳)有限公司 Method and device for processing video image in video call
CN107564080B (en) * 2017-08-17 2020-07-28 北京觅己科技有限公司 Face image replacement system
CN108040290A (en) * 2017-12-22 2018-05-15 四川长虹电器股份有限公司 TV programme based on AR technologies are changed face method in real time
CN109063658A (en) * 2018-08-08 2018-12-21 吴培希 A method of it is changed face using deep learning in multi-mobile-terminal video personage
CN110533585B (en) * 2019-09-04 2022-09-27 广州方硅信息技术有限公司 Image face changing method, device, system, equipment and storage medium

Also Published As

Publication number Publication date
CN110533585A (en) 2019-12-03
WO2021043121A1 (en) 2021-03-11

Similar Documents

Publication Publication Date Title
CN109600678B (en) Information display method, device and system, server, terminal and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN109982102B (en) Interface display method and system for live broadcast room, live broadcast server and anchor terminal
CN108401124B (en) Video recording method and device
CN110740340B (en) Video live broadcast method and device and storage medium
CN110278464B (en) Method and device for displaying list
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN110992493A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112533017B (en) Live broadcast method, device, terminal and storage medium
CN110971930A (en) Live virtual image broadcasting method, device, terminal and storage medium
CN111083516B (en) Live broadcast processing method and device
CN111355974A (en) Method, apparatus, system, device and storage medium for virtual gift giving processing
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
CN109451343A (en) Video sharing method, apparatus, terminal and storage medium
CN111246095B (en) Method, device and equipment for controlling lens movement and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
CN111818358A (en) Audio file playing method and device, terminal and storage medium
CN110300274A (en) Method for recording, device and the storage medium of video file
CN110662105A (en) Animation file generation method and device and storage medium
CN112104648A (en) Data processing method, device, terminal, server and storage medium
CN112468884A (en) Dynamic resource display method, device, terminal, server and storage medium
CN107896337B (en) Information popularization method and device and storage medium
CN111586444B (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210111

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511446 28th floor, block B1, Wanda Plaza, Wanbo business district, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20191203

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000054

Denomination of invention: Image face changing method, device, system, equipment and storage medium

License type: Common License

Record date: 20210208

GR01 Patent grant