
CN113284229B - Three-dimensional face model generation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113284229B
CN113284229B (application CN202110597244.2A)
Authority
CN
China
Prior art keywords
face
model
preset
user
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110597244.2A
Other languages
Chinese (zh)
Other versions
CN113284229A (en)
Inventor
王纪章
戎荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Canzi Technology Co ltd
Original Assignee
Shanghai Xinglan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xinglan Information Technology Co ltd filed Critical Shanghai Xinglan Information Technology Co ltd
Priority to CN202110597244.2A priority Critical patent/CN113284229B/en
Publication of CN113284229A publication Critical patent/CN113284229A/en
Application granted granted Critical
Publication of CN113284229B publication Critical patent/CN113284229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of image processing and discloses a method, apparatus, device, and storage medium for generating a three-dimensional face model. The method comprises the following steps: acquiring face image information of a user and performing feature recognition on the face image information to obtain face feature information of the user; inputting the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain a primary face model of the user; and acquiring preset rendering data and performing effect rendering on the primary face model based on the preset rendering data to obtain a three-dimensional face model of the user. Performing model matching and model adjustment in the preset three-dimensional conversion neural network model based on the obtained face feature information improves modeling efficiency, and rendering the resulting primary face model with the preset rendering data improves the personalized display effect of the obtained three-dimensional face model, which further improves the user's online social experience and increases user stickiness.

Description

三维人脸模型生成方法、装置、设备及存储介质 Three-dimensional face model generation method, apparatus, device, and storage medium

技术领域 Technical Field

本发明涉及图像处理技术领域,尤其涉及一种三维人脸模型生成方法、装置、设备及存储介质。The present invention relates to the technical field of image processing, in particular to a method, device, equipment and storage medium for generating a three-dimensional face model.

背景技术 Background Art

在用户选用社交APP进行社交时,会上传照片或者网络图片当作自己的头像(如,“个人中心”中的“上传头像”),然而,自拍容易泄露用户的个人隐私,网络图片又容易和其他用户相同,而产生“撞头像”的问题,且将自拍或网络图片作为头像均无法给用户带来新颖感,缺乏趣味性,难以激发用户的自我展现意识,现有技术中,虽有部分厂商在自拍功能中加入了根据用户的实时自拍生成对应的人脸模型这一功能,但仍存在人脸模型精度低或人脸模型构造效率低、构造时间长的问题,因此,如何提高个性化三维人脸模型的生成效率和展示效果,以提高用户的网络社交体验,增加用户粘度,成为一个亟待解决的问题。When a user chooses a social APP, he or she typically uploads a photo or an image found online as an avatar (for example, via "Upload Avatar" in the "Personal Center"). However, selfies can leak the user's personal privacy, and online images are easily identical to those chosen by other users, causing "avatar collisions"; neither option brings the user any sense of novelty, both lack interest, and they do little to stimulate the user's desire for self-expression. In the prior art, some vendors have added to the selfie function a feature that generates a corresponding face model from the user's real-time selfie, but such models still suffer from low accuracy, low construction efficiency, and long construction time. Therefore, how to improve the generation efficiency and display effect of personalized three-dimensional face models, so as to improve users' online social experience and increase user stickiness, has become an urgent problem to be solved.

上述内容仅用于辅助理解本发明的技术方案,并不代表承认上述内容是现有技术。The above content is only used to assist in understanding the technical solution of the present invention, and does not mean that the above content is admitted as prior art.

发明内容 Summary of the Invention

本发明的主要目的在于提供了一种三维人脸模型生成方法、装置、设备及存储介质,旨在解决如何提高个性化三维人脸模型的生成效率和展示效果,以提高用户的网络社交体验,增加用户粘度的技术问题。The main purpose of the present invention is to provide a method, apparatus, device, and storage medium for generating a three-dimensional face model, aiming to solve the technical problem of how to improve the generation efficiency and display effect of personalized three-dimensional face models so as to improve users' online social experience and increase user stickiness.

为实现上述目的,本发明提供了一种三维人脸模型生成方法,所述方法包括以下步骤:To achieve the above object, the invention provides a method for generating a three-dimensional human face model, the method comprising the following steps:

获取用户的人脸图像信息,并对所述人脸图像信息进行特征识别,以获得所述用户的人脸特征信息;Obtaining the user's face image information, and performing feature recognition on the face image information to obtain the user's face feature information;

将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型;inputting the face feature information into a preset three-dimensional transformation neural network model for three-dimensional transformation processing to obtain the user's primary face model;

获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型。Acquiring preset rendering data, and performing effect rendering on the primary face model based on the preset rendering data, so as to obtain a three-dimensional face model of the user.

可选地,所述获取用户的人脸图像信息,并对所述人脸图像信息进行特征识别,以获得所述用户的人脸特征信息的步骤,具体包括:Optionally, the step of acquiring the user's face image information and performing feature recognition on the face image information to obtain the user's face feature information specifically includes:

获取用户的人脸图像信息,并对所述人脸图像信息进行图像识别,获得人脸图像关键点;Acquire the user's face image information, and perform image recognition on the face image information to obtain key points of the face image;

将所述人脸图像关键点转换到预设人脸坐标系中,以获得所述人脸图像关键点与所述预设人脸坐标系中对应的预设图像关键点的关键点坐标差值;Converting the key points of the face image into a preset face coordinate system to obtain the key point coordinate differences between the key points of the face image and the corresponding preset image key points in the preset face coordinate system;

基于所述关键点坐标差值生成所述用户的人脸特征信息。Generate facial feature information of the user based on the key point coordinate difference.
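The patent text contains no code; the keypoint-difference step above can be sketched as follows. This is an illustrative reconstruction under assumptions of my own (the function name, the (N, 2) array shapes, and the sample coordinates are not from the patent).

```python
import numpy as np

def keypoint_differences(face_keypoints, preset_keypoints):
    """Per-keypoint coordinate differences (hypothetical sketch).

    Both inputs are (N, 2) arrays of keypoints expressed in the preset
    face coordinate system; the returned differences are the "face
    feature information" described in the claim.
    """
    face = np.asarray(face_keypoints, dtype=float)
    preset = np.asarray(preset_keypoints, dtype=float)
    return face - preset

# Example: one keypoint, offset by roughly (-0.9, +1.2) from its preset position.
diffs = keypoint_differences([[10.1, 20.3]], [[11.0, 19.1]])
```

In this sketch the feature information is simply the residual after aligning both point sets to the same coordinate system; the patent leaves the exact encoding open.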

可选地,所述将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型的步骤,具体包括:Optionally, the step of inputting the facial feature information into a preset three-dimensional transformation neural network model to perform three-dimensional transformation processing, so as to obtain the user's primary human face model, specifically includes:

将所述人脸特征信息与预设三维转换神经网络模型中各预设人脸模型的模型特征信息进行模型匹配,获得所述人脸特征信息和所述模型特征信息之间的模型匹配度;Carrying out model matching between the face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model, to obtain the model matching degree between the face feature information and the model feature information;

对所述模型匹配度进行排序,以获得匹配度排序结果,并基于所述匹配度排序结果确定目标人脸模型;Sorting the matching degree of the models to obtain a matching degree ranking result, and determining a target face model based on the matching degree ranking result;

基于所述人脸特征信息对所述目标人脸模型进行自适应调整,以获得所述用户的初阶人脸模型。Adaptively adjusting the target face model based on the face feature information to obtain a primary face model of the user.

可选地,所述对所述模型匹配度进行排序,以获得匹配度排序结果,并基于所述匹配度排序结果确定目标人脸模型的步骤,具体包括:Optionally, the step of sorting the matching degree of the models to obtain the matching degree ranking result, and determining the target face model based on the matching degree ranking result, specifically includes:

将所述模型匹配度按照从大到小进行排序,以获得匹配度排序结果;Sorting the matching degrees of the models from large to small to obtain the matching degree sorting results;

判断所述匹配度排序结果中排序顺位第一的预设人脸模型对应的模型匹配度是否大于预设匹配度;Judging whether the model matching degree corresponding to the preset face model with the first ranking order in the matching degree sorting result is greater than the preset matching degree;

在大于等于所述预设匹配度时,将所述排序顺位第一的预设人脸模型作为目标人脸模型;When it is greater than or equal to the preset matching degree, taking the preset face model ranked first as the target face model;

在小于所述预设匹配度时,将预设标准模型作为所述目标人脸模型。When it is less than the preset matching degree, the preset standard model is used as the target face model.
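The sort-and-threshold selection described in the steps above can be sketched in a few lines. The model names, the score scale, and the fallback label are illustrative assumptions, not values from the patent.

```python
def select_target_model(match_scores, preset_threshold, standard_model="standard"):
    """Sort model matching degrees in descending order and return the
    top-ranked preset face model if its score reaches the preset
    matching degree; otherwise fall back to the preset standard model.
    """
    ranked = sorted(match_scores.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_score = ranked[0]
    return best_name if best_score >= preset_threshold else standard_model

# Example: "model_b" wins with a score of 0.92 against a threshold of 0.8.
chosen = select_target_model({"model_a": 0.71, "model_b": 0.92}, 0.8)
```

Raising the threshold above every score makes the function return the standard model, matching the fallback branch of the claim.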

可选地,所述基于所述人脸特征信息对所述目标人脸模型进行自适应调整,以获得所述用户的初阶人脸模型的步骤,具体包括:Optionally, the step of adaptively adjusting the target face model based on the face feature information to obtain the user's primary face model specifically includes:

获取所述人脸特征信息对应在各人脸区域的人脸特征值,以及所述目标人脸模型对应在各预设分区的人脸目标值;Obtaining the face feature values of the face feature information in each face area, and the face target values of the target face model in each preset partition;

将所述人脸特征值与所述人脸目标值进行比对,获得特征比对结果;Comparing the feature value of the human face with the target value of the human face to obtain a feature comparison result;

基于所述特征比对结果对所述目标人脸模型的各人脸区域进行自适应调整,以获得所述用户的初阶人脸模型。Adaptively adjust each face area of the target face model based on the feature comparison result to obtain a primary face model of the user.
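The region-wise adaptive adjustment above can be illustrated as blending each region's target value toward the user's measured feature value. The flat region-to-scalar mapping and the blend `strength` parameter are assumptions for illustration; the patent does not specify the adjustment rule.

```python
def adapt_regions(target_values, feature_values, strength=1.0):
    """Region-wise adaptive adjustment sketch: move each face region's
    value on the target model toward the user's measured feature value.
    strength=1.0 means full correction; smaller values damp the change.
    """
    adjusted = {}
    for region, target in target_values.items():
        feature = feature_values.get(region, target)  # unmeasured regions stay put
        adjusted[region] = target + strength * (feature - target)
    return adjusted

# Example: two hypothetical regions, each pulled halfway toward the user's values.
out = adapt_regions({"nose": 10.0, "jaw": 40.0},
                    {"nose": 12.0, "jaw": 44.0}, strength=0.5)
```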

可选地,所述预设渲染数据包括预设纹理数据和预设风格数据;Optionally, the preset rendering data includes preset texture data and preset style data;

相应地,所述获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型的步骤,具体包括:Correspondingly, the step of acquiring preset rendering data, and performing effect rendering on the primary face model based on the preset rendering data, so as to obtain the user's 3D face model, specifically includes:

获取各人脸区域对应的所述预设纹理数据和所述预设风格数据,以及各人脸区域对应的边缘点信息和法向量信息;Obtaining the preset texture data and the preset style data corresponding to each face area, and edge point information and normal vector information corresponding to each face area;

基于各人脸区域对应的所述预设纹理数据、所述边缘点信息以及所述法向量信息对所述初阶人脸模型进行贴图,获得人脸纹理模型;Mapping the primary face model based on the preset texture data corresponding to each face area, the edge point information, and the normal vector information to obtain a face texture model;

基于各人脸区域对应的所述预设风格数据、所述边缘点信息以及所述法向量信息对所述人脸纹理模型进行渲染,获得所述用户的三维人脸模型。Render the face texture model based on the preset style data, the edge point information, and the normal vector information corresponding to each face area, to obtain a three-dimensional face model of the user.
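The claim says the normal vector information controls the rendering level for each face area. A common way to realize that is a Lambertian shading term scaled by a style gain; the following is an illustrative stand-in, not the patent's actual rendering step, and all names and values are assumptions.

```python
import math

def shade_region(base_color, normal, light_dir, style_gain=1.0):
    """Per-region shading sketch: the region's normal vector sets the
    rendering level (cosine of the angle to the light), and a style
    gain scales the result.
    """
    def _unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    n, l = _unit(normal), _unit(light_dir)
    # Clamp back-facing regions to zero so they receive no light.
    intensity = max(sum(a * b for a, b in zip(n, l)), 0.0) * style_gain
    return [c * intensity for c in base_color]

# A region whose normal faces the light keeps its full base color.
lit = shade_region([0.8, 0.6, 0.5], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
```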

可选地,所述获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型的步骤,具体包括:Optionally, the step of acquiring preset rendering data, and performing effect rendering on the primary face model based on the preset rendering data, so as to obtain the user's 3D face model, specifically includes:

获取用户输入的调整偏好信息中的一类偏好信息,并基于所述一类偏好信息对所述初阶人脸模型进行调整,以获得用户微调模型;Acquiring a first type of preference information from the adjustment preference information input by the user, and adjusting the primary face model based on the first type of preference information to obtain a user fine-tuned model;

获取预设渲染数据和所述调整偏好信息中的二类偏好信息,并基于所述渲染数据和所述二类偏好信息对所述用户微调模型进行效果渲染,以获得所述用户的三维人脸模型。Acquiring preset rendering data and a second type of preference information from the adjustment preference information, and performing effect rendering on the user fine-tuned model based on the rendering data and the second type of preference information to obtain the user's three-dimensional face model.

此外,为实现上述目的,本发明还提出一种三维人脸模型生成装置,所述三维人脸模型生成装置包括:In addition, in order to achieve the above object, the present invention also proposes a 3D face model generation device, the 3D face model generation device includes:

特征识别模块,用于获取用户的人脸图像信息,并对所述人脸图像信息进行特征识别,以获得所述用户的人脸特征信息;A feature recognition module, configured to acquire the user's face image information, and perform feature recognition on the face image information to obtain the user's face feature information;

三维转换模块,用于将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型;A three-dimensional conversion module, configured to input the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing, so as to obtain the user's primary face model;

效果渲染模块,用于获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型。The effect rendering module is configured to obtain preset rendering data, and perform effect rendering on the primary face model based on the preset rendering data, so as to obtain a 3D face model of the user.

此外,为实现上述目的,本发明还提出一种三维人脸模型生成设备,所述设备包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的三维人脸模型生成程序,所述三维人脸模型生成程序配置为实现如上文所述的三维人脸模型生成方法的步骤。In addition, in order to achieve the above object, the present invention also proposes a 3D face model generation device, which includes: a memory, a processor, and a 3D face model generation program stored in the memory and executable on the processor, the 3D face model generation program being configured to implement the steps of the method for generating a 3D face model as described above.

此外,为实现上述目的,本发明还提出一种存储介质,所述存储介质上存储有三维人脸模型生成程序,所述三维人脸模型生成程序被处理器执行时实现如上文所述的三维人脸模型生成方法的步骤。In addition, in order to achieve the above object, the present invention also proposes a storage medium on which a 3D face model generation program is stored; when the 3D face model generation program is executed by a processor, the steps of the method for generating a 3D face model as described above are implemented.

本发明中,获取用户的人脸图像信息,并对所述人脸图像信息进行特征识别,以获得所述用户的人脸特征信息;将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型;获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型,相较于现有技术直接根据用户的人脸图像信息进行建模,本发明通过基于用户的人脸图像信息获得的人脸特征信息在预设三维转换神经网络模型中进行人脸模型匹配,然后对匹配到的人脸模型进行模型调整来提高建模效率,然后基于预设渲染数据对获得的初阶人脸模型进行效果渲染,以提高基于初阶人脸模型获得的三维人脸模型的展示效果,实现了三维人脸模型的个性化设置,也提高了用户的网络社交体验,增加了用户粘度。In the present invention, face image information of a user is acquired and feature recognition is performed on it to obtain the user's face feature information; the face feature information is input into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain a primary face model of the user; and preset rendering data is acquired and effect rendering is performed on the primary face model based on the preset rendering data to obtain a three-dimensional face model of the user. Compared with the prior art, which models directly from the user's face image information, the present invention uses the face feature information obtained from the user's face image information to perform face model matching in the preset three-dimensional conversion neural network model and then adjusts the matched face model, improving modeling efficiency; the resulting primary face model is then rendered based on the preset rendering data, improving the display effect of the three-dimensional face model obtained from it. This realizes personalized settings for the three-dimensional face model, improves the user's online social experience, and increases user stickiness.

附图说明 Description of the Drawings

图1是本发明实施例方案涉及的硬件运行环境的三维人脸模型生成设备的结构示意图;Fig. 1 is a schematic structural diagram of a 3D face model generation device in the hardware operating environment involved in embodiments of the present invention;

图2为本发明三维人脸模型生成方法第一实施例的流程示意图;Fig. 2 is a schematic flow chart of the first embodiment of the method for generating a three-dimensional face model of the present invention;

图3为本发明三维人脸模型生成方法第二实施例的流程示意图;Fig. 3 is a schematic flow chart of the second embodiment of the method for generating a three-dimensional face model of the present invention;

图4为本发明三维人脸模型生成装置第一实施例的结构框图。Fig. 4 is a structural block diagram of the first embodiment of the device for generating a three-dimensional face model according to the present invention.

本发明目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。The realization of the purpose of the present invention, functional characteristics and advantages will be further described in conjunction with the embodiments and with reference to the accompanying drawings.

具体实施方式 Detailed Description

应当理解,此处所描述的具体实施例仅用以解释本发明,并不用于限定本发明。It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit the present invention.

参照图1,图1为本发明实施例方案涉及的硬件运行环境的三维人脸模型生成设备结构示意图。Referring to FIG. 1 , FIG. 1 is a schematic structural diagram of a 3D face model generation device in a hardware operating environment involved in an embodiment of the present invention.

如图1所示,该三维人脸模型生成设备可以包括:处理器1001,例如中央处理器(Central Processing Unit,CPU),通信总线1002、用户接口1003,网络接口1004,存储器1005。其中,通信总线1002用于实现这些组件之间的连接通信。用户接口1003可以包括显示屏(Display)、输入单元比如键盘(Keyboard),可选用户接口1003还可以包括标准的有线接口、无线接口。网络接口1004可选的可以包括标准的有线接口、无线接口(如无线保真(WIreless-FIdelity,WI-FI)接口)。存储器1005可以是高速的随机存取存储器(Random Access Memory,RAM)存储器,也可以是稳定的非易失性存储器(Non-Volatile Memory,NVM),例如磁盘存储器。存储器1005可选的还可以是独立于前述处理器1001的存储装置。As shown in Fig. 1, the 3D face model generation device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a wireless fidelity (WIreless-FIdelity, WI-FI) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM), or a stable non-volatile memory (Non-Volatile Memory, NVM), such as a disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.

本领域技术人员可以理解,图1中示出的结构并不构成对三维人脸模型生成设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。Those skilled in the art can understand that the structure shown in Fig. 1 does not constitute a limitation on the 3D face model generation device, which may include more or fewer components than shown, a combination of certain components, or a different arrangement of components.

如图1所示,作为一种存储介质的存储器1005中可以包括操作系统、数据存储模块、网络通信模块、用户接口模块以及三维人脸模型生成程序。As shown in FIG. 1 , the memory 1005 as a storage medium may include an operating system, a data storage module, a network communication module, a user interface module, and a 3D face model generation program.

在图1所示的三维人脸模型生成设备中,网络接口1004主要用于与网络服务器进行数据通信;用户接口1003主要用于与用户进行数据交互;本发明三维人脸模型生成设备中的处理器1001、存储器1005可以设置在三维人脸模型生成设备中,所述三维人脸模型生成设备通过处理器1001调用存储器1005中存储的三维人脸模型生成程序,并执行本发明实施例提供的三维人脸模型生成方法。In the 3D face model generation device shown in Fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with the user. The processor 1001 and the memory 1005 can be arranged in the 3D face model generation device, which calls the 3D face model generation program stored in the memory 1005 through the processor 1001 and executes the 3D face model generation method provided by the embodiments of the present invention.

本发明实施例提供了一种三维人脸模型生成方法,参照图2,图2为本发明三维人脸模型生成方法第一实施例的流程示意图。An embodiment of the present invention provides a method for generating a three-dimensional face model. Referring to FIG. 2 , FIG. 2 is a schematic flowchart of a first embodiment of the method for generating a three-dimensional face model according to the present invention.

本实施例中,所述三维人脸模型生成方法包括以下步骤:In this embodiment, the method for generating a three-dimensional human face model includes the following steps:

步骤S10:获取用户的人脸图像信息,并对所述人脸图像信息进行特征识别,以获得所述用户的人脸特征信息;Step S10: Obtain the user's face image information, and perform feature recognition on the face image information to obtain the user's face feature information;

易于理解的是,本实施例中,所述人脸图像信息,可基于用户的自拍照片获得,也可直接启用上述三维人脸模型生成设备内置的或外连的摄像装置对用户进行图像扫描,以获得用户的人脸图像信息,然后,基于预设人脸识别算法对所述人脸图像信息进行图像识别,获得人脸图像关键点,所述预设人脸识别算法可根据实际需求进行设置,本实施例中,为了提高人脸识别效率,以进一步提高三维人脸模型的生成效率,可设置为基于模板的人脸识别算法,如,基于主成分分析(Principal Component Analysis,PCA)的人脸识别算法,即特征脸(Eigenface)算法。所述人脸图像关键点,可理解为人脸图像的关键位置处的特征点,如,五官(眉眼耳鼻口)轮廓、脸部边缘轮廓、胡须、色斑、痣等,本实施例对此不加以限制。It is easy to understand that, in this embodiment, the face image information can be obtained from a self-portrait photo of the user, or by directly activating a built-in or externally connected camera of the above 3D face model generation device to scan the user. Then, image recognition is performed on the face image information based on a preset face recognition algorithm to obtain the key points of the face image. The preset face recognition algorithm can be chosen according to actual needs; in this embodiment, in order to improve face recognition efficiency and thereby further improve the generation efficiency of the 3D face model, it can be set to a template-based face recognition algorithm, for example one based on Principal Component Analysis (PCA), i.e. the Eigenface algorithm. The key points of the face image can be understood as feature points at key positions of the face image, such as the contours of the facial features (eyebrows, eyes, ears, nose, and mouth), the edge contour of the face, beards, pigmented spots, moles, and so on, which are not limited in this embodiment.
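The paragraph above names the PCA-based Eigenface algorithm as one option. As a hedged illustration of the PCA core of that approach (the patent gives no internals, and the toy data below is invented), the principal directions of mean-centred image vectors can be obtained via SVD:

```python
import numpy as np

def eigenfaces(images, k):
    """Minimal eigenface-style PCA sketch.

    'images' is an (n_samples, n_pixels) matrix of flattened face
    images; returns the top-k principal components (eigenfaces) of
    the mean-centred data.
    """
    X = np.asarray(images, dtype=float)
    X = X - X.mean(axis=0)          # centre the data
    # Rows of Vt are the principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]

# Toy "faces" varying along a single direction in a 2-pixel space.
pcs = eigenfaces([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]], k=1)
```

Real eigenface pipelines project new faces onto these components and compare the resulting coefficient vectors; that projection step is omitted here for brevity.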

进一步地,可将所述人脸图像关键点转换到预设人脸坐标系中,以获得所述人脸图像关键点与所述预设人脸坐标系中对应的预设图像关键点的关键点坐标差值,再基于所述关键点坐标差值生成所述用户的人脸特征信息,所述预设人脸坐标系为基于预设标准模型建立的坐标系,所述预设标准模型可理解为预设三维转换神经网络模型中内置的一种人脸模型标准,如,基于马夸特面具(Marquardt)建立的人脸模型,基于平均脸建立的人脸模型等,本实施例对此不加以限制,相应地,所述预设图像关键点,可理解为预设标准模型的关键位置处的特征点,如,五官轮廓、脸型轮廓等,本实施例对此不加以限制,所述预设三维转换神经网络模型可理解为基于卷积神经网络(Convolutional Neural Networks,CNN)建立的用以基于人脸特征信息进行人脸模型匹配和人脸模型调整的一种神经网络模型。Further, the key points of the face image can be transformed into a preset face coordinate system to obtain the key point coordinate differences between the key points of the face image and the corresponding preset image key points in that coordinate system, and the user's face feature information is then generated based on those coordinate differences. The preset face coordinate system is a coordinate system established on a preset standard model, and the preset standard model can be understood as a face model standard built into the preset three-dimensional conversion neural network model, for example a face model based on the Marquardt mask or one based on an average face, which is not limited in this embodiment. Correspondingly, the preset image key points can be understood as feature points at key positions of the preset standard model, such as the contours of the facial features and the face shape, which are likewise not limited here. The preset three-dimensional conversion neural network model can be understood as a neural network model established on Convolutional Neural Networks (CNN) for performing face model matching and face model adjustment based on face feature information.

步骤S20:将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型;Step S20: Input the face feature information into the preset three-dimensional transformation neural network model to perform three-dimensional transformation processing, so as to obtain the user's primary face model;

易于理解的是,所谓人脸特征信息,即基于获得的人脸图像信息对应的人脸图像关键点与预设人脸坐标系中预设标准模型对应的预设图像关键点之间的关键点坐标差值生成的用户的人脸特征信息,可理解为基于预设图像关键点的坐标值和关键点坐标差值生成的人脸图像关键点对应的坐标值,用以表示与预设图像关键点之间的差距,即用户的人脸图像信息相较于预设标准模型的特征之处,如,预设标准模型的某一预设关键点对应的坐标值为(a,b),此预设关键点对应的关键点坐标差值为(-0.90,+1.21),则用户的人脸图像信息对应的人脸特征信息的坐标值为(a-0.90,b+1.21)。It is easy to understand that the so-called face feature information, i.e. the information generated from the key point coordinate differences between the face image key points of the obtained face image information and the preset image key points of the preset standard model in the preset face coordinate system, can be understood as the coordinate values of the face image key points obtained by adding the key point coordinate differences to the coordinate values of the preset image key points. It represents the gap between the user's face image and the preset standard model. For example, if the coordinate value of a certain preset key point of the preset standard model is (a, b), and the key point coordinate difference corresponding to this preset key point is (-0.90, +1.21), then the coordinate value of the corresponding face feature information of the user's face image is (a - 0.90, b + 1.21).
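The (a - 0.90, b + 1.21) arithmetic above is just an offset applied to the preset coordinates; a minimal sketch follows, with the concrete value of (a, b) invented for the example since the patent leaves it symbolic.

```python
def apply_offsets(preset_coords, offsets):
    """Recover the user's keypoint coordinates from the preset keypoint
    coordinates plus the stored coordinate differences, mirroring the
    (a - 0.90, b + 1.21) example in the text.
    """
    return [(px + dx, py + dy)
            for (px, py), (dx, dy) in zip(preset_coords, offsets)]

# With a hypothetical preset keypoint (a, b) = (5.00, 3.00) and the
# difference (-0.90, +1.21) from the text:
user_kp = apply_offsets([(5.00, 3.00)], [(-0.90, 1.21)])
```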

应当理解的是,以上仅为举例说明,对本发明的技术方案并不构成任何限定,在具体应用中,本领域的技术人员可以根据需要进行设置,本发明对此不做限制。It should be understood that the above is only an example, and does not constitute any limitation to the technical solution of the present invention. In specific applications, those skilled in the art can make settings according to needs, and the present invention is not limited thereto.

进一步地,在获得所述人脸特征信息后,为了提高建模效率,可将所述人脸特征信息输入至预设三维转换神经网络模型中进行三维转换处理,以获得所述用户的初阶人脸模型,所谓三维转换处理,可理解为基于人脸特征信息在预设三维转换神经网络模型中进行人脸模型匹配,然后对匹配到的人脸模型进行模型调整,以获得所述用户的初阶人脸模型。Further, after the face feature information is obtained, in order to improve modeling efficiency, the face feature information can be input into the preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain the user's primary face model. The so-called three-dimensional conversion processing can be understood as performing face model matching in the preset three-dimensional conversion neural network model based on the face feature information, and then adjusting the matched face model to obtain the user's primary face model.

步骤S30:获取预设渲染数据,并基于所述预设渲染数据对所述初阶人脸模型进行效果渲染,以获得所述用户的三维人脸模型。Step S30: Obtain preset rendering data, and perform effect rendering on the primary face model based on the preset rendering data, so as to obtain a 3D face model of the user.

需要说明的是,所述预设渲染数据包括预设纹理数据和预设风格数据,所述预设纹理数据可理解为用以渲染初阶人脸模型的脸部表皮层和软组织结构的数据,如皮肤质感、皮肤色度、模型饱满程度等,所述预设风格数据可理解为用以渲染初阶人脸模型的风格特征的数据,如,色调、曝光度、对比度等,在具体实现中,为了提高获得的三维人脸模型的展示效果,可获取各人脸区域对应的所述预设纹理数据和所述预设风格数据,以及各人脸区域对应的边缘点信息和法向量信息,其中,所述边缘点信息可用以控制渲染范围,所述法向量信息可用以控制渲染层次,结合所述边缘点信息和所述法向量信息可用以控制渲染力度,然后基于各人脸区域对应的所述预设纹理数据、所述边缘点信息以及所述法向量信息对所述初阶人脸模型进行贴图,获得人脸纹理模型,再基于各人脸区域对应的所述预设风格数据、所述边缘点信息以及所述法向量信息对所述人脸纹理模型进行渲染,获得所述用户的三维人脸模型。It should be noted that the preset rendering data includes preset texture data and preset style data. The preset texture data can be understood as data used to render the facial epidermis and soft-tissue structure of the primary face model, such as skin texture, skin tone, and model fullness; the preset style data can be understood as data used to render the style features of the primary face model, such as hue, exposure, and contrast. In a specific implementation, in order to improve the display effect of the obtained three-dimensional face model, the preset texture data and preset style data corresponding to each face area can be acquired, together with the edge point information and normal vector information corresponding to each face area, where the edge point information can be used to control the rendering range, the normal vector information can be used to control the rendering level, and the two combined can be used to control the rendering intensity. The primary face model is then mapped based on the preset texture data, edge point information, and normal vector information corresponding to each face area to obtain a face texture model, and the face texture model is rendered based on the preset style data, edge point information, and normal vector information corresponding to each face area to obtain the user's three-dimensional face model.
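The style data above is described in terms of hue, exposure, and contrast. One conventional way such parameters act on a channel value, shown here purely as an assumed sketch (the patent does not give the formula), is exposure scaling followed by a contrast stretch around mid-grey:

```python
def apply_style(value, exposure=1.0, contrast=1.0):
    """Style-rendering sketch for a single channel value in [0, 1]:
    exposure scales brightness, contrast stretches around mid-grey
    (0.5), and the result is clamped back into [0, 1].
    """
    v = value * exposure
    v = (v - 0.5) * contrast + 0.5
    return min(max(v, 0.0), 1.0)

# Doubling exposure lifts a dark value of 0.25 to mid-grey.
styled = apply_style(0.25, exposure=2.0, contrast=1.0)
```

A per-region version would gate this adjustment with the edge point information (rendering range) and weight it by the normal-vector term (rendering level), as the description outlines.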

在具体实现中,为了进一步提高三维人脸模型的展示效果,以提高用户的网络社交体验,增加用户粘度,可获取用户输入的调整偏好信息中的一类偏好信息,并基于所述一类偏好信息对所述初阶人脸模型进行调整,以获得用户微调模型,所述一类偏好信息可理解为对用户对人脸的脸型轮廓、五官轮廓等进行调整时的偏好数据,如,瘦脸程度、双眼放大程度、鼻头缩小程度等,接着,还可获取预设渲染数据和所述调整偏好信息中的二类偏好信息,所述二类偏好信息可理解为对皮肤纹理进行调整时的偏好数据,如,美白/美黑程度、颊部红润程度等,并基于所述渲染数据和所述二类偏好信息对所述用户微调模型进行效果渲染,以获得所述用户的三维人脸模型。In a specific implementation, in order to further improve the display effect of the three-dimensional face model, improve the user's online social experience, and increase user stickiness, a first type of preference information can be obtained from the adjustment preference information input by the user, and the primary face model can be adjusted based on it to obtain a user fine-tuned model. The first type of preference information can be understood as the user's preference data for adjusting the face shape contour, facial feature contours, and so on, such as the degree of face slimming, eye enlargement, or nose-tip reduction. Then, the preset rendering data and a second type of preference information from the adjustment preference information can also be obtained, where the second type of preference information can be understood as preference data for adjusting the skin texture, such as the degree of whitening/tanning or the rosiness of the cheeks; effect rendering is performed on the user fine-tuned model based on the rendering data and the second type of preference information to obtain the user's three-dimensional face model.

It should be understood that the foregoing is merely illustrative and does not constitute any limitation on the technical solution of the present invention; in specific applications, those skilled in the art may configure it as needed, and the present invention is not limited in this respect.

In this embodiment, the face image information of a user is acquired and feature recognition is performed on it to obtain the user's face feature information; the face feature information is input into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain the user's initial-order face model; preset rendering data is acquired, and effect rendering is performed on the initial-order face model based on the preset rendering data to obtain the user's three-dimensional face model. Compared with the prior art, which models directly from the user's face image information, this example matches a face model within the preset three-dimensional conversion neural network model using face feature information derived from the user's face image information and then adjusts the matched face model, which improves modeling efficiency; it then performs effect rendering on the resulting initial-order face model based on the preset rendering data, which improves the display effect of the three-dimensional face model obtained from the initial-order face model, realizes personalized configuration of the three-dimensional face model, improves the user's online social experience, and increases user stickiness.

Referring to FIG. 3, FIG. 3 is a schematic flowchart of a second embodiment of the three-dimensional face model generation method of the present invention.

Based on the first embodiment above, in this embodiment, step S20 includes:

Step S201: perform model matching between the face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model to obtain a model matching degree between the face feature information and the model feature information;

It should be noted that, after the face feature information generated from the user's face image information is obtained, model matching can be performed between that face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model to obtain a model matching degree between them. The preset three-dimensional conversion neural network model can be understood as a neural network model, built on a convolutional neural network, used for face model matching and face model adjustment based on face feature information. The face feature information is generated from the keypoint coordinate differences between the face image keypoints of the acquired face image information and the corresponding preset image keypoints of the preset standard model in a preset face coordinate system; it can be understood as the coordinates of the face image keypoints reconstructed from the coordinates of the preset image keypoints plus those differences, and it represents how the user's face image information deviates from the preset standard model. For example, if a preset keypoint of the preset standard model has coordinates (m, n) and the keypoint coordinate difference for that keypoint is (-0.07, +0.23), then the coordinates of the corresponding face feature information for the user's face image information are (m - 0.07, n + 0.23). The preset standard model can be understood as a face model standard built into the preset three-dimensional conversion neural network model, such as a face model based on the Marquardt mask or on an average face; this embodiment does not limit it. Correspondingly, the preset image keypoints can be understood as feature points at key positions of the preset standard model, such as the contours of the facial features and the face outline; this embodiment does not limit them either.
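The worked example amounts to adding each keypoint coordinate difference to the corresponding preset standard-model keypoint. A minimal sketch, with illustrative numeric values standing in for m and n:

```python
import math

def face_feature_coords(preset_points, diffs):
    """Reconstruct the user's face feature coordinates from the preset
    standard model's keypoints plus the per-keypoint coordinate
    differences computed in the preset face coordinate system."""
    return [(x + dx, y + dy)
            for (x, y), (dx, dy) in zip(preset_points, diffs)]

# Preset keypoint (m, n) with difference (-0.07, +0.23) yields
# (m - 0.07, n + 0.23); here m = 10.0 and n = 20.0 are arbitrary.
pts = face_feature_coords([(10.0, 20.0)], [(-0.07, 0.23)])
assert math.isclose(pts[0][0], 9.93) and math.isclose(pts[0][1], 20.23)
```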

It is easy to understand that each preset face model can itself be understood as a face model configured based on the preset standard model; correspondingly, the model feature information of each preset face model can be understood as feature information representing a different range of deviation between that model's image keypoints and the preset image keypoints of the preset standard model. In a specific implementation, different face models can be configured from different fluctuation intervals of the coordinate values of the preset keypoints of the preset standard model; that is, the different fluctuation intervals of the coordinate values of the preset keypoints are combined according to preset combination schemes to obtain different face models. The preset combination schemes can be established by running big-data analysis on the face images stored in a preset face image database and keeping the combinations whose share in the analysis results exceeds a preset proportion. The preset face image database is a database of face images that is updated in real time; correspondingly, the preset combination schemes can be updated periodically as the database is updated, and the preset proportion can be set according to actual needs. This embodiment does not limit these choices.
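The interval-combination idea can be sketched as enumerating offset combinations per keypoint and keeping only the combinations that the big-data analysis found sufficiently frequent. Here the analysis result is assumed to be precomputed and passed in as a set; all names are illustrative.

```python
from itertools import product

def build_preset_models(standard_points, offsets_per_point, frequent_combos):
    """Combine per-keypoint fluctuation offsets around the preset
    standard model's keypoints into candidate preset face models,
    keeping only combinations present in `frequent_combos` (the
    hypothetical output of big-data analysis over a face database)."""
    models = []
    for combo in product(*offsets_per_point):
        if combo in frequent_combos:
            models.append([(x + dx, y + dy)
                           for (x, y), (dx, dy) in zip(standard_points, combo)])
    return models
```

In practice the enumeration would be far larger and the frequency filter would be recomputed as the face image database updates; the sketch only shows the combinatorial structure.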

Step S202: sort the model matching degrees to obtain a matching degree ranking result, and determine a target face model based on the matching degree ranking result;

Further, to improve modeling efficiency, after the model matching degrees between the face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model are obtained, the model matching degrees can be sorted from largest to smallest to obtain a matching degree ranking result, and it can then be determined whether the model matching degree of the first-ranked preset face model exceeds a preset matching degree. The preset matching degree can be set according to actual needs, e.g., 50% or 70%; this embodiment does not limit it. If the model matching degree is greater than or equal to the preset matching degree, the first-ranked preset face model is taken as the target face model; if it is smaller, the preset standard model is taken as the target face model.
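The selection rule reduces to sorting the matching degrees and falling back to the preset standard model when the best score misses the preset matching degree. A sketch; the model names and the 0.5 default threshold are illustrative:

```python
def select_target_model(match_degrees: dict, preset_degree: float = 0.5,
                        standard_model: str = "preset_standard") -> str:
    """Return the best-matching preset face model if its matching degree
    reaches the preset matching degree, else the preset standard model."""
    # Sort from largest to smallest matching degree.
    ranked = sorted(match_degrees.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_degree = ranked[0]
    return best_name if best_degree >= preset_degree else standard_model
```

For example, scores of {"model_a": 0.72, "model_b": 0.41} with a 0.5 threshold select "model_a", while a best score of 0.3 falls back to the standard model.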

Step S203: adaptively adjust the target face model based on the face feature information to obtain the user's initial-order face model.

It is easy to understand that, after the target face model is obtained, the face feature values of the face feature information in each face region and the face target values of the target face model in each preset partition can be obtained. The face feature values can be understood as the coordinates of the face image keypoints distributed over each face region; correspondingly, the face target values can be understood as the coordinates of the target face model's image keypoints in each preset partition. The face feature values are compared with the face target values to obtain a feature comparison result, i.e., the coordinate differences between the face image keypoints and the model image keypoints in each face region. Each face region of the target face model is then adaptively adjusted based on the feature comparison result to obtain the user's initial-order face model; that is, the coordinates of each model image keypoint are adjusted toward the coordinates of the corresponding face image keypoint by a preset adjustment ratio, which can be set according to actual needs, e.g., 75%, 85%, or 95%. In a specific implementation, because the generated initial-order face model, and the three-dimensional face model subsequently generated from it, are in essence non-selfie (i.e., not what-you-see-is-what-you-get) three-dimensional digital models, human affinity for them does not increase monotonically with the degree of anthropomorphism: a model that is very close to, yet not quite identical with, a real face provokes unease, and what users themselves want to present is not necessarily a fully faithful likeness but a three-dimensional digital image that captures, or even beautifies, their main characteristics. Therefore, in this embodiment, the preset ratio is never set to 100%; that is, the model image keypoints are never adjusted all the way to the coordinates of the face image keypoints. Moreover, since both the Marquardt mask and the average face on which the preset standard model may be based are widely recognized standards that a face is considered more aesthetically pleasing for approaching, this partial adjustment improves the display effect, realizes personalized configuration of the three-dimensional face model, improves the user's online social experience, and increases user stickiness. Accordingly, in this embodiment, after the target face model is obtained, the coordinates of each of its model image keypoints are adjusted toward the coordinates of the face image keypoints by the preset adjustment ratio to obtain the initial-order face model.
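The adaptive adjustment is therefore a partial blend: each model keypoint moves toward the corresponding face-image keypoint by the preset adjustment ratio, deliberately short of 100%. A minimal sketch:

```python
import math

def adapt_keypoints(model_points, face_points, ratio=0.85):
    """Move each target-model keypoint toward the corresponding
    face-image keypoint by the preset adjustment ratio (e.g. 0.75,
    0.85, 0.95; never 1.0, so the result stays partway between the
    idealized standard model and the user's real face)."""
    return [(mx + ratio * (fx - mx), my + ratio * (fy - my))
            for (mx, my), (fx, fy) in zip(model_points, face_points)]

# With ratio 0.85, a model keypoint at the origin moves 85% of the
# way toward a face keypoint at (1.0, 2.0).
adjusted = adapt_keypoints([(0.0, 0.0)], [(1.0, 2.0)], ratio=0.85)
assert math.isclose(adjusted[0][0], 0.85) and math.isclose(adjusted[0][1], 1.7)
```

Setting `ratio=1.0` would reproduce the face keypoints exactly, which is precisely what the embodiment avoids.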

In this embodiment, model matching is performed between the face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model to obtain model matching degrees between them; the model matching degrees are sorted to obtain a matching degree ranking result, and a target face model is determined based on that result; the target face model is then adaptively adjusted based on the face feature information to obtain the user's initial-order face model. By adaptively adjusting each face region of the target face model based on the coordinate differences between the face image keypoints and the model image keypoints of each face region, that is, by adjusting the coordinates of each model image keypoint toward the coordinates of the face image keypoints by the preset adjustment ratio, this embodiment improves the aesthetics of the three-dimensional face model subsequently obtained from the initial-order face model, realizes personalized configuration of the three-dimensional face model, improves the user's online social experience, and increases user stickiness.

In addition, an embodiment of the present invention further provides a storage medium on which a three-dimensional face model generation program is stored; when executed by a processor, the three-dimensional face model generation program implements the steps of the three-dimensional face model generation method described above.

Referring to FIG. 4, FIG. 4 is a structural block diagram of a first embodiment of the three-dimensional face model generation apparatus of the present invention.

As shown in FIG. 4, the three-dimensional face model generation apparatus provided by this embodiment of the present invention includes:

a feature recognition module 10, configured to acquire face image information of a user and perform feature recognition on the face image information to obtain the user's face feature information;

a three-dimensional conversion module 20, configured to input the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain the user's initial-order face model;

an effect rendering module 30, configured to acquire preset rendering data and perform effect rendering on the initial-order face model based on the preset rendering data to obtain the user's three-dimensional face model.

In this embodiment, the face image information of a user is acquired and feature recognition is performed on it to obtain the user's face feature information; the face feature information is input into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain the user's initial-order face model; preset rendering data is acquired, and effect rendering is performed on the initial-order face model based on the preset rendering data to obtain the user's three-dimensional face model. Compared with the prior art, which models directly from the user's face image information, this example matches a face model within the preset three-dimensional conversion neural network model using face feature information derived from the user's face image information and then adjusts the matched face model, which improves modeling efficiency; it then performs effect rendering on the resulting initial-order face model based on the preset rendering data, which improves the display effect of the three-dimensional face model obtained from the initial-order face model, realizes personalized configuration of the three-dimensional face model, improves the user's online social experience, and increases user stickiness.

Based on the first embodiment of the above three-dimensional face model generation apparatus of the present invention, a second embodiment of the three-dimensional face model generation apparatus of the present invention is proposed.

In this embodiment, the feature recognition module 10 is further configured to acquire face image information of a user and perform image recognition on the face image information to obtain face image keypoints;

the feature recognition module 10 is further configured to convert the face image keypoints into a preset face coordinate system to obtain keypoint coordinate differences between the face image keypoints and the corresponding preset image keypoints in the preset face coordinate system;

the feature recognition module 10 is further configured to generate the user's face feature information based on the keypoint coordinate differences.

The three-dimensional conversion module 20 is further configured to perform model matching between the face feature information and the model feature information of each preset face model in the preset three-dimensional conversion neural network model to obtain a model matching degree between the face feature information and the model feature information;

the three-dimensional conversion module 20 is further configured to sort the model matching degrees to obtain a matching degree ranking result, and to determine a target face model based on the matching degree ranking result;

the three-dimensional conversion module 20 is further configured to adaptively adjust the target face model based on the face feature information to obtain the user's initial-order face model.

The three-dimensional conversion module 20 is further configured to sort the model matching degrees from largest to smallest to obtain a matching degree ranking result;

the three-dimensional conversion module 20 is further configured to determine whether the model matching degree corresponding to the first-ranked preset face model in the matching degree ranking result is greater than a preset matching degree;

the three-dimensional conversion module 20 is further configured to, when the model matching degree is greater than or equal to the preset matching degree, take the first-ranked preset face model as the target face model;

the three-dimensional conversion module 20 is further configured to, when the model matching degree is smaller than the preset matching degree, take a preset standard model as the target face model.

The three-dimensional conversion module 20 is further configured to obtain the face feature values of the face feature information in each face region and the face target values of the target face model in each preset partition;

the three-dimensional conversion module 20 is further configured to compare the face feature values with the face target values to obtain a feature comparison result;

the three-dimensional conversion module 20 is further configured to adaptively adjust each face region of the target face model based on the feature comparison result to obtain the user's initial-order face model.

The preset rendering data includes preset texture data and preset style data;

the effect rendering module 30 is further configured to acquire the preset texture data and preset style data corresponding to each face region, as well as the edge point information and normal vector information corresponding to each face region;

the effect rendering module 30 is further configured to map the initial-order face model based on the preset texture data, edge point information, and normal vector information corresponding to each face region to obtain a face texture model;

the effect rendering module 30 is further configured to render the face texture model based on the preset style data, edge point information, and normal vector information corresponding to each face region to obtain the user's three-dimensional face model.

The effect rendering module 30 is further configured to obtain type-one preference information in the adjustment preference information input by the user, and to adjust the initial-order face model based on the type-one preference information to obtain a user fine-tuned model;

the effect rendering module 30 is further configured to acquire preset rendering data and type-two preference information in the adjustment preference information, and to perform effect rendering on the user fine-tuned model based on the rendering data and the type-two preference information to obtain the user's three-dimensional face model.

For other embodiments or specific implementations of the three-dimensional face model generation apparatus of the present invention, reference may be made to the above method embodiments, which are not repeated here.

It should be noted that, as used herein, the terms "comprise", "include", or any variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that comprises that element.

The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.

From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a read-only memory/random-access memory, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the various embodiments of the present invention.

The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (9)

1. A three-dimensional face model generation method is characterized by comprising the following steps:
acquiring face image information of a user, and performing feature recognition on the face image information to acquire face feature information of the user;
inputting the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain an initial-order face model of the user;
acquiring preset rendering data, and performing effect rendering on the initial-order face model based on the preset rendering data to obtain a three-dimensional face model of the user;
the step of obtaining the face image information of the user and performing feature recognition on the face image information to obtain the face feature information of the user specifically includes:
acquiring face image information of a user, and performing image recognition on the face image information to obtain face image key points;
converting the key points of the face image into a preset face coordinate system to obtain key point coordinate difference values of the key points of the face image and corresponding preset image key points in the preset face coordinate system;
and generating face feature information of the user based on the coordinate difference value of the key point.
2. The method for generating a three-dimensional face model according to claim 1, wherein the step of inputting the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain an initial-order face model of the user specifically comprises:
performing model matching on the face feature information and model feature information of each preset face model in a preset three-dimensional conversion neural network model to obtain a model matching degree between the face feature information and the model feature information;
sorting the model matching degrees to obtain a matching degree sorting result, and determining a target face model based on the matching degree sorting result;
and carrying out self-adaptive adjustment on the target face model based on the face feature information to obtain an initial-order face model of the user.
3. The method for generating a three-dimensional face model according to claim 2, wherein the step of ranking the model matching degrees to obtain a matching degree ranking result and determining the target face model based on the matching degree ranking result specifically comprises:
sorting the model matching degrees from large to small to obtain a matching degree sorting result;
judging whether the model matching degree corresponding to the preset face model with the first sorting order in the matching degree sorting result is greater than the preset matching degree or not;
when the preset matching degree is greater than or equal to the preset matching degree, taking the preset face model with the first ordering order as a target face model;
and when the matching degree is smaller than the preset matching degree, taking a preset standard model as the target face model.
4. The method for generating a three-dimensional face model according to claim 2, wherein the step of performing adaptive adjustment on the target face model based on the face feature information to obtain an initial-order face model of the user specifically comprises:
acquiring face characteristic values of the face characteristic information corresponding to the face regions and face target values of the target face model corresponding to the preset subareas;
comparing the face characteristic value with the face target value to obtain a characteristic comparison result;
and performing self-adaptive adjustment on each face region of the target face model based on the feature comparison result to obtain an initial-order face model of the user.
5. The three-dimensional face model generation method according to any one of claims 1 to 4, wherein the preset rendering data includes preset texture data and preset style data;
correspondingly, the step of obtaining preset rendering data and performing effect rendering on the initial-order face model based on the preset rendering data to obtain the three-dimensional face model of the user specifically includes:
acquiring the preset texture data and the preset style data corresponding to each face area, and the edge point information and the normal vector information corresponding to each face area;
mapping the initial-order face model based on the preset texture data, the edge point information and the normal vector information corresponding to each face region to obtain a face texture model;
rendering the face texture model based on the preset style data, the edge point information and the normal vector information corresponding to each face area to obtain a three-dimensional face model of the user.
6. The method for generating a three-dimensional face model according to any one of claims 1 to 4, wherein the step of obtaining preset rendering data and performing effect rendering on the initial-order face model based on the preset rendering data to obtain the three-dimensional face model of the user specifically includes:
acquiring one type of preference information in adjustment preference information input by a user, and adjusting the initial-order face model based on the one type of preference information to obtain a user fine-tuning model;
and acquiring preset rendering data and two types of preference information in the adjustment preference information, and performing effect rendering on the user fine-tuning model based on the rendering data and the two types of preference information to obtain the three-dimensional face model of the user.
7. A three-dimensional face model generation apparatus, characterized by comprising:
a feature recognition module, configured to acquire face image information of a user and perform feature recognition on the face image information to obtain face feature information of the user;
a three-dimensional conversion module, configured to input the face feature information into a preset three-dimensional conversion neural network model for three-dimensional conversion processing to obtain an initial face model of the user; and
an effect rendering module, configured to acquire preset rendering data and perform effect rendering on the initial face model based on the preset rendering data to obtain a three-dimensional face model of the user;
wherein the feature recognition module is further configured to: acquire the face image information of the user and perform image recognition on the face image information to obtain face image key points; convert the face image key points into a preset face coordinate system to obtain key point coordinate differences between the face image key points and corresponding preset image key points in the preset face coordinate system; and generate the face feature information of the user based on the key point coordinate differences.
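The feature recognition module's key point pipeline can be sketched as below, assuming the preset face coordinate system is defined by an origin and a uniform scale (a simplification; the actual transform and the function names are illustrative, not taken from the patent):

```python
def to_preset_coords(point, origin, scale):
    """Map an image-space keypoint into the preset face coordinate system."""
    return ((point[0] - origin[0]) / scale, (point[1] - origin[1]) / scale)

def keypoint_differences(image_keypoints, preset_keypoints, origin, scale):
    """Return (dx, dy) offsets between each detected keypoint and its
    corresponding preset template keypoint, both in preset coordinates."""
    diffs = []
    for img_pt, preset_pt in zip(image_keypoints, preset_keypoints):
        x, y = to_preset_coords(img_pt, origin, scale)
        diffs.append((x - preset_pt[0], y - preset_pt[1]))
    return diffs

# A detected keypoint that lands exactly on its template position
# yields a zero coordinate difference.
diffs = keypoint_differences(
    image_keypoints=[(110.0, 220.0)],
    preset_keypoints=[(1.0, 2.0)],
    origin=(100.0, 200.0),
    scale=10.0,
)
print(diffs)  # [(0.0, 0.0)]
```

The resulting difference vector is the face feature information fed into the three-dimensional conversion network: only the deviation from the template, not the raw pixel coordinates, needs to be learned.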
8. A three-dimensional face model generation device, characterized in that the device comprises: a memory, a processor, and a three-dimensional face model generation program stored on the memory and executable on the processor, the three-dimensional face model generation program being configured to implement the steps of the three-dimensional face model generation method according to any one of claims 1 to 6.
9. A storage medium, characterized in that a three-dimensional face model generation program is stored thereon, which, when executed by a processor, implements the steps of the three-dimensional face model generation method according to any one of claims 1 to 6.
CN202110597244.2A 2021-05-28 2021-05-28 Three-dimensional face model generation method, device, equipment and storage medium Active CN113284229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110597244.2A CN113284229B (en) 2021-05-28 2021-05-28 Three-dimensional face model generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113284229A CN113284229A (en) 2021-08-20
CN113284229B true CN113284229B (en) 2023-04-18

Family

ID=77282551


Country Status (1)

Country Link
CN (1) CN113284229B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838176B (en) * 2021-09-16 2023-09-15 网易(杭州)网络有限公司 Model training method, three-dimensional face image generation method and three-dimensional face image generation equipment
CN116542846B (en) * 2023-07-05 2024-04-26 深圳兔展智能科技有限公司 User account icon generation method and device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401157A (en) * 2020-03-02 2020-07-10 中国电子科技集团公司第五十二研究所 Face recognition method and system based on three-dimensional features

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764180A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Face recognition method and device, electronic equipment and readable storage medium
CN110580733B (en) * 2018-06-08 2024-05-17 北京搜狗科技发展有限公司 Data processing method and device for data processing
CN109118569B (en) * 2018-08-16 2023-03-10 Oppo广东移动通信有限公司 Rendering method and device based on three-dimensional model
CN109978930B (en) * 2019-03-27 2020-11-10 杭州相芯科技有限公司 Stylized human face three-dimensional model automatic generation method based on single image
CN110136243B (en) * 2019-04-09 2023-03-17 五邑大学 Three-dimensional face reconstruction method, system, device and storage medium thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250712

Address after: No. 360 Fushun Road, Yangpu District, Shanghai 200092

Patentee after: Shanghai Canzi Technology Co.,Ltd.

Country or region after: China

Address before: Rooms 601-1, 601-2 and 602-1, Building 49, Lane 758, Guo'an Road, Yangpu District, Shanghai 200082

Patentee before: Shanghai Xinglan Information Technology Co.,Ltd.

Country or region before: China