
CN114904279A - Data preprocessing method, device, medium and equipment - Google Patents

Data preprocessing method, device, medium and equipment

Info

Publication number
CN114904279A
CN114904279A
Authority
CN
China
Prior art keywords
map
target
model
animation
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210507592.0A
Other languages
Chinese (zh)
Other versions
CN114904279B (en)
Inventor
项忠良
卿爽
陈文旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210507592.0A priority Critical patent/CN114904279B/en
Publication of CN114904279A publication Critical patent/CN114904279A/en
Application granted granted Critical
Publication of CN114904279B publication Critical patent/CN114904279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the present application provides a data preprocessing method, a data preprocessing device, a storage medium, and a terminal device. The method includes: determining a virtual object model and a model map of the virtual object model, and automatically generating an initial animation model; receiving an animation frame generation instruction for the animation model, the instruction including a target map identifier of the virtual object model in the current animation frame; controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invoking a material ball, and mapping the target sub-map onto the virtual object model based on UV mapping; and writing the target map identifier and its corresponding animation frame number into an animation generation control file. The embodiments enable a virtual character in a game scene to switch facial expressions according to a preset program, and because the expression switching is realized on a 3D model, production cost can be reduced compared with the prior art.

Description

Data preprocessing method, device, medium and equipment

TECHNICAL FIELD

The present application relates to the field of electronic communication technology, in particular to the field of data preprocessing, and more particularly to a data preprocessing method, apparatus, medium and device.

BACKGROUND

In some development projects characterized by fast development progress and short cycles, the image-quality requirements of virtual-character animation are often ignored in order to speed up progress, so that a virtual character shows no expression animation when entering a scene or standing idle, only a fixed, unchanging expression map. As a result, the action performance lacks emotional vividness and the character's expressions appear rigid. Especially in scenarios such as game battles, interactive experiences, and interface displays, the lack of emotional richness in the virtual characters leads to a poor user experience.

SUMMARY OF THE INVENTION

The embodiments of the present application provide a data preprocessing method, device, medium and equipment. By applying expression maps of different expression types to the face of a virtual object model, the facial expression of a virtual character can be changed, so that a virtual character in a game scene can switch expressions according to a preset program; because the expression switching is realized on a 3D model, production cost can be reduced compared with the prior art.

In one aspect, the embodiments of the present application provide a data preprocessing method, including:

determining a virtual object model and a model map of the virtual object model, and automatically generating an initial animation model, where the model map includes an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types;

receiving an animation frame generation instruction for the animation model, the instruction including a target map identifier of the virtual object model in the current animation frame;

controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invoking a material ball, and mapping the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and

writing the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

In the data preprocessing method described in the embodiments of the present application, before controlling the expression map so that only the target sub-map corresponding to the target map identifier is displayed on the model map, the method further includes:

splitting the expression map into pixels on the model map to obtain at least two independent sub-maps, where each sub-map owns an independent set of pixels in the model map, and turning a pixel set on or off controls whether the corresponding sub-map is shown or hidden on the model map.

In the data preprocessing method described in the embodiments of the present application, controlling the expression map so that only the target sub-map corresponding to the target map identifier is displayed includes:

determining the target pixel set corresponding to the target map identifier, and setting the target pixel set to the on state on the model map while setting the other pixel sets to the off state, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.

In the data preprocessing method described in the embodiments of the present application, the animation frame generation instruction further includes control parameters of a limb controller of the virtual object model in the current animation frame; after receiving the animation frame generation instruction for the animation model, the method further includes:

inputting the control parameters into the limb controller to control the limb movements of the virtual object model.

In the data preprocessing method described in the embodiments of the present application, after writing the target map identifier and the corresponding animation frame number into the animation generation control file, the method further includes:

importing the animation generation control file into a preset game program, and parsing the file to obtain the animation frame number and map identifier corresponding to each animation frame it contains;

sorting the animation frame numbers in ascending order, and calculating the time interval between each pair of adjacent target frame numbers;

importing the time intervals into a game timer, and switching, through the game timer, the display of the animation frames corresponding to the animation frame numbers.

In the data preprocessing method described in the embodiments of the present application, calculating the time interval between two adjacent target frame numbers includes:

calculating the frame difference between the two adjacent animation frame numbers;

multiplying the frame difference by a preset time parameter to obtain the time interval.
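As a minimal sketch of the two steps above (the per-frame duration passed in as the preset time parameter is an illustrative assumption), the interval calculation can be written as:

```python
def frame_intervals(frame_numbers, seconds_per_frame):
    """Sort the animation frame numbers in ascending order, then return
    the time interval between each pair of adjacent frame numbers:
    frame difference multiplied by the preset time parameter."""
    ordered = sorted(frame_numbers)
    return [(b - a) * seconds_per_frame for a, b in zip(ordered, ordered[1:])]
```

For example, with keyframes at frames 0, 2 and 6 and a preset time parameter of 0.5 s per frame, the intervals fed to the game timer would be 1.0 s and 2.0 s.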

In the data preprocessing method described in the embodiments of the present application, the method further includes:

providing a visual interface that includes a display area for displaying the animation model, where the facial expression of the animation model in the display area changes in real time following switches of the target map identifier.

Correspondingly, another aspect of the embodiments of the present application provides a data preprocessing device, including:

a data import module, configured to determine a virtual object model and a model map of the virtual object model and to automatically generate an initial animation model, where the model map includes an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types;

an instruction receiving module, configured to receive an animation frame generation instruction for the animation model, the instruction including a target map identifier of the virtual object model in the current animation frame;

a data mapping module, configured to control, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, to invoke a material ball, and to map the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and

a data writing module, configured to write the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

Correspondingly, another aspect of the embodiments of the present application provides a computer-readable storage medium storing a plurality of instructions suitable for being loaded by a processor to execute the data preprocessing method described above.

Correspondingly, another aspect of the embodiments of the present application provides a terminal device, including a processor and a memory, where the memory stores a plurality of instructions and the processor loads the instructions to execute the data preprocessing method described above.

The embodiments of the present application provide a data preprocessing method, device, medium and equipment. The method determines a virtual object model and a model map of the virtual object model and automatically generates an initial animation model, where the model map includes an expression map that contains at least two sub-maps with different expression types; receives an animation frame generation instruction for the animation model, the instruction including the target map identifier of the virtual object model in the current animation frame; controls, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invokes a material ball, and maps the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and writes the target map identifier and the corresponding animation frame number into an animation generation control file. By applying expression maps of different expression types to the face of the virtual object model, the facial expression of the virtual character can be changed, so that a virtual character in a game scene can switch expressions according to a preset program; because the expression switching is realized on a 3D model, production cost can be reduced compared with the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.

FIG. 1 is a schematic flowchart of a data preprocessing method provided by an embodiment of the present application.

FIG. 2 is an example diagram of an animation model in the data preprocessing method provided by an embodiment of the present application.

FIG. 3 is an example diagram of a model map in the data preprocessing method provided by an embodiment of the present application.

FIG. 4 is a schematic structural diagram of a data preprocessing apparatus provided by an embodiment of the present application.

FIG. 5 is another schematic structural diagram of a data preprocessing apparatus provided by an embodiment of the present application.

FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.

DETAILED DESCRIPTION

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in this application, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of this application.

The embodiments of the present application provide a data preprocessing method that can be applied to a terminal device. The terminal device may be a smartphone, a tablet computer, or the like.

In some development projects characterized by fast development progress and short cycles, the image-quality requirements of virtual-character animation are often ignored in order to speed up progress, so that a virtual character shows no expression animation when entering a scene or standing idle, only a fixed, unchanging expression map. As a result, the action performance lacks emotional vividness and the character's expressions appear rigid. Especially in scenarios such as game battles, interactive experiences, and interface displays, the lack of emotional richness in the virtual characters leads to a poor user experience.

At present, two schemes are commonly used to give virtual characters in a scene changing expressions. The first uses planar expression-skeleton switching animation in the Spine software for 2D games: multiple expression images are added to a 2D virtual character, each image is given a corresponding skinned skeleton, and skeletal translation, rotation and scaling are used to animate the transitions between expressions. Its advantage is that exaggerated expression styles are unrestricted, but it is only suitable for 2D display or 2D games, and with these limitations it does not fit development projects with fast progress and short cycles. The second uses a pure 3D facial rig to realize expression animation in 3D games: many bones are added to the face, the model is skinned, and the bones drive vertex deformation of the model or blendshapes (expression models). Its advantage is delicate, vivid, realistic animation suited to animated films and high-quality games; however, because of the complexity of face-model design and rigging, production cost is high and the cycle is long, so it likewise does not fit projects with fast progress and short cycles.

In order to solve the above technical problems, the embodiments of the present application provide a data preprocessing method. With the data preprocessing method provided by the embodiments of the present application, expression switching can be realized on a 3D model by applying expression maps of different expression types to the face of the virtual object model, so that virtual characters gain emotional richness while production cost and cycle remain compatible with projects that have fast development progress and short cycles.

Please refer to FIG. 1 to FIG. 3. FIG. 1 is a schematic flowchart of the data preprocessing method provided by an embodiment of the present application; FIG. 2 is an example diagram of an animation model in the method; FIG. 3 is an example diagram of a model map in the method. The data preprocessing method is applied to a terminal device and is mainly used in the data preprocessing of game software before it runs. The method may include the following steps:

Step 101: Determine a virtual object model and a model map of the virtual object model, and automatically generate an initial animation model, where the model map includes an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types.

It should be noted that this step is mainly completed by an animator: the virtual object model and its model map can be brought into Maya, and the initial animation model is automatically generated in Maya's animation production scene. The virtual object model itself is mainly completed by a modeler, who can perform the modeling operation in 3ds Max and at the same time create the model map used to produce the animation model. To explain: the virtual object model is the 3D model used to build a virtual character, and the model map is the template used to store the expression maps; from the computer's point of view, the model map is the storage carrier of the expression maps. The expression map in the model map contains at least two sub-maps, and different sub-maps have different expression types, such as "happy" or "sad". By applying expression maps of different types to the face of the virtual object model, the facial expression of the virtual character can be changed, so that a virtual character in a game scene can switch expressions according to a preset program; and because the switching is realized on a 3D model, production cost can be reduced compared with the prior art. It should also be understood that the model map further includes a body map used to texture the body parts of the virtual object model.
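As a rough illustration only (the class and field names below are hypothetical, not taken from the patent), the relationship between the model map, its expression sub-maps, and the map identifiers used later can be pictured as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubMap:
    map_id: int       # map identifier referenced by animation frame instructions
    expression: str   # expression type, e.g. "happy" or "sad"

@dataclass
class ModelMap:
    # the expression map contains at least two sub-maps with distinct types
    sub_maps: List[SubMap] = field(default_factory=list)

    def target_sub_map(self, map_id: int) -> SubMap:
        """Return the target sub-map corresponding to a target map identifier."""
        for sub in self.sub_maps:
            if sub.map_id == map_id:
                return sub
        raise KeyError(f"no sub-map with identifier {map_id}")
```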

Step 102: Receive an animation frame generation instruction for the animation model, where the instruction includes the target map identifier of the virtual object model in the current animation frame.

In this embodiment, the animator can input an animation frame generation instruction for the animation model in Maya; the instruction includes the target map identifier of the virtual object model in the current animation frame. To explain: a map identifier is an identifier corresponding to a particular sub-map, such as 1, 2 or 3. By entering different map identifiers in the operation interface, the animator can switch the current facial expression of the animation model in real time.

In some embodiments, when the animation frame generation instruction further includes control parameters of the limb controller of the virtual object model in the current animation frame, the method further includes:

inputting the control parameters into the limb controller to control the limb movements of the virtual object model.

Step 103: According to the target map identifier, control the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed; invoke the material ball; and map the target sub-map onto the corresponding region of the virtual object model based on UV mapping.

In this embodiment, to simplify processing, the face and the body of the virtual character are not split into separate models; they are merged as a whole. The material of each character model is assigned one map that represents the model's base color, i.e., the model map, so all of the sub-maps can be placed in a single model map. The sub-maps corresponding to different expressions must be exactly the same size; each occupies one sixteenth of the whole map, and they are arranged in the top quarter of the map. For the face of the animation model to display normally, the modeler must, when building the model, unwrap the UVs of the face model whose expression changes into the upper-left sixteenth of the model map (i.e., the UV region 0 <= x <= 0.25, 0 <= y <= 0.25). This region corresponds exactly to the first of the four expressions, the animation model's default expression. To switch to the second expression, the UV coordinates of the face region of the animation model must be offset in the material. Because the UV coordinates of the face region lie within a fixed range, the material can identify the face region from the magnitude of the UV coordinates, and the switch to the second expression is achieved simply by adding 0.25 to the x value of the UV coordinates while keeping the y value unchanged. It follows that if the four expressions are numbered in order, with n denoting the number, switching to the n-th expression only requires adding 0.25 * n to the x value of the UV coordinates of the face region. By making n a parameter of the model's material, expression switching can be controlled through the material.
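The UV offset described above can be sketched as follows; the 0-based numbering (n = 0 selects the default expression) is an assumption, since the patent numbers the expressions without fixing the starting index:

```python
FACE_REGION = 0.25  # face UVs are unwrapped into 0 <= x <= 0.25, 0 <= y <= 0.25
TILE_WIDTH = 0.25   # each expression sub-map occupies one 0.25-wide tile

def offset_uv(x, y, n):
    """Shift a face-region UV coordinate onto the n-th expression tile
    by adding 0.25 * n to the x value; the y value stays unchanged.
    Coordinates outside the face region pass through untouched, mirroring
    how the material recognizes the face region by its UV range."""
    if 0.0 <= x <= FACE_REGION and 0.0 <= y <= FACE_REGION:
        return x + TILE_WIDTH * n, y
    return x, y
```

For example, a face-region coordinate shifted with n = 2 lands two tiles to the right, while a body-region coordinate such as (0.5, 0.5) is left unchanged.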

Because the UV coordinates of the model cannot be offset dynamically inside the 3D modeling software, that is, from the computer's point of view, the program cannot invoke the material ball to read the target sub-map at a specified position directly from the model map, the animation model can only display the default first expression and the effect of the other expressions cannot be seen, which prevents the animator from conveniently editing expression animations. To solve this problem, this scheme controls the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invokes the material ball, and maps the target sub-map onto the corresponding region of the virtual object model based on UV mapping, completing the model mapping operation and making it convenient for the animator to edit expression animations.

It should be noted that before controlling the expression map so that only the target sub-map corresponding to the target map identifier is displayed on the model map, the expression map must be split into pixels on the model map to obtain at least two independent sub-maps. Each sub-map owns an independent set of pixels in the model map, and turning a pixel set on or off controls whether the corresponding sub-map is shown or hidden on the model map.

Specifically, the target pixel set corresponding to the target map identifier is determined, and that pixel set is set to the on state on the model map while all other pixel sets are set to the off state, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.
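The on/off control of pixel sets can be illustrated with simple per-tile visibility flags; the four-tile count follows the description of the expression strip, and the list representation is only an assumption for illustration:

```python
def pixel_set_states(target_id, tile_count=4):
    """Return visibility flags for each sub-map's pixel set: only the
    set matching the target map identifier is on; all others are off."""
    if not 0 <= target_id < tile_count:
        raise ValueError(f"unknown target map identifier: {target_id}")
    return [i == target_id for i in range(tile_count)]
```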

It should be explained that every image file is a two-dimensional plane. The horizontal direction is U and the vertical direction is V, so any pixel in the image can be located through this planar two-dimensional UV coordinate system. UV is short for UV texture-map coordinates (analogous to the X, Y and Z axes of a spatial model). It defines the position of every pixel in the image, and these pixels are linked to the 3D model to determine where the surface texture map is placed. In other words, UV maps each pixel of the image precisely onto the surface of the model object, while the gaps between pixels are filled in by the software through smooth image interpolation; this is what is known as UV mapping.

Step 104: correspondingly write the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

In this embodiment, the animation generation control file is created using JSON encoding, yielding a file in JSON format. Specifically, the animation generation control file is generated using MaxScript programming together with JSON encoding. MaxScript is used because it is the native programming language of 3ds Max and offers high compatibility across versions. JSON is used because it is a common encoding format that is widely supported across different software packages and engines. The benefit of the tool is that automatically creating the multi-material plug-in greatly reduces production time compared with manual creation, with a time saving approaching 100%, while also guaranteeing zero-error accuracy.
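As a hedged sketch of what such a JSON control file might look like: each entry pairs a map identifier with the animation frame number at which it takes effect. The field names (`frame`, `map_id`, `expressions`) are assumptions for illustration; the patent does not specify the schema.

```python
# Hypothetical shape of the animation generation control file: a list of
# (frame number, map identifier) records serialized as JSON.
import json

entries = [
    {"frame": 0,  "map_id": "expr_smile"},
    {"frame": 15, "map_id": "expr_angry"},
]
control_file = json.dumps({"expressions": entries}, indent=2)

# Any engine or tool that can read JSON can parse the file back.
parsed = json.loads(control_file)
```

Because JSON is self-describing plain text, the same file can be consumed by the 3ds Max tooling and by the game engine, which is the portability argument made above.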

By correspondingly writing the target map identifier and its animation frame number into the animation generation control file, and by using MaxScript to automatically match the sub-maps of the material textures in the same directory with their map identifiers, the tool supports exporting the change schedule of the target map identifier once production is complete, so that the effect seen in the game scene stays synchronized with the expression-switching animation the animator produced in the 3D modeling software.

In some embodiments, after the target map identifier and the animation frame number corresponding to the target map identifier are correspondingly written into the animation generation control file, the method further includes:

importing the animation generation control file into a preset game program, and parsing the animation generation control file to obtain the animation frame number and map identifier corresponding to each animation frame it contains;

sorting the animation frame numbers in ascending order, and calculating the time interval between each pair of adjacent target frame numbers;

importing the time interval into a game timer, and switching, through the game timer, to display the picture of the animation frame corresponding to each animation frame number.

In this embodiment, while the game is running, the logic script can directly read the exported animation generation control file and parse the data it contains. The data records the frame numbers at which expressions need to switch, together with the corresponding map identifiers. These data are recorded in an array and sorted by frame number in ascending order. The first element of the array is read, and the time of the next expression switch is computed from its frame number as follows: the data exported from the 3D software are all computed at 30 frames per second, so subtracting the frame number of the previous expression switch (0 for the first playback) from the current frame number and multiplying by 1/30 gives the interval. The game engine's timer is then set to switch the material parameters after this interval, so the corresponding expression is reached at exactly the right time. Processing all the data recorded in the animation generation control file in the same way reproduces, in the game, exactly the expression-switching animation the animator created in the 3D modeling software.
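The interval computation described above can be sketched as follows. The function name and list-based representation are illustrative assumptions; the arithmetic (sort ascending, then difference times 1/30) follows the passage directly.

```python
# Sketch of the timer computation: the exported data assume 30 frames per
# second, so the wait before each expression switch is
# (current_frame - previous_frame) * 1/30 seconds, with previous_frame = 0
# for the first switch.

def switch_intervals(frames, fps=30):
    """Return the timer intervals, in seconds, between expression switches."""
    ordered = sorted(frames)      # sort frame numbers ascending
    intervals = []
    previous = 0                  # first playback starts from frame 0
    for frame in ordered:
        intervals.append((frame - previous) / fps)
        previous = frame
    return intervals


# Switches at frames 15, 45 and 90 give waits of 0.5 s, 1.0 s and 1.5 s.
waits = switch_intervals([45, 15, 90])
```

Each computed wait would be handed to the game engine's timer, which fires the material-parameter switch at the matching moment.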

In some embodiments, the method further includes:

providing a visual interface that includes a display area, where the display area is used to display the animation model, and the facial expression of the animation model in the display area changes in real time as the target map identifier is switched.

In this embodiment, providing a visual interface makes it convenient for the animator to view the facial-expression switching effect of the animation model in real time during animation production.

All of the optional technical solutions above may be combined arbitrarily to form optional embodiments of the present application, and they are not described one by one here.

In specific implementation, the present application is not limited by the described order of execution of the steps; where no conflict arises, certain steps may also be performed in other orders or simultaneously.

As can be seen from the above, the data preprocessing method provided by the embodiments of the present application determines a virtual object model and the model map of the virtual object model and automatically generates an initial animation model, where the model map includes an expression map that contains at least two sub-maps with different expression types; receives an animation frame generation instruction for the animation model, the instruction including the target map identifier of the virtual object model in the current animation frame; according to the target map identifier, controls the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invokes the shader, and maps the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and correspondingly writes the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file. By applying expression maps of different expression types to the face of the virtual object model, the embodiments of the present application can change the facial expression of the virtual character, allowing the virtual character in the game scene to switch expressions and actions according to a preset program. Because the expression switching is realized on the 3D model itself, the production cost is reduced compared with the prior art.

The embodiments of the present application further provide a data preprocessing apparatus, which may be integrated into a terminal device. The terminal device may be a smartphone, a tablet computer, or a similar device.

Please refer to FIG. 4, which is a schematic structural diagram of the data preprocessing apparatus provided by an embodiment of the present application. The data preprocessing apparatus 30 may include:

a data import module 31, configured to determine a virtual object model and the model map of the virtual object model, and automatically generate an initial animation model, where the model map includes an expression map that contains at least two sub-maps with different expression types;

an instruction receiving module 32, configured to receive an animation frame generation instruction for the animation model, the animation frame generation instruction including the target map identifier of the virtual object model in the current animation frame;

a data mapping module 33, configured to control, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invoke the shader, and map the target sub-map onto the corresponding region of the virtual object model based on UV mapping;

a data writing module 34, configured to correspondingly write the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

In some embodiments, the apparatus further includes a pixel splitting module, configured to pixel-split the expression map on the model map to obtain at least two independent sub-maps, where each sub-map owns an independent pixel-point set within the model map, and switching a pixel-point set on or off shows or hides the corresponding sub-map on the model map.

In some embodiments, the data mapping module 33 is configured to determine, according to the target map identifier, the target pixel-point set corresponding to it, switch the target pixel-point set on on the model map, and simultaneously switch the other pixel-point sets off on the model map, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.

In some embodiments, the animation frame generation instruction further includes control parameters for the limb controller of the virtual object model in the current animation frame, and the apparatus further includes a limb control module configured to input the control parameters into the limb controller to control the limb movements of the virtual object model.

In some embodiments, the apparatus further includes a preprocessing module, configured to import the animation generation control file into a preset game program and parse it to obtain the animation frame number and map identifier corresponding to each animation frame it contains; sort the animation frame numbers in ascending order and calculate the time interval between each pair of adjacent target frame numbers; and import the time interval into a game timer, through which the picture of the animation frame corresponding to each animation frame number is switched and displayed.

In some embodiments, the preprocessing module is configured to calculate the frame-number difference between two adjacent animation frame numbers, and multiply the frame-number difference by a preset time parameter to obtain the time interval.

In some embodiments, the apparatus further includes a display module, configured to provide a visual interface that includes a display area, where the display area is used to display the animation model, and the facial expression of the animation model in the display area changes in real time as the target map identifier is switched.

In specific implementation, each of the above modules may be implemented as an independent entity, or the modules may be combined arbitrarily and implemented as the same entity or several entities.

As can be seen from the above, in the data preprocessing apparatus 30 provided by the embodiments of the present application, the data import module 31 determines a virtual object model and the model map of the virtual object model and automatically generates an initial animation model, where the model map includes an expression map that contains at least two sub-maps with different expression types; the instruction receiving module 32 receives an animation frame generation instruction for the animation model, the instruction including the target map identifier of the virtual object model in the current animation frame; the data mapping module 33 controls, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invokes the shader, and maps the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and the data writing module 34 correspondingly writes the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

Please refer to FIG. 5, which is another schematic structural diagram of the data preprocessing apparatus provided by an embodiment of the present application. The data preprocessing apparatus 30 includes a memory 120, one or more processors 180, and one or more application programs, where the one or more application programs are stored in the memory 120 and configured to be executed by the processor 180; the processor 180 may include the data import module 31, the instruction receiving module 32, the data mapping module 33, and the data writing module 34. For example, the structure and connection relationships of the above components may be as follows:

The memory 120 may be used to store application programs and data. The application programs stored in the memory 120 contain executable code and may be organized into various functional modules. The processor 180 executes various functional applications and performs data processing by running the application programs stored in the memory 120. In addition, the memory 120 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 with access to the memory 120.

The processor 180 is the control center of the apparatus. It connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the apparatus and processes data by running or executing the application programs stored in the memory 120 and invoking the data stored in the memory 120, thereby monitoring the apparatus as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like.

Specifically, in this embodiment, the processor 180 loads the executable code corresponding to the processes of one or more application programs into the memory 120 according to the following instructions, and the processor 180 runs the application programs stored in the memory 120, thereby implementing various functions:

a data import module 31, configured to determine a virtual object model and the model map of the virtual object model, and automatically generate an initial animation model, where the model map includes an expression map that contains at least two sub-maps with different expression types;

an instruction receiving module 32, configured to receive an animation frame generation instruction for the animation model, the animation frame generation instruction including the target map identifier of the virtual object model in the current animation frame;

a data mapping module 33, configured to control, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, invoke the shader, and map the target sub-map onto the corresponding region of the virtual object model based on UV mapping;

a data writing module 34, configured to correspondingly write the target map identifier and the animation frame number corresponding to the target map identifier into an animation generation control file.

In some embodiments, the apparatus further includes a pixel splitting module, configured to pixel-split the expression map on the model map to obtain at least two independent sub-maps, where each sub-map owns an independent pixel-point set within the model map, and switching a pixel-point set on or off shows or hides the corresponding sub-map on the model map.

In some embodiments, the data mapping module 33 is configured to determine, according to the target map identifier, the target pixel-point set corresponding to it, switch the target pixel-point set on on the model map, and simultaneously switch the other pixel-point sets off on the model map, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.

In some embodiments, the animation frame generation instruction further includes control parameters for the limb controller of the virtual object model in the current animation frame, and the apparatus further includes a limb control module configured to input the control parameters into the limb controller to control the limb movements of the virtual object model.

In some embodiments, the apparatus further includes a preprocessing module, configured to import the animation generation control file into a preset game program and parse it to obtain the animation frame number and map identifier corresponding to each animation frame it contains; sort the animation frame numbers in ascending order and calculate the time interval between each pair of adjacent target frame numbers; and import the time interval into a game timer, through which the picture of the animation frame corresponding to each animation frame number is switched and displayed.

In some embodiments, the preprocessing module is configured to calculate the frame-number difference between two adjacent animation frame numbers, and multiply the frame-number difference by a preset time parameter to obtain the time interval.

In some embodiments, the apparatus further includes a display module, configured to provide a visual interface that includes a display area, where the display area is used to display the animation model, and the facial expression of the animation model in the display area changes in real time as the target map identifier is switched.

The embodiments of the present application further provide a terminal device. The terminal device may be a smartphone, a computer, a tablet computer, or a similar device.

Please refer to FIG. 6, which shows a schematic structural diagram of a terminal device provided by an embodiment of the present application; the terminal device can be used to implement the data preprocessing method provided in the above embodiments. The terminal device 1200 may be a smartphone or a tablet computer.

As shown in FIG. 6, the terminal device 1200 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more (only one is shown in the figure) computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a transmission module 170, a processor 180 including one or more (only one is shown in the figure) processing cores, a power supply 190, and other components. Those skilled in the art will understand that the structure of the terminal device 1200 shown in FIG. 6 does not constitute a limitation on the terminal device 1200; it may include more or fewer components than shown, combine certain components, or arrange the components differently. In particular:

The RF circuit 110 is used to receive and send electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF circuit 110 may include various existing circuit elements for performing these functions, for example an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, a memory, and so on. The RF circuit 110 may communicate with various networks, such as the Internet, an intranet, or a wireless network, or communicate with other devices over a wireless network.

The memory 120 may be used to store software programs and modules, such as the program instructions/modules corresponding to the data preprocessing method in the above embodiments. The processor 180 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 120. The memory 120 may include a high-speed random access memory and may further include a non-volatile memory, such as one or more magnetic storage devices, a flash memory, or another non-volatile solid-state memory. In some examples, the memory 120 may further include memories remotely located relative to the processor 180, and these remote memories may be connected to the terminal device 1200 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The input unit 130 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also called a touch display screen or a touchpad, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch-sensitive surface 131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the orientation of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, and sends them to the processor 180, and it can also receive commands from the processor 180 and execute them. In addition, the touch-sensitive surface 131 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface 131, the input unit 130 may further include the other input devices 132. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.

The display unit 140 may be used to display information entered by the user or information provided to the user, as well as the various graphical user interfaces of the terminal device 1200, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides the corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 6 the touch-sensitive surface 131 and the display panel 141 are implemented as two independent components to realize the input and output functions, in some embodiments the touch-sensitive surface 131 and the display panel 141 may be integrated to realize the input and output functions.

The terminal device 1200 may further include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the terminal device 1200 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used in applications that recognize the posture of a mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer or tapping). As for the gyroscope, barometer, hygrometer, thermometer, infrared sensor, and other sensors that may also be configured on the terminal device 1200, they are not described again here.

The audio circuit 160, speaker 161, and microphone 162 may provide an audio interface between the user and the terminal device 1200. The audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into an audible sound signal; conversely, the microphone 162 converts collected sound signals into electrical signals, which the audio circuit 160 receives and converts into audio data. The audio data is then output to the processor 180 for processing and either sent via the RF circuit 110 to, for example, another terminal, or output to the memory 120 for further processing. The audio circuit 160 may also include an earphone jack to allow a peripheral headset to communicate with the terminal device 1200.

Through the transmission module 170 (for example, a Wi-Fi module), the terminal device 1200 can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although FIG. 6 shows the transmission module 170, it is understood that it is not an essential component of the terminal device 1200 and may be omitted as needed without changing the essence of the invention.

The processor 180 is the control center of the terminal device 1200. It connects the various parts of the entire phone through various interfaces and lines, and performs the various functions of the terminal device 1200 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the phone as a whole. Optionally, the processor 180 may include one or more processing cores. In some embodiments, the processor 180 may integrate an application processor, which mainly handles the operating system, user interface, applications, and the like, and a modem processor, which mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 180.

The terminal device 1200 also includes a power supply 190 that powers the various components. In some embodiments, the power supply may be logically connected to the processor 180 through a power management system, which implements functions such as discharge management and power-consumption management. The power supply 190 may also include any other components, such as one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, and a power status indicator.

Although not shown, the terminal device 1200 may also include a camera (e.g., a front camera and a rear camera), a Bluetooth module, and the like, which are not described further here. Specifically, in this embodiment, the display unit 140 of the terminal device 1200 is a touch-screen display, and the terminal device 1200 further includes the memory 120 and one or more programs, where the one or more programs are stored in the memory 120 and are configured to be executed by the one or more processors 180; the one or more programs contain instructions for performing the following operations:

a data import instruction for determining a virtual object model and a model map of the virtual object model and automatically generating an initial animation model, where the model map includes an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types;

an instruction receiving instruction for receiving an animation frame generation instruction for the animation model, the animation frame generation instruction including a target map identifier of the virtual object model in the current animation frame;

a data mapping instruction for controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, calling a material, and mapping the target sub-map onto the corresponding region of the virtual object model based on UV mapping;
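To make the UV-mapping step concrete, the following is an illustrative sketch rather than the patent's implementation: assuming the expression sub-maps are laid out as an equally sized grid in one atlas texture (an assumption — the patent does not fix a layout), the target map identifier can be turned into a UV offset and scale that a material then uses to sample only the target sub-map. All names here are hypothetical.

```python
# Sketch: map a target sub-map index in a grid atlas to the UV rectangle
# (offset + scale) that a material would use to sample only that sub-map.

def submap_uv_rect(target_index, cols, rows):
    """Return (u_offset, v_offset, u_scale, v_scale) for a grid atlas."""
    if not 0 <= target_index < cols * rows:
        raise ValueError("target map identifier out of range")
    col = target_index % cols      # column within the atlas grid
    row = target_index // cols     # row within the atlas grid
    u_scale, v_scale = 1.0 / cols, 1.0 / rows
    return (col * u_scale, row * v_scale, u_scale, v_scale)

# A 4x2 atlas of eight expressions; identifier 5 selects column 1, row 1.
print(submap_uv_rect(5, cols=4, rows=2))  # (0.25, 0.5, 0.25, 0.5)
```

In an engine, the returned offset and scale would typically be written into the material's texture tiling parameters before the frame is rendered.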

a data writing instruction for writing the target map identifier and the animation frame number corresponding to the target map identifier, in correspondence, into an animation generation control file.
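The data writing step can be sketched as follows. The JSON line format and file name are assumptions for illustration; the patent does not specify a concrete file format, only that each target map identifier is stored together with its animation frame number.

```python
# Hypothetical sketch of the "data writing" instruction: persist, for each
# keyframe, the animation frame number paired with its target map identifier.
import json

def write_control_file(path, keyframes):
    """keyframes: list of (frame_number, target_map_id) tuples."""
    entries = [{"frame": f, "map_id": m} for f, m in keyframes]
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(entries, fh, ensure_ascii=False, indent=2)

write_control_file("anim_control.json", [(0, "smile"), (24, "angry"), (48, "smile")])
```

Storing frame/identifier pairs rather than per-frame vertex data is what keeps the control file small enough to ship inside a game program.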

In some embodiments, the program further includes a pixel splitting instruction for splitting the expression map on the model map by pixels to obtain at least two independent sub-maps, where each sub-map owns an independent set of pixels within the model map; by turning a pixel set on or off, the corresponding sub-map can be shown or hidden on the model map.
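A minimal sketch of this pixel splitting, under the assumption that each sub-map occupies a known rectangle of the expression map (region coordinates and identifiers below are invented for illustration): every sub-map receives its own independent set of pixel coordinates, which can later be toggled as a unit.

```python
# Sketch: split the expression map into independent per-sub-map pixel sets.

def split_expression_map(regions):
    """regions: {map_id: (x, y, width, height)} -> {map_id: set of (px, py)}."""
    pixel_sets = {}
    for map_id, (x, y, w, h) in regions.items():
        pixel_sets[map_id] = {(px, py)
                              for px in range(x, x + w)
                              for py in range(y, y + h)}
    return pixel_sets

sets_ = split_expression_map({"smile": (0, 0, 2, 2), "angry": (2, 0, 2, 2)})
print(len(sets_["smile"]), sets_["smile"].isdisjoint(sets_["angry"]))  # 4 True
```

The disjointness of the sets is what makes per-sub-map show/hide control possible without affecting neighboring expressions.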

In some embodiments, the data mapping instruction is used to determine, according to the target map identifier, the target pixel set corresponding to it, to set the target pixel set to the on state on the model map, and simultaneously to set the other pixel sets to the off state, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.
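One way this on/off control could be realized is as a visibility (alpha) mask over the model map, in which only the target sub-map's pixel set is "on". This is an assumed realization for illustration; the identifiers and pixel coordinates are hypothetical.

```python
# Sketch: build a mask in which the target sub-map's pixels are 1 (shown)
# and every other sub-map's pixels are 0 (hidden).

def build_visibility_mask(pixel_sets, target_map_id, width, height):
    mask = [[0] * width for _ in range(height)]
    for map_id, pixels in pixel_sets.items():
        value = 1 if map_id == target_map_id else 0
        for px, py in pixels:
            mask[py][px] = value
    return mask

pixel_sets = {"smile": {(0, 0), (1, 0)}, "angry": {(2, 0), (3, 0)}}
mask = build_visibility_mask(pixel_sets, "smile", width=4, height=1)
print(mask)  # [[1, 1, 0, 0]]
```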

In some embodiments, the animation frame generation instruction further includes control parameters for a limb controller of the virtual object model in the current animation frame; the program further includes a limb control instruction for inputting the control parameters into the limb controller to control the limb movements of the virtual object model.

In some embodiments, the program further includes a preprocessing instruction for importing the animation generation control file into a preset game program and parsing the animation generation control file to obtain the animation frame number and map identifier corresponding to each animation frame it contains; sorting the animation frame numbers in ascending order and calculating, from each pair of adjacent target frame numbers, the time interval between them; and importing the time intervals into a game timer, which switches the display to the picture of the animation frame corresponding to each animation frame number.

In some embodiments, the preprocessing instruction is used to calculate the frame-number difference between two adjacent animation frame numbers, and to multiply the frame-number difference by a preset time parameter to obtain the time interval.
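The game-side preprocessing described above can be sketched as follows: parse the control-file entries, sort them by frame number in ascending order, and derive each timer interval as the frame-number difference times a preset time parameter. The 33 ms per frame value (roughly 30 fps) and the entry structure are assumptions for illustration.

```python
# Sketch: turn sorted control-file entries into (interval, map_id) pairs
# that a game timer could consume to switch expression frames.

FRAME_TIME_MS = 33  # preset time parameter in milliseconds (assumed, ~30 fps)

def schedule_from_entries(entries):
    """entries: list of {"frame": int, "map_id": str} -> [(interval_ms, map_id)]."""
    ordered = sorted(entries, key=lambda e: e["frame"])
    schedule = []
    for prev, cur in zip(ordered, ordered[1:]):
        interval = (cur["frame"] - prev["frame"]) * FRAME_TIME_MS
        schedule.append((interval, cur["map_id"]))
    return schedule

entries = [{"frame": 60, "map_id": "smile"},
           {"frame": 0, "map_id": "smile"},
           {"frame": 24, "map_id": "angry"}]
print(schedule_from_entries(entries))  # [(792, 'angry'), (1188, 'smile')]
```

Each interval would then be fed to the game timer, which fires once per interval and displays the frame associated with the next map identifier.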

In some embodiments, the program further includes a display instruction for providing a visual interface that includes a display area for showing the animation model, where the facial expression of the animation model in the display area changes in real time as the target map identifier is switched.

An embodiment of the present application further provides a terminal device. The terminal device may be a smartphone, a tablet computer, or a similar device.

As can be seen from the above, an embodiment of the present application provides a terminal device 1200 that performs the following steps: determining a virtual object model and a model map of the virtual object model and automatically generating an initial animation model, where the model map includes an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types; receiving an animation frame generation instruction for the animation model, the animation frame generation instruction including a target map identifier of the virtual object model in the current animation frame; controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, calling a material, and mapping the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and writing the target map identifier and the animation frame number corresponding to the target map identifier, in correspondence, into an animation generation control file. By applying expression maps of different expression types to the face of the virtual object model, the embodiments of the present application can change the facial expression of the virtual character, so that a virtual character in a game scene can switch expressions and actions according to a preset program; and since the expression switching is implemented on the 3D model, the production cost can be reduced compared with the prior art.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program runs on a computer, the computer executes the data preprocessing method described in any of the above embodiments.

It should be noted that, for the data preprocessing method described in this application, those of ordinary skill in the art can understand that all or part of the process of implementing the data preprocessing method of the embodiments of this application can be completed by controlling the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, for example in the memory of a terminal device, and executed by at least one processor in the terminal device; its execution may include the flow of the embodiments of the data preprocessing method. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

For the data preprocessing apparatus of the embodiments of the present application, its functional modules may be integrated into one processing chip, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. If the integrated modules are implemented as software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

The data preprocessing method, apparatus, computer-readable storage medium, and terminal device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may, following the ideas of the present application, make changes to the specific implementations and application scope. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (10)

1. A data preprocessing method, comprising:
determining a virtual object model and a model map of the virtual object model, and automatically generating an initial animation model, wherein the model map comprises an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types;
receiving an animation frame generation instruction for the animation model, the animation frame generation instruction comprising a target map identifier of the virtual object model in a current animation frame;
controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, calling a material, and mapping the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and
writing the target map identifier and the animation frame number corresponding to the target map identifier, in correspondence, into an animation generation control file.

2. The data preprocessing method according to claim 1, wherein before the controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, the method further comprises:
splitting the expression map on the model map by pixels to obtain at least two independent sub-maps, wherein each sub-map owns an independent set of pixels within the model map, and turning a pixel set on or off controls whether the corresponding sub-map is shown or hidden on the model map.

3. The data preprocessing method according to claim 2, wherein the controlling, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed comprises:
determining, according to the target map identifier, the target pixel set corresponding to it, setting the target pixel set to the on state on the model map, and simultaneously setting the other pixel sets to the off state, so that only the target sub-map corresponding to the target map identifier is displayed in the expression map.

4. The data preprocessing method according to claim 1, wherein the animation frame generation instruction further comprises control parameters for a limb controller of the virtual object model in the current animation frame; and after the receiving an animation frame generation instruction for the animation model, the method further comprises:
inputting the control parameters into the limb controller to control limb movements of the virtual object model.

5. The data preprocessing method according to claim 1, wherein after the writing the target map identifier and the animation frame number corresponding to the target map identifier, in correspondence, into an animation generation control file, the method further comprises:
importing the animation generation control file into a preset game program, and parsing the animation generation control file to obtain the animation frame number and map identifier corresponding to each animation frame it contains;
sorting the animation frame numbers in ascending order, and calculating, from each pair of adjacent target frame numbers, the time interval between them; and
importing the time intervals into a game timer, and switching, by the game timer, the display to the picture of the animation frame corresponding to each animation frame number.

6. The data preprocessing method according to claim 5, wherein the calculating, from each pair of adjacent target frame numbers, the time interval between them comprises:
calculating the frame-number difference between the two adjacent animation frame numbers; and
multiplying the frame-number difference by a preset time parameter to obtain the time interval.

7. The data preprocessing method according to claim 1, further comprising:
providing a visual interface comprising a display area for showing the animation model, wherein the facial expression of the animation model in the display area changes in real time as the target map identifier is switched.

8. A data preprocessing apparatus, comprising:
a data import module, configured to determine a virtual object model and a model map of the virtual object model and automatically generate an initial animation model, wherein the model map comprises an expression map, the expression map contains at least two sub-maps, and different sub-maps have different expression types;
an instruction receiving module, configured to receive an animation frame generation instruction for the animation model, the animation frame generation instruction comprising a target map identifier of the virtual object model in a current animation frame;
a data mapping module, configured to control, according to the target map identifier, the expression map on the model map so that only the target sub-map corresponding to the target map identifier is displayed, call a material, and map the target sub-map onto the corresponding region of the virtual object model based on UV mapping; and
a data writing module, configured to write the target map identifier and the animation frame number corresponding to the target map identifier, in correspondence, into an animation generation control file.

9. A computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor to execute the data preprocessing method according to any one of claims 1-7.

10. A terminal device, comprising a processor and a memory, the memory storing a plurality of instructions, the processor loading the instructions to execute the data preprocessing method according to any one of claims 1-7.
CN202210507592.0A 2022-05-10 2022-05-10 Data preprocessing method, device, medium and equipment Active CN114904279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210507592.0A CN114904279B (en) 2022-05-10 2022-05-10 Data preprocessing method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210507592.0A CN114904279B (en) 2022-05-10 2022-05-10 Data preprocessing method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN114904279A true CN114904279A (en) 2022-08-16
CN114904279B CN114904279B (en) 2025-05-27

Family

ID=82767499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210507592.0A Active CN114904279B (en) 2022-05-10 2022-05-10 Data preprocessing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN114904279B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116485959A (en) * 2023-04-17 2023-07-25 北京优酷科技有限公司 Control method of animation model, and adding method and device of expression
CN118096981A (en) * 2024-04-22 2024-05-28 山东捷瑞数字科技股份有限公司 Mapping processing method, system and equipment based on dynamic change of model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016029768A1 (en) * 2014-08-29 2016-03-03 厦门幻世网络科技有限公司 3d human face reconstruction method and apparatus
CN107180445A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression control method and device of a kind of animation model
CN108305309A (en) * 2018-04-13 2018-07-20 腾讯科技(成都)有限公司 Human face expression generation method based on 3-D cartoon and device
CN109395387A (en) * 2018-12-07 2019-03-01 腾讯科技(深圳)有限公司 Display methods, device, storage medium and the electronic device of threedimensional model
CN109675315A (en) * 2018-12-27 2019-04-26 网易(杭州)网络有限公司 Generation method, device, processor and the terminal of avatar model
CN110136231A (en) * 2019-05-17 2019-08-16 网易(杭州)网络有限公司 Expression implementation method, device and the storage medium of virtual role
CN112907702A (en) * 2020-12-07 2021-06-04 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114092611A (en) * 2021-11-09 2022-02-25 网易(杭州)网络有限公司 Virtual expression driving method and device, electronic device, and storage medium


Also Published As

Publication number Publication date
CN114904279B (en) 2025-05-27

Similar Documents

Publication Publication Date Title
US12182917B2 (en) Electronic device and method for generating user avatar-based emoji sticker
JP7206388B2 (en) Virtual character face display method, apparatus, computer device, and computer program
US11393154B2 (en) Hair rendering method, device, electronic apparatus, and storage medium
CN112037311B (en) Animation generation method, animation playing method and related devices
CN109427083B (en) Method, device, terminal and storage medium for displaying three-dimensional virtual image
CN108537889A (en) Adjustment method, device, storage medium and electronic device for augmented reality model
CN110708596A (en) Method and device for generating video, electronic equipment and readable storage medium
CN109215007B (en) Image generation method and terminal equipment
WO2016173427A1 (en) Method, device and computer readable medium for creating motion blur effect
CN111880648A (en) Three-dimensional element control method and terminal
CN116762103A (en) Electronic device and method for running avatar video service in the same
KR20150079387A (en) Illuminating a Virtual Environment With Camera Light Data
CN109753892B (en) Face wrinkle generation method and device, computer storage medium and terminal
CN107180445B (en) Expression control method and device of animation model
CN114904279A (en) Data preprocessing method, device, medium and equipment
CN110658971A (en) Screen capturing method and terminal equipment
CN110544287B (en) Picture allocation processing method and electronic equipment
CN108369726B (en) Method and portable electronic device for changing graphics processing resolution according to scene
CN114564101B (en) Control method and terminal of three-dimensional interface
CN110517346B (en) Virtual environment interface display method and device, computer equipment and storage medium
CN106303722B (en) animation playing method and device
TW202138971A (en) Interaction method and apparatus, interaction system, electronic device, and storage medium
KR102521800B1 (en) The electronic apparatus for generating animated massege by drawing input
KR20210155499A (en) Electronic device and method for generating image in electronic device
CN114663560B (en) Method, device, storage medium and electronic device for realizing animation of target model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant