
CN108876889B - In-situ volume rendering method - Google Patents

In-situ volume rendering method

Info

Publication number
CN108876889B
CN108876889B
Authority
CN
China
Prior art keywords
depth image
image
volume
data
situ
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810549318.3A
Other languages
Chinese (zh)
Other versions
CN108876889A (en)
Inventor
解利军
洪天龙
郑耀
陈建军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201810549318.3A priority Critical patent/CN108876889B/en
Publication of CN108876889A publication Critical patent/CN108876889A/en
Application granted granted Critical
Publication of CN108876889B publication Critical patent/CN108876889B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/08 Volume rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses an in-situ volume rendering method for preprocessing volume rendering data in situ during large-scale scientific computing, greatly reducing the amount of data transmitted and stored; interactive volume rendering is then performed at a rendering node according to the user's viewpoint. The invention maps the raw data to a volume depth image at the computing nodes; this image is a direct volume rendering result that retains depth information. Because users can hardly intervene interactively during in-situ computation, the main rendering parameters are optimized automatically with a particle swarm algorithm. The user sets the desired compression rate and rendering time, and the algorithm accordingly searches for the rendering parameters that achieve the best rendering quality. The method can reduce the amount of data transmitted by large-scale scientific computations by one to three orders of magnitude. It has been tested on several large-scale scientific computing applications, verifying its effectiveness.


Description

An in-situ volume rendering method

Technical Field

The invention relates to the field of visualization, and in particular to an in-situ volume rendering method.

Background Art

With the rapid development of computer hardware and numerical simulation methods, large-scale scientific computing has begun to attempt exascale computing (10^18 operations per second). However, the disk read/write speed of current supercomputers is at least four to five orders of magnitude slower than their CPU computing speed, so data I/O has become the main bottleneck of scientific computing and scientific data analysis.

To address this problem, the prevailing approach is to sparsely sample the output data in time and space, reducing its volume; but this inevitably loses transient and small-scale features and wastes much of the original computation. To exploit all computed data without increasing data throughput, academia and industry have in recent years proposed the concept of in-situ analysis and visualization. Its core idea is to analyze the data in situ (without transmission or storage) as soon as the simulation produces it, then discard the raw data and output only the analysis and visualization results.

Although the idea of in-situ analysis and visualization is straightforward, implementing it is very difficult, and real applications remain rare. The main problem is that data analysis and visualization is typically an exploratory process, so suitable methods and parameters can rarely be fixed "in situ" at computation time.

Summary of the Invention

In view of the deficiencies of the prior art, the present invention proposes an in-situ volume rendering method that uses volume depth images as intermediate data and an optimization algorithm for automatic parameter setting, finally achieving interactive volume rendering of very large-scale scientific data. The specific technical solution is as follows:

An in-situ volume rendering method, characterized in that it comprises the following steps:

(1) Volume depth image generation: at the in-situ computing nodes of a large-scale scientific computation, the result data are processed as follows:

(1.1) Cast a ray from the viewpoint O through each pixel of the rendering region (W, H) and compute the intersection points of each ray with the volume-data bounding box, where W and H are the width and height of the rendering region, respectively;

(1.2) Sample at equal intervals along the ray from the bounding-box entry point to the exit point, the sampling interval being denoted Δs; use trilinear interpolation to obtain each sample value from the surrounding data;

(1.3) Apply the transfer function F to obtain the color value c and the opacity value α of the sample point;

(1.4) When adjacent sample points on the same ray differ in color by less than a specific threshold δ, merge them; each merged set of sample points is called a super segment, whose attributes include start position, end position, color value and transparency. The combination of all super segments is called the volume depth image; it contains the depth and color information of the volume rendering of the data as observed from the given viewpoint;

(2) Volume depth image rendering: transmit the volume depth image obtained in step (1.4) to a rendering node, where it is rendered as follows:

(2.1) Expand each super segment into a frustum using the pixel position and the viewpoint position at generation time;

(2.2) Sort all frustums by depth with respect to the current rendering viewpoint;

(2.3) After sorting, render all frustums directly, correcting the transparency of each frustum during rendering: the transparency is determined by the relationship between the frustum's length and the length of the segment along which the current view ray passes through the frustum, using the following formula:

η′ = η^(s′/s)

where η is the frustum's transparency value, s is the frustum's length, s′ is the length of the current view ray's segment through the frustum, and η′ is the corrected transparency value;

(3) Automatic optimization of the volume depth image's runtime parameters

The key runtime parameters of volume depth image generation form the parameter group (W, H, Δs, δ), i.e. the rendering-region size, the sampling interval and the super-segment merge threshold, which is optimized as follows:

(3.1) Select a parameter group in the parameter space, generate the volume depth image with it, and record the corresponding generation time t and compression rate c;

(3.2) Render a final image from that volume depth image and compute the quality q of the final image;

(3.3) Substitute the values t, c and q into the evaluation function to obtain the evaluation value of the parameter group, and update the global optimal parameter group according to it. The evaluation function is:

E(t, c, q) = k1·ψ(t) + k2·φ(c) + k3·q

where k1, k2 and k3 are the weights of the volume depth image's generation time, data compression rate and rendering quality, satisfying k1 + k2 + k3 = 1; ψ(t) is the evaluation function of the generation time and φ(c) is the evaluation function of the data compression rate. Both are piecewise functions controlled by α and β, the proportional division points for the generation time and the compression rate, respectively, taking values in the interval (0, 1): parameter groups whose generation time stays below α and whose compression rate stays below β are preferred, and values beyond these division points incur an exponential penalty.

(3.4) End the optimization when the termination condition is met, obtaining the optimal parameter group; otherwise return to (3.1).

(4) In subsequent simulation steps, generate the volume depth image with this optimal parameter group.

Preferably, the volume depth image generation time t in (3.1) is obtained by the following formula:

t = (1/N) · Σ_{i=1}^{N} t_i

where t_i is the time taken by the i-th computing node to generate its volume depth image and N is the number of computing nodes.

Preferably, the volume depth image compression rate c in (3.1) is obtained by the following formula:

c = ( Σ_{i=1}^{N} D_i ) / D_raw

where D_i is the data size of the volume depth image generated by the i-th computing node and D_raw is the size of the original data.

Preferably, the image quality q in step (3.2) is obtained by the following formula:

q = sqrt( (1/n) · Σ_{i=1}^{n} (X_obs,i − X_model,i)² )

where n is the number of image pixels, X_obs,i is the color value of the i-th pixel of the image rendered from the volume depth image, and X_model,i is the color value of the i-th pixel of a reference image. Since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is computed as a Euclidean distance; the smaller the value, the closer the two images are.
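
Written out in NumPy, the quality measure is an ordinary per-pixel root mean square error over color vectors; a minimal sketch, with the array shapes assumed for illustration:

```python
import numpy as np

def image_quality(img_obs, img_ref):
    """RMSE between the image rendered from the volume depth image
    (img_obs) and the reference image (img_ref), both given as
    (n, 3) arrays of RGB color values."""
    # Euclidean distance per pixel in color space, then RMS over pixels.
    d = np.linalg.norm(img_obs - img_ref, axis=1)
    return np.sqrt(np.mean(d ** 2))
```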

Preferably, step (3) uses a particle swarm algorithm for the optimization; the process comprises:

(a) Randomly generate J points, each comprising a set of position information and velocity information; the position is a volume depth image parameter group generated at random in the parameter space, while the velocity values are generated at random in (−1, 1);

(b) For each point's parameter group, generate a volume depth image, recording the generation time t and compression rate c; render an image from the generated volume depth image data and compare it with the reference image to compute the image quality q;

(c) Substitute t, c and q into the evaluation function E to compute the evaluation value e, and update the point's historical optimal solution and the global optimal solution according to e;

(d) Update each point's velocity and position according to its historical optimal solution and the global optimal solution;

(e) Decide whether to stop iterating by checking whether more than half of the points lie within a Euclidean distance of a certain threshold ε from the global optimal solution. If not, return to step (b); otherwise end the optimization, the global optimal solution being the optimization result.

Compared with the prior art, the beneficial effects of the invention are as follows:

1. Using volume depth images greatly reduces the original data volume;

2. Rendering from volume depth images achieves high-quality images and a degree of interactive visualization;

3. Automatic optimization of the volume depth image parameters with a particle swarm algorithm finds the optimal parameter group quickly and reliably in a large, continuous parameter space.

Description of the Drawings

Fig. 1: working mode of in-situ volume rendering;

Fig. 2: schematic diagram of volume depth image generation;

Fig. 3: schematic diagram of volume depth image rendering;

Fig. 4: flowchart of automatic parameter optimization based on the particle swarm algorithm;

Fig. 5: in-situ volume rendering results of DNS turbulence.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and preferred embodiments, so that its purpose and effects will become clearer. It should be understood that the specific embodiments described here merely explain the invention and do not limit it.

As shown in Fig. 1, the in-situ volume rendering algorithm has two parts: one runs on the computing/simulation nodes of the supercomputer and converts the raw computed data into the intermediate volume-depth-image representation; the other runs on a rendering server equipped with graphics cards and renders the volume depth image according to the viewpoint to achieve the volume rendering effect. The parameter selection for the volume depth image is performed by the parameter optimization algorithm, which runs only once, at the first time step of the computation; once the parameters are selected, subsequent runs leave them unchanged. The specific implementation is as follows:

An in-situ volume rendering method, comprising the following steps:

(1) Volume depth image generation (see Fig. 2): at the in-situ computing nodes of a large-scale scientific computation, the result data are processed as follows:

(1.1) Cast a ray from the viewpoint O through each pixel of the rendering region (W, H) and compute the intersection points of each ray with the volume-data bounding box, where W and H are the width and height of the rendering region, respectively;
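
Step (1.1) needs a ray/bounding-box intersection; the patent does not fix an algorithm, so the sketch below uses the standard slab method as an assumed implementation:

```python
import numpy as np

def ray_box_intersection(origin, direction, box_min, box_max):
    """Slab-method intersection of one ray with the volume's
    axis-aligned bounding box; returns the (t_in, t_out) ray
    parameters of the entry and exit points, or None on a miss."""
    inv_d = 1.0 / direction              # assumes no zero components
    t0 = (box_min - origin) * inv_d
    t1 = (box_max - origin) * inv_d
    t_in = np.max(np.minimum(t0, t1))    # latest entry across the 3 slabs
    t_out = np.min(np.maximum(t0, t1))   # earliest exit
    if t_in > t_out or t_out < 0.0:
        return None                      # the ray misses the box
    return max(t_in, 0.0), t_out
```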

(1.2) Sample at equal intervals along the ray from the bounding-box entry point to the exit point, the sampling interval being denoted Δs; use trilinear interpolation to obtain each sample value from the surrounding data;

(1.3) Apply the transfer function F to obtain the color value c and the opacity value α of the sample point;

(1.4) When adjacent sample points on the same ray differ in color by less than a specific threshold δ, merge them; each merged set of sample points is called a super segment, whose attributes include start position, end position, color value and transparency. The combination of all super segments is called the volume depth image; it contains the depth and color information of the volume rendering of the data as observed from the given viewpoint;
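
The merging of step (1.4) is the heart of the volume depth image; the following sketch shows one plausible reading, with the data layout, the color-difference test and the front-to-back opacity compositing all being assumptions made for illustration:

```python
import numpy as np

def build_super_segments(samples, delta):
    """Merge adjacent samples along one ray into super segments.

    samples: iterable of (pos, color, alpha) tuples, where pos is the
             distance from the ray's entry point and color an RGB triple.
    Returns a list of (start, end, color, alpha) super segments.
    """
    segments = []   # each entry: [start, end, color_sum, alpha, count]
    for pos, color, alpha in samples:
        color = np.asarray(color, dtype=float)
        if segments:
            seg = segments[-1]
            mean_color = seg[2] / seg[4]
            if np.linalg.norm(color - mean_color) < delta:
                seg[1] = pos                        # extend the segment
                seg[2] += color                     # accumulate color
                seg[3] += (1.0 - seg[3]) * alpha    # front-to-back opacity
                seg[4] += 1
                continue
        segments.append([pos, pos, color, alpha, 1])
    return [(s, e, csum / n, a) for s, e, csum, a, n in segments]
```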

(2) Volume depth image rendering (see Fig. 3): transmit the volume depth image obtained in step (1.4) to a rendering node. The volume depth image is generally one to three orders of magnitude smaller than the original data; it can be transmitted over the network to the rendering node for rendering, or stored permanently in a disk array for interactive post-hoc analysis. The volume depth image is rendered as follows:

(2.1) Expand each super segment into a frustum using the pixel position and the viewpoint position at generation time;

(2.2) Sort all frustums by depth with respect to the current rendering viewpoint;

(2.3) After sorting, render all frustums directly, correcting the transparency of each frustum during rendering: the transparency is determined by the relationship between the frustum's length and the length of the segment along which the current view ray passes through the frustum, using the following formula:

η′ = η^(s′/s)

where η is the frustum's transparency value, s is the frustum's length, s′ is the length of the current view ray's segment through the frustum, and η′ is the corrected transparency value;
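
Under the Beer-Lambert reading used above, the correction of step (2.3) is a one-liner; if η is instead interpreted as an opacity, the equivalent form 1 − (1 − η)^(s′/s) applies. Which of the two the patent intends is an assumption, since the original formula survives only as an image:

```python
def correct_transparency(eta, s, s_prime):
    """Rescale a super segment's transparency eta, stored for a
    segment of length s, to the length s_prime that the current
    view ray actually travels inside the frustum."""
    # Beer-Lambert: transmittance decays exponentially with path length,
    # so a length ratio becomes an exponent on the transparency.
    return eta ** (s_prime / s)
```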

(3) Automatic optimization of the volume depth image's runtime parameters

Volume depth image generation takes place in situ, where its optimal runtime parameters are hard to set manually, so an automatic optimization method is needed. Its optimization goals are:

(a) The difference between the image rendered from the volume depth image and the image produced by direct volume rendering of the original data should be as small as possible. This goal is measured by the root mean square error:

q = sqrt( (1/n) · Σ_{i=1}^{n} (X_obs,i − X_model,i)² )

where X_obs,i is the color value of the i-th pixel of the image rendered from the volume depth image and X_model,i is the color value of the i-th pixel of a reference image. Since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is computed as a Euclidean distance. The smaller the value, the closer the two images are.

(b) The volume depth image should be as small as possible. This goal is measured by the data compression rate:

c = ( Σ_{i=1}^{N} D_i ) / D_raw

where c is the data compression rate of the global volume depth image, D_i is the data size of the volume depth image generated by the i-th computing node, and D_raw is the size of the original data.

(c) The generation time of the volume depth image should be as small as possible:

t = (1/N) · Σ_{i=1}^{N} t_i

where t is the generation time of the global volume depth image and t_i is the time taken by the i-th computing node to generate its volume depth image.

(d) The invention combines the above three goals into a single-objective optimization problem by weighting:

E(t, c, q) = k1·ψ(t) + k2·φ(c) + k3·q

where k1, k2 and k3 are the weights of the volume depth image's generation time, data compression rate and rendering quality, satisfying k1 + k2 + k3 = 1. ψ(t) is the evaluation function of the generation time and φ(c) is the evaluation function of the data compression rate. Both are piecewise functions controlled by α and β, the proportional division points for the generation time and the compression rate, respectively, taking values in the interval (0, 1): parameter groups whose generation time is below α and whose compression rate is below β are preferred, and exceeding these regulation values incurs an exponential-function penalty.
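
A sketch of the combined objective follows; since the exact piecewise forms of ψ and φ survive only as images, the linear-below-threshold, exponential-above-threshold shape used here is an assumption consistent with the surrounding description:

```python
import math

def psi(t, alpha):
    """Generation-time term: unpenalized below the division point
    alpha, exponentially penalized above it (assumed form)."""
    return t if t <= alpha else alpha * math.exp(t - alpha)

def phi(c, beta):
    """Compression-rate term, with the same assumed shape."""
    return c if c <= beta else beta * math.exp(c - beta)

def evaluate(t, c, q, k1, k2, k3, alpha, beta):
    """Single-objective evaluation E to be minimized; k1 + k2 + k3 == 1."""
    return k1 * psi(t, alpha) + k2 * phi(c, beta) + k3 * q
```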

Many factors influence the above goals, including the characteristics of the computing platform, the network and the original data. The parameters that the algorithm can set include the size of the depth image to be generated, the sampling interval Δs, and the super-segment merge threshold δ. The invention performs the optimization with a particle swarm algorithm, as shown in Fig. 4; the process comprises:

(a) Randomly generate J points, each comprising a set of position information and velocity information; the position is a volume depth image parameter group generated at random in the parameter space, while the velocity values are generated at random in (−1, 1);

(b) For each point's parameter group, generate a volume depth image, recording the generation time t and compression rate c; render an image from the generated volume depth image data and compare it with the reference image to compute the image quality q;

(c) Substitute t, c and q into the evaluation function E to compute the evaluation value e, and update the point's historical optimal solution and the global optimal solution according to e;

(d) Update each point's velocity and position by substituting the historical optimal solution and the global optimal solution into the following formulas:

V_{i,t+1} = w·V_{i,t} + c1·r1·(P_{B,i} − X_{i,t}) + c2·r2·(G_B − X_{i,t})

X_{i,t+1} = X_{i,t} + r·V_{i,t+1}

where V_{i,t+1} and X_{i,t+1} are the velocity and new position of the i-th point in round t+1, obtained from the previous round's V_{i,t} and X_{i,t}. c1 and c2 are learning factors: they are the maximum step sizes pulling each point toward its own historical optimal solution and toward the point set's global optimal solution, respectively. When c2 is large, the whole point set converges toward the global optimal solution faster; when it is small, convergence is slower. P_{B,i} is the historical optimal solution of the i-th point and G_B is the global optimal solution. The parameter w is the inertia weight, a coefficient that preserves the previous velocity so that a point tends to keep its original direction of motion. The parameters r1 and r2 are uniformly distributed random numbers in [0, 1] that add random perturbation to the algorithm. The constant r is a constraint factor, a weight on the velocity, usually set to 1.
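
The two update formulas translate directly into code; this sketch assumes positions and velocities are stored as NumPy arrays over the parameter space:

```python
import numpy as np

def pso_step(X, V, P_best, G_best, w=0.7, c1=1.5, c2=1.5, r=1.0):
    """One particle-swarm iteration over J points.

    X, V      : (J, d) position and velocity arrays
    P_best    : (J, d) per-point historical optimal solutions
    G_best    : (d,)   global optimal solution
    w, c1, c2 : inertia weight and learning factors
    r         : constraint factor weighting the velocity
    """
    J, d = X.shape
    r1 = np.random.rand(J, d)   # random perturbations, uniform in [0, 1]
    r2 = np.random.rand(J, d)
    V_new = w * V + c1 * r1 * (P_best - X) + c2 * r2 * (G_best - X)
    X_new = X + r * V_new
    return X_new, V_new
```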

(e) Decide whether to stop iterating by checking whether more than half of the points lie within a Euclidean distance of a certain threshold ε from the global optimal solution. If not, return to step (b); otherwise end the optimization, the global optimal solution being the optimization result.
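
The termination test of step (e) likewise reduces to a few lines; a sketch under the same array-layout assumption:

```python
import numpy as np

def converged(X, G_best, eps):
    """True when more than half of the points lie within Euclidean
    distance eps of the global optimal solution."""
    dists = np.linalg.norm(X - G_best, axis=1)
    return np.count_nonzero(dists < eps) > len(X) / 2
```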

(4) In subsequent simulation steps, the volume depth image is generated in situ with this optimal parameter group.

Embodiment: the above algorithm was implemented on a supercomputing platform. In the test case, the computing side comprised 32 nodes, each containing 16 Intel(R) Xeon(R) E5620 CPUs clocked at 2.40 GHz with 22 GB of memory per node; the nodes were connected by a 1 Gb InfiniBand network. The rendering node used two quad-core Intel Core i7 CPUs, two Quadro 4000 graphics cards and 32 GB of memory.

The computational example used direct numerical simulation (DNS) to model three-dimensional incompressible isotropic turbulence, on a 1024³ grid with 1024 time steps. Fig. 5 shows the rendering of the velocity-derived quantity λ2 from the DNS with this method, which reveals the positions of vortices in the flow field. The result differs very little from direct volume rendering, with no difference visible to the naked eye, while the compression ratio after in-situ processing exceeds a factor of 10.

Those of ordinary skill in the art will understand that the above are merely preferred examples of the invention and do not limit it. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art may still modify the technical solutions described in those examples or replace some of their technical features with equivalents. Any modifications, equivalent replacements and the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (5)

1. An in-situ volume rendering method, comprising the steps of:
(1) volume depth image generation: processing the data of a calculation result at the in-situ computing nodes of a large-scale scientific computation, in the following manner:
(1.1) emitting a ray from the viewpoint O to each pixel point of a rendering region (W, H), and calculating the intersection point of each ray and a volume data bounding box, wherein W and H respectively represent the width and the height of the rendering region;
(1.2) sampling at equal intervals along the ray from the entry point to the exit point of the bounding box, the sampling interval being denoted Δs; interpolating sample values from the surrounding data by trilinear interpolation;
(1.3) obtaining color values c and opacity values alpha of the sampling points by using a transfer function F;
(1.4) when adjacent sampling points on the same ray differ in color by less than a specific threshold δ, merging them, the merged sampling point set being called a super segment, whose attributes comprise a start position, an end position, a color value and a transparency; the combination of all super segments is called the volume depth image and contains the depth information and color information of the volume rendering of the data as observed from a specific viewpoint;
(2) volume depth image rendering: transmitting the volume depth image obtained in step (1.4) to a rendering node for rendering, wherein the volume depth image is rendered as follows:
(2.1) expanding each super segment into a frustum by using the positions of the pixel points and the positions of the viewpoints during generation;
(2.2) sorting all frustums by depth according to the current rendering viewpoint;
(2.3) directly rendering all frustums after sorting, and correcting the transparency of each frustum during rendering: the transparency is determined according to the relationship between the length of the frustum and the length of the line segment along which the current line of sight passes through the frustum, using the following formula:

η′ = η^(s′/s)

wherein η is the transparency value of the frustum, s is the length of the frustum, s′ is the length of the line segment of the current line of sight passing through the frustum, and η′ is the corrected transparency value;
(3) automatic optimization of the runtime parameters of the volume depth image: the key runtime parameters in volume depth image generation form a parameter group (W, H, Δs, δ), which is optimized as follows:
(3.1) selecting a parameter group in the parameter space to generate a volume depth image, and obtaining the corresponding volume depth image generation time t and volume depth image compression rate c;
(3.2) rendering a final image from the volume depth image, and calculating the quality q of the final image;
(3.3) substituting the values t, c and q into an evaluation function to obtain the evaluation value of the parameter group, and updating the global optimal parameter group according to the evaluation value, the evaluation function being:

E(t, c, q) = k1·ψ(t) + k2·φ(c) + k3·q

wherein k1, k2 and k3 are the weights of the generation time of the volume depth image, the data compression rate of the volume depth image and the rendering quality of the volume depth image, satisfying k1 + k2 + k3 = 1; ψ(t) is an evaluation function of the generation time and φ(c) is an evaluation function of the data compression rate, both piecewise; α and β are proportional division points for the generation time and the compression rate, respectively, taking values in the interval (0, 1) for regulation;
(3.4) finishing the optimization according to the termination condition to obtain an optimal parameter set; otherwise, returning to (3.1);
(4) in subsequent simulation calculations, generating the volume depth image based on the optimal parameter group.
2. The in-situ volume rendering method according to claim 1, wherein the volume depth image generation time t in (3.1) is obtained by the following formula:

t = (1/N) · Σ_{i=1}^{N} t_i

wherein t_i is the time taken by the i-th computing node to generate the volume depth image and N is the number of computing nodes.
3. The in-situ volume rendering method according to claim 1, wherein the volume depth image compression rate c in (3.1) is obtained by the following formula:

c = ( Σ_{i=1}^{N} D_i ) / D_raw

wherein D_i is the data size of the volume depth image generated by the i-th computing node and D_raw represents the size of the original data.
4. The in-situ volume rendering method according to claim 1, wherein the quality q of the image in step (3.2) is obtained by the following formula:

q = sqrt( (1/n) · Σ_{i=1}^{n} (X_obs,i − X_model,i)² )

wherein n is the number of image pixels, X_obs,i is the color value of the i-th pixel of the image rendered using the volume depth image, and X_model,i is the color value of the i-th pixel of a reference image; since each color value is a three-dimensional vector, (X_obs,i − X_model,i) is computed as a Euclidean distance, and the smaller the value, the closer the two images are.
5. The in-situ volume rendering method according to claim 1, wherein step (3) performs the optimization using a particle swarm algorithm, comprising the steps of:
(a) randomly generating J points, wherein each point comprises a group of position information and speed information, the position information is a group of body depth image parameter groups, the parameter groups are randomly generated in a parameter space, and the speed values are randomly generated in a parameter space of (-1, 1);
(b) generating a body depth image according to the parameter set of each point, obtaining generation time t and a compression ratio c, drawing an image by using the generated body depth image data, and comparing the image with a reference image to obtain image quality q;
(c) substituting t, c and q into the evaluation function E to obtain an evaluation value e, and updating the historical optimal solution and the global optimal solution of the point according to e;
(d) updating the speed and position information of each point according to the historical optimal solution and the global optimal solution;
(e) determining whether the iteration ends by judging whether more than half of the points in the point set are within a Euclidean distance of a certain threshold ε from the global optimal solution; returning to step (b) if not, and otherwise ending the optimization, the global optimal solution being the optimization result.
CN201810549318.3A 2018-05-31 2018-05-31 In-situ volume rendering method Active CN108876889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810549318.3A CN108876889B (en) 2018-05-31 2018-05-31 In-situ volume rendering method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810549318.3A CN108876889B (en) 2018-05-31 2018-05-31 In-situ volume rendering method

Publications (2)

Publication Number Publication Date
CN108876889A CN108876889A (en) 2018-11-23
CN108876889B (en) 2022-04-22

Family

ID=64336329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810549318.3A Active CN108876889B (en) 2018-05-31 2018-05-31 In-situ volume rendering method

Country Status (1)

Country Link
CN (1) CN108876889B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005020141A2 (en) * 2003-08-18 2005-03-03 Fovia, Inc. Method and system for adaptive direct volume rendering
CN101286225A (en) * 2007-04-11 2008-10-15 中国科学院自动化研究所 A Massive Data Volume Rendering Method Based on 3D Texture Hardware Acceleration
CN101604453A (en) * 2009-07-08 2009-12-16 西安电子科技大学 Large-scale data field volume rendering method based on partition strategy
WO2012135153A2 (en) * 2011-03-25 2012-10-04 Oblong Industries, Inc. Fast fingertip detection for initializing a vision-based hand tracker
WO2013161590A1 (en) * 2012-04-27 2013-10-31 株式会社日立メディコ Image display device, method and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9053574B2 (en) * 2011-03-02 2015-06-09 Sectra Ab Calibrated natural size views for visualizations of volumetric data sets

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005020141A2 (en) * 2003-08-18 2005-03-03 Fovia, Inc. Method and system for adaptive direct volume rendering
CN101286225A (en) * 2007-04-11 2008-10-15 中国科学院自动化研究所 A Massive Data Volume Rendering Method Based on 3D Texture Hardware Acceleration
CN101604453A (en) * 2009-07-08 2009-12-16 西安电子科技大学 Large-scale data field volume rendering method based on partition strategy
WO2012135153A2 (en) * 2011-03-25 2012-10-04 Oblong Industries, Inc. Fast fingertip detection for initializing a vision-based hand tracker
WO2013161590A1 (en) * 2012-04-27 2013-10-31 株式会社日立メディコ Image display device, method and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
In-situ visualization for petascale scientific computing; Shan Guihua; Journal of Computer-Aided Design & Computer Graphics; 2013-03-31; pp. 286-293 *

Also Published As

Publication number Publication date
CN108876889A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
CN111563841A (en) High-resolution image generation method based on generation countermeasure network
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
WO2021129145A1 (en) Image feature point filtering method and terminal
US11954802B2 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
US12141921B2 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
CN102298767A (en) Method and apparatus for generating structure-based ASCII pictures
CN118411467A (en) Three-dimensional content generation method, system, equipment and medium based on three-dimensional Gaussian
Chen et al. Mesh2nerf: Direct mesh supervision for neural radiance field representation and generation
Kipfer et al. Local exact particle tracing on unstructured grids
Lai et al. Fast radiance field reconstruction from sparse inputs
Yang et al. Fast reconstruction for Monte Carlo rendering using deep convolutional networks
CN108876889B (en) In-situ volume rendering method
Bruder et al. Prediction-based load balancing and resolution tuning for interactive volume raycasting
CN118627588A (en) A self-supervised adversarial defense framework for federated learning based on unsupervised perturbations
JP7601944B2 (en) Method and system for generating polygon meshes that approximate surfaces using root finding and iteration on mesh vertex positions - Patents.com
Abaidi et al. GAN-based generation of realistic compressible-flow samples from incomplete data
CN118262034A (en) System and method for reconstructing an animated three-dimensional human head model from an image
CN116597275A (en) High-speed moving target recognition method based on data enhancement
Takeshita Aabb pruning: Pruning of neighborhood search for uniform grid using axis-aligned bounding box
CN110084872B (en) Data-driven smoke animation synthesis method and system
Holzschuh et al. Improving Flow Matching for Posterior Inference with Physics-based Controls
Legrand et al. Morton integrals for high speed geometry simplification
CN118521699B (en) A method and system for generating three-dimensional hair styles of virtual humans
JPH01134682A (en) Line folding processing system
KR20250053857A (en) Temporal supersampling of frames

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant