
CN100428218C - A Method of Realizing General Virtual Environment Roaming Engine - Google Patents


Info

Publication number
CN100428218C
CN100428218C · CNB021307369A · CN02130736A
Authority
CN
China
Prior art keywords
scene
camera
roaming
control
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB021307369A
Other languages
Chinese (zh)
Other versions
CN1414496A (en)
Inventor
郝爱民 (Hao Aimin)
沈旭昆 (Shen Xukun)
梁晓辉 (Liang Xiaohui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CNB021307369A
Publication of CN1414496A
Application granted
Publication of CN100428218C


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of computer virtual reality. The implementation method comprises the steps of: creating general virtual reality application resources; loading scene database files that conform to the scene description specification; customizing the roaming state mechanism; setting up an input mapping and interpretation mechanism that accepts one external input per frame of the simulation loop, so that multiple input devices can be connected to the roaming system; performing viewpoint control, entity manipulation and state setting according to external input commands; applying several scene-complexity reduction strategies, collision detection and terrain matching during viewpoint control; supporting the selection and manipulation of objects in the three-dimensional scene with standard two-dimensional input devices; and setting various roaming control states and environmental effects. A dual-camera model is used for scene observation, collision detection and roaming control; several scene-complexity reduction strategies are supported; an input mapping mechanism is provided. The engine is functionally complete, has clear interfaces, and allows input devices to be added conveniently. It is suitable for users developing virtual reality environment systems.

Description

A Method of Realizing a General Virtual Environment Roaming Engine

Technical Field

The invention belongs to the technical field of computer virtual reality, and in particular relates to a method for realizing a general virtual environment roaming engine.

Background Art

Because of the importance of virtual-reality-based urban planning, virtual construction and scene roaming, the enormous social and economic benefits they promise, and the fundamental changes they bring to related industries, several western developed countries began in the mid-1980s to invest heavily in funding and personnel to support virtual reality research. Institutions such as the US Advanced Technology Center, Atlanta Design Company, the University of North Carolina and Paradigm Simulation have produced many theoretical and practical results and have begun to release specialized commercial software toolkits that are preliminarily practical but relatively expensive.

At present, two research directions have formed at the frontier of three-dimensional scene roaming: geometry-based scene rendering using measurable size data, and virtual scene representation and rendering based on source image sequences. Geometry-based virtual reality developed first. With the rapid progress of computer technology, many companies and research institutions have released solutions, both graphics accelerator cards in hardware and 3D modeling tools and 3D graphics environments in software: the price/performance ratio of SGI's graphics accelerators has risen steadily; MultiGen's modeling software MultiGen II Pro and its database format OpenFlight have almost become industrial standards in the simulation field; graphics systems such as Vega and Performer have grown into powerful, easy-to-use commercial software. However, virtual reality technology is still young, and geometry-based virtual reality still has many unresolved problems. Image-based virtual reality started later; its main goals are to reduce the complexity and manual effort of scene modeling in geometry-based systems and to lower the demanding hardware requirements of virtual reality systems. But image-based virtual reality still needs much further theoretical research in virtual entity manipulation, human-computer interaction with force feedback, and collision detection and response with physical properties.

The scene walkthrough group at the University of California, Berkeley is one of the earliest research institutions in the world to study scene roaming and has achieved outstanding results; it began research on real-time walkthrough strategies for complex models in 1990. In 1996, real-time roaming of Soda Hall, the new building of the computer science department, was realized on SGI's Power Series 320 workstation. The Soda Hall model consists of 1,418,807 polygons occupying 21.5 MB of disk space and uses 406 materials and 58 different textures. Because the research team combined efficient data storage structures, multi-level levels of detail, scene scheduling algorithms, real-time visible-region determination and precomputation, Soda Hall's real-time simulation rate, i.e. the refresh rate, stayed constant at about 20 frames per second. Over years of work the Berkeley group proposed and continually improved the UNIGRAFIX scene database and developed software tools such as a converter from the AutoCAD DXF data format to the UNIGRAFIX format, an automatic generator of multi-level detail models for objects, and a simple automatic scene-generation tool. For domestic developers, however, reusing a complete roaming engine at the code level is practically impossible.

Some domestic research institutions also work on scene roaming technology, represented by the Forbidden City walkthrough realized by the Industrial Psychology Laboratory of Hangzhou University. It uses a bicycle as the interaction device, letting the roamer, while actually staying in place, ride through the virtual Forbidden City. The system mainly shows the exterior of the buildings, i.e. the outdoor scene; the indoor part is greatly simplified and modeled mainly with texture mapping, collision detection and collision response are relatively weak, and the system provides no manipulation of virtual entities.

Summary of the Invention

To overcome the above shortcomings, the object of the present invention is to provide a method for realizing a general virtual environment roaming engine. It implements the basic but important functions common to virtual environment roaming and cleanly encapsulates a set of key technologies, lowering the difficulty for ordinary users of developing virtual reality application systems and realizing reuse of system functions at the code level, which both standardizes development and reduces development cycles and costs.

To achieve the above object, the method of the present invention involves a personal computer, a graphics accelerator card, a walker, stereo display and tracking devices, a scene database and the core components of the roaming engine. The implementation comprises the steps of: creating general virtual reality application resources; loading scene database files that conform to the scene description specification; customizing the roaming state mechanism; setting up an input mapping and interpretation mechanism that accepts one external input per frame of the simulation loop and supports connecting multiple input devices to the roaming system; performing viewpoint control, entity manipulation and state setting according to external input commands; applying several scene-complexity reduction strategies, collision detection and terrain matching during viewpoint control; supporting the selection and manipulation of objects in the three-dimensional scene with standard two-dimensional input devices; and setting various roaming control states and environmental effects to meet the needs of different users.

Viewpoint control uses a dual-camera model for roaming: an observation-viewpoint camera and a walking-viewpoint camera are created, with the observation camera constrained by the walking camera. The walking camera realizes multi-point collision detection during roaming and maintains the direction of travel, while the observation camera has three rotational degrees of freedom. If a collision is detected during the motion of the cameras, the nature of the collision is judged: if a scene control surface is hit, the scene scheduling strategy is started to manage the scheduling of indoor and outdoor scenes; if a manipulable moving entity is hit, the entity is moved according to the motion coordinate system and motion parameters defined in the model; if a genuine collision is confirmed, a collision response is issued. The input mapping and interpretation mechanism isolates the roaming engine from the input devices to minimize their influence on the roaming system kernel: an intermediate layer is abstracted from viewpoint control and device functions, and device controls are mapped onto this layer, which becomes the control source driving the viewpoint. Throughout roaming, several scene-complexity reduction strategies are supported simultaneously: precision switching of multi-level level-of-detail models scheduled by the distance between the observation camera and each scene model; scene control surfaces determined by three-dimensional planes and behavior parameters in the scene database, which partition the scheduling of indoor/outdoor and complex indoor scenes (when the line-of-sight collision detection algorithm hits a model control surface, the control behavior identifier defined at modeling time is parsed, mapped into the scene control object set, and the object list of that set is used to load visible objects and unload invisible objects, realizing scene scheduling management); and automatic removal of redundant polygons from entity models at load time, with textures generated for entity surfaces. Collision detection and response further comprise: analyzing the cause of camera motion; constructing forward collision-detection line segments for the walking and observation cameras; performing collision detection for each camera; if a collision occurs, performing virtual-environment scene scheduling; otherwise advancing the camera and performing terrain matching. Terrain matching further comprises: the walking camera advances; if the viewpoint position changes, the collision detection module is called; if a collision is detected, the position change is ignored and the walking and observation cameras are raised by a certain height while the observation camera's view direction is set according to the terrain surface; otherwise the viewpoint reaches the new position. The state-setting functions include one or more of: switching the fog effect on/off, switching the two-dimensional map guide on/off, switching collision detection on/off, selecting the transparency processing mode, and setting weather conditions. The environmental effects include a constant-frame-rate method supported by fog: the average time per frame over the previous 2 seconds is taken, the current frame rate is computed and compared with the target frame rate; if the current frame rate exceeds the target, the fog visible distance is increased, otherwise it is decreased, and the new fog parameters are set.

The features of the present invention are:

1. Complete functionality and clear interfaces. When the invention is used to develop a scene roaming system, the workload equals that of creating the scene database; almost no code needs to be rewritten for the roaming driver.

2. A dual-camera model for scene observation, collision detection and roaming control. The dual-camera model not only effectively solves collision detection against low obstacles on the ground, but also handles the view-direction problem of terrain matching on sloped roads.

3. Support for multiple scene-complexity reduction strategies, in particular an original scheduling algorithm based on scene control surfaces. The algorithm guarantees continuous roaming through combined indoor/outdoor scenes and effectively improves real-time performance, while remaining simple and easy to use.

4. Compliance with virtual reality industry data standards: the roaming engine encapsulates the interpretation of various scene descriptions, such as the motion definitions of doors and windows and the behavior of above-ground objects toward the roamer. A step triggers terrain matching and automatically raises the viewpoint, while a lawn fence cannot be passed and must be walked around.

5. An input mapping mechanism supporting mouse, keyboard, joystick, walker, head-mounted tracker and other input devices, with convenient extension to new devices.

6. A fog-based constant-frame-rate technique that controls the number of visible geometric faces in the scene by adjusting the fog density, thereby regulating the roaming frame rate.

Compared with the prior art, the virtual environment roaming engine of the present invention has the following beneficial effects. It is a relatively complete roaming engine implementing input mapping and viewpoint control, virtual scene scheduling management, multi-level level-of-detail model switching, texture, lighting, terrain matching, collision detection and response, entity manipulation, a two-dimensional map guide, and a constant-frame-rate method. Compared with two typical domestic roaming engines it has clear advantages, and its overall functionality is comparable to the Berkeley walkthrough system. Although it is less powerful for very large scenes, the scheduling algorithm based on scene control surfaces is simple to implement and meets the requirements of medium-sized indoor/outdoor scenes; unlike the Berkeley system it needs no scene preprocessing, and real-time roaming can be achieved on a personal computer of basic configuration. For application development, no specific programming work is needed beyond building the scene: it is a general, functionally complete and technically advanced virtual environment roaming engine.

Brief Description of the Drawings

Fig. 1 shows the main program flow chart of the present invention;

Fig. 2 shows the input device mapping and viewpoint control diagram of the present invention;

Fig. 3 shows the viewpoint control model in the roaming engine of the present invention;

Fig. 4 shows the division of the screen into regions for mouse input in the present invention;

Fig. 5 shows the workflow of scene scheduling control by model control surfaces in the present invention;

Fig. 6 shows the definition of the model control surfaces for entering and leaving a building in an embodiment of the present invention;

Fig. 7 shows the terrain matching technique during viewpoint control in the present invention;

Fig. 8 shows the two-dimensional map multi-channel model of the present invention;

Fig. 9 shows the fog-based constant-frame-rate algorithm of the present invention.

Table 1 lists the main control states defined in the general roaming engine framework;

Table 2 is the device mapping table used for viewpoint control with the keyboard;

Table 3 is the device mapping table used for viewpoint control with the mouse;

Table 4 is the keyboard function table of the general roaming engine system.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The general virtual environment roaming engine of the present invention is an independently runnable roaming engine; the software platform is Visual C++ 6.0 and OpenGVS 4.3, and the operating system is Windows 2000.

Referring to Fig. 1, the invention first creates general virtual reality environment resources such as scenes, entities and frame buffers, then loads scene database files conforming to the scene description specification, receives external input and forms the input mapping, and then interprets it. The interpretation divides into state setting, viewpoint control and entity manipulation. State setting covers system state, the two-dimensional guide and environmental effects; viewpoint control covers scene scheduling, collision detection and terrain matching, and also decides whether to exit and release resources. When the roaming state mechanism is customized, the input mapping and interpretation mechanism accepts one external input per frame of the simulation loop and supports connecting multiple input devices to the roaming system; viewpoint control, entity manipulation and state setting are performed according to external input commands; several scene-complexity reduction strategies, collision detection and terrain matching are applied during viewpoint control; objects in the three-dimensional scene can be selected and manipulated with standard two-dimensional input devices; and various roaming control states and environmental effects are available to meet the needs of different users.

Referring to Fig. 2, to suit the various demands of different roaming applications, the roaming engine of the present invention provides a state mechanism for customizing roaming. Most virtual reality applications are OpenGL-compatible, and the internal implementation of OpenGL is itself a state machine. Adopting a state mechanism therefore keeps the engine consistent with the graphics system while increasing the flexibility of the roaming system.

Most functions defined in the general roaming framework are optional: the roamer can turn individual functions on or off, such as the fog effect or the two-dimensional map, decide whether to perform collision detection, or choose the transparency processing mode. The roamer can also set the initial state of the roaming system, for example the initial position of the observation camera, the camera's movement step, the turning step, the simulated weather (sunny, cloudy, overcast) and the time of day (morning, noon, evening). Viewpoint control in roaming systems is commonly implemented by reading and processing device information directly; as numerous I/O devices and new interactive control techniques appear, more and more interaction devices will be used, and the diversity and extensibility of the equipment inevitably complicate software design and maintenance. Table 1 defines the main control states in the roaming engine framework of the present invention.
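The optional-feature state described above can be sketched as a plain state structure. This is an illustrative assumption, not the patent's Table 1: every field name, default and enum value here is invented for the sketch, mirroring only the categories the text names (fog, map, collision, transparency, weather, time of day, step sizes).

```cpp
// Sketch of a roaming control state in the spirit of Table 1; all
// identifiers and defaults are illustrative assumptions.
enum class Weather   { Sunny, Cloudy, Overcast };
enum class TimeOfDay { Morning, Noon, Evening };

struct RoamState {
    bool fogEnabled        = false;  // on/off fog effect
    bool miniMapEnabled    = false;  // on/off 2D map guide
    bool collisionEnabled  = true;   // on/off collision detection
    bool blendTransparency = true;   // chosen transparency processing mode
    Weather   weather   = Weather::Sunny;
    TimeOfDay timeOfDay = TimeOfDay::Noon;
    double stepLength = 0.5;         // metres per move command
    double turnStep   = 2.0;         // degrees per turn command
};

// Each optional feature toggles independently, mirroring the OpenGL-style
// enable/disable state machine the text refers to.
inline void toggle(bool& flag) { flag = !flag; }
```

A roamer's settings would then be a single `RoamState` instance read by the simulation loop each frame.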

For these reasons, the present invention defines a basic device mapping: an intermediate layer is abstracted from viewpoint control and device functions, and the controls of the input devices are mapped to this layer, which becomes the control source driving the viewpoint.

Thus the control signals of an input device are mapped to control commands, separating viewpoint control from the direct control signals of the device and making it an independent functional module. This "isolation" technique both facilitates extending the roaming system with new input devices and preserves the relative independence of the roaming engine.
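The isolation layer can be sketched as a lookup from raw device events to abstract viewpoint commands, so the engine core never sees device-specific codes. All names here (`Command`, `DeviceEvent`, `InputMapper`) are assumptions for illustration, not identifiers from the patent.

```cpp
#include <map>
#include <utility>

// Abstract commands the viewpoint understands, independent of any device.
enum class Command { None, Forward, Backward, TurnLeft, TurnRight,
                     LookUp, LookDown };

// A raw event: which device produced it and its device-specific code.
struct DeviceEvent {
    int deviceId;  // e.g. 0 = keyboard, 1 = joystick, 2 = walker (assumed)
    int code;      // device-specific control code
};

// The intermediate layer: bindings are registered per device, and the
// engine only ever consumes the translated Command.
class InputMapper {
public:
    void bind(int deviceId, int code, Command cmd) {
        table_[{deviceId, code}] = cmd;
    }
    Command translate(const DeviceEvent& e) const {
        auto it = table_.find({e.deviceId, e.code});
        return it == table_.end() ? Command::None : it->second;
    }
private:
    std::map<std::pair<int, int>, Command> table_;
};
```

Adding a new input device then amounts to registering new bindings; the viewpoint-control module is untouched, which is the point of the "isolation" described above.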

Referring to Fig. 3, in a roaming system the viewpoint is the "avatar" of the human eye, functioning like a camera in the real world. Viewpoint control usually refers to the motion control of the observation camera. The viewpoint control model here uses two distinct sets of cameras to simulate the motion of the human eyes and feet separately: the camera simulating the eyes is called the observation camera, and the camera simulating the feet is called the walking camera. The motion of the observation camera is constrained by the walking camera. Ordinary roaming systems provide only an observation camera; the walking camera in the present invention serves two purposes: to realize multi-point collision detection during roaming, and to maintain the direction of travel while allowing the observation camera more rotational freedom, namely rotating not only left/right about the Y axis but also up/down about the X axis to simulate raising and lowering the head. The basic parameters of the walking camera are its height above the ground (walker_height), the forward/backward step (step), and the rotation step about the Y axis (θ). Setting walker_height improves the overall performance of the roaming system, because the walking camera participates in the collision detection of viewpoint control: with walker_height set to a non-zero value such as 0.10, entities lower than 10 cm in the virtual environment never collide with the viewpoint. This corresponds to real-world behavior: any obstacle below 10 cm, such as a door sill in a building, can simply be stepped over. The roaming system thus avoids a large amount of useless computation, improving overall performance. The basic parameters of the observation camera are essentially the same: height above the ground (eye_height), forward/backward step step, and rotation step θ about the Y axis. In addition, the observation camera has one more rotation parameter, the angular step α about the X axis, which lets the roamer look up and down at the three-dimensional scene.
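A minimal sketch of the dual-camera model, under the parameters named above. The struct layout and helper functions are assumptions made for illustration; the parameter names walker_height, step and eye_height follow the text.

```cpp
#include <cmath>

// Walking camera: carries the heading and filters low obstacles.
struct WalkCamera {
    double x = 0, y = 0, z = 0;   // position on the terrain
    double heading = 0;           // rotation about Y, radians
    double walker_height = 0.10;  // obstacles below this are ignored
    double step = 0.5;            // metres per forward command

    void forward() {              // move along the current heading
        x += step * std::sin(heading);
        z += step * std::cos(heading);
    }
    // Entities lower than walker_height never collide with the viewpoint,
    // e.g. a 5 cm door sill is simply stepped over.
    bool canIgnore(double obstacleHeight) const {
        return obstacleHeight < walker_height;
    }
};

// Observation camera: slaved to the walking camera's position, with an
// extra pitch (rotation about X) for looking up and down.
struct EyeCamera {
    double eye_height = 1.7;      // eye level above the walking camera
    double pitch = 0;             // angular offset about X (step α)
    void follow(const WalkCamera& w,
                double& ex, double& ey, double& ez) const {
        ex = w.x; ey = w.y + eye_height; ez = w.z;
    }
};
```

The key design point the text makes is visible here: the walking camera owns the heading, so pitch changes on the observation camera can never bend the direction of travel.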

The viewpoint control model also defines three collision-detection parameters for each of the observation and walking cameras, v_p0, v_p1, v_p2 and w_p0, w_p1, w_p2, denoting respectively the camera's current position, a point at a certain distance in the forward direction, and a point at the same distance in the backward direction. The model defines the camera motions as follows. Observation camera motion set: {FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT, LOOK_UP, LOOK_DOWN}; walking camera motion set: {FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT}.
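The three probe points per camera can be computed directly from the position and heading; this sketch is an illustration of the v_p0/v_p1/v_p2 (and w_p0/w_p1/w_p2) layout described above, with the struct and function names being assumptions.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// p0 = current position, p1 = a point d metres ahead along the view line,
// p2 = a point d metres behind, as in the text.
struct Probes { Vec3 p0, p1, p2; };

// heading: rotation about Y in radians; d: probe distance.
inline Probes makeProbes(const Vec3& pos, double heading, double d) {
    Vec3 dir { std::sin(heading), 0.0, std::cos(heading) };
    Probes pr;
    pr.p0 = pos;
    pr.p1 = { pos.x + d * dir.x, pos.y, pos.z + d * dir.z };  // ahead
    pr.p2 = { pos.x - d * dir.x, pos.y, pos.z - d * dir.z };  // behind
    return pr;
}
```

Running the same computation for both cameras (at walker height and at eye height) yields the multi-point probes that make low-obstacle and eye-level collisions distinguishable.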

Viewpoint control is therefore the motion control of the walking and observation cameras, and a camera moves in only two ways: translation and rotation. The present invention defines the change of camera position by the displacement step step along the view direction, and the change of view direction by the rotation steps θ and α about the coordinate axes. The device mapping problem thereby reduces to a mathematical transformation from the device's control quantities to a camera motion type and the values of step, θ and α.

Keyboard input has the simplest mapping, a direct mapping; taking the observation camera as an example, it is shown in Table 2.

Referring to Figure 4, a schematic of the screen-region division used for mouse input, the viewpoint-control mapping for the mouse is different: the present invention identifies the control function of a mouse input by partitioning the screen into regions. Let the current mouse position be (xm, ym). From the principles of device coordinate transformation, xm ∈ [-1, 1] and ym ∈ [-1, 1], with the lower-left corner of the two-dimensional display screen at (-1, -1) and the upper-right corner at (1, 1). The screen is accordingly divided into ten regions, labeled region 1 through region 10, as shown in Figure 4. When the mouse controls the viewpoint, let the camera's maximum translation step be MAXSTEP and its maximum rotation-angle step be MAXROTATE, and define speed = MAXSTEP * fabs(ym) and rot = MAXROTATE * fabs(xm); the mouse-device mapping is then given in Table 3. Mappings for other input devices are essentially similar to the mouse mapping.
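Table 3's mapping can be sketched in code. The region geometry itself is not reproduced here (it comes from Figure 4); the function below takes an already-identified region number plus the normalized mouse position, and the MAXSTEP and MAXROTATE values are placeholder assumptions.

```python
import math

MAXSTEP = 1.0      # assumed maximum translation step
MAXROTATE = 5.0    # assumed maximum rotation-angle step

# Table 3, transcribed: region number -> set of camera motions.
REGION_MOTIONS = {
    1: {"FORWARD"},
    2: {"BACKWARD"},
    3: {"FORWARD", "TURN_RIGHT"},
    4: {"BACKWARD", "TURN_RIGHT"},
    5: {"TURN_RIGHT"},
    6: {"TURN_RIGHT"},
    7: {"FORWARD", "TURN_LEFT"},
    8: {"BACKWARD", "TURN_LEFT"},
    9: {"TURN_LEFT"},
    10: {"TURN_LEFT"},
}

def map_mouse(region: int, xm: float, ym: float):
    """Map one mouse input (region number plus normalized position,
    xm, ym in [-1, 1]) to motions and step sizes, as in Table 3."""
    motions = REGION_MOTIONS[region]
    speed = MAXSTEP * math.fabs(ym)   # speed = MAXSTEP * fabs(ym)
    rot = MAXROTATE * math.fabs(xm)   # rot   = MAXROTATE * fabs(xm)
    step = speed if motions & {"FORWARD", "BACKWARD"} else None
    theta = rot if motions & {"TURN_LEFT", "TURN_RIGHT"} else None
    return motions, step, theta
```

For example, a click in region 3 at (xm, ym) = (-0.5, 0.8) yields FORWARD plus TURN_RIGHT with step 0.8 and θ 2.5 under these placeholder maxima.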

Describing a community roaming virtual environment of moderate complexity with geometry-based virtual reality techniques, especially building interiors, generally takes tens of thousands or even hundreds of thousands of entity-surface triangles. Because of the limits of graphics-hardware performance, physical memory, and CPU clock speed, a virtual reality system that adopts no scene-complexity reduction strategy cannot guarantee real-time interactive roaming. Applying several scene-complexity reduction strategies together improves the overall performance of the roaming system. They include a collision-detection-based model-control-surface scene-scheduling method, redundant-polygon elimination, texture mapping, and multi-level level-of-detail (LOD) models.

Referring to Figure 5, scene-scheduling management has two meanings. On the one hand, because of limited physical memory, a complex scene database sometimes cannot be loaded into memory all at once; scene scheduling then means loading and unloading model data in blocks. On the other hand, because of hardware limits, above all the performance of the graphics accelerator, rendering too many polygons in a single frame also exceeds the system's processing capacity; reducing the number of polygons rendered per frame is the other aspect of scene-scheduling management.

Classical indoor scene-scheduling methods such as the PVS (Potentially Visible Set) algorithm mostly split roaming and scene scheduling into two phases. For a given virtual environment, such an algorithm partitions the building space "offline" along splitting planes parallel to the world coordinate axes (the walls, floors, and ceilings enclosing rooms) to form cells. It then marks the visible openings, such as doors and windows, on the partitioned cells, computes the cell-to-cell and cell-to-entity visible regions one by one, and finally stores the results in the scene database. During roaming, the driver no longer needs to perform complex visibility judgments; it uses the precomputed results directly for scene-scheduling control and achieves good real-time behavior. Precomputation has obvious drawbacks, however: the spatial partition and visibility precomputation depend on the particular virtual environment, the precomputed results occupy considerable scene-database space, and the precomputation itself is expensive. For buildings with large open interiors in particular, precomputed-visibility algorithms are relatively inefficient.

The roaming engine of the present invention proposes a model-control-surface scene-scheduling method based on collision detection. The method couples scene modeling with roaming control and determines visible regions by subjective evaluation, achieving good results.

Model control surfaces are defined during the modeling phase and stored in the scene database together with the other scene data. A control surface is a spatial plane determined by the equation Ax + By + Cz + D = 0. Unlike an ordinary plane, a control surface carries special attributes defined during modeling, the most important of which is its scene-scheduling control behavior. During modeling the control behavior is only an identifier; the concrete behavior is parsed and determined by the roaming engine.

During roaming, once the line-of-sight-based collision-detection algorithm detects a model control surface, it parses the control-behavior identifier defined at modeling time, maps it to a scene-control object set, and, according to the object lists in that set, loads the visible objects and unloads the invisible ones, thereby realizing scene-scheduling management. The scene-control object sets are refined through repeated trials of the roaming program and finally fixed by subjective criteria: which objects are visible and which invisible at a given control surface is determined and recorded in the roaming driver's data file.

Figure 6 shows the model control surfaces defined for entering and leaving a building. When the roaming viewpoint approaches the control surface in front of the building's main entrance, the collision-detection algorithm reports the control behavior specified there, for example "enter the building". After the behavior-parsing module analyzes it and finds the corresponding scene-scheduling object set, the engine naturally closes the building's exterior-outline model, loads the indoor model, and switches the visible and invisible objects of the six floors between displayed and suppressed states accordingly.

A model control surface is described by a six-tuple (A, B, C, D, arg1, arg2), where A, B, C, D uniquely determine a plane Ax + By + Cz + D = 0 in three-dimensional space, and arg1, arg2 are the control-behavior parameters given for the surface, passed by the detection algorithm to the behavior-parsing module. Several model control surfaces are set in a building's scene model; Figure 6 illustrates the scene-scheduling control for entering and leaving a building, where six control surfaces define the entry and exit scheduling behavior. Further control surfaces are placed at the stairs between floors and near floor areas close to exterior windows. When the viewpoint is on the second floor, for example, most objects on the other floors are invisible, apart from the windows, doors, and walls forming the atrium; the floor-switching control surfaces manage this kind of scheduling and greatly reduce the number of polygons drawn per frame.
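A minimal sketch of the six-tuple and the test it supports. The signed-distance sign change used here is one standard way to detect that the viewpoint has passed through the plane; the patent does not spell out its detection code, so the names and the test are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ControlSurface:
    # Six-tuple (A, B, C, D, arg1, arg2): the plane Ax + By + Cz + D = 0
    # plus two control-behaviour parameters for the parsing module.
    A: float
    B: float
    C: float
    D: float
    arg1: int
    arg2: int

    def side(self, p):
        """Signed value of the plane equation at point p = (x, y, z)."""
        x, y, z = p
        return self.A * x + self.B * y + self.C * z + self.D

def crosses(surface: ControlSurface, p_prev, p_curr) -> bool:
    """A viewpoint moving from p_prev to p_curr has crossed the control
    surface when the two points lie on opposite sides of the plane."""
    return surface.side(p_prev) * surface.side(p_curr) < 0
```

When a crossing is detected, (arg1, arg2) would be handed to the behavior-parsing module described below.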

A basic principle of three-dimensional scene modeling is to obtain the same realistic visual effect with the fewest polygons. The surface data describing an entity model is often redundant, and additional redundancy arises when separately modeled entities are merged. Eliminating these redundant surface polygons greatly reduces the complexity of the whole scene. For example, a prior-art window model consists of 156 triangles. Conventional modeling describes the topmost window bar with 12 triangles (6 rectangular faces); because the faces at its two ends coincide with the vertical side faces, they are not only redundant data but also cause Z-fighting, and should be deleted in advance. Pruning the entire window in this way yields a visually identical window of 104 triangles. Considering further the redundancy produced by model integration, the topmost face coincides with the wall and can also be deleted, and so on; the final window model describes its complete geometric topology with only 96 triangles. Compared with the 156-triangle model there is no loss of visual quality, yet system performance improves by nearly 40%.
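The triangle counts quoted above work out to roughly a 40% reduction, consistent with the reported performance gain under the assumption that rendering cost scales with triangle count:

```python
# Window-model figures from the text: 156 triangles originally,
# 104 after removing redundant faces inside the model, and 96 after
# also removing faces that coincide with the wall during integration.
original, pruned, integrated = 156, 104, 96

reduction = (original - integrated) / original
print(f"{reduction:.1%} fewer triangles")
```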

Texture mapping is a technique widely used in computer graphics for the realistic representation of entity-surface detail, and it is also an effective way to control scene complexity and accelerate rendering. It maps a two-dimensional texture image defined by a rectangular array onto a three-dimensional entity surface, or modifies the light-intensity distribution of the surface through a procedure.

The general method of generating a texture is to predefine the texture pattern on a planar region, the texture space, and then establish a mapping between points on the object surface and points in texture space. Once a visible point on the surface is determined, multiplying the brightness value by the value of the corresponding point in texture space attaches the texture pattern to the surface. A similar method produces a bumpy appearance, a bump texture, except that the texture value then acts on the normal vector rather than on color brightness. Whether generating color textures or bump textures, only a rough approximation of the real pattern is normally required; precise simulation is unnecessary, so surface realism can be increased substantially without a significant increase in computation.

Texture mapping can reduce scene complexity dramatically, but it has limits: when the roamer comes close to an entity, detail represented only by texture lacks realism. Combining multi-level LOD models with texture mapping preserves the realism of entity detail while still effectively reducing the cost of real-time scene rendering.

A multi-level LOD model is a group of models obtained by describing the same scene, or entities in it, at different levels of detail, to be selected among at drawing time. Since virtual reality normally describes the geometric entities of a scene with polygon meshes, multi-level LOD uses meshes of differing complexity to render the virtual scene in real time at different levels of refinement. Typically the most detailed description of an entity uses many polygons, while its outline description uses few; the detailed model is chosen when the roaming viewpoint is near the entity and the outline model when it is far away. Typical roaming systems use three LOD levels. For the window described above, the high-detail level gives a fully geometric description based on the design data; the intermediate level replaces the window panes with texture but keeps a geometric window sill; and the low-detail level represents the whole window as a single textured plane.
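Distance-based selection among the three window LOD levels might look like the sketch below; the patent gives no numeric switch distances, so NEAR and FAR are invented thresholds.

```python
# Illustrative switch distances; the text only says "near" and "far".
NEAR, FAR = 10.0, 50.0

def select_lod(distance: float) -> str:
    """Pick one of the three LOD levels described for the window model."""
    if distance < NEAR:
        return "high"    # fully geometric model (e.g. the 96 triangles)
    if distance < FAR:
        return "medium"  # textured panes, geometric sill
    return "low"         # whole window as a single textured plane
```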

The roaming engine implements the behavior parsing and scene-scheduling management of the model-control-surface method. Since the roaming system stores the control object sets, which give for each model control surface the set of objects to load when it "opens" and the set to unload when it "closes", behavior parsing and scene scheduling become very simple.

When the viewpoint collides with a model control surface and the control parameters are returned, behavior parsing merely locates the control object set corresponding to that surface. Then, for that surface, the objects in the set are unloaded and loaded one by one as follows.
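The load/unload step can be sketched as a lookup in the stored control object sets. The dictionary layout and all names here are illustrative; the patent only specifies that each control surface maps to lists of objects to load and to unload.

```python
# Hypothetical control object set for one control surface / behaviour id.
CONTROL_OBJECT_SETS = {
    "enter_building": {
        "load":   ["building_interior", "floor_1_furniture"],
        "unload": ["building_outline", "street_props"],
    },
}

def apply_control_behavior(behavior_id: str, scene: set) -> set:
    """scene is the set of currently loaded objects; objects in the
    surface's unload list are removed, those in its load list added."""
    object_set = CONTROL_OBJECT_SETS[behavior_id]
    for name in object_set["unload"]:
        scene.discard(name)
    for name in object_set["load"]:
        scene.add(name)
    return scene
```

Crossing the entrance control surface thus swaps the exterior outline for the interior model, exactly the behavior described for Figure 6.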

As shown in Figure 7, during viewpoint control the roaming system must detect collisions between the viewpoint and the various entities in the virtual environment and then produce a reasonable collision response. The collision-detection algorithm is part of the camera-motion control module. Once a collision between the viewpoint and a virtual entity occurs, the forward or backward command from the input device is ignored, i.e. the camera stops moving forward or backward; this is the simple collision response adopted in the present invention. Terrain matching normally means that a dynamic entity in the virtual environment, such as a ground vehicle, pitches and rolls with the rise and fall of the terrain as it moves. In a roaming system the roamer controls the "avatar" of the human eye, the viewpoint, so terrain matching here means that the height and viewing direction of the viewpoint change with the terrain.

The collision-detection algorithm shows when the terrain-matching module is invoked. Any external input that changes the viewpoint position triggers collision detection. If the result indicates a collision, the response is to ignore the position change and leave the viewpoint where it is. If no collision occurs, the viewpoint moves to a new position along the given direction by the given step size; but this position and viewing direction are not the viewpoint's final response to the input. The final position is determined only after terrain matching.

After the camera advances by step along the viewing direction, its position is denoted WPOS(xw, yw, zw). The camera position after terrain matching must lie on the Yv axis of the viewpoint coordinate system. A vertical line, the direction of the Yv axis, is erected through WPOS, and the method of intersecting a ray with the scene's polygon set yields the planes that intersect the Yv axis and the intersection coordinates. Among the intersection points, the plane whose Y component is largest without exceeding a given value is taken as the terrain surface. The walking camera is placed above that surface's Y height, and the observation camera's height is set to the walking camera's plus the fixed offset eye_height − walker_height. Matching the viewing direction to the terrain is simpler: the orientation of the terrain polygon is taken and the observation camera's direction is aligned with it, while the walking camera's viewing direction is left unchanged.
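Under the simplifying assumption that the candidate intersection polygons are horizontal (so each contributes a single Y value), the height-matching step reads:

```python
def match_terrain(intersection_ys, y_limit, walker_height, eye_height):
    """Pick the terrain surface among the Y values where the vertical
    line through WPOS hits scene polygons: the highest hit whose Y does
    not exceed y_limit (to ignore ceilings), then place both cameras."""
    ground_hits = [y for y in intersection_ys if y <= y_limit]
    ground_y = max(ground_hits)                   # terrain-surface height
    walking_cam_y = ground_y + walker_height      # walking camera above ground
    observe_cam_y = walking_cam_y + (eye_height - walker_height)
    return walking_cam_y, observe_cam_y
```

With floor planes at 0 m, 2.8 m, and 5.6 m and a 3 m limit, the second floor is selected and the observation camera ends up at ground height plus eye_height, as the text requires.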

A two-dimensional map is a navigation aid widely used in roaming systems. Compared with the three-dimensional scene view, its advantage is a much wider field of view, letting the roamer grasp the current position and the surrounding environment as a whole.

Normally, developing a 2D-map display module requires extracting two-dimensional features, such as road and building dimension data, from the modeling space, drawing lines with the OPENGL graphics system, filling the polygonal regions with the appropriate colors, and maintaining a viewpoint pointer that moves in step with the viewpoint in the three-dimensional scene.

As shown in Figure 8, the present invention uses a simpler and more direct 2D-map technique: the map is generated by orthographic projection of the three-dimensional scene. Following the principle of orthographic projection in computer graphics, the 3D scene model is "compressed" onto a plane, and camera resources are then used to implement map display, zooming, and synchronized motion of the 2D and 3D viewpoints. The map guide is implemented with multi-channel programming: in the channel model of Figure 8, mainchannel (the left channel) is the main channel, displaying the roaming system's 3D scene, while channel2D (the right channel) displays the 2D map. The main channel occupies the whole screen; the map channel occupies only the upper-right eighth of it. channel2D is given not only a camera but also preset lighting and a viewing volume, except that its camera lens always points directly at the model plane representing the 2D map.
Of course, no system would copy the entire model of the 3D world into channel2D just to display a 2D map quickly and conveniently; that would double the system load and cost far more than it gains. Instead, the 3D model is simplified down to the ground-surface data alone (every entity part above the ground is deleted) and, after simple patching, becomes the 3D representation model of the 2D map. With the multi-channel mechanism and this representation model in place, keeping the map's viewpoint indicator synchronized with the 3D scene reduces to having the camera in channel2D track the walking camera in mainchannel, with the channel2D camera's lens direction held fixed during tracking. The map's zoom function is implemented by setting different focal lengths on that camera.
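The channel2D tracking logic reduces to a small amount of camera bookkeeping. The hover height and the five focal lengths (for F1 through F5) are invented values; only the behavior, following the walking camera in X/Z with the lens held straight down and zooming by focal length, comes from the text.

```python
MAP_CAM_HEIGHT = 500.0                              # assumed hover height
FOCAL_LENGTHS = [50.0, 80.0, 120.0, 200.0, 350.0]   # assumed F1..F5 zoom steps

def track_walking_camera(walk_x: float, walk_z: float, zoom_level: int = 0):
    """Place the channel2D camera directly above the walking camera."""
    map_cam_pos = (walk_x, MAP_CAM_HEIGHT, walk_z)  # follow in X/Z only
    look_dir = (0.0, -1.0, 0.0)                     # lens direction held fixed
    focal = FOCAL_LENGTHS[zoom_level]
    return map_cam_pos, look_dir, focal
```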

Virtual reality applications pursue not only a high interactive simulation rate, the frame rate, but also a frame rate that stays consistent and constant, or at least within a band the user expects. For combat-simulation applications in particular, a jumping frame rate produces wrong simulation results and leaves participants disoriented. The frame rate is the reciprocal of the time needed to draw one frame, so obtaining a constant frame rate requires controlling the number of scene-surface polygons drawn per frame.

In 1996, Thomas Funkhouser and Seth Teller proposed an effective but rather complex constant-frame-rate method in the SodaHall walkthrough system. For every object in the scene database they defined a triple (O, L, R), where O denotes an object in the scene, L an LOD level of that object, and R the rendering algorithm applied to it, together with two evaluation functions on the triples: Cost(O, L, R), the rendering time of the object at LOD level L under algorithm R, and Benefit(O, L, R), the object's visual contribution to the whole scene at level L under algorithm R. Holding the frame rate constant then becomes the constrained optimization problem

    maximize  Σ Benefit(O, L, R)    subject to    Σ Cost(O, L, R) ≤ TargetFrameTime,

where both sums run over the set S of triples chosen for the frame. In the algorithm a dedicated thread computes the two functions, and the LOD level and rendering algorithm to use are fixed before each frame is drawn.

The constant-frame-rate method above is an efficient prefetch-based optimization algorithm suitable for most virtual reality applications, but its computations rely mainly on subjective criteria, and the algorithm as a whole is rather complex.
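One standard way to approximate the constrained maximization above is a greedy pass over benefit-per-cost, sketched below under the simplifying assumption of one candidate (O, L, R) triple per object; the original system's exact solver is not reproduced here.

```python
def choose_renderings(candidates, target_frame_time):
    """candidates: list of (name, cost, benefit), one per (O, L, R)
    triple; pick the best benefit-per-cost options until the
    frame-time budget is spent, and return the chosen names."""
    chosen, spent = [], 0.0
    for name, cost, benefit in sorted(
            candidates, key=lambda c: c[2] / c[1], reverse=True):
        if spent + cost <= target_frame_time:
            chosen.append(name)
            spent += cost
    return chosen
```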

Fog not only improves scene realism but also improves the overall performance of a virtual reality system. The essence of fogging is to blend the scene's original color with the fog color by a certain blend factor, whose value depends on the fog model. Graphics acceleration engines compatible with OPENGL generally provide three fog models:

Here f is the blend factor, density is the fog density, and z is the distance in the viewing coordinate system from the viewpoint to the center point of a scene patch:

    f = e^(−density·z)              (GL_EXP)

    f = e^(−(density·z)²)          (GL_EXP2)

    f = (end − z) / (end − start)   (GL_LINEAR)

where start and end are the starting and ending depths over which the fog acts.

Under the RGBA color model, the fogged color C of each pixel on the viewing plane is computed as:

    C = f·C1 + (1 − f)·Cf

where C1 is the RGBA color value of the scene patch, Cf is the fog color, and f is the blend factor discussed above.
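The three blend-factor formulas and the RGBA mixing rule translate directly into code (the density, start, and end defaults here are arbitrary example values):

```python
import math

def fog_factor(model, z, density=0.05, start=0.0, end=100.0):
    """Blend factor f for the three OPENGL-style fog models above."""
    if model == "GL_EXP":
        return math.exp(-density * z)
    if model == "GL_EXP2":
        return math.exp(-((density * z) ** 2))
    if model == "GL_LINEAR":
        return (end - z) / (end - start)
    raise ValueError(model)

def fogged_color(c1, cf, f):
    """C = f*C1 + (1-f)*Cf, applied per RGBA component."""
    return tuple(f * a + (1 - f) * b for a, b in zip(c1, cf))
```

At z = 0 every model yields f = 1 (no fog); as z grows, f falls and the fog color dominates.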

The fog models above exhibit an important property: the farther an entity is from the viewpoint, the more heavily the scene is fogged and the less visible the entity becomes. The spline-transform fog model fogs the scene especially strongly: once the distance between viewpoint and entity exceeds a given value, the entity's fogged color is entirely replaced by the fog color and the entity no longer needs to be drawn in real time.

Drawing on the analysis above, the present invention designs a method that keeps the interactive simulation rate constant by adjusting the fog-model parameters, thereby tuning, within a small range, the average time spent displaying each frame.

Referring to Figure 9, the present invention adopts a simple but very effective constant-frame-rate technique: a constant-frame-rate method based on adjusting the fog visibility distance. It can be regarded as an after-the-fact adjustment method, meaning that the fog effect is set dynamically according to the difference between the frame rate over the preceding interval and the target frame rate, so that the number of entity-surface polygons processed per frame rises or falls and the frame rate is steered toward the target. The concrete algorithm is given by the flowchart of Figure 9. As it shows, the fogging algorithm of the present invention makes the current frame rate approach the target step by step, so that the roaming system's frame rate "oscillates" around the target and is thus held constant within a fixed band. The algorithm suits virtual reality applications with large scene areas.
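The after-the-fact adjustment loop can be sketched as a single feedback step executed once per measurement interval; the step size and clamping bounds are invented, and real code would read the measured frame rate from the renderer.

```python
def adjust_fog_end(fog_end, measured_fps, target_fps,
                   step=5.0, min_end=50.0, max_end=1000.0):
    """One feedback step of the fog-visibility frame-rate controller."""
    if measured_fps < target_fps:
        fog_end -= step   # fog closes in: fewer polygons drawn per frame
    elif measured_fps > target_fps:
        fog_end += step   # fog recedes: more of the scene is drawn
    return min(max(fog_end, min_end), max_end)
```

Repeated application makes the frame rate oscillate around the target, which is the "swing within a fixed band" behavior described above.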

Table 4 lists the keyboard functions of the universal roaming-engine system of an embodiment of the present invention. Following the steps above, a universal roaming-driver computer system using the virtual reality environment of the present invention can be developed.

| State control | Hotkey | Default state |
| --- | --- | --- |
| Fog effect enable/disable | CTRL+Z | Enabled |
| 2D-map guide on/off | M | Off |
| Outdoor scene visible/invisible | P | Visible |
| Forward-motion collision detection enable/disable | CTRL+G | Enabled |
| Backward-motion collision detection enable/disable | CTRL+R | Enabled |
| Texture mapping on/off | CTRL+X | On |
| Wireframe display on/off | CTRL+P | Off |
| Frame-count statistics display on/off | TAB+'-' | Off |
| Debug-information output enable/disable | CTRL+T | Disabled |
| Automatic DOF-entity manipulation enable/disable | O | Enabled |
| Transparency-algorithm selection | F8, F9 | AF state |
| 2D-map camera zoom selection | F1-F5 | Maximum focal length |
| Frame-buffer single/double mode toggle | CTRL+E | Single-buffered |
| Mouse-cursor show/hide | CTRL+B | Shown |
| Observation-camera look up/down enable/disable | CTRL+H | Enabled |

Table 1

| Key | Motion type | step | θ | α |
| --- | --- | --- | --- | --- |
| UP ARROW | FORWARD | 0.7 | n/a | n/a |
| DOWN ARROW | BACKWARD | 0.5 | n/a | n/a |
| LEFT ARROW | TURN_LEFT | n/a | 0.3 | n/a |
| RIGHT ARROW | TURN_RIGHT | n/a | 0.3 | n/a |
| U | LOOK_UP | n/a | n/a | 0.15 |
| J | LOOK_DOWN | n/a | n/a | 0.15 |

Table 2

| Mouse region | Camera motion type | step | θ | α |
| --- | --- | --- | --- | --- |
| Region 1 | FORWARD | speed | n/a | n/a |
| Region 2 | BACKWARD | speed | n/a | n/a |
| Region 3 | FORWARD, TURN_RIGHT | speed | rot | n/a |
| Region 4 | BACKWARD, TURN_RIGHT | speed | rot | n/a |
| Region 5 | TURN_RIGHT | n/a | rot | n/a |
| Region 6 | TURN_RIGHT | n/a | rot | n/a |
| Region 7 | FORWARD, TURN_LEFT | speed | rot | n/a |
| Region 8 | BACKWARD, TURN_LEFT | speed | rot | n/a |
| Region 9 | TURN_LEFT | n/a | rot | n/a |
| Region 10 | TURN_LEFT | n/a | rot | n/a |

表3table 3

| Hotkey | Function | Hotkey | Function |
|---|---|---|---|
| ↑ (up arrow) | Move viewpoint forward | f | Decrease fog density |
| ↓ (down arrow) | Move viewpoint backward | F | Increase fog density |
| ← (left arrow) | Turn viewpoint left | O | Toggle automatic opening/closing of doors and windows |
| → (right arrow) | Turn viewpoint right | M | 2D map on/off |
| U | Look up | P | Outdoor scene on/off |
| J | Look down | T | Debug information output on/off |
| L | Lower helicopter viewpoint | r | Decrease red component of material |
| H | Raise helicopter viewpoint | R | Increase red component of material |
| PageUp | Set helicopter viewpoint | g | Decrease green component of material |
| PageDown | Reset helicopter viewpoint | G | Increase green component of material |
| Home | Go to the door of a given building | b | Decrease blue component of material |
| End | Go to a specified point | B | Increase blue component of material |
| F1 | 2D map camera zoom | C | Clouds on/off |
| F2 | 2D map camera zoom | CTRL+G | Forward collision detection on/off |
| F3 | 2D map camera zoom | CTRL+R | Backward collision detection on/off |
| F4 | 2D map camera zoom | CTRL+B | Mouse pointer show/hide |
| F5 | 2D map camera zoom | CTRL+X | Texture drawing on/off |
| F7 | Select transparency algorithm 1 | CTRL+I | Statistics on/off |
| F8 | Select transparency algorithm 2 | CTRL+C | Camera control on/off |
| F9 | Fog density setting | CTRL+H | Observation camera on/off |
| 1 | Place camera on the first floor of a building | CTRL+P | Wireframe drawing on/off |
| 2 | Place camera on the second floor of a building | CTRL+L | Enlarge virtual entity |
| 3 | Place camera on the third floor of a building | CTRL+S | Shrink virtual entity |
| 4 | Place camera on the fourth floor of a building | CTRL+M | Rotate virtual entity forward |
| 5 | Place camera on the fifth floor of a building | CTRL+N | Rotate virtual entity backward |
| a | Increase ambient value of sunlight | CTRL+Z | Fog on/off |
| A | Decrease ambient value of sunlight | CTRL+T | Texture replacement |
| d | Increase diffuse value of sunlight | '-' | Detailed statistics display on/off |
| D | Decrease diffuse value of sunlight | s | Sunrise |
| S | Sunset | | |

表4Table 4

Claims (10)

1. A method for implementing a universal virtual environment roaming engine, the system comprising a personal computer, a graphics accelerator card, a walker, stereoscopic display and tracking equipment, a scene database, and a roaming engine core component, the method comprising the steps of:
(a) creating general virtual reality application resources;
(b) loading a scene database file conforming to the scene description specification;
(c) customizing the roaming state mechanism;
characterized in that the method further comprises:
(d) setting up an input mapping and interpretation mechanism that accepts external input once in each frame of the simulation loop, so that multiple input devices can be connected to the roaming system;
(e) performing viewpoint control, entity manipulation, and state setting according to external input commands; applying multiple scene-complexity reduction strategies, collision detection, and terrain matching during viewpoint control; supporting the selection and manipulation of objects in the three-dimensional scene with a standard two-dimensional input device; and setting multiple roaming control states and environmental effects to satisfy the requirements of different users.
2. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that: the viewpoint control uses a double-camera model for roaming, creating an observation camera and a walking camera, the observation camera being subordinate to the walking camera; the walking camera realizes multi-point collision detection during roaming and maintains the direction of travel, while the observation camera has three rotational degrees of freedom.
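The double-camera model of claim 2 can be illustrated with a minimal sketch. This is not the patent's implementation: the class names, the 2-D ground plane, and the pose representation are assumptions made for illustration only.

```python
import math

class WalkingCamera:
    """Moves along the terrain and keeps the direction of travel."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in radians

    def forward(self, step):
        # Advance along the current direction of travel.
        self.x += step * math.cos(self.heading)
        self.y += step * math.sin(self.heading)

    def turn(self, theta):
        self.heading += theta

class ObservationCamera:
    """Subordinate to the walking camera; adds three rotational DOF."""
    def __init__(self, parent):
        self.parent = parent
        self.yaw = self.pitch = self.roll = 0.0  # local rotations

    def pose(self):
        # World pose = walking-camera pose composed with local rotations,
        # so the observation camera follows the walker automatically.
        return (self.parent.x, self.parent.y,
                self.parent.heading + self.yaw, self.pitch, self.roll)
```

Because position comes only from the walking camera, collision handling and travel direction stay with the walker while the observer is free to look around, which matches the division of labor the claim describes.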
3. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that the viewpoint control further comprises:
(a) if a collision is detected during the motion of the observation camera or the walking camera, determining the nature of the collision;
(b) if a scene control surface is hit, starting the scene scheduling strategy to manage and schedule the indoor and outdoor scenes;
(c) if a steerable moving entity is hit, controlling the motion of the entity in the scene according to the motion coordinate system and kinematic parameters defined in the model;
(d) if a collision is confirmed, generating a collision response.
4. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that step (d) further comprises:
(a) using the input mapping and interpretation mechanism to isolate the roaming engine from the input devices, minimizing the influence of input devices on the roaming system kernel;
(b) abstracting an intermediate layer from the viewpoint control and device functions;
(c) mapping the controls of the input devices onto this intermediate layer so that it becomes the control source driving the viewpoint.
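The intermediate-layer idea of claim 4 can be sketched as a table that maps raw device events onto abstract motion commands; the viewpoint code then sees only the commands. The class and command names below are assumptions for illustration, loosely following Tables 2 and 3.

```python
# Abstract motion commands form the intermediate layer; any device maps into them.
COMMANDS = {"FORWARD", "BACKWARD", "TURN_LEFT", "TURN_RIGHT", "LOOK_UP", "LOOK_DOWN"}

class InputMapper:
    def __init__(self):
        self._bindings = {}  # (device, raw_event) -> (command, params)

    def bind(self, device, raw_event, command, **params):
        assert command in COMMANDS
        self._bindings[(device, raw_event)] = (command, params)

    def interpret(self, device, raw_event):
        # Called once per simulation frame; unbound events are ignored.
        return self._bindings.get((device, raw_event))

mapper = InputMapper()
mapper.bind("keyboard", "UP_ARROW", "FORWARD", step=0.7)   # cf. Table 2
mapper.bind("mouse", "region_1", "FORWARD", step="speed")  # cf. Table 3
```

Adding a new device then means adding bindings only; the viewpoint-control kernel is untouched, which is the isolation the claim aims at.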
5. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that multiple scene-complexity reduction strategies are supported simultaneously throughout the roaming process, comprising the steps of:
(a) scheduling the precision of multi-level-of-detail models according to the distance between the observation camera and each scene model;
(b) defining scene control surfaces with three-dimensional planes and behavior parameters in the scene database to schedule indoor/outdoor scenes and partition complex indoor scenes: during roaming, after the line-of-sight collision detection algorithm detects a model control surface, the control behavior defined during modeling is parsed and mapped into the scene control object set; according to the object list in the control object set, visible objects are loaded and invisible objects are unloaded, realizing scene scheduling;
(c) automatically removing redundant polygons from solid models when a model is loaded;
(d) generating textures for solid object surfaces.
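Step (a) of claim 5, distance-driven level-of-detail selection, reduces to a threshold lookup. The function name and threshold values below are illustrative assumptions, not values from the patent.

```python
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Pick a level of detail for a scene model from the distance between
    the observation camera and the model: 0 = finest, higher = coarser."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest model
```

Each frame the engine would call this per model and render only the selected level, so nearby geometry stays detailed while distant geometry costs little.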
6. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that the collision detection and response further comprises:
(a) analyzing the cause of camera motion, and establishing forward collision-detection line segments for the walking camera and the observation camera;
(b) performing observation-camera collision detection;
(c) performing walking-camera collision detection; if a collision occurs, performing virtual environment scene scheduling;
(d) if no collision occurs, advancing the camera and performing terrain matching.
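The forward collision-detection line segment of claim 6 is, at its core, a segment-versus-plane test against control surfaces such as those of claim 5(b). The sketch below assumes control surfaces are given as plane coefficients; the function name and representation are illustrative.

```python
def segment_hits_plane(p0, p1, plane):
    """Return True if the segment p0->p1 crosses the plane (a, b, c, d),
    where the plane is the set of points with a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    s0 = a * p0[0] + b * p0[1] + c * p0[2] + d  # signed side of p0
    s1 = a * p1[0] + b * p1[1] + c * p1[2] + d  # signed side of p1
    # Opposite (or zero) signs mean the segment crosses the plane.
    return s0 * s1 <= 0.0
```

In a roaming step, p0 would be the camera position and p1 the intended position one step forward; a hit triggers scene scheduling or a collision response, otherwise the camera advances.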
7. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that the terrain matching further comprises the steps of:
(a) the walking camera advances;
(b) if the viewpoint position changes, the collision detection module is called;
(c) if the collision detection result is a collision, the viewpoint position change is ignored, the walking camera and the observation camera are raised by a certain height, and the line of sight of the observation camera is set along the terrain surface direction;
(d) if the result is no collision, the viewpoint moves to the new position.
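The terrain-matching step of claim 7 amounts to keeping the viewpoint a fixed height above the ground sampled under the camera. A minimal sketch, assuming a height-field function and an eye-height constant that are not specified in the patent:

```python
def match_terrain(cam_x, cam_y, cam_z, terrain_height, eye_height=1.7):
    """Clamp the camera to at least eye_height above the terrain.
    terrain_height(x, y) returns the ground elevation under (x, y)."""
    ground = terrain_height(cam_x, cam_y)
    if cam_z < ground + eye_height:
        # Collision with the terrain: raise the walking and observation
        # cameras instead of letting the viewpoint sink into the ground.
        return ground + eye_height
    return cam_z  # no collision: the new viewpoint position is accepted
```

Called after each walking-camera advance, this keeps the roamer walking up slopes and stairs instead of passing through them.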
8. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that the state setting functions comprise one or more of: switching the fog effect on/off, switching the two-dimensional map guide on/off, switching collision detection on/off, selecting the transparency processing mode, and setting weather conditions.
9. The method for implementing a universal virtual environment roaming engine according to claim 1, characterized in that the environmental effects comprise a fog effect supported by a constant frame rate technique.
10. The method for implementing a universal virtual environment roaming engine according to claim 9, characterized in that the constant frame rate method further comprises:
(a) obtaining the average time consumed per frame over the previous 2 seconds;
(b) calculating the current frame rate;
(c) calculating the difference between the current frame rate and the target frame rate;
(d) if the current frame rate is greater than the target frame rate, increasing the fog visibility; otherwise, decreasing the fog visibility;
(e) setting the new fog parameters and ending.
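The steps of claim 10 can be sketched directly. The gain and the visibility clamp are assumptions added to make the feedback loop well behaved; the patent specifies only the direction of adjustment.

```python
def adjust_fog_visibility(frame_times, target_fps, visibility,
                          gain=5.0, vis_min=50.0, vis_max=2000.0):
    """frame_times: per-frame durations (seconds) over roughly the last 2 s.
    Returns the new fog visibility distance."""
    avg = sum(frame_times) / len(frame_times)   # (a) average frame cost
    current_fps = 1.0 / avg                     # (b) current frame rate
    diff = current_fps - target_fps             # (c) difference from target
    # (d) faster than target -> afford to see further (thinner fog);
    #     slower than target -> pull the fog in to cut rendered geometry.
    visibility += gain * diff
    # (e) clamp and return the new fog parameter.
    return max(vis_min, min(vis_max, visibility))
```

Thicker fog lets the far clipping plane move in, so less geometry is drawn and the frame rate recovers; the loop trades visibility for a steady frame rate.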
CNB021307369A 2002-11-13 2002-11-13 A Method of Realizing General Virtual Environment Roaming Engine Expired - Fee Related CN100428218C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB021307369A CN100428218C (en) 2002-11-13 2002-11-13 A Method of Realizing General Virtual Environment Roaming Engine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB021307369A CN100428218C (en) 2002-11-13 2002-11-13 A Method of Realizing General Virtual Environment Roaming Engine

Publications (2)

Publication Number Publication Date
CN1414496A CN1414496A (en) 2003-04-30
CN100428218C true CN100428218C (en) 2008-10-22

Family

ID=4746450

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB021307369A Expired - Fee Related CN100428218C (en) 2002-11-13 2002-11-13 A Method of Realizing General Virtual Environment Roaming Engine

Country Status (1)

Country Link
CN (1) CN100428218C (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7295220B2 (en) * 2004-05-28 2007-11-13 National University Of Singapore Interactive system and method
DE102004033593A1 (en) * 2004-07-07 2006-02-02 Siemens Ag Method for simulating a technical plant
CN100400021C (en) * 2005-07-19 2008-07-09 天津大学 Monitoring device for walking aid dynamic parameters
KR100736078B1 (en) * 2005-10-27 2007-07-06 삼성전자주식회사 3D motion graphic user interface, apparatus and method for providing same
CN100459508C (en) * 2005-12-12 2009-02-04 腾讯科技(深圳)有限公司 Internet fluid media interdynamic system and fluid media broadcasting method
CN101055494B (en) * 2006-04-13 2011-03-16 上海虚拟谷数码科技有限公司 Dummy scene roaming method and system based on spatial index cube panoramic video
CN101630402B (en) * 2008-07-14 2017-06-16 苏州远唯网络技术服务有限公司 A kind of tree-dimensional animation engine for ecommerce
KR20100138700A (en) * 2009-06-25 2010-12-31 삼성전자주식회사 Virtual World Processing Unit and Methods
CN101615305B (en) * 2009-07-24 2011-07-20 腾讯科技(深圳)有限公司 Method and device for detecting collision
CN101702245B (en) * 2009-11-03 2012-09-19 北京大学 A Scalable General 3D Landscape Simulation System
CN101816613B (en) * 2010-06-07 2011-08-31 天津大学 Ant-colony calibrating precise force-measuring walking aid device
CN102044089A (en) * 2010-09-20 2011-05-04 董福田 Method for carrying out self-adaption simplification, gradual transmission and rapid charting on three-dimensional model
CN102385762B (en) * 2011-10-20 2013-08-28 上海交通大学 Modelica integrated three-dimensional scene simulation system
CN102520950A (en) * 2011-12-12 2012-06-27 广州市凡拓数码科技有限公司 Method for demonstrating scene
CN104076915A (en) * 2013-03-29 2014-10-01 英业达科技有限公司 Exhibition system capable of adjusting three-dimensional models according to sight lines of visitors and method implemented by exhibition system
CN104346368A (en) * 2013-07-30 2015-02-11 腾讯科技(深圳)有限公司 Indoor scene switch displaying method and device and mobile terminal
CN103543754A (en) * 2013-10-17 2014-01-29 广东威创视讯科技股份有限公司 Camera control method and device in three-dimensional GIS (geographic information system) roaming
CN103810559A (en) * 2013-10-18 2014-05-21 中国石油化工股份有限公司 Risk-assessment-based delay coking device chemical poison occupational hazard virtual reality management method
CN103606194B (en) * 2013-11-01 2017-02-15 中国人民解放军信息工程大学 Space, heaven and earth integration situation expression engine and classification and grading target browsing method thereof
JP6087301B2 (en) * 2014-02-13 2017-03-01 株式会社ジオ技術研究所 3D map display system
CN105824690A (en) * 2016-04-29 2016-08-03 乐视控股(北京)有限公司 Virtual-reality terminal, temperature adjusting method and temperature adjusting device
CN106910236A (en) * 2017-01-22 2017-06-30 北京微视酷科技有限责任公司 Rendering indication method and device in a kind of three-dimensional virtual environment
CN108960947A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 Show house methods of exhibiting and system based on virtual reality
CN107450747B (en) 2017-07-25 2018-09-18 腾讯科技(深圳)有限公司 The displacement control method and device of virtual role
CN108762209A (en) * 2018-05-25 2018-11-06 西安电子科技大学 Production Line Configured's analogue system based on mixed reality and method
CN109785424A (en) * 2018-12-11 2019-05-21 成都四方伟业软件股份有限公司 A kind of three-dimensional asynchronous model particle edges processing method
CN110245445B (en) * 2019-06-21 2020-04-07 浙江城建规划设计院有限公司 Ecological garden landscape design method based on computer three-dimensional scene simulation
CN110880204B (en) * 2019-11-21 2022-08-16 腾讯科技(深圳)有限公司 Virtual vegetation display method and device, computer equipment and storage medium
CN111243070B (en) * 2019-12-31 2023-03-24 浙江省邮电工程建设有限公司 Virtual reality presenting method, system and device based on 5G communication
CN111583403B (en) * 2020-04-28 2023-06-09 浙江科澜信息技术有限公司 Three-dimensional roaming mode creation method, device, equipment and medium
CN114344890A (en) * 2020-10-13 2022-04-15 北京悠米互动娱乐科技有限公司 A Precomputed Geographic Information Method for Visual Editing
CN112907618B (en) * 2021-02-09 2023-12-08 深圳市普汇智联科技有限公司 Multi-target sphere motion trail tracking method and system based on rigid body collision characteristics
CN115857702B (en) * 2023-02-28 2024-02-02 北京国星创图科技有限公司 Scene roaming and visual angle conversion method under space scene
CN117934687A (en) * 2024-01-25 2024-04-26 中科世通亨奇(北京)科技有限公司 Three-dimensional model rendering optimization method, system, electronic equipment and storage medium
CN119334347A (en) * 2024-09-11 2025-01-21 北京大学 Virtual reality guide information generation method, device, medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996021994A1 (en) * 1995-01-11 1996-07-18 Shaw Christopher D Tactile interface system
CN1231753A (en) * 1996-08-14 1999-10-13 挪拉赫梅特·挪利斯拉莫维奇·拉都色夫 Method for tracking and displaying a user's position and orientation in space, method for presenting a virtual environment to a user, and a system for implementing these methods

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996021994A1 (en) * 1995-01-11 1996-07-18 Shaw Christopher D Tactile interface system
CN1231753A (en) * 1996-08-14 1999-10-13 挪拉赫梅特·挪利斯拉莫维奇·拉都色夫 Method for tracking and displaying a user's position and orientation in space, method for presenting a virtual environment to a user, and a system for implementing these methods

Also Published As

Publication number Publication date
CN1414496A (en) 2003-04-30

Similar Documents

Publication Publication Date Title
CN100428218C (en) A Method of Realizing General Virtual Environment Roaming Engine
Shan et al. Research on landscape design system based on 3D virtual reality and image processing technology
EP3729238B1 (en) Authoring and presenting 3d presentations in augmented reality
Zhao A survey on virtual reality
KR101041723B1 (en) 3D video game system
CN107038745B (en) A 3D tourist landscape roaming interaction method and device
WO2019147392A1 (en) Puppeteering in augmented reality
CN101183276A (en) Interactive system based on camera projector technology
Dollner et al. Real-time expressive rendering of city models
GB2256567A (en) Modelling system for imaging three-dimensional models
WO1999060526A1 (en) Image processor, game machine, image processing method, and recording medium
CN108986232B (en) Method for presenting AR environment picture in VR display device
Agnello et al. Virtual reality for historical architecture
CN1171853A (en) Method for controlling level of detail displayed in computer generated screen display of complex structure
Zhang et al. The Application of Folk Art with Virtual Reality Technology in Visual Communication.
Qinping A survey on virtual reality
CN101281657A (en) A Crowd Behavior Synthesis Method Based on Video Data
Trueba et al. Complexity and occlusion management for the world-in-miniature metaphor
Zhang Animation Scene Design and Machine Vision Rendering Optimization Combining Generative Models
Ryder et al. A framework for real-time virtual crowds in cultural heritage environments
Fellner et al. Modeling of and navigation in complex 3D documents
GB2432499A (en) Image generation of objects distant from and near to a virtual camera
Choi A technological review to develop an AR-based design supporting system
Malhotra Issues involved in real-time rendering of virtual environments
CN118521697B (en) System for watching house cloud exhibition hall based on meta-universe model VR and meta-universe system thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081022

Termination date: 20111113