CN118885301B - Hardware-accelerated digital GPU simulation method and system - Google Patents
- Publication number
- CN118885301B (application CN202411354686.4A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- gpu
- module
- display
- hardware
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
Abstract
The invention provides a hardware-accelerated digital GPU simulation method and system. Instead of the original digital simulation approach, in which all GPU operations are converted into CPU operations, the method uses an interception/forwarding mode: GPU rendering computations are placed on the host machine's GPU and the operation results are transmitted back to the digital GPU. This releases the host CPU from the resource cost of emulating the GPU, raises the computing capability of the digital GPU by drawing on the computing power of the host GPU, improves the execution efficiency of the CPU, and guarantees the display frame rate and resolution of the digital GPU.
Description
Technical Field
The invention relates to the technical field of digital GPU simulation, in particular to a hardware-accelerated digital GPU simulation method and system.
Background
The current GPU simulation field mainly comprises three simulation schemes:
1. Direct software simulation. A GPU contains a large number of parallel computing units, whereas a host CPU contains a large number of logic control units; the two have different divisions of labor and different design architectures, so using a general-purpose host CPU to emulate a virtual-machine GPU is inefficient. This approach can nevertheless be used to simulate a computer system with multiple GPUs.
2. Graphics card pass-through. The virtual machine uses the GPU hardware directly through the dedicated interfaces provided by different graphics card manufacturers. This scheme gives high virtual GPU efficiency, but the GPU cannot be shared, so it is not suitable for simulating a computer system with multiple GPUs.
3. API forwarding. In this scheme the GPU hardware is allocated to the virtual machines by time slice, so that multiple virtual systems share the GPU while the virtual machines retain relatively high simulation efficiency.
The graphics processing unit (Graphics Processing Unit, GPU), also known as a vision processor or display chip, mainly performs floating-point and parallel operations, at which it can be hundreds of times faster than a CPU. With the extension of digital CPU simulation technology into on-board display systems, a digital GPU has also become an urgent requirement. At present, conventional GPU simulation mostly uses software to simulate OpenGL rendering, with the internal GPU rendering part implemented in llvmpipe mode, i.e. all GPU operations are converted into CPU operations. This approach is highly portable, but it occupies a large amount of CPU resources, leading to a poor display frame rate and reduced CPU execution efficiency.
In the prior art, the Chinese patent application No. 202010134209.2 discloses a GPU virtualization implementation system and method based on an API redirection technique; there the host machine is a remote server and the virtual machine is a local client, so network transmission incurs a performance loss and the performance is inferior to that of direct data reference by a local application program.
In the prior art, the Chinese patent application No. 202010386295.6 discloses a GPU virtualization method, system and medium based on a graphics library API agent; there the virtual machine transmits API information to the host machine through inter-process communication, which brings a certain performance loss, and the API lookup approach introduces a further loss.
Disclosure of Invention
In view of the above defects in the prior art, the object of the present invention is to provide a hardware-accelerated digital GPU simulation method and system.
The hardware-accelerated digital GPU simulation method provided by the invention comprises the following steps:
Step 1, front-end display simulation is completed at the front-end display layer using the open-source software SDL2, simulating a display and its interface;
Step 2, a virtual GPU module is realized by modeling, and a unified interface is used for interaction between the different types of virtual IO devices and the back-end driver;
Step 3, a virtual IO layer is realized by modeling, with a virtual queue interface serving as the bridge for front-end/back-end communication and different types of devices using different numbers of virtual queues;
Step 4, a virtual IO ring buffer layer is realized by modeling; GPU operations are decomposed into different virtual queues, local GPU hardware acceleration is called to complete the operations, and the operation results are returned to the virtual IO ring buffer layer;
Step 5, the local GPU hardware-accelerated operation results are returned to the front-end display layer through the virtual GPU module, completing the display of the GPU operation results (a stub tracing this end-to-end call path follows this list).
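Read together, steps 1 to 5 describe a single call path from the simulated display down to the host GPU and back. The stub below only traces that path in order; every function name in it is a placeholder invented for this sketch and does not come from the patent.

```c
#include <stdio.h>

/* Hypothetical stubs tracing the five-step flow; all names are placeholders. */

static void host_gpu_render(const char *cmd)   /* step 4: host GPU acceleration      */
{
    printf("host GPU renders: %s\n", cmd);
}

static void vio_ring_submit(const char *cmd)   /* steps 3-4: virtqueue + ring buffer */
{
    host_gpu_render(cmd);
}

static void vgpu_forward(const char *gl_call)  /* step 2: intercept and convert      */
{
    vio_ring_submit(gl_call);
}

static void frontend_draw(const char *gl_call) /* steps 1 and 5: SDL2 display layer  */
{
    vgpu_forward(gl_call);
    printf("front end presents the returned frame\n");
}

int main(void)
{
    frontend_draw("glDrawArrays"); /* a display-application call enters the pipeline */
    return 0;
}
```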
Preferably, the step 1 includes:
Step 1.1, the front-end display layer uses the open-source software SDL2 to simulate the display screen interface of the display system, including picture display and touch-screen operation; it supports different display frame rate and resolution settings and supports dynamic scaling of the display window;
Step 1.2, the coordinates of a mouse click point are captured and, through a preset command configuration, parsed and converted into the corresponding function call event, completing the mouse click event simulation (a minimal SDL2 sketch covering steps 1.1 and 1.2 follows this list);
Step 1.3, the front-end display layer exchanges data with the virtual GPU module; the click operation on the display picture is converted into GPU drawing and rendering operation commands, the drawing and rendering are completed using the hardware GPU capability encapsulated by the virtual GPU module, and the GPU operation results are loaded back to the front-end display layer to refresh the displayed picture;
Step 1.4, the display screen uses a Gallium TGSI renderer to accelerate 3D rendering and screen display.
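By way of illustration only, the following is a minimal sketch of what the SDL2-based front-end loop of steps 1.1 and 1.2 could look like. The window title, resolution, frame pacing and the click handling are assumptions made for the sketch rather than details taken from the patent.

```c
#include <SDL2/SDL.h>
#include <stdio.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    /* Simulated display screen; resolution and resizable flag are assumptions. */
    SDL_Window *win = SDL_CreateWindow("Simulated display",
                                       SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                       1024, 768, SDL_WINDOW_RESIZABLE);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
            if (ev.type == SDL_QUIT)
                running = 0;
            if (ev.type == SDL_MOUSEBUTTONDOWN) {
                /* Step 1.2: capture the click point; a full front end would look
                   the coordinates up in the preset command configuration and emit
                   the mapped function call event. */
                printf("click at (%d, %d)\n", ev.button.x, ev.button.y);
            }
        }
        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
        SDL_RenderClear(ren);
        /* The frame produced by the virtual GPU module would be blitted here. */
        SDL_RenderPresent(ren);
        SDL_Delay(16); /* crude ~60 Hz frame pacing */
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```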
Preferably, the step 2 includes:
Step 2.1, the virtual GPU module intercepts the OpenGL call commands issued by the upper-layer display system application through a virtual driver layer function, and converts the called OpenGL function commands into a unified, encapsulated virtual GPU call function interface through a preset command conversion table (an illustrative conversion-table sketch follows this list);
Step 2.2, the virtual GPU module call interface performs unified interface processing for different GPU models and converts the calls into standard virtual IO calls.
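A hedged sketch of the kind of command conversion table described in step 2.1: intercepted OpenGL entry points are looked up and mapped to packaged virtual GPU commands. The command identifiers, table entries and function names are hypothetical and are not specified by the patent.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical packaged virtual-GPU command identifiers (names are assumptions). */
enum vgpu_cmd {
    VGPU_CMD_CLEAR,
    VGPU_CMD_DRAW_ARRAYS,
    VGPU_CMD_BIND_TEXTURE,
    VGPU_CMD_UNKNOWN
};

/* One row of the command conversion table: OpenGL entry point -> virtual GPU call. */
struct cmd_map {
    const char   *gl_name;
    enum vgpu_cmd cmd;
};

static const struct cmd_map conversion_table[] = {
    { "glClear",       VGPU_CMD_CLEAR },
    { "glDrawArrays",  VGPU_CMD_DRAW_ARRAYS },
    { "glBindTexture", VGPU_CMD_BIND_TEXTURE },
};

/* Unified virtual GPU call interface: translate an intercepted call. */
static enum vgpu_cmd vgpu_translate(const char *gl_name)
{
    for (size_t i = 0; i < sizeof conversion_table / sizeof conversion_table[0]; i++)
        if (strcmp(conversion_table[i].gl_name, gl_name) == 0)
            return conversion_table[i].cmd;
    return VGPU_CMD_UNKNOWN;
}

int main(void)
{
    const char *intercepted = "glDrawArrays"; /* a call intercepted from the application */
    printf("%s -> virtual GPU command %d\n", intercepted, vgpu_translate(intercepted));
    return 0;
}
```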
Preferably, the step 3 includes:
Step 3.1, the application program discovers the virtual IO devices by loading the operating system configuration information and mounts them for unified operation;
Step 3.2, after the virtual IO devices are mounted and running, the hardware IO calls are automatically overridden;
Step 3.3, the virtual IO calls use a unified driver interface;
Step 3.4, the virtual IO calls are classified and decomposed into different encapsulated virtual task queues according to the IO interface type; the corresponding virtual GPU model is mapped according to the specification of the actual hardware GPU, and the virtual queue setup for that GPU model is completed through configuration so as to adapt to various GPU graphics cards (an illustrative per-model queue configuration sketch follows this list).
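The following sketch shows one way the per-model virtual queue configuration of step 3.4 might be expressed. The model names, queue names, queue counts and ring sizes are illustrative assumptions, not values from the patent.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One virtqueue of a virtual IO device. */
struct virtqueue_cfg {
    const char *name;      /* e.g. a control or cursor queue (assumed names) */
    uint16_t    num_descs; /* ring size */
};

/* A virtual GPU model mapped from the specification of an actual hardware GPU. */
struct vgpu_model_cfg {
    const char          *model;
    int                  num_queues;
    struct virtqueue_cfg queues[4];
};

/* Different device types / GPU models use different numbers of virtqueues. */
static const struct vgpu_model_cfg models[] = {
    { "generic-2d", 1, { { "controlq", 64 } } },
    { "accel-3d",   2, { { "controlq", 256 }, { "cursorq", 16 } } },
};

int main(void)
{
    for (size_t i = 0; i < sizeof models / sizeof models[0]; i++) {
        printf("model %s: %d virtqueue(s)\n", models[i].model, models[i].num_queues);
        for (int q = 0; q < models[i].num_queues; q++)
            printf("  %-8s ring size %u\n",
                   models[i].queues[q].name, models[i].queues[q].num_descs);
    }
    return 0;
}
```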
Preferably, the step 4 includes:
Step 4.1, the virtual IO ring buffer layer is the shared memory used for virtual IO data transmission;
Step 4.2, the IO requests submitted by the front-end device are obtained through the virtual IO ring buffer layer and converted into virtual task queues; that is, the back-end driver data structure is initialized, the IO device information is set and combined into a virtual IO device, the host state is set, the virtual queues are configured and initialized, and each IO device is bound to a virtual queue, a queue processing function and a device processing function that handles its IO requests (a minimal ring-buffer and handler-binding sketch follows this list);
Step 4.3, through the configured hardware acceleration, the virtual task queue calls the local host GPU hardware resources to process the vertex, texture and illumination data of the IO request, including matrix transformation and illumination calculation operation commands, and converts them into a two-dimensional image.
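A minimal, self-contained sketch of a virtual IO ring buffer with a per-device queue-handler binding in the spirit of steps 4.1 and 4.2. The structure layout, field names and ring size are assumptions made for the sketch; a real back end would dispatch the request to the host GPU instead of printing.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8

/* A minimal request slot in the shared ring (fields are assumptions). */
struct io_request {
    uint32_t cmd;         /* packaged GPU command */
    uint32_t payload_len;
    uint8_t  payload[64];
};

struct io_ring {
    struct io_request slots[RING_SIZE];
    uint32_t head;  /* next slot the front end fills    */
    uint32_t tail;  /* next slot the back end consumes  */
};

/* Each virtual IO device binds one queue handler, mirroring step 4.2. */
struct vio_device {
    const char *name;
    struct io_ring ring;
    void (*handle)(struct vio_device *dev, struct io_request *req);
};

/* Back-end handler: a real one would call the host GPU; this one only acknowledges. */
static void gpu_queue_handler(struct vio_device *dev, struct io_request *req)
{
    printf("[%s] processing cmd %u (%u bytes) on host GPU\n",
           dev->name, req->cmd, req->payload_len);
}

static void ring_submit(struct vio_device *dev, uint32_t cmd,
                        const void *data, uint32_t len)
{
    struct io_request *slot = &dev->ring.slots[dev->ring.head % RING_SIZE];
    slot->cmd = cmd;
    slot->payload_len = len;
    memcpy(slot->payload, data, len);
    dev->ring.head++;
}

static void ring_drain(struct vio_device *dev)
{
    while (dev->ring.tail != dev->ring.head) {
        dev->handle(dev, &dev->ring.slots[dev->ring.tail % RING_SIZE]);
        dev->ring.tail++;
    }
}

int main(void)
{
    struct vio_device gpu = { .name = "virtio-gpu-0", .handle = gpu_queue_handler };
    uint8_t vertices[12] = { 0 };
    ring_submit(&gpu, /*cmd=*/1, vertices, sizeof vertices); /* front end submits   */
    ring_drain(&gpu);                                        /* back end processes  */
    return 0;
}
```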
The hardware-accelerated digital GPU simulation system provided by the invention comprises:
The module M1 is used for completing front-end display simulation by using open source software SDL2 at a front-end display layer, and simulating a display and an interface part thereof;
the module M2 is used for modeling to realize a virtual GPU module, and a unified interface is used for realizing interaction between different types of virtual IO devices and a back-end driver;
The module M3 is used for modeling to realize a virtual IO layer, a virtual queue interface is used as a bridge for front-end and back-end communication, and different types of equipment use different numbers of virtual queues;
the module M4 is used for modeling to realize a virtual IO annular cache layer, disassembling GPU operation into different virtual queues, calling local GPU hardware to accelerate to complete operation, and returning an operation result to the virtual IO annular cache layer;
The module M5 is used for returning the local GPU hardware-accelerated operation result to the front-end display layer through the virtual GPU module, completing the display of the GPU operation result.
Preferably, the module M1 comprises:
The module M1.1, in which the front-end display layer uses the open-source software SDL2 to simulate the display screen interface of the display system, including picture display and touch-screen operation; it supports different display frame rate and resolution settings and supports dynamic scaling of the display window;
The module M1.2 is used for completing the mouse click event simulation by capturing the coordinates of a mouse click point and, through a preset command configuration, parsing and converting them into the corresponding function call event;
The module M1.3 exchanges data between the front-end display layer and the virtual GPU module, converts the click operation on the display picture into GPU drawing and rendering operation commands, completes the drawing and rendering using the hardware GPU capability encapsulated by the virtual GPU module, and loads the GPU operation results back to the front-end display layer to refresh the displayed picture;
The module M1.4, in which the display screen uses a Gallium TGSI renderer to accelerate 3D rendering and screen display.
Preferably, the module M2 comprises:
The module M2.1, in which the virtual GPU module intercepts the OpenGL call commands issued by the upper-layer display system application through a virtual driver layer function and converts the called OpenGL function commands into a unified, encapsulated virtual GPU call function interface through a preset command conversion table;
The module M2.2 is used for performing unified interface processing for different GPU models through the virtual GPU module call interface and converting the calls into standard virtual IO calls.
Preferably, the module M3 comprises:
The module M3.1 is used for discovering the virtual IO devices by loading the operating system configuration information and mounting them for unified operation;
The module M3.2 automatically completes the override of hardware IO calls after the virtual IO devices are mounted and running;
The module M3.3, in which the virtual IO calls use a unified driver interface;
The module M3.4 is used for classifying and decomposing the virtual IO calls into different encapsulated virtual task queues according to the IO interface type, mapping the corresponding virtual GPU model according to the specification of the actual hardware GPU, and completing the virtual queue setup for that GPU model through configuration so as to adapt to various GPU graphics cards.
Preferably, the module M4 comprises:
The module M4.1, in which the virtual IO ring buffer layer is the shared memory used for virtual IO data transmission;
The module M4.2 obtains the IO requests submitted by the front-end device through the virtual IO ring buffer layer and converts them into virtual task queues; that is, it initializes the back-end driver data structure, sets the IO device information and combines it into a virtual IO device, sets the host state, configures and initializes the virtual queues, and binds each IO device to a virtual queue, a queue processing function and a device processing function that handles its IO requests;
The module M4.3, in which, through the configured hardware acceleration, the virtual task queue calls the local host GPU hardware resources to process the vertex, texture and illumination data of the IO request, including matrix transformation and illumination calculation operation commands, and converts them into a two-dimensional image.
Compared with the prior art, the invention has the following beneficial effects:
(1) Through the virtualized IO framework, a set of communication framework and programming interfaces between the upper-layer application and virtualized GPUs of various models is provided, reducing the compatibility problems caused by cross-platform use and greatly improving driver development efficiency;
(2) By reducing the consumption of host CPU resources by the digital GPU, the execution efficiency of the host CPU is improved;
(3) Through the host GPU hardware acceleration technology, the computing capability of the digital GPU is improved and the display frame rate and resolution of the digital GPU are guaranteed.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments given with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a hardware-accelerated digital GPU simulation method;
FIG. 2 is a schematic diagram of a hardware-accelerated digital GPU simulation flow.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
Example 1
The invention provides a hardware-accelerated digital GPU simulation method, which comprises the following steps:
Step 1, the front-end display layer uses SDL2 to complete the front-end display simulation, simulating a display and its interface;
Step 2, a virtual GPU module is realized by modeling, with the various types of virtual IO devices interacting with the back-end driver through a unified interface;
Step 3, a virtual IO layer and a virtual queue interface are realized by modeling, the virtual queue interface serving as the bridge for front-end/back-end communication, with different types of devices using different numbers of virtual queues;
Step 4, a virtual IO ring buffer layer is realized by modeling; GPU operations are decomposed into different virtual queues, the local GPU is called to complete the operations, and the operation results are returned to the virtual IO ring buffer layer;
Step 5, the local GPU hardware-accelerated operation results are returned to the front-end display layer through the virtual GPU, completing the display of the GPU operation results.
The step 1 comprises the following steps:
Step 1.1, the front-end display layer uses SDL2 to simulate the display screen interface of the display system, including picture display and touch-screen operation; it supports different display frame rate and resolution settings and supports dynamic scaling of the display window;
Step 1.2, the coordinates of a mouse click point are captured and, through a preset command configuration, parsed and converted into the corresponding function call event, completing the mouse click event simulation;
Step 1.3, the front-end display layer exchanges data with the virtual GPU module; the click operation on the display picture is converted into GPU drawing and rendering operation commands, the drawing and rendering are completed using the hardware GPU capability encapsulated by the virtual GPU module, and the GPU operation results are loaded back to the front-end display layer to refresh the displayed picture;
Step 1.4, the display screen uses a Gallium TGSI renderer to accelerate 3D rendering and screen display.
The step 2 comprises the following steps:
Step 2.1, the virtual GPU module intercepts the OpenGL call commands issued by the upper-layer display system application through a virtual driver layer function, and converts the called OpenGL function commands into a unified, encapsulated virtual GPU call function interface through a configured command conversion table;
Step 2.2, the virtual GPU call interface performs unified interface processing for different GPU models and converts the calls into standard virtual IO calls.
The step 3 comprises the following steps:
Step 3.1, the application program discovers the virtual IO devices by loading the operating system configuration information and mounts them for unified operation;
Step 3.2, after the virtual IO devices are mounted and running, the hardware IO calls are automatically overridden;
Step 3.3, the virtual IO calls use a unified driver interface, reducing the compatibility problems caused by cross-platform use;
Step 3.4, the virtual IO calls are classified and decomposed into different encapsulated virtual task queues according to the IO interface type; the corresponding virtual GPU model is mapped according to the specification of the actual hardware GPU, and various GPU graphics cards are adapted through the configured virtual queue setup for that GPU model.
The step 4 comprises the following steps:
Step 4.1, the virtual IO ring buffer layer is the shared memory used for virtual IO data transmission;
Step 4.2, the IO requests submitted by the front-end device are obtained through the virtual IO ring buffer layer and converted into virtual task queues; that is, the back-end driver data structure is initialized, the IO device information is set and combined into a virtual IO device, the host state is set, the virtual queues are configured and initialized, and each IO device is bound to a virtual queue, a queue processing function and a device processing function that handles its IO requests;
Step 4.3, through the configured hardware acceleration, the virtual task queue calls the local host GPU hardware resources to process the vertex, texture and illumination data of the IO request, including matrix transformation and illumination calculation operation commands, and converts them into a two-dimensional image.
The step 5 comprises the following steps:
Step 5.1, after the virtual task queue completes the GPU operation, the result is returned to the virtual IO ring buffer;
Step 5.2, the virtual IO call reads the GPU operation result from the virtual IO ring buffer and returns it to the virtual GPU;
Step 5.3, the virtual GPU returns the operation result to the front-end display layer for use (a minimal readback-and-display sketch follows this list).
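As an illustrative sketch of steps 5.1 to 5.3, the fragment below stands in for reading a completed frame back and presenting it through the SDL2 front end. The frame size, pixel format and solid-colour stand-in data are assumptions; in the described flow the pixel data would come from the virtual IO ring buffer rather than being filled locally.

```c
#include <SDL2/SDL.h>

int main(void)
{
    const int W = 320, H = 240;
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window *win = SDL_CreateWindow("GPU result", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);

    /* Stand-in for the frame returned through the virtual IO ring buffer. */
    static Uint32 frame[320 * 240];
    for (int i = 0; i < W * H; i++) frame[i] = 0x3366CCFF;

    SDL_UpdateTexture(tex, NULL, frame, W * (int)sizeof(Uint32)); /* step 5.2: read result   */
    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);                                       /* step 5.3: refresh display */
    SDL_Delay(2000);

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```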
Example 2
The invention also provides a hardware-accelerated digital GPU simulation system, which can be realized by executing the flow steps of the hardware-accelerated digital GPU simulation method; that is, those skilled in the art can understand the hardware-accelerated digital GPU simulation method as a preferred implementation of the hardware-accelerated digital GPU simulation system.
The hardware-accelerated digital GPU simulation system comprises: the module M1, which completes front-end display simulation at the front-end display layer using the open-source software SDL2, simulating a display and its interface; the module M2, used for modeling to realize a virtual GPU module, in which different types of virtual IO devices interact with the back-end driver through a unified interface; the module M3, used for modeling to realize a virtual IO layer, in which a virtual queue interface serves as the bridge for front-end/back-end communication and different types of devices use different numbers of virtual queues; the module M4, used for modeling to realize a virtual IO ring buffer layer, decomposing GPU operations into different virtual queues, calling local GPU hardware acceleration to complete the operations, and returning the operation results to the virtual IO ring buffer layer; and the module M5, which returns the local GPU hardware-accelerated operation results to the front-end display layer through the virtual GPU module, completing the display of the GPU operation results.
The module M1 comprises: the module M1.1, in which the front-end display layer uses the open-source software SDL2 to simulate the display screen interface of the display system, including picture display and touch-screen operation, supporting different display frame rate and resolution settings and dynamic scaling of the display window; the module M1.2, used for completing the mouse click event simulation by capturing the coordinates of a mouse click point and, through a preset command configuration, parsing and converting them into the corresponding function call event; the module M1.3, which exchanges data between the front-end display layer and the virtual GPU module, converts the click operation on the display picture into GPU drawing and rendering operation commands, completes the drawing and rendering using the hardware GPU capability encapsulated by the virtual GPU module, and refreshes the displayed picture with the GPU operation results; and the module M1.4, which uses a Gallium TGSI renderer to accelerate 3D rendering and picture display.
The module M2 comprises: the module M2.1, in which the virtual GPU module intercepts the OpenGL call commands issued by the upper-layer display system application through a virtual driver layer function and converts the called OpenGL function commands into a unified, encapsulated virtual GPU call function interface through a preset command conversion table; and the module M2.2, in which the virtual GPU module call interface performs unified interface processing for different GPU models and converts the calls into standard virtual IO calls.
The module M3 comprises: the module M3.1, in which the application program discovers the virtual IO devices by loading the operating system configuration information and mounts them for unified operation; the module M3.2, which automatically completes the override of hardware IO calls after the virtual IO devices are mounted and running; the module M3.3, in which the virtual IO calls use a unified driver interface; and the module M3.4, in which the virtual IO calls are classified and decomposed into different encapsulated virtual task queues according to the IO interface type, the corresponding virtual GPU model is mapped according to the specification of the actual hardware GPU, and the virtual queue setup for that GPU model is completed through configuration so as to adapt to various GPU graphics cards.
The module M4 comprises: the module M4.1, in which the virtual IO ring buffer layer is the shared memory used for virtual IO data transmission; the module M4.2, which obtains the IO requests submitted by the front-end device through the virtual IO ring buffer layer and converts them into virtual task queues, that is, initializes the back-end driver data structure, sets the IO device information and combines it into a virtual IO device, sets the host state, configures and initializes the virtual queues, and binds each IO device to a virtual queue, a queue processing function and a device processing function that handles its IO requests; and the module M4.3, in which, through the configured hardware acceleration, the virtual task queue calls the local host GPU hardware resources to process the vertex, texture and illumination data of the IO request, including matrix transformation and illumination calculation operation commands, and converts them into a two-dimensional image.
Those skilled in the art will appreciate that, in addition to being implemented as pure computer-readable program code, the system, the apparatus and their respective modules provided by the invention may be implemented entirely by logically programming the method steps, so that they take the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the apparatus and their respective modules can be regarded as a hardware component; the modules within it for realizing various programs can be regarded as structures within that hardware component, and the modules for realizing various functions can be regarded both as software programs implementing the method and as structures within the hardware component.
The foregoing describes specific embodiments of the present application. It is to be understood that the application is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the application. The embodiments of the application and the features of the embodiments may be combined with each other arbitrarily without conflict.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411354686.4A CN118885301B (en) | 2024-09-27 | 2024-09-27 | Hardware-accelerated digital GPU simulation method and system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118885301A CN118885301A (en) | 2024-11-01 |
| CN118885301B true CN118885301B (en) | 2025-01-14 |
Family
ID=93233703
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411354686.4A Active CN118885301B (en) | 2024-09-27 | 2024-09-27 | Hardware-accelerated digital GPU simulation method and system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118885301B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116010037A (en) * | 2023-02-10 | 2023-04-25 | 成都迪捷数原科技有限公司 | GPU simulation method and system based on virtual simulation platform |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110102443A1 (en) * | 2009-11-04 | 2011-05-05 | Microsoft Corporation | Virtualized GPU in a Virtual Machine Environment |
| US10310879B2 (en) * | 2011-10-10 | 2019-06-04 | Nvidia Corporation | Paravirtualized virtual GPU |
| US9099051B2 (en) * | 2012-03-02 | 2015-08-04 | Ati Technologies Ulc | GPU display abstraction and emulation in a virtualization system |
| US10417023B2 (en) * | 2016-10-31 | 2019-09-17 | Massclouds Innovation Research Institute (Beijing) Of Information Technology | GPU simulation method |
| US11720408B2 (en) * | 2018-05-08 | 2023-08-08 | Vmware, Inc. | Method and system for assigning a virtual machine in virtual GPU enabled systems |
| CN116308990A (en) * | 2022-12-15 | 2023-06-23 | 上海创景信息科技有限公司 | Simulation and integration method and system of GPU supporting OpenGL |
| CN116578416B (en) * | 2023-04-26 | 2024-07-30 | 中国人民解放军92942部队 | Signal-level simulation acceleration method based on GPU virtualization |
- 2024-09-27: CN application CN202411354686.4A — patent CN118885301B (active)
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |