
CN114095655B - Method and device for displaying streaming data - Google Patents

Method and device for displaying streaming data

Info

Publication number
CN114095655B
Authority
CN
China
Prior art keywords
data
eye
texture data
texture
streaming
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111359884.6A
Other languages
Chinese (zh)
Other versions
CN114095655A (en)
Inventor
王智利
于全夫
郝冬宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202111359884.6A priority Critical patent/CN114095655B/en
Publication of CN114095655A publication Critical patent/CN114095655A/en
Application granted granted Critical
Publication of CN114095655B publication Critical patent/CN114095655B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/2627 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for providing spin image effect, 3D stop motion effect or temporal freeze effect

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The application relates to the technical field of display, and provides a method and a device for displaying streaming data. When the first frame of streaming data sent by an external device is received, the type of the streaming data is determined once, so no judgment is needed for each subsequent frame and computing resources are saved. Texture data is obtained by frame-decoding the streaming data and copied into a rendering pipeline of the GPU for rendering and display, which makes full use of the parallel processing capability of the GPU and occupies no CPU resources. During rendering, for each type of streaming data a matching extraction mode is used: the left-eye texture data and the right-eye texture data are each extracted from the texture data in a single pass, and based on the extracted data the left-eye and right-eye pictures are drawn respectively and displayed simultaneously, which improves the rendering capability and display efficiency of the VR device and further improves the user's immersive experience.

Description

Method and device for displaying streaming data
Technical Field
The present application relates to the field of display technologies, and in particular, to a method and apparatus for displaying streaming data.
Background
With the development of Virtual Reality (VR) technology, immersive experiences are gradually spreading across many industries of modern life, such as live broadcasting and gaming. VR streaming is a high-frequency usage scenario in the VR domain, and how streaming data is rendered and displayed directly affects the user's immersive experience.
Taking a game scenario as an example, after the VR device receives game data from the external device, it renders the data to the left-eye and right-eye display screens; in the game experience, the higher the rendering efficiency, the more timely and accurate the player's competitive actions.
Therefore, improving the rendering and display efficiency of streaming data is of great significance for the immersive experience of VR devices.
Disclosure of Invention
The embodiment of the application provides a method and equipment for displaying streaming data, which are used for improving the rendering and displaying efficiency of VR equipment on the streaming data.
In a first aspect, an embodiment of the present application provides a method for displaying streaming data, which is applied to a VR device, and includes:
when receiving first frame streaming data sent by external equipment, determining the type of the streaming data;
For each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
Respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
And respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture.
In a second aspect, an embodiment of the present application provides a VR device, including a processor, a memory, a display, and an external communication interface, where the external communication interface, the display, and the memory are connected to the processor through a bus;
The memory stores computer program instructions, and the processor performs the following operations in accordance with the computer program instructions:
Determining the type of the streaming data when receiving the first frame of streaming data sent by the external device through the external communication interface;
For each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
Respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
And respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture by the display.
In a third aspect, an embodiment of the present application provides a computer readable storage medium, where computer executable instructions are stored, and the computer executable instructions are configured to cause a VR device to perform the method for displaying streaming data according to the embodiments of the present application.
In the above embodiments of the present application, the VR device determines the type of the streaming data from the first frame, so no judgment is needed for each frame and computing resources are saved. After frame-decoding the streaming data sent by the external device, the VR device obtains texture data and, in a mode matching the type of the streaming data, extracts the left-eye texture data and the right-eye texture data from it, then draws the left-eye and right-eye pictures based on the extracted data and displays them simultaneously. Because the left-eye and right-eye texture data are extracted in a single pass from the texture data obtained once, the rendering capability of the VR device is improved, streaming data is rendered and displayed more efficiently, and the user's immersive experience is further improved.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the application, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a conventional binocular data display process;
FIG. 3 is a schematic diagram illustrating a process of displaying binocular data according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a process of displaying monocular data according to an embodiment of the present application;
Fig. 5 is a flowchart illustrating a method for displaying streaming data according to an embodiment of the present application;
Fig. 6 is a flowchart illustrating a method for displaying streaming data in an interaction process between a VR device and an external device according to an embodiment of the present application;
Fig. 7 illustrates a block diagram of a VR device provided by an embodiment of the present application.
Detailed Description
For the purposes of making the objects, embodiments and advantages of the present application more apparent, an exemplary embodiment of the present application will be described more fully hereinafter with reference to the accompanying drawings in which exemplary embodiments of the application are shown, it being understood that the exemplary embodiments described are merely some, but not all, of the examples of the application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the appended claims. Furthermore, while the present disclosure has been described in terms of an exemplary embodiment or embodiments, it should be understood that each aspect of the disclosure can be practiced separately from the other aspects.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application. As shown in fig. 1, the external device 100 interacts with a VR device 300 through a network 200. The external device 100 obtains the streaming data and sends it into the network 200, where it is encoded and transmitted to the VR device 300. The VR device 300 decodes the received streaming data, and renders and displays the left-eye and right-eye pictures based on the decoded data.
In the embodiment of the present application, a specially developed streaming application is installed on the VR device, which can receive and display the streaming data sent by the external device only while the streaming application is running. The external device is provided with a streaming assistant, which captures the real-time picture of the external device, encodes it into streaming data, and sends the streaming data to the VR device.
In the VR scenario shown in fig. 1, streaming data mainly originates from external devices such as a desktop computer 101, a personal computer (Personal Computer, PC) 102, and a Television (TV) 103. When the streaming data comes from a picture displayed on the TV, it is monocular data: the frame picture is not divided into left and right eyes, and during rendering and display the VR device obtains the texture data of both eyes from the monocular data. When the streaming data originates from a picture displayed on the computer, it is binocular data: there is a slight difference between the left-eye and right-eye data, and the two are carried separately.
For binocular streaming data, there are typically two ways to render and display the left-eye and right-eye pictures.
Mode one
After the external device encodes the frame picture, the left-eye and right-eye data are transmitted separately. After the VR device receives the two data streams, it decodes them separately, then renders and displays based on the decoded data. With this scheme, the transmission and decoding of the binocular streaming data are performed separately, so the time required is doubled, and the left-eye and right-eye data must also be kept synchronized, which is complicated.
Mode two
The external device encodes the binocular streaming data into a single frame picture and transmits it to the VR device. After decoding the frame picture, the VR device manually splits the left-eye and right-eye data, copies them into a graphics processor (Graphics Processing Unit, GPU) in two separate operations, and then renders and displays the left-eye and right-eye pictures respectively, as shown in fig. 2. This scheme requires considerable effort, cannot be fully automated, needs two copy operations, and renders inefficiently.
In view of this, the embodiments of the present application provide a method and a device for displaying streaming data. First, the type of the streaming data is determined, and the decoded whole frame of streaming data is copied directly to the rendering pipeline for rendering, which occupies no CPU (Central Processing Unit) resources and lets the fragment shader in the rendering pipeline make full use of the GPU's rendering speed. Moreover, the left-eye and right-eye texture data share one frame of streaming data, so with a single copy they can be extracted directly on the GPU to draw the left-eye and right-eye pictures respectively, which improves the rendering and display efficiency of the VR device and further improves the user's immersive experience. Without considering network transmission delay, the method of the embodiments of the present application draws each frame within the 16 ms frame budget of a 60 Hz refresh rate.
It should be noted that the present application does not impose any limitation on the programming language used by the shader, which may include the shader language of the Open Graphics Library (OpenGL), the High Level Shader Language (HLSL), the C for Graphics (Cg) shader language, and the shader language of Unity 3D.
In the embodiment of the present application, the type of the streaming data may be binocular data or monocular data.
Taking binocular streaming data as an example, fig. 3 schematically illustrates the display process of the streaming data. As shown in fig. 3, after the VR device receives the streaming data, it copies the data into the GPU; then, in the rendering pipeline of the GPU, the left-half texture data and the right-half texture data are extracted respectively, the left-eye picture is drawn from the left-half texture data, the right-eye picture is drawn from the right-half texture data, and the two pictures are displayed simultaneously, presenting a stereoscopic visual effect to the user.
Taking monocular streaming data as an example, fig. 4 schematically illustrates the display process of the streaming data. As shown in fig. 4, after the VR device receives the streaming data, it copies the data into the GPU; then, in the rendering pipeline of the GPU, the texture data obtained from the monocular data is used as both the left-eye texture data and the right-eye texture data, the left-eye picture is drawn from the left-eye texture data, the right-eye picture is drawn from the right-eye texture data, and the two pictures are displayed simultaneously.
In the embodiment of the present application, referring to fig. 5, the method for displaying streaming data is executed by the VR device and mainly includes the following steps:
S501: and when receiving the first frame of streaming data sent by the external equipment, determining the type of the streaming data.
In the embodiment of the present application, a streaming application is installed in the VR device, and can receive streaming data sent by an external device, where the type of the streaming data includes monocular data and binocular data.
When S501 is executed, the streaming assistant of the external device captures its own display picture in real time and sends the captured picture into the network; the network encodes each frame of picture sent by the external device and transmits it to the VR device. The VR device receives the streaming data transmitted by the external device through the streaming application. Within one display process, all streaming data of the same video has the same type, so after receiving the first frame of streaming data the VR device determines the type of the transmitted streaming data and marks it; no judgment is needed when subsequent streaming data is received, which saves CPU resources, reduces the logical operations of the CPU, and increases the rendering and display speed.
The present embodiment does not impose any limitation on how the streaming data type is represented; for example, if the type is determined to be monocular data it is identified as "0", and if the type is determined to be binocular data it is identified as "1".
S502: and for each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data.
In the embodiment of the present application, the VR device decodes each frame of streaming data in a decoding mode matching the encoding format of the streaming data, and copies each decoded frame in its entirety to the GPU, so that rendering and display are performed in the rendering pipeline of the GPU. This makes full use of the rendering advantages of the pipeline, increases the rendering and display speed, and occupies no CPU resources.
In S502, after copying each frame of streaming data to the GPU, the rendering pipeline may obtain texture data of the display screen.
S503: and respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data.
When the type of the streaming data is binocular data, in S503, after the texture data of the display picture is obtained, a first portion of the texture data (e.g., the left half of the texture data in fig. 3) is used as the left-eye texture data, and a second portion of the texture data (e.g., the right half of the texture data in fig. 3) is used as the right-eye texture data. The sampling formula of the texture data is:
texture(V_tex, Vec2(V_texPo.x * 0.5 + 0.5 * isLeft, V_texPo.y))    (Formula 1)
Wherein V_tex represents the acquired texture data, texture(·) represents the function that samples texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) yields the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) yields the extracted left-eye texture data.
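As a concrete illustration, the following is a minimal GLSL fragment shader sketch of Formula 1, assuming a side-by-side frame layout and that the host program sets the isLeft uniform per eye pass as defined above; the names mirror the symbols of Formula 1 but are otherwise illustrative rather than the patent's actual source code.

    #version 300 es
    precision mediump float;

    // Full decoded frame containing both eye views (V_tex in Formula 1).
    uniform sampler2D V_tex;
    // 1.0 when drawing the left-eye picture, 0.0 when drawing the right-eye picture.
    uniform float isLeft;

    // Interpolated texture coordinates from the vertex shader (V_texPo in Formula 1).
    in vec2 V_texPo;
    out vec4 fragColor;

    void main() {
        // Formula 1: compress the s component into one half of the frame and
        // shift it by 0.5 according to which eye is being drawn.
        fragColor = texture(V_tex, vec2(V_texPo.x * 0.5 + 0.5 * isLeft, V_texPo.y));
    }

In this way the same shader serves both eye passes, and selecting an eye costs only one uniform update rather than a second copy of the frame.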
When the type of the streaming data is monocular data, the left and right eyes display the same picture, and the texture data of both eyes is obtained from the monocular data; that is, in S503, the obtained texture data is used as the left-eye texture data and as the right-eye texture data (as shown in fig. 4).
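Under the same assumptions as the sketch above, the monocular case reduces to sampling the full frame unchanged in both eye passes, so the main function of the hypothetical fragment shader becomes:

    void main() {
        // Monocular data: both eye passes sample the whole frame, so the same
        // texture data serves as the left-eye and right-eye texture data.
        fragColor = texture(V_tex, V_texPo);
    }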
S504: and respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture.
Generally, the left-eye and right-eye pictures of a VR device are displayed on separate screens, so the parallel processing capability of the GPU can be fully utilized for rendering and display. When S504 is executed, the vertex shader of the rendering pipeline generates the vertices, and the vertices are rasterized to generate the fragments; in the fragment shader (also referred to as a pixel shader), the color value of each fragment to be rendered is obtained from the left-eye texture data according to the position information of that fragment in the left display screen, and the left-eye picture is rendered from those color values; at the same time, the color value of each fragment to be rendered is obtained from the right-eye texture data according to the position information of that fragment in the right display screen, and the right-eye picture is rendered from those color values.
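For completeness, a minimal vertex shader that could feed the fragment shader sketched above might look as follows; the attribute and uniform names (a_position, a_texCoord, u_mvp) are assumptions made for illustration and do not come from the patent.

    #version 300 es

    // Quad vertex attributes (names illustrative).
    in vec3 a_position;
    in vec2 a_texCoord;

    // Passed through rasterization to the fragment shader as V_texPo.
    out vec2 V_texPo;

    // Per-eye transform supplied by the VR runtime (assumption).
    uniform mat4 u_mvp;

    void main() {
        V_texPo = a_texCoord;
        gl_Position = u_mvp * vec4(a_position, 1.0);
    }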
In S504, after the left-eye and right-eye pictures are drawn, they are displayed synchronously on the left and right screens of the VR device; the wearer of the VR device then sees a stereoscopic display picture, which improves the user's immersive experience.
Taking a game scenario as an example, the complete flow of streaming data display is described below from the perspective of the interaction between the VR device and the external device; as shown in fig. 6, it mainly includes the following steps:
S601: A streaming assistant in the external device captures the display picture of the running VR game to obtain streaming data.
In S601, after the external device runs the VR game, the streaming assistant captures the display picture in real time and sends each captured frame of picture into the network; the network encodes the frame and transmits it to the VR device.
S602: the VR device receives and decodes the streaming data.
In S602, the VR device starts a streaming application to receive streaming data sent by an external device, and decodes the received streaming data.
S603: the VR device determines whether the received streaming data is the first frame streaming data, if so, S604 is executed, otherwise S606 is executed.
Since all streaming data of the same video has the same type, in S603 the type determination is performed only on the first frame of streaming data, which reduces the logical judgment performed by the CPU and increases the rendering and display speed.
S604: The VR device determines the type of the streaming data according to the first frame of streaming data.
In S604, the types of the streaming data include monocular data and binocular data.
S605: The VR device marks and records the streaming data according to the determined type.
In an alternative embodiment, monocular data is identified with a "0" and binocular data is identified with a "1".
S606: and the VR device determines whether the streaming data is binocular data according to the type identification of the streaming data, if so, the VR device executes S607, otherwise, the VR device executes S608.
S607: the VR device uses a first portion of the texture data as left-eye texture data and a second portion of the texture data as right-eye texture data.
For a detailed description of this step, refer to S503; it is not repeated here.
S608: the VR device takes the texture data as left eye texture data and right eye texture data, respectively.
For a detailed description of this step, refer to S503; it is not repeated here.
S609: and respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture.
For a detailed description of this step, refer to S504; it is not repeated here.
In the embodiment of the present application, the type of the streaming data is determined and marked from the first frame, and no type judgment is needed for each frame, which reduces the logical computation of the CPU and increases the computation speed. After the VR device receives the streaming data transmitted by the external device, it decodes the data to obtain the texture data of the display picture, copies the texture data into the rendering pipeline of the GPU, and lets the rendering pipeline render and display it, which saves CPU resources, makes full use of the parallel processing capability of the GPU, and increases the rendering and display speed. During rendering and display, for each type of streaming data the rendering pipeline extracts the left-eye and right-eye texture data from the texture data in a matching mode, draws the left-eye and right-eye pictures based on the extracted data, and displays them simultaneously. Since the left-eye and right-eye texture data are both obtained from texture data copied once, the rendering capability of the VR device is improved, streaming data is rendered and displayed more efficiently, and the user's immersive experience is further improved.
Based on the same technical concept, the embodiments of the present application provide a VR device, which can execute the flow of the streaming data display method provided by the embodiments of the present application, and achieve the same technical effects, which are not repeated here.
Referring to fig. 7, the device includes a processor 701, a memory 702, a display 703, and an external communication interface 704; the external communication interface 704, the display 703, and the memory 702 are connected to the processor 701 through a bus 705. The memory 702 stores computer program instructions, according to which the processor 701 performs the following operations:
determining the type of the streaming data when the first frame of streaming data sent by the external device is received through the external communication interface 704;
for each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data;
And respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture by a display.
Optionally, when the type of the streaming data is binocular data, the processor 701 obtains left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, and specifically includes:
Acquiring first partial texture data from the acquired texture data, and using the first partial texture data as the left-eye texture data; and
Acquiring second partial texture data from the acquired texture data, and using the second partial texture data as the right-eye texture data.
Optionally, the sampling formulas of the left eye texture data and the right eye texture data are:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
Wherein V_tex represents the acquired texture data, texture(·) represents the function that samples texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: when isLeft is 0, texture(V_tex, Vec2(·)) yields the extracted right-eye texture data, and when isLeft is 1, texture(V_tex, Vec2(·)) yields the extracted left-eye texture data.
Optionally, when the type of the streaming data is monocular data, the processor 701 obtains left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, which specifically includes:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
Optionally, the processor 701 renders a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, with the following specific operations:
obtaining the color values of the fragments to be rendered from the left-eye texture data according to the position information of the fragments to be rendered in the left display screen, and rendering the left-eye picture according to those color values; and
obtaining the color values of the fragments to be rendered from the right-eye texture data according to the position information of the fragments to be rendered in the right display screen, and rendering the right-eye picture according to those color values.
Embodiments of the present application also provide a computer readable storage medium storing instructions that, when executed, perform the method of the previous embodiments.
The embodiment of the application also provides a computer program product for storing a computer program for executing the method of the previous embodiment.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. The illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (6)

1. A method for displaying streaming data, applied to a virtual reality VR device, comprising:
when receiving first frame streaming data sent by external equipment, determining the type of the streaming data;
For each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
When the type of the streaming data is binocular data, acquiring a first part of texture data from the acquired texture data, and taking the first part of texture data as left eye texture data; obtaining second partial texture data from the obtained texture data, and taking the second partial texture data as right eye texture data; the sampling formulas of the left eye texture data and the right eye texture data are as follows:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
Wherein V_tex represents the acquired texture data, texture(·) represents the function that samples texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: texture(V_tex, Vec2(·)) represents the extracted right-eye texture data when isLeft is 0, and the extracted left-eye texture data when isLeft is 1;
And respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture.
2. The method of claim 1, wherein when the type of the streaming data is monocular data, the respectively acquiring left-eye texture data and right-eye texture data from the acquired texture data according to the type of the streaming data comprises:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
3. The method of any of claims 1-2, wherein the rendering a left-eye picture and a right-eye picture from the left-eye texture data and the right-eye texture data, respectively, comprises:
obtaining the color value of the fragment to be rendered from the left eye texture data according to the position information of the fragment to be rendered in the left display screen, and rendering a left eye picture according to the color value of the fragment to be rendered; and
And according to the position information of the to-be-rendered fragment in the right display screen, acquiring the color value of the to-be-rendered fragment from the right eye texture data, and rendering a right eye picture according to the color value of the to-be-rendered fragment.
4. The virtual reality VR device is characterized by comprising a processor, a memory, a display and an external communication interface, wherein the external communication interface, the display and the memory are connected with the processor through a bus;
The memory stores computer program instructions, and the processor performs the following operations in accordance with the computer program instructions:
Determining the type of the streaming data when receiving the first frame of streaming data sent by the external device through the external communication interface;
For each received frame of streaming data, performing frame decoding on the streaming data, and acquiring texture data from the decoded streaming data;
When the type of the streaming data is binocular data, acquiring a first part of texture data from the acquired texture data, and taking the first part of texture data as left eye texture data; obtaining second partial texture data from the obtained texture data, and taking the second partial texture data as right eye texture data; the sampling formulas of the left eye texture data and the right eye texture data are as follows:
texture(V_tex,Vec2(V_texPo.x*0.5+0.5*isLeft,V_texPo.y))
Wherein V_tex represents the acquired texture data, texture(·) represents the function that samples texture data from V_tex, Vec2(·) represents the input texture coordinates, V_texPo.x represents the s component of the texture coordinates, V_texPo.y represents the t component of the texture coordinates, and isLeft takes the value 0 or 1: texture(V_tex, Vec2(·)) represents the extracted right-eye texture data when isLeft is 0, and the extracted left-eye texture data when isLeft is 1;
And respectively rendering a left-eye picture and a right-eye picture according to the left-eye texture data and the right-eye texture data, and simultaneously displaying the left-eye picture and the right-eye picture by the display.
5. The VR device of claim 4, wherein when the type of the streaming data is monocular data, the processor is configured to obtain left-eye texture data and right-eye texture data from the obtained texture data according to the type of the streaming data, respectively, by:
the acquired texture data is taken as left-eye texture data, and the acquired texture data is taken as right-eye texture data.
6. The VR device of any one of claims 4-5, wherein the processor is to render a left-eye picture and a right-eye picture from the left-eye texture data and the right-eye texture data, respectively, with the operations of:
obtaining the color value of the fragment to be rendered from the left eye texture data according to the position information of the fragment to be rendered in the left display screen, and rendering a left eye picture according to the color value of the fragment to be rendered; and
And according to the position information of the to-be-rendered fragment in the right display screen, acquiring the color value of the to-be-rendered fragment from the right eye texture data, and rendering a right eye picture according to the color value of the to-be-rendered fragment.
CN202111359884.6A 2021-11-17 2021-11-17 Method and device for displaying streaming data Active CN114095655B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111359884.6A CN114095655B (en) 2021-11-17 2021-11-17 Method and device for displaying streaming data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111359884.6A CN114095655B (en) 2021-11-17 2021-11-17 Method and device for displaying streaming data

Publications (2)

Publication Number Publication Date
CN114095655A CN114095655A (en) 2022-02-25
CN114095655B (en) 2024-08-13

Family

ID=80301175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111359884.6A Active CN114095655B (en) 2021-11-17 2021-11-17 Method and device for displaying streaming data

Country Status (1)

Country Link
CN (1) CN114095655B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109510975A (en) * 2019-01-21 2019-03-22 恒信东方文化股份有限公司 A kind of extracting method of video image, equipment and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7075541B2 (en) * 2003-08-18 2006-07-11 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
CN105916022A (en) * 2015-12-28 2016-08-31 乐视致新电子科技(天津)有限公司 Video image processing method and apparatus based on virtual reality technology
CN106162142A (en) * 2016-06-15 2016-11-23 南京快脚兽软件科技有限公司 A kind of efficient VR scene drawing method
CN108241211B (en) * 2016-12-26 2020-09-15 成都理想境界科技有限公司 Head-mounted display device and image rendering method
CN108282648B (en) * 2018-02-05 2020-11-03 北京搜狐新媒体信息技术有限公司 A VR rendering method, device, wearable device and readable storage medium
CN111988598B (en) * 2020-09-09 2022-06-21 江苏普旭科技股份有限公司 Visual image generation method based on far and near view layered rendering

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109510975A (en) * 2019-01-21 2019-03-22 恒信东方文化股份有限公司 A kind of extracting method of video image, equipment and system

Also Published As

Publication number Publication date
CN114095655A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN111033570B (en) Rendering images from computer graphics using two rendering computing devices
US12299826B2 (en) Multi-layer reprojection techniques for augmented reality
US10334238B2 (en) Method and system for real-time rendering displaying high resolution virtual reality (VR) video
US10536709B2 (en) Prioritized compression for video
CN102902502B (en) Display system and display method suitable for display wall
CN112565653B (en) Video frame insertion method, system, electronic equipment and storage medium
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
US11468629B2 (en) Methods and apparatus for handling occlusions in split rendering
WO2012094076A9 (en) Morphological anti-aliasing (mlaa) of a re-projection of a two-dimensional image
KR20150003406A (en) Moving image distribution server, moving image reproduction apparatus, control method, recording medium, and moving image distribution system
CN114419234A (en) Three-dimensional scene rendering method and device, electronic equipment and storage medium
US20230147244A1 (en) Methods and apparatus for occlusion handling techniques
Li et al. Enhancing 3d applications using stereoscopic 3d and motion parallax
CN119563152A (en) Post-process occlusion-based rendering for extended reality (XR)
US10237563B2 (en) System and method for controlling video encoding using content information
US20120176367A1 (en) Morphological anti-aliasing (mlaa) of a re-projection of a two-dimensional image
US20210312704A1 (en) Rendering using shadow information
WO2020193703A1 (en) Techniques for detection of real-time occlusion
CN114095655B (en) Method and device for displaying streaming data
US6559844B1 (en) Method and apparatus for generating multiple views using a graphics engine
CN118573826A (en) Information display method and device, computer readable storage medium and electronic equipment
CN113691835B Video implantation method, apparatus, device, and computer-readable storage medium
CN105872540A (en) Video processing method and device
US20250078410A1 (en) Mesh stitching for motion estimation and depth from stereo
CN104837058A (en) Method and system for improving video image quality displayed by cloud terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant