
WO2018135730A1 - VR image generation method, VR image processing method, and VR image processing system - Google Patents

VR image generation method, VR image processing method, and VR image processing system

Info

Publication number
WO2018135730A1
WO2018135730A1 (PCT/KR2017/012090, KR2017012090W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
user
specified
output information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/012090
Other languages
English (en)
Korean (ko)
Inventor
고범준
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of WO2018135730A1 publication Critical patent/WO2018135730A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Definitions

  • the present invention relates to a VR image generation method, a VR image processing method, and a VR image processing system based on a 3D coordinate system.
  • A 360-degree panoramic video, one type of VR (virtual reality) video, allows the user to view the video by selecting the direction or point that the user wants to view within the entire video.
  • Such VR images can be viewed through a video display device such as a head mount display (HMD).
  • The VR image generating method includes: matching the VR image to a three-dimensional coordinate system; specifying at least one object in the image based on the matched three-dimensional coordinate system while driving and controlling the VR image; setting object information including location information based on coordinate values, expression time information, and output information for the specified object; and generating VR image data to which the object information is added.
  • The specifying of the object may include specifying an area formed from at least one specific coordinate value selected by the user during the time that the object is expressed in the VR image, or specifying a region formed based on an angle value having directionality with respect to the origin on the 3D coordinate system.
  • the output information may include at least one of caption information, voice information, and additional information.
  • the additional information may include at least one of tag information, description information about an object, object related image information, object related advertisement information, and link information.
  • the specified area may be moved along the object.
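  • As a rough illustration only, the object information described above might be held in a structure like the following Python sketch; the names (ObjectInfo, OutputInfo, region, start_time, and so on) are hypothetical and are not prescribed by the present description.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# (x, y, z) on the matched 3D coordinate system
Coordinate = Tuple[float, float, float]

@dataclass
class OutputInfo:
    caption: Optional[str] = None             # caption information
    voice_clip: Optional[str] = None          # additional voice information (path/URI)
    tags: List[str] = field(default_factory=list)   # tag information
    description: Optional[str] = None         # description information about the object
    related_video_url: Optional[str] = None   # object-related image (video) information
    ad_url: Optional[str] = None              # object-related advertisement information
    link_url: Optional[str] = None            # link information (e.g. an SNS page)

@dataclass
class ObjectInfo:
    object_id: str
    region: List[Coordinate]   # two or more points defining the specified area
    start_time: float          # expression start time, in seconds
    end_time: float            # expression end time, in seconds
    output: OutputInfo = field(default_factory=OutputInfo)
```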
  • The VR image processing method includes: driving the VR image matched to the three-dimensional coordinate system; detecting the line of sight of the user based on the origin of the VR image and displaying a corresponding viewing area of the VR image on the display unit according to the detected line of sight; and processing output information for each specified object based on at least one of object information set for at least one predetermined object in the VR image, gaze information of the user, and a user selection signal.
  • the object information includes position information, expression time information, and output information based on coordinate values.
  • During the time that the object is expressed, at least one of direction information indicating the position of the specified object and the output information may be output in the viewing area of the display unit.
  • The processing of the output information on the specified object may include measuring the interest of each object according to the length of time measured by operating a virtual timer for each object while the specified object is located in the viewing area of the display unit. In this case, if the measured interest meets a set condition, at least one of description information, object-related image information, object-related advertisement information, and link information may be output as the output information.
  • The processing of the output information on the specified object may further include updating the interest for each user according to the interest measured for each object, and updating the output information according to the degree of interest. In this case, at least one of description information about the object, object-related image information, object-related advertisement information, and link information may be output as the output information.
  • The VR image processing system includes: a VR image server for providing, via a network, a VR image and object information set for at least one predetermined object in the VR image; and a user device for driving the VR image matched to the 3D coordinate system, detecting the user's gaze based on the user's rotation or eye movement from the origin of the VR image, displaying a corresponding viewing area of the VR image according to the detected gaze, and processing output information for each specified object.
  • the VR image server may provide a VR image and object information in a real time streaming transmission method.
  • According to the present invention, various image processing can be performed using at least one piece of object information set in a VR image based on a 3D coordinate system, providing an improved interactive environment between the user and the image and thereby providing a more realistic and interesting VR image to the user.
  • FIG. 1 is a conceptual diagram illustrating a VR image processing system according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a user device according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method of generating a VR image according to an embodiment of the present invention.
  • FIG. 4 is an exemplary diagram for describing specifying an object based on a specific coordinate value in a VR image.
  • FIG. 5 is an exemplary diagram for describing an angle value having directivity with respect to a user's gaze.
  • FIG. 6 is an exemplary diagram for describing specifying a viewing area in a VR image.
  • FIG. 7 is a flowchart illustrating a VR image processing method according to an embodiment of the present invention.
  • FIG. 8 is a view for explaining movement of the viewing area according to the user's gaze movement.
  • FIG. 9 is a diagram for explaining interest measurement.
  • FIG. 10 is an exemplary diagram for describing output information output in a display screen of a viewing area.
  • FIG. 11 is an exemplary diagram for describing output information output through a selection signal of a user device.
  • FIG. 12 is a flowchart illustrating a method of outputting output information based on measured interests according to an embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a method of outputting output information based on a user selection signal according to an embodiment of the present invention.
  • FIG. 14 is an exemplary diagram illustrating a method of outputting object information and direction information based on location information of an object in a VR image.
  • FIG. 15 is a conceptual diagram for describing a method of expressing a position of an object based on position information of the object in a VR image.
  • Figure 1 is a conceptual diagram showing a VR image processing system according to an embodiment of the present invention.
  • the VR image processing system may include a VR image server 100 and a user device 200.
  • The VR image server 100 is connected to the user device 200 through a network and may provide the user device 200 with a VR image and object information set for at least one predetermined object (e.g., a person or thing) in the VR image.
  • the VR image server 100 may be provided with a database for storing the VR image and object information.
  • the VR image server 100 may be a server computer of a VR image provider.
  • The user device 200 is a device that drives a VR image and outputs a part of the corresponding VR image according to the user's gaze.
  • The user device 200 may be a head mounted display (HMD), a computer, a smartphone, or the like, or a combination thereof.
  • the user device 200 may include a driving unit 210, a sensing unit 220, a display unit 230, an input unit 240, and a control unit 250.
  • the driver 210 is configured to drive the VR image by matching the 3D coordinate system.
  • the sensor 220 is a component that detects the gaze of the user based on the rotation of the user (ie, the rotation of the device) or the movement of the eye based on the origin of the VR image.
  • the detector 220 may include an angular velocity sensor, a gyro sensor, and the like to detect the rotation of the device.
  • The detector 220 may also include an eye tracking sensor that detects the direction or movement of the user's eyes.
  • the display unit 230 is configured to display a corresponding viewing area while driving the VR image according to the detected gaze.
  • the input unit 240 is configured to receive a selection signal, an operation signal, and the like from the user, and may include various operation buttons and selection buttons.
  • an input means such as a mouse or a keyboard may be used.
  • the controller 250 controls the driver 210, the detector 220, the display 230, and the inputter 240.
  • While driving the VR image, the controller 250 may process output information for each specified object based on at least one of the object information, the gaze information of the user, and a user selection signal.
  • The user device 200 may be realized through an application, stored in a storage medium, that executes the functions of the driver 210, the detector 220, the display 230, the input 240, and the controller 250, or may be implemented as a combination of hardware implementing each function.
  • the user device 200 may further include a communication unit 260 for communicating with the VR image server 100.
  • the apparatus may further include a speaker for outputting an audio signal of the VR image.
  • the VR image server 100 may create and edit VR image data by using a VR image authoring tool.
  • For example, when specifying an area for an object, the object area may be specified using an input device such as a mouse or keyboard on an administrator computer connected to the VR image server 100, or by using a pointer or the like that interworks with the gaze of a user wearing an HMD connected to the administrator computer as an input device.
  • FIG. 3 is a flowchart illustrating a method of generating a VR image according to an embodiment of the present invention.
  • FIG. 4 is an exemplary diagram for describing an object specification based on a specific coordinate value in a VR image.
  • FIG. 5 is an exemplary diagram for describing an angle value having directionality with respect to the user's gaze, and
  • FIG. 6 is an exemplary diagram for describing specifying a viewing area in a VR image.
  • the VR image is matched to a 3D coordinate system (S10).
  • The VR image is a 360-degree video generated, for example, by arranging a plurality of (e.g., six) photographing apparatuses at angular intervals and combining the images obtained through the photographing apparatuses into a spherical 360-degree video through image processing such as stitching. Since generating such a primary VR image is well known, a detailed description thereof is omitted.
  • At least one object in the VR image is specified based on the matched 3D coordinate system (S20).
  • In the specifying of the object (S20), an area formed from at least one specific coordinate value selected by the user during the time that the object is expressed in the VR image may be specified, or a region formed based on an angle value having directionality with respect to the origin on the 3D coordinate system may be specified.
  • For example, when specifying an object, the object may be specified as shown in FIG. 4. FIG. 4 is a screen illustrating a part of the viewing area in a VR image; the user may select specific coordinate values for the object to be specified in the VR image, thereby specifying an area formed by those coordinate values. In FIG. 4, regions are specified for two objects (people).
  • For example, a quadrangular area formed by selecting two or more specific points, for example, (x1, y1, z1) and (x4, y4, z4), may be specified. Likewise, when specifying the left object, a quadrangular area formed by selecting two or more specific points (for example, (x6, y6, z6) and (x7, y7, z7)) may be specified. The area formed by the specific points is not limited to a quadrilateral; a polygonal area consisting of three or more specific points, a circle having a predetermined radius around one specific point, or a polygonal area of a predetermined size around a specific point may also be specified.
  • the specific point is specified in the rectangular coordinate system, but the present invention is not limited thereto.
  • the specific point may be specified in the spherical coordinate system.
  • In the spherical coordinate system, a region made by specifying four specific points can be specified based on a θ value (the angle from the positive z-axis to the straight line formed by the origin and the specific point, i.e., the angle of the vertical component) and a φ value (the angle from the positive x-axis to the straight line formed by the origin and the projection of the specific point onto the xy plane, i.e., the angle of the horizontal component). Here, r is a constant value corresponding to the distance from the origin (that is, the eyes of the user) to the VR image.
  • In other words, the region of the object to be specified can be specified using specific points made based on angle values having directionality with respect to the origin in the spherical coordinate system.
  • a region made based on an angular value having directivity may be specified.
  • For example, the angle value θ1 having directionality set based on the user's line of sight is determined in the up-down direction, and the angle value φ1 having directionality set based on the user's line of sight is determined in the left-right direction. Accordingly, a rectangular area may be specified, formed by two horizontal linear components in the VR image determined by the up-down angle value θ1 and two vertical linear components in the VR image determined by the left-right angle value φ1.
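  • The following sketch illustrates, under the spherical-coordinate convention described above (θ measured from the positive z-axis, φ from the positive x-axis in the xy plane, r fixed to the distance from the origin to the VR image), how a rectangular region could be derived from up-down and left-right angle values around a reference direction. The function names and the choice of a unit sphere (r = 1) are assumptions for illustration only.

```python
import math

def spherical_to_cartesian(r: float, theta: float, phi: float):
    """Convert (r, θ from +z axis, φ from +x axis in the xy plane) to (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def angular_region(gaze_theta: float, gaze_phi: float,
                   d_theta: float, d_phi: float, r: float = 1.0):
    """Return the four corner points of a rectangular region specified by an
    up-down angle d_theta and a left-right angle d_phi around a gaze direction."""
    corners = []
    for st in (-1, 1):
        for sp in (-1, 1):
            corners.append(spherical_to_cartesian(
                r, gaze_theta + st * d_theta, gaze_phi + sp * d_phi))
    return corners

# Example: a region of +/-10 degrees around a gaze along the +x axis (θ = 90°, φ = 0°).
region = angular_region(math.radians(90), 0.0, math.radians(10), math.radians(10))
```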
  • an area of the object may be specified through a function of tracking a specific image (face shape, object shape, etc.) in the VR image.
  • the object when the object to be specified moves in the VR image according to time, the object may be specified while moving the specified area along the corresponding object.
  • FIG. 6A shows specifying a viewing area in a spherical VR image viewed by the user's eye (for example, a virtual camera), and FIG. 6B shows a specified viewing area.
  • Meanwhile, the viewing area of the VR image that the user sees while viewing the VR image may likewise be specified by specifying two or more coordinate values on the three-dimensional coordinate system.
  • an area made based on an angular value having directivity may be specified as the viewing area.
  • Next, object information is set for the specified object, including location information based on coordinate values, expression time information including the start time and end time at which the object is expressed, and additional output information about the object (S30).
  • the output information may include at least one of caption information, voice information, and additional information.
  • The caption information may be a caption for the dialogue spoken by a person object or a caption describing the object.
  • the voice information may include additional sound information about the object.
  • the additional information is set to variously use the specified object and may include at least one of tag information, description information about the object, image information related to the object, advertisement information related to the object, and link information.
  • the tag information may include type information or name information of an object for searching through a keyword or the like and categorizing by category.
  • Description information is information for describing an object. If the object is a person, it may include information such as occupation, age, physique, company, education, and representative works; if the object is a thing, it may include information such as product specifications and price.
  • the object-related image information may include recommended image information related to the object, and the advertisement information may include advertisement image information in which the object appears.
  • the link information may include related site link information such as Facebook, Twitter, etc. for the object.
  • Meanwhile, the output information of each object may further include its own expression time information (i.e., expression start time and expression end time) and position information to be expressed in the VR image based on the 3D coordinate system.
  • VR image data to which the set object information is added is generated (S40).
  • the generated VR image data may be stored on the VR image server 100.
  • the VR image server 100 may store the VR image and the object information as one integrated data file and provide the same to the user device 200.
  • Alternatively, the VR image server 100 may store the VR image and the object information as separate data files.
  • the object information may be selectively provided to the user device 200.
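  • As one hypothetical example of storing the object information separately from the VR image, the data set in step S30 might be serialized to a sidecar file such as the following; the file name, field names, and values are placeholders, not a format defined by this description.

```python
import json

# Illustrative serialization of object information set in step S30 (all values assumed).
object_info = {
    "video": "sample_360.mp4",          # assumed VR image file name
    "objects": [
        {
            "id": "speaker-1",
            "region": [[1.2, 0.4, 0.9], [1.3, 0.1, 0.5]],   # corner points on the 3D coordinate system
            "start_time": 12.0,
            "end_time": 47.5,
            "output": {
                "caption": "How was your day?",
                "tags": ["person", "host"],
                "link_url": "https://example.com/profile"    # placeholder link information
            }
        }
    ]
}

serialized = json.dumps(object_info, ensure_ascii=False, indent=2)
```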
  • VR image data to which object information generated by the VR image generation method is added may be used to provide various services to a user through image processing.
  • FIG. 7 is a flowchart illustrating a VR image processing method according to an embodiment of the present invention.
  • In the VR image processing method, a VR image matched to a 3D coordinate system is first driven through the driving unit of the user device (S110). Subsequently, the user's gaze is detected based on the origin of the VR image through the sensing unit while the VR image is being driven, and a corresponding viewing area of the VR image is displayed on the display unit according to the detected gaze (S120). Subsequently, the controller processes output information for each specified object based on at least one of object information preset for at least one predetermined object in the VR image, gaze information of the user, and a user selection signal received through the input unit (S130).
  • the object information may include location information, expression time information, and output information based on coordinate values
  • the output information may include at least one of caption information, voice information, and additional information
  • the additional information may include at least one of tag information, description information about an object, object related image information, object related advertisement information, and link information.
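  • A minimal sketch of the viewing-area test underlying steps S120 and S130 is given below, assuming the gaze direction and object positions are expressed as angles on the 3D coordinate system described earlier; the function names and the symmetric field-of-view model are assumptions.

```python
import math

def direction_angles(point):
    """Return (theta, phi) of a point relative to the origin (the viewer's eyes).
    Assumes the point is not at the origin."""
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)   # vertical component, measured from the +z axis
    phi = math.atan2(y, x)     # horizontal component, measured from the +x axis
    return theta, phi

def in_viewing_area(point, gaze_theta, gaze_phi, fov_v, fov_h):
    """Rough test: is the point inside the vertical/horizontal field of view
    centred on the detected gaze direction? Angles are in radians."""
    theta, phi = direction_angles(point)
    d_phi = math.atan2(math.sin(phi - gaze_phi), math.cos(phi - gaze_phi))  # wrap to [-pi, pi]
    return abs(theta - gaze_theta) <= fov_v / 2 and abs(d_phi) <= fov_h / 2
```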
  • When the specified object (for example, the person currently speaking) is located outside the viewing area, at least one of direction information (for example, an arrow) indicating the position of the object and output information (for example, a caption) may be output at a set position in the viewing area of the display unit during the time that the object is expressed.
  • For example, the subtitles uttered by the speaker (for example, 'How was your day?') may be displayed as object information and output along with direction information such as an arrow indicating the speaker's position (FIG. 13A), a gradually enlarging point display (FIG. 13B), or a sequentially blinking dot display.
  • If no subtitle is set as object information for the specified speaker, only the direction information for the speaker who is speaking may be displayed.
  • FIG. 15 is a conceptual diagram for describing a method of expressing a position of an object based on position information of the object in a VR image.
  • a method of displaying a speaker's position as a specified object based on an arrow is illustrated.
  • the user can know the position of the speaker based on the direction of the arrow, so that the user can be induced to identify the speaker by switching the field of view to the direction of the arrow.
  • Alternatively, the position of the speaker may be expressed through visual image processing that changes the position or shape of the subtitle, which is the object information, without directly displaying direction information. That is, when the coordinates corresponding to the speaker's position information are not included in the screen range displayed to the user, the position of the subtitle may be moved repeatedly, in small increments from a set position such as the center or bottom of the screen toward the speaker's position, thereby indicating the direction of the speaker.
  • the direction of the speaker may be indicated by using a gradual change such as subtitle color or density.
  • For example, the direction of the speaker may be indicated by changing the color of the subtitle, such as blurring the subtitle when the speaker is far from the viewer's gaze direction, or changing it to a set color (e.g., green) as the gaze gets closer to the speaker.
  • the direction of the speaker may be indicated by tilting the subtitle in the direction of the speaker or by gradually increasing the size of the subtitle.
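  • As an illustrative sketch only, the direction indication described above could be computed roughly as follows; the left/right sign convention, the per-frame step size, and the use of normalized screen coordinates for the caption are all assumptions, and the helper names are hypothetical.

```python
import math

def arrow_direction(object_phi: float, gaze_phi: float) -> str:
    """Pick a left/right arrow pointing from the current gaze toward an
    off-screen speaker, based on the shorter horizontal rotation."""
    d = math.atan2(math.sin(object_phi - gaze_phi), math.cos(object_phi - gaze_phi))
    return "→" if d > 0 else "←"   # sign convention is an assumption

def nudge_caption(caption_pos, target_pos, step: float = 0.02):
    """Move the caption a small step toward the screen-edge position nearest the
    speaker; repeating this each frame hints at the speaker's direction."""
    cx, cy = caption_pos
    tx, ty = target_pos
    dx, dy = tx - cx, ty - cy
    norm = math.hypot(dx, dy) or 1.0
    return (cx + step * dx / norm, cy + step * dy / norm)
```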
  • When the specified object is located in the viewing area of the display unit, a virtual timer is operated for each object to measure the interest of each object according to the measured length of time. For example, as shown in FIG. 8, the object is included in the field of view at "user gaze-1". At "user gaze-2", the gaze has moved but the object is still included in the field of view. At "user gaze-3", the gaze has moved so that the object is only partially included in the field of view. At "user gaze-4", the gaze has moved so that the object is located outside the viewing area. In this case, as shown in FIG. 9, a virtual timer is operated to measure the length of time that the object is located in the viewing area.
  • the length of time may be measured until at least some of the objects are not included in the field of view, or may be measured until all of the objects are not included in the field of view.
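  • A minimal sketch of such a per-object virtual timer is shown below; the class and method names are hypothetical, and the timer is driven by an external check of whether the object is currently in the viewing area.

```python
import time

class InterestTimer:
    """Per-object 'virtual timer': accumulates how long each object stays
    inside the viewing area, which serves as its interest measure."""

    def __init__(self):
        self._elapsed = {}   # object_id -> accumulated seconds in view (completed intervals)
        self._started = {}   # object_id -> timestamp when the object entered the view

    def update(self, object_id, in_view, now=None):
        now = time.monotonic() if now is None else now
        if in_view and object_id not in self._started:
            self._started[object_id] = now                       # object entered the viewing area
        elif not in_view and object_id in self._started:
            start = self._started.pop(object_id)                  # object left the viewing area
            self._elapsed[object_id] = self._elapsed.get(object_id, 0.0) + (now - start)

    def interest(self, object_id):
        """Total measured in-view time for the object (completed intervals only)."""
        return self._elapsed.get(object_id, 0.0)
```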
  • At least one of description information about the object, image information related to the object, advertisement information related to the object, and link information may be output as output information.
  • For example, while the VR image is being driven, the output information may display, on the display screen of the viewing area, recommended image information as the object-related image information for an object of high interest and product information of interest as the object-related advertisement information.
  • Alternatively, output information may be output for the object having the highest interest after driving of the VR image ends. For example, for an object of high interest, an image whose title includes the object's name (product name, person's name, etc., e.g., tag information) may be recommended, or related videos included in an image classification category preset for the object may be recommended.
  • Meanwhile, the VR image server 100 may store user information through user-specific membership registration, and the user may then log in by accessing the VR image server 100 through the user device 200. For a non-registered user who is not logged in, the output information may be output according to the interest measured for each object in the reproduced VR image.
  • FIG. 12 is a flowchart illustrating a method of outputting output information based on measured interests according to an embodiment of the present invention.
  • First, the user device 200 connects to the VR image server 100, requests at least one of the VR image and the object information about the VR image, and may receive the requested VR image and object information, respectively.
  • the user device 200 may request VR image data including object information from the VR image server 100 and receive the requested VR image data.
  • the VR image server 100 may provide the VR image and the object information to the connected user apparatus 200 in a real time streaming manner, so that the user apparatus 200 may execute the VR image in real time.
  • Alternatively, the VR image server 100 may provide the VR image and the object information to the connected user device 200 in a download manner, and the user device 200 may execute the VR image after downloading the VR image and the object information to its storage unit.
  • Next, the user device 200 executes the corresponding VR image (S211). Subsequently, the user device 200 detects the user's gaze while executing the VR image to check the viewing range (i.e., the display screen of the viewing area) (S212), and checks the object range (i.e., the area specified as the object) using the object information (S213). Next, it is determined whether the object range is included in the viewing range (S214). If it is included (Yes), the virtual timer starts operating (S215); if it is not included (No), the virtual timer stops operating (S216). Next, the interest of each object is measured according to the time measured by the virtual timer for each object (S217). Subsequently, when execution of the corresponding VR image ends (S218), output information on the object of high interest may be output depending on whether or not the user is logged in (S219).
  • The user device 200 transmits the measured interest information and the user information to the VR image server 100, and recommended video information and advertisement information may be provided as output information from the VR image server 100 based on the objects of interest accumulated for each user.
  • In this case, the user device 200 may add a score to a predetermined tag for each object based on the measured interest information.
  • a classification keyword related to the VR image may be designated as a tag.
  • Multiple tags per object may exist in the VR image.
  • For example, the tag score for each object may be calculated as (timer time per object / total VR video time) × tag correction value.
  • the tag correction value is set to have a larger correction value as the degree of relevance for each tag increases.
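  • The tag-score formula above can be written directly as code; the example values (45 s in view, a 600 s video, a correction value of 2.0) are arbitrary and only illustrate the arithmetic.

```python
def tag_score(timer_seconds: float, total_video_seconds: float,
              tag_correction: float) -> float:
    """Tag score per object: (timer time per object / total VR video time) x tag correction value."""
    return (timer_seconds / total_video_seconds) * tag_correction

# Example: an object viewed for 45 s of a 600 s video, with a highly relevant tag (correction 2.0).
score = tag_score(45.0, 600.0, 2.0)   # -> 0.15
```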
  • the user device 200 may transmit the tag score for each object as the measured interest information to the VR image server 100 together with the user information.
  • The VR image server 100 may transmit output information to the user device 200 by using accumulated tag information obtained by adding up, for each user, the tag scores of previously viewed images. Accordingly, the user device 200 may output the recommendation image information and the advertisement information of the object of interest based on the accumulated interest of the user.
  • the user device 200 may output the recommendation image information and the advertisement information as output information of the object information for the object of high interest measured according to the execution of the VR image ( S220).
  • the output information on the selected object may be output using a pointer linked to the user's eye movement.
  • At least one of description information about the object, object-related image information, object-related advertisement information, and link information may be output as the output information.
  • As shown in FIG. 11A, when the pointer is located in the object area, a selection signal is input through the input unit of the user device, and as shown in FIG. 11B, description information and link information about the object may be displayed as the output information.
  • FIG. 13 is a flowchart illustrating a method of outputting output information based on a user selection signal according to an embodiment of the present invention.
  • First, the user device 200 accesses the VR image server 100, requests object information on a VR image, and may receive the requested object information. Alternatively, the user device 200 may request VR image data including object information from the VR image server 100 and receive the requested VR image data.
  • Next, the user device 200 executes the corresponding VR image (S231). Subsequently, the user device 200 checks the point coordinates of the device pointer linked to the user's line of sight (S232) and outputs the pointer together with the display screen of the viewing area. The user device 200 also checks the object range (i.e., the area specified as an object) (S233). Next, it is determined whether the pointer is located within the object range (S234). If the pointer is not located within the object range, the determination step S234 is repeated. If the pointer is located within the object range, the user device 200 determines whether a specific button is input as the selection signal (S235). If a specific button is input, output information on the corresponding object is output on the display screen (S236); if not, the process is repeated from step S234. In this case, the user device includes a specific button as an input unit.
  • Alternatively, the specific button input step S235 may be replaced with a step of determining whether the pointer has remained within the object range for a set time. That is, when the pointer has been located within the object range for the set time, output information about the object is output; when it has not, the process is repeated from step S234.
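  • A compact sketch of the selection logic of steps S234-S236, including the dwell-time alternative just described, might look as follows; the 2-second dwell threshold is an assumed value, not one given in the description.

```python
def should_output_info(pointer_in_object: bool, button_pressed: bool,
                       dwell_seconds: float, dwell_threshold: float = 2.0) -> bool:
    """Output the object's information either when a selection button is pressed
    while the pointer is on the object (S235), or, as the alternative above,
    when the pointer has stayed on the object for a set time."""
    if not pointer_in_object:
        return False   # keep repeating the S234 check
    return button_pressed or dwell_seconds >= dwell_threshold

# Example: pointer has rested on the object for 2.4 s with no button press -> output the info.
assert should_output_info(True, False, 2.4) is True
```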
  • As described above, according to the present invention, various image processing can be performed using at least one piece of object information set in a VR image based on the three-dimensional coordinate system, thereby providing an improved interactive environment between the user and the image and providing a more realistic and interesting VR image to the user.
  • The method of the present invention as described above can be implemented as a computer program, and the code and code segments constituting the program can be easily inferred by a programmer skilled in the art.
  • the written program is stored in a computer-readable recording medium (information storage medium), and read and executed by a computer to implement the method of the present invention.
  • the recording medium may include any type of computer readable recording medium.
  • 100: VR image server  200: user device
  • 210: driving unit  220: sensing unit

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a VR image generation method, a VR image processing method, and a VR image processing system. The VR image generation method according to an embodiment of the present invention comprises the steps of: matching a VR image to a three-dimensional coordinate system; driving and controlling the VR image, and specifying at least one object in the image based on the matched three-dimensional coordinate system; setting object information which includes position information, expression time information, and output information based on a coordinate value of the specified object; and generating VR image data to which the object information is added.
PCT/KR2017/012090 2017-01-23 2017-10-30 Procédé de génération d'image vr, procédé de traitement d'image vr et système de traitement d'image vr Ceased WO2018135730A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170010622A KR101949261B1 (ko) 2017-01-23 2017-01-23 Vr 영상 생성 방법, vr 영상 처리 방법 및 vr 영상 처리 시스템
KR10-2017-0010622 2017-01-23

Publications (1)

Publication Number Publication Date
WO2018135730A1 true WO2018135730A1 (fr) 2018-07-26

Family

ID=62908832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/012090 Ceased WO2018135730A1 (fr) 2017-01-23 2017-10-30 Procédé de génération d'image vr, procédé de traitement d'image vr et système de traitement d'image vr

Country Status (2)

Country Link
KR (1) KR101949261B1 (fr)
WO (1) WO2018135730A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292424A (zh) * 2018-12-06 2020-06-16 珀斯特传媒有限公司 多视点360度vr内容提供系统

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102212405B1 (ko) * 2019-02-20 2021-02-04 ㈜브이리얼 Vr 영상 컨텐츠 제작 방법, 장치 및 프로그램
KR102148379B1 (ko) * 2019-07-24 2020-08-26 신용강 원격 의류매장 서비스 방법
KR102270852B1 (ko) * 2019-09-30 2021-06-28 한국항공대학교산학협력단 뷰포트 탐색 가능한 디바이스를 이용한 360도 비디오 서비스 제공 장치 및 방법
KR102231381B1 (ko) * 2019-11-18 2021-03-23 김현수 다수의 2차원 영상을 이용한 가상 현실용 영상 제작 시스템 및 방법
KR102181647B1 (ko) * 2020-08-20 2020-11-23 신용강 오프라인 의류매장과 동일한 환경을 제공하는 원격 의류매장 서비스 방법
KR102181648B1 (ko) * 2020-08-20 2020-11-24 신용강 원격 의류매장 플랫폼 제공 방법 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050083258A (ko) * 2004-02-21 2005-08-26 삼성전자주식회사 Av 데이터에 동기된 텍스트 서브 타이틀 데이터를기록한 정보저장매체, 재생방법 및 장치
KR20120091033A (ko) * 2009-09-30 2012-08-17 마이크로소프트 코포레이션 비디오 컨텐츠-인지 광고 배치 방법
KR20160023888A (ko) * 2013-06-25 2016-03-03 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 시야 밖 증강 현실 영상의 표시
KR101639275B1 (ko) * 2015-02-17 2016-07-14 서울과학기술대학교 산학협력단 다중 실시간 영상 획득 카메라를 활용한 360도 구형 렌더링 영상 표출 장치 및 이를 통한 자동 영상 분석 방법
JP2016537903A (ja) * 2013-08-21 2016-12-01 ジョーント・インコーポレイテッドJaunt Inc. バーチャルリアリティコンテンツのつなぎ合わせおよび認識

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020025301A (ko) * 2000-09-28 2002-04-04 오길록 다중 사용자를 지원하는 파노라믹 이미지를 이용한증강현실 영상의 제공 장치 및 그 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050083258A (ko) * 2004-02-21 2005-08-26 삼성전자주식회사 Av 데이터에 동기된 텍스트 서브 타이틀 데이터를기록한 정보저장매체, 재생방법 및 장치
KR20120091033A (ko) * 2009-09-30 2012-08-17 마이크로소프트 코포레이션 비디오 컨텐츠-인지 광고 배치 방법
KR20160023888A (ko) * 2013-06-25 2016-03-03 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 시야 밖 증강 현실 영상의 표시
JP2016537903A (ja) * 2013-08-21 2016-12-01 ジョーント・インコーポレイテッドJaunt Inc. バーチャルリアリティコンテンツのつなぎ合わせおよび認識
KR101639275B1 (ko) * 2015-02-17 2016-07-14 서울과학기술대학교 산학협력단 다중 실시간 영상 획득 카메라를 활용한 360도 구형 렌더링 영상 표출 장치 및 이를 통한 자동 영상 분석 방법

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292424A (zh) * 2018-12-06 2020-06-16 珀斯特传媒有限公司 多视点360度vr内容提供系统

Also Published As

Publication number Publication date
KR101949261B1 (ko) 2019-02-18
KR20180088538A (ko) 2018-08-06

Similar Documents

Publication Publication Date Title
WO2018135730A1 (fr) Procédé de génération d'image vr, procédé de traitement d'image vr et système de traitement d'image vr
US11430098B2 (en) Camera body temperature detection
CN108415705B (zh) 网页生成方法、装置、存储介质及设备
AU2011205223C1 (en) Physical interaction with virtual objects for DRM
TWI610097B (zh) 電子系統、可攜式顯示裝置及導引裝置
WO2013051180A1 (fr) Appareil de traitement d'image, procédé de traitement d'image et programme
WO2020130689A1 (fr) Dispositif électronique pour recommander un contenu de jeu et son procédé de fonctionnement
WO2015122565A1 (fr) Système d'affichage permettant d'afficher une imagé de réalité augmentée et son procédé de commande
CN112578971B (zh) 页面内容展示方法、装置、计算机设备及存储介质
WO2019156332A1 (fr) Dispositif de production de personnage d'intelligence artificielle pour réalité augmentée et système de service l'utilisant
WO2021075699A1 (fr) Dispositif électronique et procédé de fonctionnement associé
CN109074154A (zh) 增强和/或虚拟现实中的悬停触摸输入补偿
WO2013105760A1 (fr) Système de fourniture de contenu et son procédé de fonctionnement
CN112965911B (zh) 界面异常检测方法、装置、计算机设备及存储介质
WO2017029918A1 (fr) Système, procédé et programme pour afficher une image animée avec un champ de vision spécifique
WO2016182149A1 (fr) Dispositif d'affichage vestimentaire pour affichage de progression de processus de paiement associé à des informations de facturation sur une unité d'affichage, et son procédé de commande
CN111179674A (zh) 直播教学方法、装置、计算机设备及存储介质
CN110969159B (zh) 图像识别方法、装置及电子设备
CN109844600A (zh) 信息处理设备、信息处理方法和程序
KR20180088005A (ko) Vr 영상 저작 도구 및 vr 영상 저작 장치
WO2015182846A1 (fr) Appareil et procédé permettant de fournir une publicité au moyen d'un suivi de pupille
WO2021085812A1 (fr) Appareil électronique et son procédé de commande
WO2022239793A1 (fr) Dispositif d'analyse de sujet
CN112000900A (zh) 推荐景点信息的方法、装置、电子设备及存储介质
CN114327033A (zh) 一种虚拟现实设备及媒资播放方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893174

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17893174

Country of ref document: EP

Kind code of ref document: A1