CN104185008B - Method and apparatus for generating 3D media data - Google Patents
Method and apparatus for generating 3D media data Download PDF Info
- Publication number
- CN104185008B CN104185008B CN201410350305.5A CN201410350305A CN104185008B CN 104185008 B CN104185008 B CN 104185008B CN 201410350305 A CN201410350305 A CN 201410350305A CN 104185008 B CN104185008 B CN 104185008B
- Authority
- CN
- China
- Prior art keywords
- data
- media data
- media
- initial
- datas
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
Abstract
An object of the present invention is to provide a method and apparatus for generating 3D media data. The method according to the invention comprises the following steps: determining the content type of the initial media data; determining, according to the content type, a 3D scene model corresponding to the initial media data; and generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a method and apparatus for generating 3D media data.
Background art
In the prior art, 3D video can generally be obtained only when it has been generated from 3D data sources; however, most mainstream video information is still 2D video. Moreover, because the volume of video data is large while user demand is largely ad hoc, converting all videos to 3D in response to user requests is impractical. In particular, network live broadcasting has to serve users with a wide variety of demands, and processing the video in a single, uniform way cannot satisfy such diverse user requests.
Summary of the invention
An object of the present invention is to provide a method and apparatus for generating 3D media data.
According to one aspect of the invention, there is provided a method for generating 3D media data, wherein the method comprises the following steps:
a. determining the content type of the initial media data;
b. determining, according to the content type, a 3D scene model corresponding to the initial media data;
c. generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
According to another aspect of the present invention, there is further provided a playing device for generating 3D media data, wherein the playing device comprises:
a content determining device, for determining the content type of the initial media data;
a model determining device, for determining, according to the content type, a 3D scene model corresponding to the initial media data;
a generating device, for generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Compared with the prior art, the present invention has the following advantages: a corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that 3D scene model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of generating 3D media data.
Brief description of the drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 illustrates a flow chart of a method for generating 3D media data according to the present invention;
Fig. 2 illustrates a structural schematic diagram of a playing device for generating 3D media data according to the present invention.
The same or similar reference signs in the drawings denote the same or similar parts.
Detailed description of embodiments
The present invention is described in further detail below in conjunction with the accompanying drawings.
Fig. 1 illustrates a flow chart of a method for generating 3D media data according to the present invention. The method according to the present invention includes step S1, step S2 and step S3.
Here, the 3D media data includes, but is not limited to, any of the following:
1) a pair of left-eye and right-eye images with parallax;
2) a binocular stereoscopic video.
The method according to the invention is implemented by a playing device included in a computer device. The computer device includes an electronic device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable gate array (FPGA), a digital signal processor (DSP), an embedded device and the like. The computer device includes a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of multiple network servers, or a cloud based on cloud computing and consisting of a large number of hosts or network servers, where cloud computing is a kind of distributed computing, namely a super virtual computer composed of a group of loosely coupled computers. The user device includes, but is not limited to, any electronic product that can interact with a user via a keyboard, a mouse, a remote control, a touch pad, a voice-control device or the like, for example, a personal computer, a tablet computer, a smart phone, a PDA, a game console, an IPTV and the like. The network in which the user device and the network device reside includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN and the like.
Preferably, the playing device is included in the user device.
It should be noted that the user device, the network device and the network are only examples; other existing or future user devices, network devices and networks, if applicable to the present invention, should also be included within the scope of the present invention and are incorporated herein by reference.
Referring to Fig. 1, in step S1, the playing device determines the content type of the initial media data.
Here, the initial media data includes video data, for example, a video of a live television program or a segment of a film.
The initial media data may correspond to different content types. For example, a television program video may be classified into content types such as "news", "sports" or "variety show".
Preferably, the classification of the content type is determined based on scene information of the content played in the initial media data. For example, initial media data corresponding to sports events may be classified into a football match type, a baseball match type, a tennis match type and the like; as another example, initial media data corresponding to variety shows may be classified into a talk type, a talent show type and the like.
The manner in which the playing device determines the content type of the initial media data includes, but is not limited to, any of the following:
1) directly obtaining predetermined content type information of the initial media data;
2) matching relevant information of the initial media data against predetermined content types to determine the content type corresponding to the initial media data; for example, when the initial media data is a video of a live television program, the title of the live program is matched against the predetermined content types to obtain the content type corresponding to the video.
According to a first example of the present invention, the initial media data is a one-minute live video stream_1; the playing device obtains the profile information of the video and determines that its content type is "baseball game". A minimal sketch of such title/profile-based matching is given below.
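The patent leaves the matching itself unspecified; the following Python sketch shows one plausible keyword-based implementation. The keyword table and the function determine_content_type are hypothetical illustrations, not part of the claimed method.

```python
# Hypothetical sketch of matching a title/profile against predetermined content types.
from typing import Optional

CONTENT_TYPE_KEYWORDS = {
    "baseball game": ["baseball", "inning", "pitcher"],
    "football match": ["football", "soccer", "league"],
    "talk show": ["talk", "interview"],
}

def determine_content_type(title: str, profile: str = "") -> Optional[str]:
    """Match the title and profile text of a video against predetermined content types."""
    text = f"{title} {profile}".lower()
    for content_type, keywords in CONTENT_TYPE_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return content_type
    return None  # fall back to other means, e.g. a predetermined type tag

# e.g. determine_content_type("stream_1", "city baseball game, live") -> "baseball game"
```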
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for determining the content type of the initial media data should be included within the scope of the present invention.
Then, in step S2, the playing device determines, according to the content type, a 3D scene model corresponding to the initial media data.
Specifically, the playing device queries for and obtains, according to the content type, at least one 3D scene model corresponding to the content type, and selects from the at least one 3D scene model the 3D scene model corresponding to the initial media data.
Here, the 3D scene model includes a model for predicting the depth information corresponding to the image data of the initial media data.
The 3D scene model may be obtained by performing a machine learning process on multiple items of media data. For example, a 3D scene model corresponding to the content type "football match" may be established by obtaining the image data of videos whose content type is "football match" together with their determined depth information and performing a corresponding machine learning process on them, so that the model can output, based on input media data information, the information needed to generate the corresponding 3D media data.
Continuing with the foregoing first example, the playing device queries a 3D scene model database and obtains the 3D scene model model_1 corresponding to the content type "baseball game" of the initial media data stream_1. A minimal sketch of such a lookup is given below.
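The patent does not describe the structure of the 3D scene model database; the sketch below assumes a simple in-memory registry keyed by content type. SceneModel, SCENE_MODEL_DB, select_scene_model and the file paths are hypothetical names introduced only for illustration.

```python
# Hypothetical sketch of selecting a 3D scene model by content type.
from dataclasses import dataclass
from typing import List

@dataclass
class SceneModel:
    name: str
    content_type: str
    weights_path: str  # e.g. learned parameters for predicting depth from image data

SCENE_MODEL_DB: List[SceneModel] = [
    SceneModel("model_1", "baseball game", "models/baseball_depth.bin"),
    SceneModel("model_2", "football match", "models/football_depth.bin"),
]

def select_scene_model(content_type: str) -> SceneModel:
    """Query the registry and pick one model for the given content type."""
    candidates = [m for m in SCENE_MODEL_DB if m.content_type == content_type]
    if not candidates:
        raise LookupError(f"no 3D scene model registered for: {content_type}")
    return candidates[0]  # the patent only requires selecting one suitable model
```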
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for determining, according to the content type, the 3D scene model corresponding to the initial media data should be included within the scope of the present invention.
Then, in step S3, the playing device generates, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Preferably, step S3 further comprises step S301 (not shown) and step S302 (not shown).
In step S301, the playing device obtains corresponding motion-related information according to the image data corresponding to the initial media data.
Here, the image data includes, but is not limited to, any of the following:
1) each frame of data in the initial media data;
2) one or more items of image data obtained after processing the frames of the initial media data; for example, by automatically matching blocks between adjacent frames, one or more frames found by the matching to contain similar blocks are used as one item of image data.
Preferably, the motion-related information includes, but is not limited to, at least one of the following:
1) scene motion information, where the scene includes one or more segmented blocks that can be recognized in the image data; for example, the motion information of each segmented block is obtained by comparing the changes of that block across multiple items of image data;
2) object motion information corresponding to at least one object in the image data; for example, one or more objects included in the image data are recognized, and the position information of each object across multiple items of image data is compared to determine the motion information of that object.
Continuing with the foregoing first example, the playing device extracts the video frames of the video as the image data, divides the picture in each video frame into several segmented regions, and compares the position changes of each segmented region across the video frames, so as to distinguish the static regions from the moving regions in the video frames and determine the motion-related information of the moving regions. A minimal sketch of such block-wise motion detection is given below.
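One simple way to realize this comparison is block-wise frame differencing. The following sketch is only an illustration of the idea; the block size and threshold are arbitrary assumptions, not values from the patent.

```python
# Hypothetical sketch: flag blocks that moved between two consecutive grayscale frames.
import numpy as np

def block_motion_map(prev_frame: np.ndarray, cur_frame: np.ndarray,
                     block: int = 16, threshold: float = 8.0) -> np.ndarray:
    """Return a boolean grid: True for blocks that changed between the two frames."""
    h, w = cur_frame.shape
    rows, cols = h // block, w // block
    moving = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            prev_blk = prev_frame[r * block:(r + 1) * block, c * block:(c + 1) * block]
            cur_blk = cur_frame[r * block:(r + 1) * block, c * block:(c + 1) * block]
            # mean absolute difference as a crude per-block motion measure
            diff = np.mean(np.abs(cur_blk.astype(float) - prev_blk.astype(float)))
            moving[r, c] = diff > threshold
    return moving
```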
Then, in step S302, the playing device generates, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Preferably, step S302 further comprises step S3021 (not shown) and step S3022 (not shown).
In step S3021, the playing device obtains, according to the motion-related information and the 3D scene model, depth information corresponding to the image data.
Preferably, for each image, the playing device processes the motion-related information of the image data by using the 3D scene model, so as to obtain the depth information corresponding to the image data.
Here, the playing device may utilize the 3D scene model together with various techniques, such as depth from motion (DFM) based on motion features, to obtain the depth information corresponding to the image data from the input image data and its corresponding motion-related information.
It should be noted that those skilled in the art may select other suitable methods for obtaining the depth information according to actual conditions and requirements, and are not limited to the methods mentioned in this specification. A minimal sketch of one motion-based depth heuristic is given below.
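The patent names DFM only at a high level; the sketch below is a deliberately crude heuristic, assuming a translating camera so that blocks with larger apparent motion are treated as closer, and assuming a scene-model depth prior for static regions. All function names and the combination rule are illustrative assumptions.

```python
# Hypothetical sketch of combining a scene-model depth prior with per-block motion.
import numpy as np

def estimate_block_depth(motion_magnitude: np.ndarray,
                         static_prior: np.ndarray,
                         moving_mask: np.ndarray,
                         eps: float = 1e-3) -> np.ndarray:
    """Per-block depth estimate.

    motion_magnitude: per-block motion strength (e.g. mean optical-flow length)
    static_prior:     per-block depth predicted by the 3D scene model (sky far, ground near)
    moving_mask:      per-block boolean map, e.g. from block_motion_map()
    """
    depth = static_prior.astype(float).copy()
    # inverse relation: more apparent motion -> smaller depth (closer to the camera)
    depth[moving_mask] = 1.0 / (motion_magnitude[moving_mask] + eps)
    return depth
```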
Then, in step S3022, the playing device generates, according to the obtained depth information, 3D media data corresponding to the initial media data that contains the image data with the depth information.
Specifically, the playing device either directly uses the image data with the depth information as the 3D media data, or synchronizes the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
Continuing with the foregoing first example, in step S3021 the playing device takes the image data as the input of the 3D scene model model_1 and obtains the depth information corresponding to static regions such as the sky and the ground in each image. Furthermore, the playing device utilizes the 3D scene model model_1 and the DFM technique to obtain, based on the motion-related information of the moving regions in the image data, the depth information corresponding to moving regions such as the baseball players and the baseball in each image. Then, in step S3022, the playing device generates, according to the obtained depth information corresponding to the image data, 3D media data corresponding to the video that contains the image data with the depth information. A minimal sketch of deriving a left-eye/right-eye image pair from such depth information is given below.
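Since the 3D media data may be a pair of left-eye and right-eye images with parallax, the sketch below shows a simplified depth-image-based rendering step: each pixel is shifted horizontally by a disparity inversely related to its depth. The disparity scaling and hole handling are illustrative assumptions, not specified by the patent.

```python
# Hypothetical sketch of turning an image plus per-pixel depth into a stereo pair.
import numpy as np

def make_stereo_pair(image: np.ndarray, depth: np.ndarray, max_disparity: int = 8):
    """Shift pixels horizontally by a disparity inversely related to their depth."""
    h, w = depth.shape
    span = float(depth.max() - depth.min()) or 1.0
    norm = (depth - depth.min()) / span                      # 0 = nearest, 1 = farthest
    disparity = ((1.0 - norm) * max_disparity).astype(int)   # near pixels shift more
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(disparity[y, x])
            left[y, min(w - 1, x + d)] = image[y, x]
            right[y, max(0, x - d)] = image[y, x]
    return left, right  # disocclusion holes would still need filling in practice
```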
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for generating, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data so as to play the 3D media data should be included within the scope of the present invention.
Preferably, the method also includes step S4 (not shown) and step S5 (not shown).
In step S4, when live media data is played, the playing device uses, as the initial media data, the part of the live media data that falls within a predetermined historical period.
After steps S1 to S3 have been performed, in step S5 the playing device plays the 3D media data corresponding to the initial media data simultaneously with the live media data.
For example, when playing live media data, the playing device, in step S4, uses the media data of the past 5 minutes of the live media data as the initial media data. The playing device then performs steps S1 to S3 to generate the 3D media data corresponding to the initial media data. Then, in step S5, the playing device plays the generated 3D media data and the live media data simultaneously. A minimal sketch of such a sliding-window arrangement is given below.
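The sketch below assumes a rolling buffer of recent live chunks as the "predetermined historical period" and delegates the actual conversion and playback to caller-supplied callables; the class, window length and callback names are hypothetical, introduced only to illustrate the arrangement of steps S4, S1 to S3 and S5.

```python
# Hypothetical sketch of the live-playback arrangement described above.
from collections import deque
import time

WINDOW_SECONDS = 5 * 60  # the "past 5 minutes" from the example above

class LiveTo3D:
    def __init__(self):
        self.window = deque()  # (timestamp, media_chunk) pairs

    def on_live_chunk(self, chunk):
        """Called for every incoming chunk of the live media data (step S4 buffering)."""
        now = time.time()
        self.window.append((now, chunk))
        while self.window and now - self.window[0][0] > WINDOW_SECONDS:
            self.window.popleft()

    def refresh_3d(self, generate_3d, play_both):
        """Use the windowed data as the initial media data, generate 3D media data
        from it (steps S1 to S3) and play it alongside the live stream (step S5)."""
        initial_media = [chunk for _, chunk in self.window]
        media_3d = generate_3d(initial_media)   # hypothetical steps S1-S3 pipeline
        play_both(media_3d, live=True)          # hypothetical simultaneous playback
```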
According to the method of the invention, a corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that 3D scene model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of generating 3D media data.
Fig. 2 illustrates a structural schematic diagram of a playing device for generating 3D media data according to the present invention. The playing device according to the present invention includes a content determining device 1, a model determining device 2 and a generating device 3.
Referring to Fig. 2, the content determining device 1 determines the content type of the initial media data.
Here, the initial media data includes video data, for example, a video of a live television program or a segment of a film.
The initial media data may correspond to different content types. For example, a television program video may be classified into content types such as "news", "sports" or "variety show".
Preferably, the classification of the content type is determined based on scene information of the content played in the initial media data. For example, initial media data corresponding to sports events may be classified into a football match type, a baseball match type, a tennis match type and the like; as another example, initial media data corresponding to variety shows may be classified into a talk type, a talent show type and the like.
The manner in which the content determining device 1 determines the content type of the initial media data includes, but is not limited to, any of the following:
1) directly obtaining predetermined content type information of the initial media data;
2) matching relevant information of the initial media data against predetermined content types to determine the content type corresponding to the initial media data; for example, when the initial media data is a video of a live television program, the title of the live program is matched against the predetermined content types to obtain the content type corresponding to the video.
According to the first example of the present invention, the initial media data is a one-minute live video stream_1; the content determining device 1 obtains the profile information of the video and determines that its content type is "baseball game".
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for determining the content type of the initial media data should be included within the scope of the present invention.
Then, the model determining device 2 determines, according to the content type, a 3D scene model corresponding to the initial media data.
Specifically, the model determining device 2 queries for and obtains, according to the content type, at least one 3D scene model corresponding to the content type, and selects from the at least one 3D scene model the 3D scene model corresponding to the initial media data.
Here, the 3D scene model includes a model for predicting the depth information corresponding to the image data of the initial media data.
The 3D scene model may be obtained by performing a machine learning process on multiple items of media data. For example, a 3D scene model corresponding to the content type "football match" may be established by obtaining the image data of videos whose content type is "football match" together with their determined depth information and performing a corresponding machine learning process on them, so that the model can output, based on input media data information, the information needed to generate the corresponding 3D media data.
Continuing with the foregoing first example, the model determining device 2 queries a 3D scene model database and obtains the 3D scene model model_1 corresponding to the content type "baseball game" of the initial media data stream_1.
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for determining, according to the content type, the 3D scene model corresponding to the initial media data should be included within the scope of the present invention.
Then, the generating device 3 generates, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Preferably, the generating device 3 further comprises a motion acquisition device (not shown) and a stereo generating device (not shown).
The motion acquisition device obtains corresponding motion-related information according to the image data corresponding to the initial media data.
Here, the image data includes, but is not limited to, any of the following:
1) each frame of data in the initial media data;
2) one or more items of image data obtained after processing the frames of the initial media data; for example, by automatically matching blocks between adjacent frames, one or more frames found by the matching to contain similar blocks are used as one item of image data.
Preferably, the motion-related information includes, but is not limited to, at least one of the following:
1) scene motion information, where the scene includes one or more segmented blocks that can be recognized in the image data; for example, the motion information of each segmented block is obtained by comparing the changes of that block across multiple items of image data;
2) object motion information corresponding to at least one object in the image data; for example, one or more objects included in the image data are recognized, and the position information of each object across multiple items of image data is compared to determine the motion information of that object.
Continuing with the foregoing first example, the playing device extracts the video frames of the video as the image data and divides the picture in each video frame into several segmented regions; the motion acquisition device then compares the position changes of each segmented region across the video frames, so as to distinguish the static regions from the moving regions in the video frames and determine the motion-related information of the moving regions.
Then, the stereo generating device generates, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data, so as to play the 3D media data.
Preferably, the stereo generating device further comprises a depth acquisition device (not shown) and a sub-generating device (not shown).
The depth acquisition device obtains, according to the motion-related information and the 3D scene model, depth information corresponding to the image data.
Preferably, for each image, the depth acquisition device processes the motion-related information of the image data by using the 3D scene model, so as to obtain the depth information corresponding to the image data.
Here, the depth acquisition device may utilize the 3D scene model together with various techniques, such as depth from motion (DFM) based on motion features, to obtain the depth information corresponding to the image data from the input image data and its corresponding motion-related information.
It should be noted that those skilled in the art may select other suitable methods for obtaining the depth information according to actual conditions and requirements, and are not limited to the methods mentioned in this specification.
Then, the sub-generating device generates, according to the obtained depth information, 3D media data corresponding to the initial media data that contains the image data with the depth information.
Specifically, the sub-generating device either directly uses the image data with the depth information as the 3D media data, or synchronizes the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
Continuing with the foregoing first example, the depth acquisition device takes the image data as the input of the 3D scene model model_1 and obtains the depth information corresponding to static regions such as the sky and the ground in each image. Furthermore, the depth acquisition device utilizes the 3D scene model model_1 and the DFM technique to obtain, based on the motion-related information of the moving regions in the image data, the depth information corresponding to moving regions such as the baseball players and the baseball in each image. Then, the sub-generating device generates, according to the obtained depth information corresponding to the image data, 3D media data corresponding to the video that contains the image data with the depth information.
It should be noted that the above example is merely illustrative of the technical solution of the present invention and does not limit the present invention. Those skilled in the art should appreciate that any implementation for generating, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data so as to play the 3D media data should be included within the scope of the present invention.
Preferably, the playing device also includes a data acquisition device (not shown) and a simultaneous playing device (not shown).
When live media data is played, the data acquisition device uses, as the initial media data, the part of the live media data that falls within a predetermined historical period.
After the playing device has performed the operation of determining the content type of the initial media data through to the operation of generating, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data, the simultaneous playing device plays the 3D media data corresponding to the initial media data simultaneously with the live media data.
For example, when playing live media data, the playing device uses the media data of the past 5 minutes of the live media data as the initial media data. The playing device then performs the operation of determining the content type of the initial media data through to the operation of generating, according to the image data corresponding to the initial media data and the 3D scene model, the 3D media data corresponding to the initial media data. Then, the simultaneous playing device plays the generated 3D media data and the live media data simultaneously.
According to the solution of the present invention, a corresponding 3D scene model is determined according to the content type of the media data, and the corresponding 3D media data is generated based on that 3D scene model, which improves the efficiency of generating 3D media data; moreover, the motion-related information of the media data can be combined with the determined 3D scene model to generate and play the corresponding 3D media data, which further improves the accuracy of generating 3D media data.
The software program of the present invention may be executed by a processor to implement the steps or functions described above. Similarly, the software program of the present invention (including related data structures) may be stored in a computer-readable recording medium, for example, a RAM memory, a magnetic or optical drive, a floppy disk or a similar device. In addition, some steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform the individual functions or steps.
In addition, part of the present invention may be applied as a computer program product, for example computer program instructions which, when executed by a computer, may invoke or provide the method and/or technical solution according to the present invention through the operation of the computer. The program instructions that invoke the method of the present invention may be stored in a fixed or removable recording medium, and/or transmitted via broadcast or via a data stream in other signal-bearing media, and/or stored in the working memory of a computer device that runs according to the program instructions. Here, an embodiment of the present invention includes a device which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to run the methods and/or technical solutions of the foregoing embodiments of the present invention.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be implemented in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, the embodiments should be regarded as exemplary and non-restrictive in every respect, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes which fall within the meaning and range of equivalency of the claims be included in the present invention. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a device claim may also be implemented by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Claims (14)
1. A method for generating 3D media data, wherein the method comprises the following step:
- when live media data is played, using, as initial media data, the part of the live media data that falls within a predetermined historical period;
wherein the method further comprises the following steps:
a. determining the content type of the initial media data, wherein the classification of the content type is determined based on scene information of the content played in the initial media data;
b. determining, according to the content type, a 3D scene model corresponding to the initial media data;
c. generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data;
wherein the method further comprises the following step:
- playing the 3D media data corresponding to the initial media data simultaneously with the live media data.
2. The method according to claim 1, wherein step c comprises the following steps:
c1. obtaining corresponding motion-related information according to the image data corresponding to the initial media data;
c2. generating, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data, so as to play the 3D media data.
3. The method according to claim 2, wherein the motion-related information includes at least one of the following:
- scene motion information;
- object motion information corresponding to at least one object in the image data.
4. The method according to claim 2 or 3, wherein step c2 comprises the following steps:
c21. obtaining, according to the motion-related information and the 3D scene model, depth information corresponding to the image data;
c22. generating, according to the obtained depth information, 3D media data corresponding to the initial media data that contains the image data with the depth information.
5. The method according to claim 4, wherein step c22 comprises the following step:
- synchronizing the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
6. The method according to claim 1, wherein the 3D media data includes any of the following:
- a pair of left-eye and right-eye images with parallax;
- a binocular stereoscopic video.
7. The method according to claim 1, wherein the method is performed by a user device.
8. A playing device for generating 3D media data, wherein the playing device comprises:
a data acquisition device, for using, when live media data is played, the part of the live media data that falls within a predetermined historical period as initial media data;
wherein the playing device further includes:
a content determining device, for determining the content type of the initial media data, wherein the classification of the content type is determined based on scene information of the content played in the initial media data;
a model determining device, for determining, according to the content type, a 3D scene model corresponding to the initial media data;
a generating device, for generating, according to the image data corresponding to the initial media data and the 3D scene model, 3D media data corresponding to the initial media data, so as to play the 3D media data;
wherein the playing device further includes:
a simultaneous playing device, for playing the 3D media data corresponding to the initial media data simultaneously with the live media data.
9. The playing device according to claim 8, wherein the generating device includes:
a motion acquisition device, for obtaining corresponding motion-related information according to the image data corresponding to the initial media data;
a stereo generating device, for generating, according to the motion-related information and the 3D scene model information, 3D media data corresponding to the initial media data, so as to play the 3D media data.
10. The playing device according to claim 9, wherein the motion-related information includes at least one of the following:
- scene motion information;
- object motion information corresponding to at least one object in the image data.
11. The playing device according to claim 9 or 10, wherein the stereo generating device includes:
a depth acquisition device, for obtaining, according to the motion-related information and the 3D scene model, depth information corresponding to the image data;
a sub-generating device, for generating, according to the obtained depth information, 3D media data corresponding to the initial media data that contains the image data with the depth information.
12. The playing device according to claim 11, wherein the sub-generating device is further configured to:
synchronize the image data with the depth information with the audio data of the initial media data to generate the 3D media data.
13. The playing device according to claim 8, wherein the 3D media data includes any of the following:
- a pair of left-eye and right-eye images with parallax;
- a binocular stereoscopic video.
14. The playing device according to claim 8, wherein the playing device is included in a user device.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410350305.5A CN104185008B (en) | 2014-07-22 | 2014-07-22 | Method and apparatus for generating 3D media data |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410350305.5A CN104185008B (en) | 2014-07-22 | 2014-07-22 | Method and apparatus for generating 3D media data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104185008A CN104185008A (en) | 2014-12-03 |
| CN104185008B true CN104185008B (en) | 2017-07-25 |
Family
ID=51965704
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410350305.5A Expired - Fee Related CN104185008B (en) | 2014-07-22 | 2014-07-22 | A kind of method and apparatus of generation 3D media datas |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104185008B (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11062183B2 (en) | 2019-05-21 | 2021-07-13 | Wipro Limited | System and method for automated 3D training content generation |
| CN115525181A (en) * | 2022-11-28 | 2022-12-27 | 深圳飞蝶虚拟现实科技有限公司 | Method and device for manufacturing 3D content, electronic device and storage medium |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103002297A (en) * | 2011-09-16 | 2013-03-27 | 联咏科技股份有限公司 | Method and device for generating dynamic depth value |
| EP2733670A1 (en) * | 2011-09-08 | 2014-05-21 | Samsung Electronics Co., Ltd | Apparatus and method for generating depth information |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130070049A1 (en) * | 2011-09-15 | 2013-03-21 | Broadcom Corporation | System and method for converting two dimensional to three dimensional video |
-
2014
- 2014-07-22 CN CN201410350305.5A patent/CN104185008B/en not_active Expired - Fee Related
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2733670A1 (en) * | 2011-09-08 | 2014-05-21 | Samsung Electronics Co., Ltd | Apparatus and method for generating depth information |
| CN103002297A (en) * | 2011-09-16 | 2013-03-27 | 联咏科技股份有限公司 | Method and device for generating dynamic depth value |
Also Published As
| Publication number | Publication date |
|---|---|
| CN104185008A (en) | 2014-12-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11861905B2 (en) | Methods and systems of spatiotemporal pattern recognition for video content development | |
| US10832057B2 (en) | Methods, systems, and user interface navigation of video content based spatiotemporal pattern recognition | |
| US11275949B2 (en) | Methods, systems, and user interface navigation of video content based spatiotemporal pattern recognition | |
| CN105745938B (en) | Multi-view audio and video interactive playback | |
| CN104394422B (en) | A kind of Video segmentation point acquisition methods and device | |
| US10405009B2 (en) | Generating videos with multiple viewpoints | |
| CN110213613B (en) | Image processing method, device and storage medium | |
| WO2019183235A1 (en) | Methods and systems of spatiotemporal pattern recognition for video content development | |
| WO2018053257A1 (en) | Methods and systems of spatiotemporal pattern recognition for video content development | |
| CN112509148A (en) | Interaction method and device based on multi-feature recognition and computer equipment | |
| CN115442658A (en) | Live broadcast method and device, storage medium, electronic equipment and product | |
| CN110519532A (en) | A kind of information acquisition method and electronic equipment | |
| CN112131431A (en) | Data processing method, data processing equipment and computer readable storage medium | |
| CN104185008B (en) | Method and apparatus for generating 3D media data | |
| Wang et al. | Context-dependent viewpoint sequence recommendation system for multi-view video | |
| CN105261041A (en) | Information processing method and electronic device | |
| US11902603B2 (en) | Methods and systems for utilizing live embedded tracking data within a live sports video stream | |
| US10137371B2 (en) | Method of recording and replaying game video by using object state recording method | |
| CN115756263A (en) | Script interaction method and device, storage medium, electronic equipment and product | |
| Hu | Impact of VR virtual reality technology on traditional video advertising production | |
| CN111382313B (en) | Dynamic detection data retrieval method, device and apparatus | |
| US20250272977A1 (en) | Interactive Video System For Sports Media | |
| Vanherle et al. | Automatic Camera Control and Directing with an Ultra-High-Definition Collaborative Recording System | |
| HK40035304A (en) | Data processing method and device, and computer readable storage medium | |
| Yao et al. | Beyond the Broadcast: Enhancing VR Tennis Broadcasting through Embedded Visualizations and Camera Techniques |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right |
Effective date of registration: 20180425 Address after: 201203 China (Shanghai) free trade pilot area 501-2, room 5, 5 Po Bo Road. Patentee after: Shanghai Tong view Thai Digital Technology Co., Ltd. Address before: 201204 Room 102, 4 Lane 299, Bi Sheng Road, Zhangjiang hi tech park, Pudong New Area, Shanghai. Patentee before: Shanghai Synacast Media Tech. Co., Ltd. |
|
| TR01 | Transfer of patent right | ||
| CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20170725 Termination date: 20200722 |
|
| CF01 | Termination of patent right due to non-payment of annual fee |