
CN102625122B - Recording device, recording method, playback device, and playback method - Google Patents


Info

Publication number
CN102625122B
CN102625122B (application number CN201210045765.8A)
Authority
CN
China
Prior art keywords
data
stream
view video
view
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210045765.8A
Other languages
Chinese (zh)
Other versions
CN102625122A (en)
Inventor
Shinobu Hattori (服部忍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN102625122A
Application granted
Publication of CN102625122B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • H04N21/42646Internal components of the client ; Characteristics thereof for reading from or writing on a non-volatile solid state storage medium, e.g. DVD, CD-ROM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4347Demultiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/84Television signal recording using optical recording
    • H04N5/85Television signal recording using optical recording on discs or drums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8227Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being at least another television signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a recording device, a recording method, a playback device, a playback method, a recording medium, and a program that make it possible for a recording medium such as a BD, on which a stream of base images and a stream of extended images obtained by encoding multi-view video data with a predictive coding method are recorded, to be played back even on a device that is not compatible with playback of multi-view video data. Encoding an MVC header in an access unit that stores Base view video is prohibited. A view component stored in an access unit without an MVC header is defined to be recognized as having a view_id of 0. The present invention is applicable to playback devices conforming to the BD-ROM standard.

Description

Recording device, recording method, playback device, and playback method
This application is a divisional application of the patent application for invention with international application number PCT/JP2010/055273, international filing date March 25, 2010, and national application number 201080001730.3, entitled "Recording device, recording method, playback device, playback method, recording medium, and program".
Technical field
The present invention relates to a recording device, a recording method, a playback device, a playback method, a recording medium, and a program, and more particularly to a recording device, a recording method, a playback device, a playback method, a recording medium, and a program that make it possible for a recording medium such as a BD, which stores a stream of base images (base image) and a stream of extended images (extended image) obtained by encoding multi-view video data with a predictive coding method, to be played back even on a device that is not compatible with playback of multi-view video data.
Background technology
Two-dimensional image content such as movies is the mainstream of content, but recently, stereoscopic image content that enables stereoscopic viewing has been attracting attention.
A dedicated device is necessary for displaying stereoscopic images. An example of such a device for stereoscopic viewing is the IP (Integral Photography) stereoscopic image system developed by NHK (Japan Broadcasting Corporation).
The image data of a stereoscopic image is made up of image data of multiple viewpoints (image data of images shot from multiple viewpoints). The greater the number of viewpoints, and the wider the range covered by the viewpoints, the more directions from which the subject can be viewed; a kind of "television in which depth can be seen" can thereby be realized.
Among stereoscopic images, the one with the smallest number of viewpoints is the stereo image (also called a 3D image) with two viewpoints. The image data of a stereo image is made up of data of an L image, which is the image observed by the left eye, and data of an R image, which is the image observed by the right eye.
On the other hand, high-resolution image content such as movies has a large data volume, and a large-capacity recording medium is necessary for recording such large-volume content.
An example of such a large-capacity recording medium is the Blu-ray (registered trademark) Disc (hereinafter also referred to as BD), such as the BD (Blu-ray (registered trademark))-ROM (Read Only Memory).
Citation List
Patent Literature
PTL 1: Japanese Unexamined Patent Application Publication No. 2005-348314
Summary of the invention
Technical problem
Incidentally, the BD standard does not yet specify how image data of stereoscopic images, including stereo images, is to be recorded on a BD, or how such image data is to be played back.
The image data of a stereo image is made up of two data streams: the data stream of the L image and the data stream of the R image. If these two data streams are recorded on a BD as they are, it may be impossible to play them back on BD players already in widespread use.
The present invention has been made in view of such circumstances, and makes it possible for a recording medium such as a BD, which stores a stream of base images and a stream of extended images obtained by encoding multi-view video data with a predictive coding method, to be played back even on a device that is not compatible with playback of multi-view video data.
Solution to Problem
A recording device according to one aspect of the present invention includes: encoding means for encoding multi-view video data with a predictive coding method and outputting a stream of base images and a stream of extended images, wherein the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information indicating that the data is data of an extended viewpoint.
The encoding means may remove the data header from a stream of base images that is obtained by encoding the multi-view video data with the predictive coding method and that is made up of data having the data header, and may output a stream of base images made up of data without the data header.
The encoding means may set a value equal to or greater than 1 in the data header as the identification information indicating that the data is data of an extended viewpoint, and may output the stream of extended images.
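The header handling above can be sketched as follows. This is a toy illustration in Python: the one-byte header layout and the function names are inventions for the sketch, not the actual H.264 MVC NAL unit header syntax, which carries additional fields.

```python
def encode_view_component(payload: bytes, view_id: int) -> bytes:
    """Attach or omit the data header (MVC header) for one view component.

    Base view (view_id == 0): the header is removed entirely, so a legacy
    decoder can treat the stream as a plain H.264/AVC stream.
    Extended (dependent) views: a header carrying view_id >= 1 is prepended.
    """
    if view_id == 0:
        return payload  # no header on base view data
    # Simplified 1-byte header holding only the view_id (the real MVC
    # header also carries fields such as priority_id and temporal_id).
    return bytes([view_id]) + payload


def parse_view_id(unit: bytes, has_header: bool) -> int:
    """A unit without a data header is defined to have view_id 0."""
    if not has_header:
        return 0
    return unit[0]
```

Under this convention the base image stream stays byte-identical to ordinary AVC data, which is what lets incompatible players ignore the extended stream.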
A recording method according to one aspect of the present invention includes the steps of: encoding multi-view video data with a predictive coding method, and outputting a stream of base images and a stream of extended images, wherein the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information indicating that the data is data of an extended viewpoint.
A program according to one aspect of the present invention causes a computer to execute processing including the steps of: encoding multi-view video data with a predictive coding method, and outputting a stream of base images and a stream of extended images, wherein the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information indicating that the data is data of an extended viewpoint.
A recording medium according to one aspect of the present invention stores a stream of base images and a stream of extended images obtained by encoding multi-view video data with a predictive coding method, wherein the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information indicating that the data is data of an extended viewpoint.
A playback device according to another aspect of the present invention includes: reading means for reading, from a recording medium, a base image stream and an extended image stream obtained by encoding multi-view video data with a predictive coding method, wherein the data making up the base image stream has no data header containing viewpoint identification information, and the data making up the extended image stream has a data header containing that identification information with a value equal to or greater than 1, the value equal to or greater than 1 indicating that the data is data of an extended viewpoint; and decoding means for performing processing sequentially, starting from the data of the viewpoint for which the smaller value is set as the identification information in the data header, while treating data in the base image stream that has no data header as data in whose data header 0 is set as the identification information, thereby decoding the data of the base image stream first and then the data of the extended image stream.
A playback method according to another aspect of the present invention includes the steps of: reading, from a recording medium, a base image stream and an extended image stream obtained by encoding multi-view video data with a predictive coding method, wherein the data making up the base image stream has no data header containing viewpoint identification information, and the data making up the extended image stream has a data header containing that identification information with a value equal to or greater than 1, the value equal to or greater than 1 indicating that the data is data of an extended viewpoint; and, when performing processing sequentially starting from the data of the viewpoint for which the smaller value is set as the identification information in the data header, treating data in the base image stream that has no data header as data in whose data header 0 is set as the identification information, and decoding the data of the base image stream first and then the data of the extended image stream.
A program according to another aspect of the present invention causes a computer to execute processing including the steps of: reading, from a recording medium, a base image stream and an extended image stream obtained by encoding multi-view video data with a predictive coding method, wherein the data making up the base image stream has no data header containing viewpoint identification information, and the data making up the extended image stream has a data header containing that identification information with a value equal to or greater than 1, the value equal to or greater than 1 indicating that the data is data of an extended viewpoint; and, when performing processing sequentially starting from the data of the viewpoint for which the smaller value is set as the identification information in the data header, treating data in the base image stream that has no data header as data in whose data header 0 is set as the identification information, and decoding the data of the base image stream first and then the data of the extended image stream.
In one aspect of the present invention, multi-view video data is encoded with a predictive coding method, and a stream of base images and a stream of extended images are output; the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information indicating that the data is data of an extended viewpoint.
In another aspect of the present invention, a stream of base images and a stream of extended images obtained by encoding multi-view video data with a predictive coding method are read from a recording medium, wherein the data making up the stream of base images has no data header containing viewpoint identification information, and the data making up the stream of extended images has a data header containing identification information with a value equal to or greater than 1 indicating that the data is data of an extended viewpoint; processing is performed sequentially starting from the data of the viewpoint for which the smaller value is set as the identification information in the data header, data in the stream of base images that has no data header is treated as data in whose data header 0 is set as the identification information, and the data of the stream of base images is decoded first, followed by the data of the stream of extended images.
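The decoding rule stated in these aspects can be sketched as follows. This is a minimal Python model with a hypothetical data type; in a real stream the identification information sits in an MVC header rather than a field on an object.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ViewUnit:
    view_id: Optional[int]  # None: no data header (base image stream)
    payload: str


def effective_view_id(unit: ViewUnit) -> int:
    # A unit without a data header is treated as having view_id 0.
    return 0 if unit.view_id is None else unit.view_id


def decode_order(units: list[ViewUnit]) -> list[str]:
    # Process from the smallest identification-information value upward,
    # so base image data is decoded before extended (dependent) view data.
    return [u.payload for u in sorted(units, key=effective_view_id)]
```

Because header-less units are mapped to view_id 0, the base image stream always sorts first, satisfying the "base first, then extended" decoding order without the base stream ever carrying a header.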
Advantageous Effects of the Invention
According to the present invention, a recording medium such as a BD, which stores a stream of base images and a stream of extended images obtained by encoding multi-view video data with a predictive coding method, can be played back even on a device that is not compatible with playback of multi-view video data.
Brief description of the drawings
Fig. 1 shows a configuration example of a playback system including a playback device to which the present invention is applied.
Fig. 2 shows an example of shooting.
Fig. 3 is a block diagram showing a configuration example of an MVC encoder.
Fig. 4 shows an example of reference pictures.
Fig. 5 shows a configuration example of a TS.
Fig. 6 shows another configuration example of a TS.
Fig. 7 shows yet another configuration example of a TS.
Fig. 8 shows an example of AV stream management.
Fig. 9 shows the structures of a Main Path and a Sub Path.
Fig. 10 shows an example of the management structure of files recorded on an optical disc.
Fig. 11 shows the syntax of a PlayList file.
Fig. 12 shows an example of how the reserved_for_future_use in Fig. 11 is used.
Fig. 13 shows the meanings of the values of 3D_PL_type.
Fig. 14 shows the meanings of the values of view_type.
Fig. 15 shows the syntax of PlayList() in Fig. 11.
Fig. 16 shows the syntax of SubPath() in Fig. 15.
Fig. 17 shows the syntax of SubPlayItem(i) in Fig. 16.
Fig. 18 shows the syntax of PlayItem() in Fig. 15.
Fig. 19 shows the syntax of STN_table() in Fig. 18.
Fig. 20 shows a configuration example of a playback device.
Fig. 21 shows a configuration example of the decoder unit in Fig. 20.
Fig. 22 shows a configuration for performing video stream processing.
Fig. 23 shows a configuration for performing video stream processing.
Fig. 24 shows another configuration for performing video stream processing.
Fig. 25 shows an example of an access unit (Access Unit).
Fig. 26 shows yet another configuration for performing video stream processing.
Fig. 27 shows the configuration of the synthesizing unit and its preceding stage.
Fig. 28 is another diagram showing the configuration of the synthesizing unit and its preceding stage.
Fig. 29 is a block diagram showing a configuration example of a software production processing unit.
Fig. 30 shows examples of configurations each including the software production processing unit.
Fig. 31 shows a configuration example of a 3D video TS generation unit provided in the recording device.
Fig. 32 shows another configuration example of the 3D video TS generation unit provided in the recording device.
Fig. 33 shows yet another configuration example of the 3D video TS generation unit provided in the recording device.
Fig. 34 shows a configuration on the playback device side for decoding access units.
Fig. 35 shows a decoding process.
Fig. 36 shows a Closed GOP structure.
Fig. 37 shows an Open GOP structure.
Fig. 38 shows the maximum numbers of frames and fields within one GOP.
Fig. 39 shows a Closed GOP structure.
Fig. 40 shows an Open GOP structure.
Fig. 41 shows an example of decoding start positions set in an EP_map.
Fig. 42 shows a problem that arises when the GOP structure of Dependent view video is not defined.
Fig. 43 shows the concept of picture search.
Fig. 44 shows the structure of an AV stream recorded on an optical disc.
Fig. 45 shows an example of a clip AV stream.
Fig. 46 conceptually shows an EP_map corresponding to the clip AV stream in Fig. 45.
Fig. 47 shows an example of the data structure of the source packet indicated by SPN_EP_start.
Fig. 48 is a block diagram showing a configuration example of the hardware of a computer.
Detailed description of the invention
<First Embodiment>
[Configuration example of the playback system]
Fig. 1 shows a configuration example of a playback system including a playback device 1 to which the present invention is applied.
As shown in Fig. 1, this playback system includes a playback device 1 and a display device 3 connected by an HDMI (High Definition Multimedia Interface) cable or the like. An optical disc 2 such as a BD is mounted in the playback device 1.
Streams necessary for displaying a stereo image (also called a 3D image) with two viewpoints are recorded on the optical disc 2.
The playback device 1 is a player compatible with 3D playback of the streams recorded on the optical disc 2. The playback device 1 plays back the streams recorded on the optical disc 2 and displays the 3D image obtained by the playback on the display device 3, which is formed by a television receiver or the like. Audio is likewise played back by the playback device 1 and output from speakers or the like provided in the display device 3.
Various methods have been proposed as 3D image display methods. Here, the type-1 display method and the type-2 display method described below are adopted as the 3D image display methods.
The type-1 display method is a method of displaying a 3D image whose data is made up of data of the image observed by the left eye (L image) and data of the image observed by the right eye (R image), by alternately displaying the L image and the R image.
The type-2 display method is a method of displaying a 3D image by displaying an L image and an R image that are generated using the data of an original image, which serves as the source for generating the 3D image, and Depth data. The data of the 3D image used in the type-2 display method is made up of the data of the original image and the Depth data that is supplied together with the original image so that the L image and the R image can be generated.
The type-1 display method requires glasses for viewing/listening, whereas the type-2 display method enables the 3D image to be viewed/listened to without glasses.
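As a rough illustration of the type-2 approach, an L or R image can be synthesized from an original image and per-pixel Depth data by shifting pixels horizontally in proportion to their depth. The following is a toy one-dimensional Python sketch under that assumption, not an actual depth-image-based rendering algorithm, which must also handle occlusion ordering and proper hole filling.

```python
def synthesize_row(original: list[int], depth: list[int], direction: int) -> list[int]:
    """Shift each pixel of one image row horizontally by its depth value.

    direction = -1 yields a row of the L image, +1 a row of the R image
    (the sign convention here is an arbitrary choice for the sketch).
    """
    row = [0] * len(original)
    for x, (pixel, d) in enumerate(zip(original, depth)):
        tx = x + direction * d
        if 0 <= tx < len(row):
            row[tx] = pixel  # later pixels overwrite earlier ones (toy rule)
    # crude hole filling: propagate the previous value into empty positions
    for x in range(1, len(row)):
        if row[x] == 0:
            row[x] = row[x - 1]
    return row
```

With zero depth everywhere the row is unchanged, which matches the intuition that a flat scene produces identical L and R images.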
Streams are recorded on the optical disc 2 so that a 3D image can be displayed by either the type-1 display method or the type-2 display method.
For example, H.264 AVC (Advanced Video Coding)/MVC (Multi-view Video Coding) is adopted as the encoding method for recording such streams on the optical disc 2.
[H.264 AVC/MVC Profile]
In H.264 AVC/MVC, an image stream called Base view video and an image stream called Dependent view video are defined. Hereinafter, H.264 AVC/MVC is simply referred to as MVC where appropriate.
Fig. 2 shows an example of shooting.
As shown in Fig. 2, the same subject is shot by a camera for the L image and a camera for the R image. The elementary streams of the video shot by the camera for the L image and the camera for the R image are input to an MVC encoder.
Fig. 3 is a block diagram showing a configuration example of the MVC encoder.
As shown in Fig. 3, the MVC encoder 11 includes an H.264/AVC encoder 21, an H.264/AVC decoder 22, a Depth calculation unit 23, a Dependent view video encoder 24, and a multiplexer 25.
The stream of video #1 shot by the camera for the L image is input to the H.264/AVC encoder 21 and the Depth calculation unit 23. The stream of video #2 shot by the camera for the R image is input to the Depth calculation unit 23 and the Dependent view video encoder 24. Alternatively, the stream of video #2 may be input to the H.264/AVC encoder 21 and the Depth calculation unit 23, and the stream of video #1 may be input to the Depth calculation unit 23 and the Dependent view video encoder 24.
The H.264/AVC encoder 21 encodes the stream of video #1 into, for example, an H.264 AVC/High profile video stream. The H.264/AVC encoder 21 outputs the AVC video stream obtained by the encoding to the H.264/AVC decoder 22 and the multiplexer 25 as the Base view video stream.
The H.264/AVC decoder 22 decodes the AVC video stream supplied from the H.264/AVC encoder 21, and outputs the stream of video #1 obtained by the decoding to the Dependent view video encoder 24.
The Depth calculation unit 23 calculates Depth based on the stream of video #1 and the stream of video #2, and outputs the calculated Depth data to the multiplexer 25.
The Dependent view video encoder 24 encodes the stream of video #1 supplied from the H.264/AVC decoder 22 and the stream of video #2 input from outside, and outputs a Dependent view video stream.
For Base view video, predictive coding that uses another stream as a reference image is not allowed. As shown in Fig. 4, however, predictive coding that uses Base view video as a reference image is allowed for Dependent view video. For example, when encoding is performed with the L image as the Base view video and the R image as the Dependent view video, the data volume of the resulting Dependent view video stream is smaller than that of the Base view video stream.
Note that, since the encoding is based on H.264/AVC, prediction in the time direction is performed on the Base view video. For the Dependent view video as well, prediction in the time direction is performed in addition to the prediction between views. To decode the Dependent view video, the decoding of the corresponding Base view video must be completed first, since the Base view video is referenced at the time of encoding.
Subordinate view video encoder 24 is by by utilizing prediction between these views subordinate view video flowing obtaining of encoding to output to multiplexer 25.
Multiplexer 25 will provide the basic view video flowing that comes, the subordinate view video flowing (data of Depth) coming is provided and provide the subordinate view video flowing coming to be multiplexed into for example MPEG2TS from subordinate view video encoder 24 from Depth computing unit 23 from encoder 21 H.264/AVC. Basic view video flowing and subordinate view video flowing can be multiplexed in single MPEG2TS, or can be included in the MPEG2TS of separation.
Multiplexer 25 is exported generated TS (MPEG2TS). The TS exporting from multiplexer 25 is recorded in the CD 2 recording equipment together with other management datas, and is provided for playback apparatus 1 in being recorded in CD 2.
When the dependent view video used together with the base view video in the type-1 display method needs to be distinguished from the dependent view video (Depth) used together with the base view video in the type-2 display method, the former will be referred to as D1 view video and the latter as D2 view video.
Also, 3D playback in the type-1 display method performed using the base view video and the D1 view video will be referred to as B-D1 playback. 3D playback in the type-2 display method performed using the base view video and the D2 view video will be referred to as B-D2 playback.
When performing B-D1 playback in response to an instruction from the user or the like, the playback apparatus 1 reads out and plays the base view video stream and the D1 view video stream from the optical disc 2.
Likewise, when performing B-D2 playback, the playback apparatus 1 reads out and plays the base view video stream and the D2 view video stream from the optical disc 2.
Further, when performing ordinary 2D image playback, the playback apparatus 1 reads out and plays only the base view video stream from the optical disc 2.
Since the base view video stream is an AVC video stream encoded with H.264/AVC, any player compatible with the BD format can play the base view video stream and display a 2D image.
In the following, the case where the dependent view video is the D1 view video will mainly be described. When simply referred to as dependent view video, it denotes the D1 view video. The D2 view video is likewise recorded on the optical disc 2 and played in the same manner as the D1 view video.
[Configuration example of a TS]
Fig. 5 shows a configuration example of a TS.
The streams of base view video, dependent view video, primary audio, base PG, dependent PG, base IG, and dependent IG are multiplexed in the main TS in Fig. 5. In this way, the dependent view video stream may be included in the main TS together with the base view video stream.
A main TS and a sub TS are recorded on the optical disc 2. The main TS is a TS that includes at least the base view video stream. The sub TS is a TS that includes streams other than the base view video stream and is used together with the main TS.
Base view and dependent view streams are prepared for the later-described PG and IG as well, so that 3D display is available for them as it is for video.
The base view planes of PG and IG obtained by decoding the respective streams are combined with the plane of the base view video obtained by decoding the base view video stream, and displayed. Similarly, the dependent view planes of PG and IG are combined with the plane of the dependent view video obtained by decoding the dependent view video stream, and displayed.
For example, if the base view video stream is the stream of L images and the dependent view video stream is the stream of R images, the base view streams of PG and IG are graphics streams for the L image, and the dependent view PG stream and IG stream are graphics streams for the R image.
Conversely, if the base view video stream is the stream of R images and the dependent view video stream is the stream of L images, the base view streams of PG and IG are graphics streams for the R image, and the dependent view PG stream and IG stream are graphics streams for the L image.
Fig. 6 shows another configuration example of TSs.
The streams of base view video and dependent view video are multiplexed in the main TS in Fig. 6.
On the other hand, the streams of primary audio, base PG, dependent PG, base IG, and dependent IG are multiplexed in the sub TS.
Thus, the video streams may be multiplexed in the main TS while the PG and IG streams are multiplexed in the sub TS.
Fig. 7 shows yet another configuration example of TSs.
The streams of base view video, primary audio, base PG, dependent PG, base IG, and dependent IG are multiplexed in the main TS in part A of Fig. 7.
On the other hand, the dependent view video stream is included in the sub TS.
Thus, the dependent view video stream may be included in a TS different from the TS that includes the base view video stream.
The streams of base view video, primary audio, PG, and IG are multiplexed in the main TS in part B of Fig. 7. On the other hand, the streams of dependent view video, base PG, dependent PG, base IG, and dependent IG are multiplexed in the sub TS.
The PG and IG included in the main TS are streams for 2D playback. The streams included in the sub TS are streams for 3D playback.
Thus, the PG streams and IG streams need not be shared between 2D playback and 3D playback.
As described above, the base view video stream and the dependent view video stream may be included in different MPEG-2 TSs. An advantage of recording the base view video stream and the dependent view video stream in different MPEG-2 TSs will now be explained.
For example, consider a case where the bit rate allowed for multiplexing into a single MPEG-2 TS is limited. In this case, when both the base view video stream and the dependent view video stream are included in a single MPEG-2 TS, the bit rate of each stream must be reduced to satisfy the constraint. As a result, image quality deteriorates.
By including each stream in a different MPEG-2 TS, the need to reduce the bit rate is eliminated, so that deterioration of image quality can be prevented.
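The bit-rate trade-off described above can be illustrated with some simple arithmetic. The 48 Mbps cap below is a hypothetical figure chosen for illustration, not a value stated in this document:

```python
# Illustrative arithmetic only: when both video streams share one MPEG-2 TS
# whose multiplexing bit rate is capped, each stream must fit inside the
# shared budget; with separate TSs, each stream can use the full budget.
TS_CAP_MBPS = 48  # assumed per-TS multiplexing limit (hypothetical)

def max_rate_per_stream(streams_in_ts: int) -> float:
    """Even split of the TS bit-rate budget among the streams it carries."""
    return TS_CAP_MBPS / streams_in_ts

# Single TS carrying both views: 24 Mbps each.
# One view per TS: the full 48 Mbps each.
```

With an even split this halves the budget per stream; in practice the split need not be even, but the total within one TS is still bounded by the cap.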
[Application format]
Fig. 8 shows an example of AV stream management by the playback apparatus 1.
AV stream management is performed using the two layers of PlayList and Clip shown in Fig. 8. An AV stream may be recorded not only on the optical disc 2 but also in the local storage of the playback apparatus 1.
Here, a pair consisting of an AV stream and the Clip information accompanying it is treated as one object, and the two together will be referred to as a Clip. A file storing an AV stream will be referred to as an AV stream file, and a file storing Clip information will also be referred to as a Clip information file.
An AV stream is laid out on a time axis, and the access point of each Clip is specified in a PlayList mainly by a time stamp. A Clip information file is used, for example, to find the address at which decoding is to start in the AV stream.
A PlayList is a set of playback sections of AV streams. One playback section within an AV stream is called a PlayItem. A PlayItem is represented by a pair consisting of an IN point and an OUT point of the playback section on the time axis. As shown in Fig. 8, a PlayList is made up of one or more PlayItems.
The first PlayList from the left in Fig. 8 includes two PlayItems, which respectively reference the first half and the latter half of the AV stream included in the left-side Clip.
The second PlayList from the left includes one PlayItem, which references the entire AV stream included in the right-side Clip.
The third PlayList from the left includes two PlayItems, which respectively reference a portion of the AV stream included in the left-side Clip and a portion of the AV stream included in the right-side Clip.
For example, if the left-side PlayItem included in the first PlayList from the left is specified as the playback target by the disc navigation program, playback of the first half of the AV stream included in the left-side Clip, which is referenced by that PlayItem, is performed. Thus, PlayLists are used as playback management information for managing playback of AV streams.
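The PlayList/PlayItem relationship described above can be sketched as a small data model. All names and time values here are hypothetical, chosen only to mirror the first PlayList of Fig. 8:

```python
# Minimal sketch (hypothetical names and times): a PlayItem is an
# IN-point/OUT-point pair referencing a Clip, and a PlayList is an
# ordered list of PlayItems forming the Main Path.
from dataclasses import dataclass

@dataclass
class PlayItem:
    clip_name: str  # Clip referenced by this playback section
    in_time: int    # IN point on the Clip's time axis
    out_time: int   # OUT point on the Clip's time axis

@dataclass
class PlayList:
    play_items: list

# First PlayList in Fig. 8: two PlayItems covering the first and
# latter halves of the AV stream of the left-side Clip.
pl = PlayList([PlayItem("clip_left", 0, 500), PlayItem("clip_left", 500, 1000)])

def sections(playlist: PlayList):
    """Return the playback sections as (clip, in, out) tuples."""
    return [(pi.clip_name, pi.in_time, pi.out_time) for pi in playlist.play_items]
```

Specifying one of these PlayItems as the playback target would then play only the corresponding section of the Clip's AV stream.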
A playback path made up of an arrangement of one or more PlayItems in a PlayList will be referred to as a Main Path.
A playback path made up of an arrangement of one or more SubPlayItems in a PlayList will be referred to as a Sub Path.
Fig. 9 shows the structure of the Main Path and Sub Paths.
A PlayList can have one Main Path and one or more Sub Paths.
The above-described base view video stream is managed as a stream referenced by a PlayItem making up the Main Path. The dependent view video stream is managed as a stream referenced by a SubPlayItem making up a Sub Path.
The PlayList in Fig. 9 has one Main Path, made up of an arrangement of three PlayItems, and three Sub Paths.
IDs are assigned to the PlayItems making up the Main Path in order from the beginning. IDs, namely SubPath_id=0, SubPath_id=1, and SubPath_id=2, are likewise assigned to the Sub Paths in order from the beginning.
In the example in Fig. 9, the Sub Path of SubPath_id=0 includes one SubPlayItem, the Sub Path of SubPath_id=1 includes two SubPlayItems, and the Sub Path of SubPath_id=2 includes one SubPlayItem.
The Clip AV stream referenced by one PlayItem includes at least a video stream (main image data).
The Clip AV stream may or may not include one or more audio streams to be played at the same timing as (in synchronization with) the video stream included in the Clip AV stream.
The Clip AV stream may or may not include one or more bitmap subtitle (PG (Presentation Graphics)) streams to be played in synchronization with the video stream included in the Clip AV stream.
The Clip AV stream may or may not include one or more IG (Interactive Graphics) streams to be played in synchronization with the video stream included in the Clip AV stream. IG streams are used to display graphics, such as buttons operated by the user.
In the Clip AV stream referenced by one PlayItem, a video stream is multiplexed together with zero or more audio streams, zero or more PG streams, and zero or more IG streams to be played in synchronization with it.
Furthermore, a SubPlayItem references a video stream, an audio stream, a PG stream, or the like that is a stream different from the Clip AV stream referenced by the PlayItem.
Management of AV streams using such PlayLists, PlayItems, and SubPlayItems is described in Japanese Unexamined Patent Application Publication No. 2008-252740 and Japanese Unexamined Patent Application Publication No. 2005-348314.
[Directory structure]
Fig. 10 shows an example of the management structure of files recorded on the optical disc 2.
As shown in Fig. 10, files are managed hierarchically by a directory structure. One root directory is created on the optical disc 2. Everything under the root directory is the range managed by one recording/playback system.
A BDMV directory is placed under the root directory.
An Index file, which is a file given the name "Index.bdmv", and a MovieObject file, which is a file given the name "MovieObject.bdmv", are stored immediately below the BDMV directory.
A BACKUP directory, a PLAYLIST directory, a CLIPINF directory, a STREAM directory, and so forth are also provided below the BDMV directory.
PlayList files describing PlayLists are stored in the PLAYLIST directory. Each PlayList file is given a name combining a five-digit number and the extension ".mpls". The file name "00000.mpls" is given to the one PlayList file shown in Fig. 10.
Clip information files are stored in the CLIPINF directory. Each Clip information file is given a name combining a five-digit number and the extension ".clpi".
The file names "00001.clpi", "00002.clpi", and "00003.clpi" are given to the three Clip information files in Fig. 10, respectively. Hereinafter, Clip information files will be referred to as clpi files as appropriate.
For example, the clpi file "00001.clpi" is a file describing information regarding the Clip of the base view video.
The clpi file "00002.clpi" is a file describing information regarding the Clip of the D2 view video.
The clpi file "00003.clpi" is a file describing information regarding the Clip of the D1 view video.
Stream files are stored in the STREAM directory. Each stream file is given a name combining a five-digit number and the extension ".m2ts", or a name combining a five-digit number and the extension ".ilvt". Hereinafter, files given the extension ".m2ts" will be referred to as m2ts files as appropriate, and files given the extension ".ilvt" as ilvt files.
The m2ts file "00001.m2ts" is a file for 2D playback. The base view video stream is read out by specifying this file.
The m2ts file "00002.m2ts" is a D2 view video stream file, and the m2ts file "00003.m2ts" is a D1 view video stream file.
The ilvt file "10000.ilvt" is a file for B-D1 playback. The base view video stream and the D1 view video stream are read out by specifying this file.
The ilvt file "20000.ilvt" is a file for B-D2 playback. The base view video stream and the D2 view video stream are read out by specifying this file.
In addition to the directories shown in Fig. 10, a directory storing audio stream files and the like is also provided below the BDMV directory.
[Syntax of each piece of data]
Fig. 11 shows the syntax of a PlayList file.
A PlayList file is a file given the extension ".mpls" and stored in the PLAYLIST directory in Fig. 10.
type_indicator in Fig. 11 indicates the type of the "xxxxx.mpls" file.
version_number indicates the version number of the "xxxxx.mpls" file. version_number is made up of four digits. For example, "0240", denoting the "3D Spec version", is set for a PlayList file for 3D playback.
PlayList_start_address indicates the start address of PlayList(), in units of relative byte count from the first byte of the PlayList file.
PlayListMark_start_address indicates the start address of PlayListMark(), in units of relative byte count from the first byte of the PlayList file.
ExtensionData_start_address indicates the start address of ExtensionData(), in units of relative byte count from the first byte of the PlayList file.
A 160-bit reserved_for_future_use is included after ExtensionData_start_address.
Parameters relating to playback control of the PlayList, such as playback restrictions, are stored in AppInfoPlayList().
Parameters relating to the Main Path, Sub Paths, and so forth are stored in PlayList(). The contents of PlayList() will be described later.
PlayList mark information, that is, information relating to marks serving as jump destinations of user operations or of commands instructing chapter jumps and the like, is stored in PlayListMark().
Private data may be inserted in ExtensionData().
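A reader for the header fields just described might look as follows. This is a sketch under assumptions: the exact byte layout (a 4-byte type indicator, a 4-byte version number, then three 32-bit big-endian start addresses) and the "MPLS" marker are illustrative, not taken from this document:

```python
import struct

# Sketch under assumed field widths: parse the PlayList file header fields
# named in Fig. 11. Start addresses are byte offsets relative to the first
# byte of the file, as the document states.
def parse_mpls_header(data: bytes) -> dict:
    type_indicator = data[0:4].decode("ascii")
    version_number = data[4:8].decode("ascii")  # four digits, e.g. "0240"
    playlist_start, mark_start, ext_start = struct.unpack_from(">III", data, 8)
    return {
        "type_indicator": type_indicator,
        "version_number": version_number,
        "PlayList_start_address": playlist_start,
        "PlayListMark_start_address": mark_start,
        "ExtensionData_start_address": ext_start,
    }

# Synthetic example: a 3D-spec ("0240") header with made-up offsets.
header = parse_mpls_header(
    b"MPLS" + b"0240" + struct.pack(">III", 58, 256, 512) + b"\x00" * 20
)
```

A version_number of "0240" would mark the file as a 3D-spec PlayList per the text above.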
Fig. 12 shows a specific example of the description of a PlayList file.
As shown in Fig. 12, a 2-bit 3D_PL_type and a 1-bit view_type are described in the PlayList file.
3D_PL_type indicates the type of the PlayList.
view_type indicates whether the base view video stream whose playback is managed by the PlayList is an L image (L view) stream or an R image (R view) stream.
Fig. 13 shows the meanings of the values of 3D_PL_type.
A 3D_PL_type value of 00 indicates that this is a PlayList for 2D playback.
A 3D_PL_type value of 01 indicates that this is a PlayList for B-D1 playback of 3D playback.
A 3D_PL_type value of 10 indicates that this is a PlayList for B-D2 playback of 3D playback.
For example, when the value of 3D_PL_type is 01 or 10, 3DPlayList information is registered in ExtensionData() of the PlayList file. For example, information relating to reading out the base view video stream and the dependent view video stream from the optical disc 2 is registered as the 3DPlayList information.
Fig. 14 shows the meanings of the values of view_type.
When 3D playback is performed, a view_type value of 0 indicates that the base view video stream is an L view stream. When 2D playback is performed, a view_type value of 0 indicates that the base view video stream is an AVC video stream.
A view_type value of 1 indicates that the base view video stream is an R view stream.
With the view_type described in the PlayList file, the playback apparatus 1 can identify whether the base view video stream is an L view stream or an R view stream.
For example, when video signals are output to the display device 3 via an HDMI cable, the playback apparatus 1 may be required to output the L view signal and the R view signal while distinguishing between them.
By enabling the playback apparatus 1 to identify whether the base view video stream is an L view stream or an R view stream, the playback apparatus 1 can output the L view signal and the R view signal while distinguishing between them.
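The value tables of Figs. 13 and 14 can be restated as a lookup. This is an interpretation sketch, not a bitstream parser; the function name is hypothetical:

```python
# Lookup of the 2-bit 3D_PL_type values (Fig. 13) and interpretation of
# the 1-bit view_type (Fig. 14).
THREE_D_PL_TYPE = {
    0b00: "PlayList for 2D playback",
    0b01: "PlayList for B-D1 playback (3D)",
    0b10: "PlayList for B-D2 playback (3D)",
}

def base_view_is(view_type: int, mode: str) -> str:
    """What the base view video stream is, given view_type and playback mode.

    view_type 0 means an L view stream during 3D playback, or a plain AVC
    video stream during 2D playback; view_type 1 means an R view stream.
    """
    if view_type == 1:
        return "R view"
    return "L view" if mode == "3D" else "AVC video stream"
```

Such a lookup is what lets the playback apparatus label its two output signals as L view and R view when driving the display over HDMI.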
Fig. 15 shows the syntax of PlayList() in Fig. 11.
length is a 32-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of PlayList(). That is, length indicates the number of bytes from reserved_for_future_use to the end of the PlayList.
A 16-bit reserved_for_future_use is prepared after length.
number_of_PlayItems is a 16-bit field indicating the number of PlayItems in the PlayList. In the case of the example in Fig. 9, the number of PlayItems is 3. PlayItem_id values are assigned from 0 in the order in which PlayItem() appears in the PlayList. For example, PlayItem_id=0, 1, 2 are assigned in Fig. 9.
number_of_SubPaths is a 16-bit field indicating the number of Sub Paths in the PlayList. In the case of the example in Fig. 9, the number of Sub Paths is 3. SubPath_id values are assigned from 0 in the order in which SubPath() appears in the PlayList. For example, SubPath_id=0, 1, 2 are assigned in Fig. 9. Thereafter, PlayItem() is referenced as many times as there are PlayItems, and SubPath() as many times as there are Sub Paths.
Fig. 16 shows the syntax of SubPath() in Fig. 15.
length is a 32-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of SubPath(). That is, length indicates the number of bytes from reserved_for_future_use to the end of the Sub Path.
A 16-bit reserved_for_future_use is prepared after length.
SubPath_type is an 8-bit field indicating the type of the application of the Sub Path. For example, SubPath_type is used to indicate whether the type of the Sub Path is audio, bitmap subtitles, or text subtitles.
A 15-bit reserved_for_future_use is prepared after SubPath_type.
is_repeat_SubPath is a 1-bit field specifying the playback method of the Sub Path, and indicates whether playback of the Sub Path is repeated during playback of the Main Path or performed only once. This field is used, for example, when the playback timing of the Clip referenced by the Main Path differs from that of the Clip referenced by the Sub Path (for example, when the Main Path is a path for a slide show of still images and the Sub Path is a path for audio used as background music).
An 8-bit reserved_for_future_use is prepared after is_repeat_SubPath.
number_of_SubPlayItems is an 8-bit field indicating the number of SubPlayItems (number of entries) in one Sub Path. For example, the number_of_SubPlayItems of the SubPlayItem of SubPath_id=0 in Fig. 9 is 1, and the number_of_SubPlayItems of the SubPlayItems of SubPath_id=1 is 2. Thereafter, SubPlayItem() is referenced as many times as there are SubPlayItems.
Fig. 17 shows the syntax of SubPlayItem(i) in Fig. 16.
length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of SubPlayItem().
SubPlayItem(i) in Fig. 17 is described separately for the case where the SubPlayItem references one Clip and the case where it references multiple Clips.
The case where the SubPlayItem references one Clip will be described.
Clip_Information_file_name[0] indicates the Clip to be referenced.
Clip_codec_identifier[0] indicates the codec method of the Clip. reserved_for_future_use is included after Clip_codec_identifier[0].
is_multi_Clip_entries is a flag indicating whether or not multiple Clips are registered. When the is_multi_Clip_entries flag is set, the syntax for the case where the SubPlayItem references multiple Clips is referenced.
ref_to_STC_id[0] is information relating to an STC discontinuity (a discontinuity of the system time base).
SubPlayItem_IN_time indicates the start position of the playback section of the Sub Path, and SubPlayItem_OUT_time indicates the end position.
sync_PlayItem_id and sync_start_PTS_of_PlayItem indicate the time at which the Sub Path starts playback on the time axis of the Main Path.
SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem are used in common by the Clips referenced by the SubPlayItem.
The case of "if is_multi_Clip_entries==1b", in which the SubPlayItem references multiple Clips, will now be described.
num_of_Clip_entries indicates the number of Clips to be referenced. The number of Clip_Information_file_name[SubClip_entry_id] entries specifies the number of Clips excluding Clip_Information_file_name[0].
Clip_codec_identifier[SubClip_entry_id] indicates the codec method of the Clip.
ref_to_STC_id[SubClip_entry_id] is information relating to an STC discontinuity (a discontinuity of the system time base). reserved_for_future_use is included after ref_to_STC_id[SubClip_entry_id].
Fig. 18 shows the syntax of PlayItem() in Fig. 15.
length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of PlayItem().
Clip_Information_file_name[0] indicates the file name of the Clip information file of the Clip referenced by the PlayItem. Note that the same five-digit number is included in the file name of the m2ts file containing the Clip and in the file name of the corresponding Clip information file.
Clip_codec_identifier[0] indicates the codec method of the Clip. reserved_for_future_use is included after Clip_codec_identifier[0]. is_multi_angle and connection_condition are included after reserved_for_future_use.
ref_to_STC_id[0] is information relating to an STC discontinuity (a discontinuity of the system time base).
IN_time indicates the start position of the playback section of the PlayItem, and OUT_time indicates the end position.
UO_mask_table(), PlayItem_random_access_mode, and still_mode are included after OUT_time.
STN_table() includes information on the AV streams referenced by the target PlayItem. Also, when there is a Sub Path to be played in association with the target PlayItem, information on the AV streams referenced by the SubPlayItems making up that Sub Path is also included.
Fig. 19 shows the syntax of STN_table() in Fig. 18.
STN_table() is set as an attribute of the PlayItem.
length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of STN_table(). A 16-bit reserved_for_future_use is prepared after length.
number_of_video_stream_entries indicates the number of streams that are entered (registered) in STN_table() and given a video_stream_id.
video_stream_id is information for identifying a video stream. For example, the base view video stream is specified by this video_stream_id.
The ID of the dependent view video stream may be defined in STN_table(), or may be obtained by calculation, for example, by adding a predetermined value to the ID of the base view video stream.
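The calculated-ID alternative just mentioned can be sketched in one line. The offset value below is a placeholder; the document says only that a predetermined value is added, not what that value is:

```python
# The dependent view video stream ID may be derived by adding a
# predetermined value to the base view video_stream_id. The offset used
# here is a hypothetical placeholder, not a value from this document.
DEPENDENT_VIEW_ID_OFFSET = 1  # assumed "predetermined value"

def dependent_view_stream_id(base_video_stream_id: int) -> int:
    """Derive the dependent view stream ID from the base view stream ID."""
    return base_video_stream_id + DEPENDENT_VIEW_ID_OFFSET
```

Deriving the ID this way avoids having to register a second entry in STN_table() for the dependent view stream.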
video_stream_number is the video stream number as seen from the user, used for video switching.
number_of_audio_stream_entries indicates the number of streams of the first audio streams that are entered in STN_table() and given an audio_stream_id. audio_stream_id is information for identifying an audio stream, and audio_stream_number is the audio stream number as seen from the user, used for audio switching.
number_of_audio_stream2_entries indicates the number of streams of the second audio streams that are entered in STN_table() and given an audio_stream_id2. audio_stream_id2 is information for identifying an audio stream, and audio_stream_number is the audio stream number as seen from the user, used for audio switching. In this example, the audio to be played can be switched.
number_of_PG_txtST_stream_entries indicates the number of streams that are entered in STN_table() and given a PG_txtST_stream_id. Among these, PG streams obtained by run-length encoding bitmap subtitles, and text subtitle files (txtST), are entered. PG_txtST_stream_id is information for identifying a subtitle stream, and PG_txtST_stream_number is the subtitle stream number as seen from the user, used for subtitle switching.
number_of_IG_stream_entries indicates the number of streams that are entered in STN_table() and given an IG_stream_id. Among these, IG streams are entered. IG_stream_id is information for identifying an IG stream, and IG_stream_number is the graphics stream number as seen from the user, used for graphics switching.
The IDs of the main TS and the sub TS are also registered in STN_table(). The fact that these IDs are not the IDs of elementary streams but the IDs of TSs is described in stream_attribute().
[Configuration example of the playback apparatus 1]
The block diagram of Fig. 20 shows a configuration example of the playback apparatus 1.
The controller 51 executes a prepared control program to control the overall operation of the playback apparatus 1.
For example, the controller 51 controls the disk drive 52 to read out a PlayList file for 3D playback. The controller 51 also causes the main TS and the sub TS to be read out based on the IDs registered in the STN_table, and supplies these to the decoder unit 56.
The disk drive 52 reads out data from the optical disc 2 in accordance with the control of the controller 51, and outputs the read data to the controller 51, the memory 53, and the decoder unit 56.
The memory 53 stores data necessary for the controller 51 to execute various types of processing, as required.
The local storage 54 is configured by, for example, an HDD (Hard Disk Drive). A dependent view video stream or the like downloaded from the server 72 is recorded in the local storage 54. The streams recorded in the local storage 54 are also supplied to the decoder unit 56 as required.
The internet interface 55 performs communication with the server 72 via the network 71 in accordance with the control of the controller 51, and supplies the data downloaded from the server 72 to the local storage 54.
Data for updating the data recorded on the optical disc 2 is downloaded from the server 72. By enabling a downloaded dependent view video stream to be used together with the base view video stream recorded on the optical disc 2, 3D playback of content different from that of the optical disc 2 can be realized. When a dependent view video stream is downloaded, the contents of the PlayList are also updated as appropriate.
The decoder unit 56 decodes the stream supplied from the disk drive 52 or the local storage 54, and outputs the obtained video signal to the display device 3. The audio signal is also output to the display device 3 via a predetermined route.
The operation input unit 57 includes input devices, such as buttons, keys, a touch panel, a jog dial, and a mouse, and a receiving unit for receiving signals such as infrared rays transmitted from a predetermined remote controller. The operation input unit 57 detects user operations and supplies signals representing the contents of the detected operations to the controller 51.
Fig. 21 shows a configuration example of the decoder unit 56.
Fig. 21 shows the configuration for performing video signal processing. In the decoder unit 56, decoding processing of the audio signal is also performed. The result of the decoding processing performed with the audio signal as the target is output to the display device 3 via a route not shown.
The PID filter 101 identifies whether a TS supplied from the disk drive 52 or the local storage 54 is the main TS or the sub TS, based on the PIDs of the packets making up the TS or the IDs of the streams. The PID filter 101 outputs the main TS to the buffer 102 and the sub TS to the buffer 103.
The PID filter 104 sequentially reads out the packets of the main TS stored in the buffer 102 and sorts them based on the PIDs.
For example, the PID filter 104 outputs the packets making up the base view video stream included in the main TS to the B video buffer 106, and outputs the packets making up the dependent view video stream to the switch 107.
The PID filter 104 also outputs the packets making up the base IG stream included in the main TS to the switch 114, and outputs the packets making up the dependent IG stream to the switch 118.
The PID filter 104 outputs the packets making up the base PG stream included in the main TS to the switch 122, and outputs the packets making up the dependent PG stream to the switch 126.
As described with reference to Fig. 5, the streams of base view video, dependent view video, base PG, dependent PG, base IG, and dependent IG may be multiplexed in the main TS.
The PID filter 105 sequentially reads out the packets of the sub TS stored in the buffer 103 and sorts them based on the PIDs.
For example, the PID filter 105 outputs the packets making up the dependent view video stream included in the sub TS to the switch 107.
The PID filter 105 also outputs the packets making up the base IG stream included in the sub TS to the switch 114, and outputs the packets making up the dependent IG stream to the switch 118.
The PID filter 105 outputs the packets making up the base PG stream included in the sub TS to the switch 122, and outputs the packets making up the dependent PG stream to the switch 126.
As described with reference to Fig. 7, the dependent view video stream may be included in the sub TS. Also, as described with reference to Fig. 6, the streams of base PG, dependent PG, base IG, and dependent IG may be multiplexed in the sub TS.
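The routing performed by the PID filters 104 and 105 can be summarized as a dispatch table. The stream-kind keys are informal labels, not identifiers from the document; the destinations are the buffers and switches of Fig. 21:

```python
# Dispatch sketch of the PID-filter routing in Fig. 21: packets of each
# elementary stream are steered to the buffer or switch that feeds the
# matching decoder chain. Keys are informal labels for the stream kinds.
ROUTES = {
    "base_view_video":      "B video buffer 106",
    "dependent_view_video": "switch 107",
    "base_IG":              "switch 114",
    "dependent_IG":         "switch 118",
    "base_PG":              "switch 122",
    "dependent_PG":         "switch 126",
}

def route(stream_kind: str) -> str:
    """Destination of a packet after PID-based sorting."""
    return ROUTES[stream_kind]
```

The same table applies whether a packet arrives through the PID filter 104 (main TS) or the PID filter 105 (sub TS); the downstream switches then merge the two sources.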
The switch 107 outputs the packets making up the dependent view video stream supplied from the PID filter 104 or the PID filter 105 to the D video buffer 108.
The switch 109 sequentially reads out the base view video packets stored in the B video buffer 106 and the dependent view video packets stored in the D video buffer 108, in accordance with time information that specifies the decoding timing. The same time information is set, for example, to a packet storing the data of a certain picture of base view video and to the packet storing the data of the corresponding picture of dependent view video.
The switch 109 outputs the packets read out from the B video buffer 106 or the D video buffer 108 to the video decoder 110.
The video decoder 110 decodes the packets supplied from the switch 109, and outputs the base view video or dependent view video data obtained by decoding to the switch 111.
The switch 111 outputs the data obtained by decoding the base view video packets to the B video plane generating unit 112, and the data obtained by decoding the dependent view video packets to the D video plane generating unit 113.
The B video plane generating unit 112 generates a base view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
The D video plane generating unit 113 generates a dependent view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
The switch 114 outputs the packets making up the Base IG stream supplied from the PID filter 104 or the PID filter 105 to the B IG buffer 115.
The B IG decoder 116 decodes the packets making up the Base IG stream stored in the B IG buffer 115, and outputs the data obtained by decoding to the B IG plane generating unit 117.
The B IG plane generating unit 117 generates a Base IG plane based on the data supplied from the B IG decoder 116, and outputs it to the combining unit 130.
The switch 118 outputs the packets making up the Dependent IG stream supplied from the PID filter 104 or the PID filter 105 to the D IG buffer 119.
The D IG decoder 120 decodes the packets making up the Dependent IG stream stored in the D IG buffer 119, and outputs the data obtained by decoding to the D IG plane generating unit 121.
The D IG plane generating unit 121 generates a Dependent IG plane based on the data supplied from the D IG decoder 120, and outputs it to the combining unit 130.
The switch 122 outputs the packets making up the Base PG stream supplied from the PID filter 104 or the PID filter 105 to the B PG buffer 123.
The B PG decoder 124 decodes the packets making up the Base PG stream stored in the B PG buffer 123, and outputs the data obtained by decoding to the B PG plane generating unit 125.
The B PG plane generating unit 125 generates a Base PG plane based on the data supplied from the B PG decoder 124, and outputs it to the combining unit 130.
The switch 126 outputs the packets making up the Dependent PG stream supplied from the PID filter 104 or the PID filter 105 to the D PG buffer 127.
The D PG decoder 128 decodes the packets making up the Dependent PG stream stored in the D PG buffer 127, and outputs the data obtained by decoding to the D PG plane generating unit 129.
The D PG plane generating unit 129 generates a Dependent PG plane based on the data supplied from the D PG decoder 128, and outputs it to the combining unit 130.
The combining unit 130 generates a base view plane by stacking, in a predetermined order, the base view video plane supplied from the B video plane generating unit 112, the Base IG plane supplied from the B IG plane generating unit 117, and the Base PG plane supplied from the B PG plane generating unit 125, thereby combining them.
Likewise, the combining unit 130 generates a dependent view plane by stacking, in a predetermined order, the dependent view video plane supplied from the D video plane generating unit 113, the Dependent IG plane supplied from the D IG plane generating unit 121, and the Dependent PG plane supplied from the D PG plane generating unit 129, thereby combining them.
The combining unit 130 outputs the data of the base view plane and the dependent view plane. The video data output from the combining unit 130 is output to the display device 3, and 3D display is performed by alternately displaying the base view plane and the dependent view plane.
[First Example of T-STD (Transport Stream-System Target Decoder)]
Next, of the configuration shown in Fig. 21, the decoder and its surrounding configuration will be described.
Fig. 22 shows a configuration for performing processing of the video streams.
In Fig. 22, configurations identical to those shown in Fig. 21 are denoted with the same reference numerals. Fig. 22 shows the PID filter 104, the B video buffer 106, the switch 107, the D video buffer 108, the switch 109, the video decoder 110, and a DPB (Decoded Picture Buffer) 151. Though not shown in Fig. 21, the DPB 151, in which the data of decoded pictures is stored, is provided downstream of the video decoder 110.
The PID filter 104 outputs the packets making up the base view video stream included in the main TS to the B video buffer 106, and the packets making up the dependent view video stream to the switch 107.
For example, PID=0 is assigned as a fixed PID value to the packets making up the base view video stream. A fixed value other than 0 is assigned as the PID to the packets making up the dependent view video stream.
The PID filter 104 outputs packets whose headers describe PID=0 to the B video buffer 106, and packets whose headers describe a PID other than 0 to the switch 107.
The packets output to the B video buffer 106 are stored in VSB1 via TB (Transport Buffer)1 and MB (Multiplexing Buffer)1. The data of the elementary stream of base view video is stored in VSB1.
Both the packets output from the PID filter 104 and the packets making up the dependent view video stream extracted from the auxiliary TS at the PID filter 105 in Fig. 21 are supplied to the switch 107.
When packets making up the dependent view video stream are supplied from the PID filter 104, the switch 107 outputs them to the D video buffer 108.
Likewise, when packets making up the dependent view video stream are supplied from the PID filter 105, the switch 107 outputs them to the D video buffer 108.
The packets output to the D video buffer 108 are stored in VSB2 via TB2 and MB2. The data of the elementary stream of dependent view video is stored in VSB2.
The switch 109 sequentially reads out the base view video packets stored in VSB1 of the B video buffer 106 and the dependent view video packets stored in VSB2 of the D video buffer 108, and outputs them to the video decoder 110.
For example, the switch 109 outputs a base view video packet of a certain time, and immediately thereafter outputs the dependent view video packet of the same time. In this way, the switch 109 consecutively outputs the base view video packet and the dependent view video packet of the same time to the video decoder 110.
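The read pattern of the switch 109 — a base view video packet followed immediately by the dependent view video packet of the same time — can be sketched as follows, under the simplifying assumption that each buffer is a list of (DTS, data) pairs in decoding order.

```python
# Sketch of the switch-109 read pattern: for each decode time, the base view
# packet is output first, immediately followed by the dependent view packet
# carrying the same DTS. The (dts, data) tuples are an illustrative assumption.

def interleave_by_dts(b_video_buffer, d_video_buffer):
    """Merge the two buffers into one decoder feed, base view first per DTS."""
    feed = []
    for (b_dts, b_data), (d_dts, d_data) in zip(b_video_buffer, d_video_buffer):
        assert b_dts == d_dts  # corresponding pictures carry the same DTS
        feed.append(("base", b_dts, b_data))
        feed.append(("dependent", d_dts, d_data))
    return feed

feed = interleave_by_dts([(100, "B0"), (200, "B1")], [(100, "D0"), (200, "D1")])
```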
The same time information, synchronized by the PCR (Program Clock Reference), is set at the time of encoding to a packet storing the data of a certain picture of base view video and to the packet storing the data of the corresponding picture of dependent view video. Even in a case where the base view video stream and the dependent view video stream are each included in different TSs, the same time information is set to the packets storing the data of pictures that correspond to each other.
The time information is a DTS (Decoding Time Stamp) and a PTS (Presentation Time Stamp), and is set to each PES (Packetized Elementary Stream) packet.
That is, when the pictures of each stream are arranged in encoding order/decoding order, a picture of base view video and a picture of dependent view video positioned at the same time are regarded as pictures corresponding to each other. The same DTS is set to a PES packet storing the data of a certain base view video picture and to the PES packet storing the data of the dependent view video picture corresponding to that picture in decoding order.
Likewise, when the pictures of each stream are arranged in display order, a picture of base view video and a picture of dependent view video positioned at the same time are regarded as pictures corresponding to each other. The same PTS is set to a PES packet storing the data of a certain base view video picture and to the PES packet storing the data of the dependent view video picture corresponding to that picture in display order.
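The timestamp rule above can be sketched as follows: pictures at the same position in decoding order receive the same DTS, and pictures at the same position in display order receive the same PTS. The picture records, the `display_index` field, and the timestamp values are illustrative assumptions.

```python
# Sketch of the DTS/PTS assignment rule. Pictures are given in decoding order;
# each picture carries a hypothetical 'display_index' giving its position in
# display order. All names and values here are illustrative assumptions.

def assign_timestamps(base_pics, dep_pics, dts_values, pts_values):
    """Set a shared DTS per decoding-order position and a shared PTS per
    display-order position on corresponding base/dependent pictures."""
    for i, (b, d) in enumerate(zip(base_pics, dep_pics)):
        b["dts"] = d["dts"] = dts_values[i]        # same decoding order -> same DTS
        b["pts"] = pts_values[b["display_index"]]  # same display order -> same PTS
        d["pts"] = pts_values[d["display_index"]]
    return base_pics, dep_pics

# With matching GOP structures, corresponding pictures share a display_index.
base = [{"display_index": 0}, {"display_index": 1}]
dep  = [{"display_index": 0}, {"display_index": 1}]
assign_timestamps(base, dep, dts_values=[1000, 2000], pts_values=[1500, 2500])
```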
If the GOP structure of the base view video stream and the GOP structure of the dependent view video stream are the same, pictures that correspond to each other in decoding order also correspond to each other in display order, as will be described later.
In a case where packet transfer is performed serially, the DTS1 of a packet read out from VSB1 of the B video buffer 106 at a certain timing and the DTS2 of a packet read out from VSB2 of the D video buffer 108 at the immediately following timing represent the same time, as shown in Fig. 22.
The switch 109 outputs the base view video packets read out from VSB1 of the B video buffer 106, or the dependent view video packets read out from VSB2 of the D video buffer 108, to the video decoder 110.
The video decoder 110 sequentially decodes the packets supplied from the switch 109, and stores the data of the base view video pictures or dependent view video pictures obtained by decoding in the DPB 151.
The switch 111 reads out the data of the decoded pictures stored in the DPB 151 at a predetermined timing. The data of the decoded pictures stored in the DPB 151 is also used by the video decoder 110 for prediction of other pictures.
In a case where data transfer is performed serially, the PTS of the data of a base view video picture read out at a certain timing and the PTS of the data of a dependent view video picture read out at the immediately following timing represent the same time.
The base view video stream and the dependent view video stream may be multiplexed in a single TS, as described with reference to Fig. 5 and so forth, or may each be included in different TSs, as described with reference to Fig. 7.
Whether the base view video stream and the dependent view video stream are multiplexed in a single TS or are each included in different TSs, the playback apparatus 1 can handle the case by implementing the decoder model in Fig. 22.
In contrast, if, for example, only the case where a single TS is supplied were assumed, as shown in Fig. 23, the playback apparatus 1 could not handle the case where the base view video stream and the dependent view video stream are each included in different TSs, and so on.
Also, according to the decoder model in Fig. 22, even if the base view video stream and the dependent view video stream are each included in different TSs, the packets can be supplied to the video decoder 110 at the correct timing because both have the same DTS.
A decoder for base view video and a decoder for dependent view video may be provided in parallel. In that case, packets of the same point in time are supplied to the decoder for base view video and the decoder for dependent view video at the same timing.
[Second Example]
Fig. 24 shows another configuration for performing processing of the video streams.
In addition to the configuration in Fig. 22, Fig. 24 shows the switch 111, an L video plane generating unit 161, and an R video plane generating unit 162. The PID filter 105 is also shown upstream of the switch 107. Redundant description will be omitted as appropriate.
The L video plane generating unit 161 generates an L view video plane, and is provided in place of the B video plane generating unit 112 in Fig. 21.
The R video plane generating unit 162 generates an R view video plane, and is provided in place of the D video plane generating unit 113 in Fig. 21.
In this example, the switch 111 needs to identify and output L view video data and R view video data.
That is, the switch 111 needs to identify whether the data obtained by decoding the base view video packets is L view or R view video data.
The switch 111 also needs to identify whether the data obtained by decoding the dependent view video packets is L view or R view video data.
The view_type described with reference to Fig. 12 and Fig. 14 is used to identify the L view and the R view. For example, the controller 51 outputs the view_type described in the PlayList file to the switch 111.
If the value of view_type is 0, the switch 111 outputs, of the data stored in the DPB 151, the data obtained by decoding the base view video packets identified by PID=0 to the L video plane generating unit 161. As described above, a view_type value of 0 indicates that the base view video stream is an L view stream.
In this case, the switch 111 outputs the data obtained by decoding the dependent view video packets identified by a PID other than 0 to the R video plane generating unit 162.
On the other hand, if the value of view_type is 1, the switch 111 outputs, of the data stored in the DPB 151, the data obtained by decoding the base view video packets identified by PID=0 to the R video plane generating unit 162. A view_type value of 1 indicates that the base view video stream is an R view stream.
In this case, the switch 111 outputs the data obtained by decoding the dependent view video packets identified by a PID other than 0 to the L video plane generating unit 161.
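The view_type-based switching performed by the switch 111 can be sketched as follows. The function and destination names are illustrative assumptions; only the routing rule itself comes from the description above.

```python
# Sketch of the switch-111 routing rule: view_type 0 means the base view video
# stream is the L view stream, view_type 1 means it is the R view stream.
# The destination strings are illustrative assumptions.

def route_decoded_picture(view_type, is_base_view):
    """Return which plane generating unit a decoded picture is output to."""
    if view_type == 0:
        return "L_video_plane_unit_161" if is_base_view else "R_video_plane_unit_162"
    else:  # view_type == 1
        return "R_video_plane_unit_162" if is_base_view else "L_video_plane_unit_161"
```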
The L video plane generating unit 161 generates an L view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
The R video plane generating unit 162 generates an R view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
In the elementary streams of base view video and dependent view video encoded with H.264 AVC/MVC, there is no information (field) indicating whether a stream is the L view or the R view.
Accordingly, by setting the view_type in the PlayList file, the recording apparatus can allow the playback apparatus 1 to identify whether each of the base view video stream and the dependent view video stream is the L view or the R view.
The playback apparatus 1 can identify whether each of the base view video stream and the dependent view video stream is the L view or the R view, and can switch the output destination according to the identification result.
In a case where L view and R view are prepared for the IG and PG planes as well, distinguishing the L view from the R view of the video streams allows the playback apparatus 1 to easily combine L view planes with L view planes and R view planes with R view planes.
As described above, in a case where a video signal is output via an HDMI cable, it is required that the L view signal and the R view signal each be distinguished before being output. The playback apparatus 1 can handle that requirement.
The identification of the data obtained by decoding the base view video packets stored in the DPB 151 and the data obtained by decoding the dependent view video packets may be performed based on the view_id instead of the PID.
At the time of encoding with H.264 AVC/MVC, a view_id is set to the Access Units making up the encoded stream. Which view component each Access Unit corresponds to can be identified by this view_id.
Fig. 25 shows an example of Access Units.
Access Unit #1 in Fig. 25 is a unit including the data of base view video. Access Unit #2 is a unit including the data of dependent view video. An Access Unit is a unit that collects the data of one picture, for example, so that access can be performed in increments of pictures.
With encoding by H.264 AVC/MVC, the data of each picture of base view video and dependent view video is stored in such units. At the time of encoding with H.264 AVC/MVC, an MVC header is added to each view component, as shown in Access Unit #2. The MVC header includes the view_id.
In the case of the example in Fig. 25, for Access Unit #2, it can be identified from the view_id that the view component stored in that unit is dependent view video.
On the other hand, as shown in Fig. 25, no MVC header is added to base view video, which is the view component stored in Access Unit #1.
As described above, the base view video stream is data that will also be used for 2D playback. Accordingly, in order to ensure compatibility therewith, no MVC header is added to base view video at the time of encoding. Alternatively, an MVC header once added is removed. Encoding by the recording apparatus will be described later.
The playback apparatus 1 is defined (set) such that a view component without an MVC header is recognized as having a view_id of 0, and that view component is treated as base view video. A value other than 0 is set as the view_id to dependent view video at the time of encoding.
Accordingly, the playback apparatus 1 can identify base view video based on the view_id recognized as 0, and can identify dependent view video based on the actually set view_id other than 0.
In the switch 111 in Fig. 24, the identification of the data obtained by decoding the base view video packets and the data obtained by decoding the dependent view video packets may be performed based on this view_id.
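The view_id rule above can be sketched as follows. The dictionary representation of an Access Unit is an illustrative assumption; the rule itself (no MVC header is treated as view_id 0 and thus base view video) comes from the description above.

```python
# Sketch of the view_id identification rule: a view component without an MVC
# header is treated as having view_id 0 and regarded as base view video; a view
# component whose MVC header carries a nonzero view_id is dependent view video.
# The dict-based access unit representation is an illustrative assumption.

def classify_view_component(access_unit):
    """access_unit: dict that may contain an 'mvc_header' with a 'view_id'."""
    header = access_unit.get("mvc_header")
    view_id = 0 if header is None else header["view_id"]  # no header -> treat as 0
    return ("base", view_id) if view_id == 0 else ("dependent", view_id)

kind_1 = classify_view_component({"data": "picture #1"})  # no MVC header
kind_2 = classify_view_component({"data": "picture #2", "mvc_header": {"view_id": 1}})
```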
[Third Example]
Fig. 26 shows yet another configuration for performing processing of the video streams.
In the example in Fig. 26, the B video plane generating unit 112 is provided in place of the L video plane generating unit 161 in Fig. 24, and the D video plane generating unit 113 is provided in place of the R video plane generating unit 162. A switch 171 is provided downstream of the B video plane generating unit 112 and the D video plane generating unit 113. With the configuration shown in Fig. 26 as well, the data output destination is switched based on the view_type.
The switch 111 outputs, of the data stored in the DPB 151, the data obtained by decoding the base view video packets to the B video plane generating unit 112. The switch 111 also outputs the data obtained by decoding the dependent view video packets to the D video plane generating unit 113.
As described above, the data obtained by decoding the base view video packets and the data obtained by decoding the dependent view video packets are identified based on the PID or the view_id.
The B video plane generating unit 112 generates a base view video plane based on the data supplied from the switch 111, and outputs it.
The D video plane generating unit 113 generates a dependent view video plane based on the data supplied from the switch 111, and outputs it.
The view_type described in the PlayList file is supplied from the controller 51 to the switch 171.
If the value of view_type is 0, the switch 171 outputs the base view video plane supplied from the B video plane generating unit 112 to the combining unit 130 as an L view video plane. A view_type value of 0 indicates that the base view video stream is an L view stream.
In this case, the switch 171 outputs the dependent view video plane supplied from the D video plane generating unit 113 to the combining unit 130 as an R view video plane.
On the other hand, if the value of view_type is 1, the switch 171 outputs the dependent view video plane supplied from the D video plane generating unit 113 to the combining unit 130 as an L view video plane. A view_type value of 1 indicates that the base view video stream is an R view stream.
In this case, the switch 171 outputs the base view video plane supplied from the B video plane generating unit 112 to the combining unit 130 as an R view video plane.
According to the configuration in Fig. 26 as well, the playback apparatus 1 can identify the L view and the R view, and can switch the output destination according to the identification result.
[First Example of a Plane Combining Model]
Fig. 27 shows, of the configuration shown in Fig. 21, the combining unit 130 and the configuration downstream thereof.
In Fig. 27 as well, configurations identical to those shown in Fig. 21 are denoted with the same reference numerals.
The packets making up the IG stream included in the main TS or the auxiliary TS are input to a switch 181. The packets making up the IG stream that are input to the switch 181 include base view packets and dependent view packets.
The packets making up the PG stream included in the main TS or the auxiliary TS are input to a switch 182. The packets making up the PG stream that are input to the switch 182 include base view packets and dependent view packets.
As described with reference to Fig. 5 and so forth, base view streams and dependent view streams for performing 3D display are prepared for IG and PG as well.
The base view IG is displayed combined with the base view video, and the dependent view IG is displayed combined with the dependent view video, so that the user can view the buttons, icons, and the like in 3D, along with the 3D video.
Likewise, the base view PG is displayed combined with the base view video, and the dependent view PG is displayed combined with the dependent view video, so that the user can view the subtitle text and the like in 3D, along with the 3D video.
The switch 181 outputs the packets making up the Base IG stream to the B IG decoder 116, and the packets making up the Dependent IG stream to the D IG decoder 120. The switch 181 has the functions of the switch 114 and the switch 118 in Fig. 21. In Fig. 27, illustration of the buffers is omitted.
The B IG decoder 116 decodes the packets making up the Base IG stream supplied from the switch 181, and outputs the data obtained by decoding to the B IG plane generating unit 117.
The B IG plane generating unit 117 generates a Base IG plane based on the data supplied from the B IG decoder 116, and outputs it to the combining unit 130.
The D IG decoder 120 decodes the packets making up the Dependent IG stream supplied from the switch 181, and outputs the data obtained by decoding to the D IG plane generating unit 121. The Base IG stream and the Dependent IG stream may be decoded by a single decoder.
The D IG plane generating unit 121 generates a Dependent IG plane based on the data supplied from the D IG decoder 120, and outputs it to the combining unit 130.
The switch 182 outputs the packets making up the Base PG stream to the B PG decoder 124, and the packets making up the Dependent PG stream to the D PG decoder 128. The switch 182 has the functions of the switch 122 and the switch 126 in Fig. 21.
The B PG decoder 124 decodes the packets making up the Base PG stream supplied from the switch 182, and outputs the data obtained by decoding to the B PG plane generating unit 125.
The B PG plane generating unit 125 generates a Base PG plane based on the data supplied from the B PG decoder 124, and outputs it to the combining unit 130.
The D PG decoder 128 decodes the packets making up the Dependent PG stream supplied from the switch 182, and outputs the data obtained by decoding to the D PG plane generating unit 129. The Base PG stream and the Dependent PG stream may be decoded by a single decoder.
The D PG plane generating unit 129 generates a Dependent PG plane based on the data supplied from the D PG decoder 128, and outputs it to the combining unit 130.
The video decoder 110 sequentially decodes the packets supplied from the switch 109 (Fig. 22 and so forth), and outputs the data of base view video and the data of dependent view video obtained by decoding to the switch 111.
The switch 111 outputs the data obtained by decoding the packets of base view video to the B video plane generating unit 112, and the data obtained by decoding the packets of dependent view video to the D video plane generating unit 113.
The B video plane generating unit 112 generates a base view video plane based on the data supplied from the switch 111, and outputs it.
The D video plane generating unit 113 generates a dependent view video plane based on the data supplied from the switch 111, and outputs it.
The combining unit 130 includes adding units 191 through 194 and a switch 195.
The adding unit 191 combines the Dependent PG plane supplied from the D PG plane generating unit 129 by overlaying it on the dependent view video plane supplied from the D video plane generating unit 113, and outputs the combined result to the adding unit 193. The Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 191 has been subjected to color information conversion processing (CLUT (Color Look-Up Table) processing).
The adding unit 192 combines the Base PG plane supplied from the B PG plane generating unit 125 by overlaying it on the base view video plane supplied from the B video plane generating unit 112, and outputs the combined result to the adding unit 194. The Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 192 has been subjected to color information conversion processing or correction processing using an offset value.
The adding unit 193 combines the Dependent IG plane supplied from the D IG plane generating unit 121 by overlaying it on the combined result of the adding unit 191, and outputs the combined result as a dependent view plane. The Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 193 has been subjected to color information conversion processing.
The adding unit 194 combines the Base IG plane supplied from the B IG plane generating unit 117 by overlaying it on the combined result of the adding unit 192, and outputs the combined result as a base view plane. The Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 194 has been subjected to color information conversion processing or correction processing using an offset value.
An image displayed based on the base view plane and the dependent view plane thus generated is an image in which the buttons and icons are seen in front, the subtitle text is seen beneath them (in the depth direction), and the video is seen beneath the subtitle text.
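The stacking performed by the adding units 191 through 194 — video at the bottom, PG above it, IG on top — can be sketched as follows. Planes are simplified here to rows of opaque-or-transparent pixels; actual planes undergo CLUT processing and offset correction as described above, so this is an illustration of the stacking order only.

```python
# Sketch of plane stacking: the video plane is overlaid with the PG (subtitle)
# plane, and that result is overlaid with the IG (button/icon) plane, so IG is
# in front, PG beneath it, and video at the back. Pixels are simplified to
# opaque-or-transparent values; this is an illustrative assumption.

TRANSPARENT = None

def overlay(bottom, top):
    """Overlay one plane on another: a transparent top pixel shows the bottom."""
    return [t if t is not TRANSPARENT else b for b, t in zip(bottom, top)]

video = ["v", "v", "v", "v"]
pg    = [TRANSPARENT, "s", TRANSPARENT, TRANSPARENT]   # subtitle text
ig    = [TRANSPARENT, TRANSPARENT, "i", TRANSPARENT]   # button/icon

view_plane = overlay(overlay(video, pg), ig)   # video -> +PG -> +IG
```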
If the value of view_type is 0, the switch 195 outputs the base view plane as an L view plane, and outputs the dependent view plane as an R view plane. The view_type is supplied from the controller 51 to the switch 195.
If the value of view_type is 1, the switch 195 outputs the base view plane as an R view plane, and outputs the dependent view plane as an L view plane. Which of the supplied planes is the base view plane and which is the dependent view plane is identified based on the PID or the view_id.
In this way, the playback apparatus 1 combines the base view planes of video, IG, and PG with each other, and combines the dependent view planes of video, IG, and PG with each other.
At the stage where all of the video, IG, and PG planes have been combined, whether the result of combining the base view planes is the L view or the R view is determined based on the view_type, and the R view plane and the L view plane are each output.
Likewise, at the stage where all of the video, IG, and PG planes have been combined, whether the result of combining the dependent view planes is the L view or the R view is determined based on the view_type, and the R view plane and the L view plane are each output.
[Second Example]
Fig. 28 shows the configuration of the combining unit 130 and the stage upstream thereof.
In the configuration shown in Fig. 28, configurations identical to those shown in Fig. 27 are denoted with the same reference numerals. In Fig. 28, the configuration of the combining unit 130 differs from that in Fig. 27, and the operation of the switch 111 differs from that of the switch 111 in Fig. 27. The L video plane generating unit 161 is provided in place of the B video plane generating unit 112, and the R video plane generating unit 162 is provided in place of the D video plane generating unit 113. Redundant description will be omitted.
The same view_type value is supplied from the controller 51 to the switch 111 and to the switches 201 and 202 of the combining unit 130.
The switch 111 switches the output destinations of the data obtained by decoding the packets of base view video and the data obtained by decoding the packets of dependent view video based on the view_type, in the same way as the switch 111 in Fig. 24.
For example, if the value of view_type is 0, the switch 111 outputs the data obtained by decoding the packets of base view video to the L video plane generating unit 161. In this case, the switch 111 outputs the data obtained by decoding the packets of dependent view video to the R video plane generating unit 162.
On the other hand, if the value of view_type is 1, the switch 111 outputs the data obtained by decoding the packets of base view video to the R video plane generating unit 162. In this case, the switch 111 outputs the data obtained by decoding the packets of dependent view video to the L video plane generating unit 161.
The L video plane generating unit 161 generates an L view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
The R video plane generating unit 162 generates an R view video plane based on the data supplied from the switch 111, and outputs it to the combining unit 130.
The combining unit 130 includes the switch 201, the switch 202, and adding units 203 through 206.
The switch 201 switches, based on the view_type, the output destinations of the Base IG plane supplied from the B IG plane generating unit 117 and the Dependent IG plane supplied from the D IG plane generating unit 121.
For example, if the value of view_type is 0, the switch 201 outputs the Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 206 as an L view plane. In this case, the switch 201 outputs the Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 205 as an R view plane.
On the other hand, if the value of view_type is 1, the switch 201 outputs the Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 206 as an L view plane. In this case, the switch 201 outputs the Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 205 as an R view plane.
The switch 202 switches, based on the view_type, the output destinations of the Base PG plane supplied from the B PG plane generating unit 125 and the Dependent PG plane supplied from the D PG plane generating unit 129.
For example, if the value of view_type is 0, the switch 202 outputs the Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 204 as an L view plane. In this case, the switch 202 outputs the Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 203 as an R view plane.
On the other hand, if the value of view_type is 1, the switch 202 outputs the Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 204 as an L view plane. In this case, the switch 202 outputs the Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 203 as an R view plane.
The adding unit 203 combines the R view PG plane supplied from the switch 202 by overlaying it on the R view video plane supplied from the R video plane generating unit 162, and outputs the combined result to the adding unit 205.
The adding unit 204 combines the L view PG plane supplied from the switch 202 by overlaying it on the L view video plane supplied from the L video plane generating unit 161, and outputs the combined result to the adding unit 206.
The adding unit 205 combines the R view IG plane supplied from the switch 201 by overlaying it on the plane of the combined result from the adding unit 203, and outputs the combined result as an R view plane.
The adding unit 206 combines the L view IG plane supplied from the switch 201 by overlaying it on the plane of the combined result from the adding unit 204, and outputs the combined result as an L view plane.
In this way, in the playback apparatus 1, whether each of the base view planes and the dependent view planes of video, IG, and PG is an L view plane or an R view plane is determined before combining with another plane.
After this determination has been performed, the video, IG, and PG planes are combined such that L view planes are combined with each other and R view planes are combined with each other.
[Configuration example of the recording device]
The block diagram of Figure 29 shows a configuration example of the software fabrication processing unit 301.
The video encoder 311 has the same configuration as the MVC encoder 11 in Figure 3. The video encoder 311 generates a base view video stream and a dependent view video stream by encoding multiple pieces of video data in accordance with H.264 AVC/MVC, and outputs them to buffer 312.
For example, the video encoder 311 sets DTS and PTS at the time of encoding with the same PCR as a reference. That is, the video encoder 311 sets the same DTS to a PES packet storing the data of a certain base view video picture and to a PES packet storing the data of the dependent view video picture corresponding to that picture in decoding order.
Also, the video encoder 311 sets the same PTS to a PES packet storing the data of a certain base view video picture and to a PES packet storing the data of the dependent view video picture corresponding to that picture in display order.
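As an illustration of this timestamping rule, the sketch below pairs base view and dependent view pictures and gives each pair one shared timestamp (the data model and names are invented; real PES packetization is far more involved, and DTS and PTS generally differ when pictures are reordered):

```python
# Hedged sketch: one shared 90 kHz timestamp per base/dependent picture
# pair, mirroring the rule that corresponding pictures get the same DTS/PTS.
FRAME_TICKS = 3003  # one frame period at 29.97 fps on a 90 kHz clock

def assign_timestamps(base_pics, dep_pics, start=0):
    packets = []
    for i, (b, d) in enumerate(zip(base_pics, dep_pics)):
        ts = start + i * FRAME_TICKS
        packets.append({"view": "base", "pic": b, "dts": ts, "pts": ts})
        packets.append({"view": "dependent", "pic": d, "dts": ts, "pts": ts})
    return packets

pes = assign_timestamps(["I0", "P1"], ["A0", "A1"])
```

Because each corresponding pair shares a timestamp, a decoder can align the two streams picture by picture without any cross-stream lookup.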
As will be described later, the video encoder 311 also sets the same information, as additional information which is side information relating to decoding, to a base view video picture and to the dependent view video picture corresponding to it in decoding order.
Further, as will be described later, the video encoder 311 sets the same value to a base view video picture and to the dependent view video picture corresponding to it in display order, as the value of POC representing the output order of pictures.
Also, as will be described later, the video encoder 311 performs encoding so that the GOP structure of the base view video stream and the GOP structure of the dependent view video stream match.
The audio encoder 313 encodes an input audio stream and outputs the data obtained thereby to buffer 314. The audio stream to be recorded on the disc is input to the audio encoder 313 together with the base view video stream and the dependent view video stream.
The data encoder 315 encodes the aforementioned various types of data other than video and audio (for example, PlayList files and the like), and outputs the data obtained by the encoding to buffer 316.
In accordance with the encoding performed by the video encoder 311, the data encoder 315 sets view_type, which represents whether the base view video stream is an L view stream or an R view stream, to the PlayList file. Instead of the type of the base view video stream, information representing whether the dependent view video stream is an L view stream or an R view stream may be set.
The data encoder 315 also sets an EP_map, described later, to each of the Clip information file of the base view video stream and the Clip information file of the dependent view video stream. The picture of the base view video stream and the picture of the dependent view video stream set to the EP_maps as decoding start positions are pictures that correspond to each other.
The multiplexing unit 317 multiplexes the video data and audio data stored in the respective buffers and the data other than the streams together with a synchronizing signal, and outputs the result to the error correction coding unit 318.
The error correction coding unit 318 adds an error correcting code to the data multiplexed by the multiplexing unit 317.
The modulation unit 319 modulates the data supplied from the error correction coding unit 318, and outputs it. The output of the modulation unit 319 becomes the software that is to be recorded on optical disc 2 and that can be played in playback apparatus 1.
The software fabrication processing unit 301 having this configuration is provided in the recording device.
Figure 30 shows a configuration example including the software fabrication processing unit 301.
A part of the configuration shown in Figure 30 may be provided in the recording device.
The recording signal generated by the software fabrication processing unit 301 is subjected to a mastering process in a premastering processing unit 331, so that a signal of the format to be recorded on optical disc 2 is generated. The generated signal is supplied to a master recording unit 333.
In a master-for-recording fabrication unit 332, a master made of glass or the like is prepared, and a recording material made of photoresist is applied onto it. Thus, a master for recording is produced.
In the master recording unit 333, a laser beam is modulated in accordance with the recording signal supplied from the premastering processing unit 331 and is irradiated onto the photoresist on the master. Thus, the photoresist on the master is exposed in accordance with the recording signal. Subsequently, the master is developed, so that pits appear on the master.
In a metal master production unit 334, a process such as electroforming is performed on the master, thereby producing a metal master to which the pits on the glass master have been transferred. Further, a metal stamper is made from this metal master and is used as a molding die.
In a molding processing unit 335, a material such as PMMA (acrylic) or PC (polycarbonate) is injected into the molding die by injection or the like and is fixed. Alternatively, after 2P (ultraviolet curable resin) or the like is applied to the metal stamper, ultraviolet rays are irradiated onto it so that it hardens. Thus, the pits on the metal stamper can be transferred to a replica made of resin.
In a film formation processing unit 336, a reflective film is formed on the replica by vapor deposition or sputtering. Alternatively, a reflective film is formed on the replica by spin coating.
In a post-processing unit 337, necessary processing is performed: the disc is subjected to inner-diameter and outer-diameter processing, and two discs are bonded together. Further, after a label is attached and a hub is attached, the disc is inserted into a cartridge. Thus, optical disc 2, on which data playable by playback apparatus 1 is recorded, is completed.
<Second embodiment>
[Operation 1 of the H.264 AVC/MVC profile video stream]
As described above, in the BD-ROM standard, which is the standard of optical disc 2, the encoding of 3D video is realized by employing the H.264 AVC/MVC profile.
Also, in the BD-ROM standard, the base view video stream is used as the L view video stream, and the dependent view video stream is used as the R view video stream.
The base view video is encoded as an H.264 AVC/High Profile video stream, so that even a past player, or a player compatible only with 2D playback, can play optical disc 2, which is a 3D-compatible disc. That is, backward compatibility can be ensured.
Specifically, even a decoder that does not support H.264 AVC/MVC can decode (play) the base view video stream alone. That is, the base view video stream becomes a stream that can reliably be played even in an existing 2D BD player.
Also, the base view video stream is used in both 2D playback and 3D playback, so the burden at the time of authoring can be reduced. On the authoring side, as for AV streams, a 3D-compatible disc can be made by preparing a dependent view video stream in addition to the conventional work.
Figure 31 shows a configuration example of a 3D video TS generation unit provided in the recording device.
The 3D video TS generation unit in Figure 31 includes an MVC encoder 401, an MVC header removing unit 402, and a multiplexer 403. The data of L view video #1 and the data of R view video #2, shot in the manner described with reference to Figure 2, are input to the MVC encoder 401.
The MVC encoder 401 encodes the data of L view video #1 in accordance with H.264/AVC in the same way as the MVC encoder 11 in Figure 3, and outputs the AVC video data obtained by the encoding as the base view video stream. The MVC encoder 401 also generates a dependent view video stream based on the data of L view video #1 and the data of R view video #2, and outputs it.
The base view video stream output from the MVC encoder 401 is made up of access units, in each of which the data of one picture of base view video is stored. Also, the dependent view video stream output from the MVC encoder 401 is made up of access units, in each of which the data of one picture of dependent view video is stored.
Each access unit making up the base view video stream and each access unit making up the dependent view video stream include an MVC header, in which the view_id for identifying the view component stored therein is described.
A fixed value equal to or greater than 1 is used as the view_id described in the MVC header of the dependent view video. This also applies to the examples in Figures 32 and 33.
That is, unlike the MVC encoder 11 in Figure 3, the MVC encoder 401 is an encoder that generates each of the base view video and dependent view video streams in a form with MVC headers added, and outputs these streams. In the MVC encoder 11 of Figure 3, an MVC header is added only to the dependent view video encoded with H.264 AVC/MVC.
The base view video stream output from the MVC encoder 401 is supplied to the MVC header removing unit 402, and the dependent view video stream is supplied to the multiplexer 403.
The MVC header removing unit 402 removes the MVC headers included in the access units making up the base view video stream. The MVC header removing unit 402 outputs to the multiplexer 403 the base view video stream made up of the access units from which the MVC headers have been removed.
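The role of the MVC header removing unit can be sketched as follows (access units are modeled as plain dictionaries purely for illustration; in the real stream the MVC header is carried in NAL unit headers):

```python
# Hypothetical sketch of MVC header removal from base view access units.
def remove_mvc_headers(access_units):
    """Return access units with any 'mvc_header' field stripped."""
    return [{k: v for k, v in au.items() if k != "mvc_header"}
            for au in access_units]

base_stream = [
    {"pic": "I0", "mvc_header": {"view_id": 0}},
    {"pic": "P1", "mvc_header": {"view_id": 0}},
]
stripped = remove_mvc_headers(base_stream)
# The dependent view stream keeps its MVC header (view_id fixed at >= 1).
```

After stripping, the base view access units are indistinguishable from plain H.264/AVC access units, which is what makes the stream playable on a conventional 2D player.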
The multiplexer 403 generates a TS including the base view video stream supplied from the MVC header removing unit 402 and the dependent view video stream supplied from the MVC encoder 401, and outputs it. In the example of Figure 31, a TS including the base view video stream and a TS including the dependent view video stream are output separately, but these streams may be output multiplexed into the same TS, as described above.
Thus, depending on the mounting method, an MVC encoder can be provided which takes L view video and R view video as input and outputs each of the base view video and dependent view video streams with MVC headers.
Alternatively, the entire configuration shown in Figure 31 may be included in the MVC encoder shown in Figure 3. This also applies to the configurations shown in Figures 32 and 33.
Figure 32 shows another configuration example of the 3D video TS generation unit provided in the recording device.
The 3D video TS generation unit in Figure 32 includes a mixing processing unit 411, an MVC encoder 412, a separating unit 413, an MVC header removing unit 414, and a multiplexer 415. The data of L view video #1 and the data of R view video #2 are input to the mixing processing unit 411.
The mixing processing unit 411 arranges each picture of the L view and each picture of the R view in encoding order. Each picture of dependent view video is encoded with reference to the corresponding picture of base view video, and therefore, in the result arranged in encoding order, the L view pictures and the R view pictures are arranged alternately.
The mixing processing unit 411 outputs the L view pictures and R view pictures arranged in encoding order to the MVC encoder 412.
The MVC encoder 412 encodes each picture supplied from the mixing processing unit 411 in accordance with H.264 AVC/MVC, and outputs the stream obtained by the encoding to the separating unit 413. The base view video stream and the dependent view video stream are multiplexed in the stream output from the MVC encoder 412.
The base view video stream included in the stream output from the MVC encoder 412 is made up of access units in which the data of the pictures of base view video are stored. Also, the dependent view video stream included in the stream output from the MVC encoder 412 is made up of access units in which the data of the pictures of dependent view video are stored.
An MVC header, in which the view_id for identifying the view component stored therein is described, is included in each access unit making up the base view video stream and each access unit making up the dependent view video stream.
The separating unit 413 separates the base view video stream and the dependent view video stream multiplexed in the stream supplied from the MVC encoder 412, and outputs them. The base view video stream output from the separating unit 413 is supplied to the MVC header removing unit 414, and the dependent view video stream is supplied to the multiplexer 415.
The MVC header removing unit 414 removes the MVC headers included in the access units making up the base view video stream supplied from the separating unit 413. The MVC header removing unit 414 outputs to the multiplexer 415 the base view video stream made up of the access units from which the MVC headers have been removed.
The multiplexer 415 generates a TS including the base view video stream supplied from the MVC header removing unit 414 and the dependent view video stream supplied from the separating unit 413, and outputs it.
Figure 33 shows yet another configuration example of the 3D video TS generation unit provided in the recording device.
The 3D video TS generation unit in Figure 33 includes an AVC encoder 421, an MVC encoder 422, and a multiplexer 423. The data of L view video #1 is input to the AVC encoder 421, and the data of R view video #2 is input to the MVC encoder 422.
The AVC encoder 421 encodes the data of L view video #1 in accordance with H.264/AVC, and outputs the AVC video stream obtained by the encoding to the MVC encoder 422 and the multiplexer 423 as the base view video stream. The access units making up the base view video stream output from the AVC encoder 421 do not include MVC headers.
The MVC encoder 422 decodes the base view video stream (AVC video stream) supplied from the AVC encoder 421 to generate the data of L view video #1.
The MVC encoder 422 then generates a dependent view video stream based on the data of L view video #1 obtained by the decoding and the data of R view video #2 input from outside, and outputs it to the multiplexer 423. Each access unit making up the dependent view video stream output from the MVC encoder 422 includes an MVC header.
The multiplexer 423 generates a TS including the base view video stream supplied from the AVC encoder 421 and the dependent view video stream supplied from the MVC encoder 422, and outputs it.
The AVC encoder 421 in Figure 33 has the function of the H.264/AVC encoder 21 in Figure 3, and the MVC encoder 422 has the functions of the H.264/AVC decoder 22 and the dependent view video encoder 24 in Figure 3. Also, the multiplexer 423 has the function of the multiplexer 25 in Figure 3.
A 3D video TS generation unit having such a configuration is provided in the recording device, whereby encoding of an MVC header into the access units storing the data of base view video can be prohibited. Also, an MVC header with a view_id set equal to or greater than 1 can be included in each access unit storing the data of dependent view video.
Figure 34 shows the configuration on the playback apparatus 1 side for decoding access units.
Figure 34 shows the switch 109 and the video decoder 110 and so forth described with reference to Figure 22. Access unit #1, including the data of base view video, and access unit #2, including the data of dependent view video, are read from a buffer and supplied to the switch 109.
Encoding is performed with reference to the base view video, and therefore, in order to correctly decode the dependent view video, the corresponding base view video needs to have been decoded first.
In the H.264/MVC standard, the decoding side calculates the decoding order of each unit using the view_id included in the MVC header. Also, it is defined that the minimum value is always set as the value of view_id of the base view video at the time of encoding. The decoder starts decoding from the unit including the MVC header in which the minimum view_id is set, so that the base view video and the dependent view video can be decoded in the proper order.
Incidentally, encoding of an MVC header is prohibited for the access units that store base view video and are supplied to the video decoder 110 of playback apparatus 1.
For this reason, in playback apparatus 1, it is defined that a view component stored in an access unit without an MVC header is recognized as having a view_id of 0.
Thus, playback apparatus 1 can identify the base view video based on the view_id recognized as 0, and can identify the dependent view video based on the view_id actually set to a value other than 0.
The switch 109 in Figure 34 first outputs access unit #1, for which the minimum value 0 is recognized as the view_id, to the video decoder 110, and decoding is performed.
Then, after the decoding of access unit #1 is completed, the switch 109 outputs access unit #2, which is the unit for which the fixed value Y greater than 0 is set as the view_id, to the video decoder 110, and decoding is performed. The dependent view video picture stored in access unit #2 is the picture corresponding to the base view video picture stored in access unit #1.
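The ordering rule applied by switch 109 can be sketched like this (the data model is invented for illustration): an access unit without an MVC header is treated as view_id 0 and is therefore handed to the decoder first.

```python
# Sketch of the decode-order rule in playback apparatus 1: a headerless
# access unit is recognized as view_id 0 (base view), so it precedes the
# dependent view access unit whose MVC header carries a fixed view_id >= 1.
def effective_view_id(access_unit):
    header = access_unit.get("mvc_header")
    return header["view_id"] if header else 0  # headerless is defined as 0

def decode_order(access_units):
    return sorted(access_units, key=effective_view_id)

units = [
    {"name": "#2 (dependent)", "mvc_header": {"view_id": 1}},
    {"name": "#1 (base, no MVC header)"},
]
ordered = decode_order(units)
```

Sorting by the effective view_id yields the base view unit first, so the dependent view picture is always decoded after the base view picture it references.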
In this way, encoding of an MVC header into the access units storing base view video is prohibited, whereby the base view video stream recorded on optical disc 2 can be handled as a playable stream even in a conventional player.
Even in a case where the condition that the base view video stream of the BD-ROM 3D standard, extended from the BD-ROM standard, must be a stream playable in a conventional player is set, this condition can be satisfied.
For example, as shown in Figure 35, if MVC headers are added to both the base view video and the dependent view video and the base view video is decoded first, the base view video cannot be played in a conventional player. For the H.264/AVC decoder installed in a conventional player, an MVC header is undefined data. If such undefined data is input, some decoders cannot ignore it, and the processing may fail.
Note that in Figure 35, the view_id of the base view video is X, and the view_id of the dependent view video is Y, which is greater than X.
Also, even though encoding of the MVC header is prohibited, by defining that the view_id of the base view video is regarded as 0, playback apparatus 1 can be made to first decode the base view video and then decode the corresponding dependent view video. That is, decoding can be performed in the correct order.
[Operation 2]
About the GOP structure
In the H.264/AVC standard, a GOP (Group Of Pictures) structure as in the MPEG-2 video standard is not defined.
Therefore, in the BD-ROM standard, which handles H.264/AVC video streams, the GOP structure of an H.264/AVC video stream is defined, and various functions using the GOP structure, such as random access, are realized.
In the base view video stream and the dependent view video stream, which are video streams obtained by encoding with H.264 AVC/MVC, there is no definition of a GOP structure, just as with an H.264/AVC video stream.
The base view video stream is an H.264/AVC video stream. Therefore, the GOP structure of the base view video stream has the same structure as the GOP structure of the H.264/AVC video stream defined in the BD-ROM standard.
The GOP structure of the dependent view video stream is also defined as the same structure as the GOP structure of the base view video stream, that is, the GOP structure of the H.264/AVC video stream defined in the BD-ROM standard.
The GOP structure of the H.264/AVC video stream defined in the BD-ROM standard has the following features.
1. Features regarding the stream structure
(1) Open GOP / closed GOP structure
Figure 36 shows a closed GOP structure.
Each picture in Figure 36 is a picture making up an H.264/AVC video stream. A closed GOP includes an IDR (Instantaneous Decoding Refresh) picture.
An IDR picture is an I picture, and is decoded first in the GOP that includes the IDR picture. At the time of decoding the IDR picture, all information relating to decoding, such as the state of the reference picture buffer (DPB 151 in Figure 22), the frame numbers managed up to that point, and POC (Picture Order Count), is reset.
As shown in Figure 36, in the current GOP, which is a closed GOP, the pictures of the current GOP that precede (are in the past of) the IDR picture in display order are prevented from referencing the pictures of the previous GOP.
Also, among the pictures of the current GOP, the pictures that follow (are in the future of) the IDR picture in display order are prevented from referencing pictures beyond the IDR picture into the previous GOP. In H.264/AVC, a P picture after an I picture in display order is allowed to reference a picture before that I picture.
Figure 37 shows an open GOP structure.
As shown in Figure 37, in the current GOP, which is an open GOP, the pictures of the current GOP that precede the non-IDR I picture (an I picture that is not an IDR picture) in display order are allowed to reference the pictures of the previous GOP.
Also, among the pictures of the current GOP, the pictures after the non-IDR I picture in display order are prohibited from referencing pictures beyond the non-IDR I picture into the previous GOP.
(2) SPS and PPS are reliably encoded in the first access unit of a GOP.
An SPS (Sequence Parameter Set) is the header information of a sequence, including information relating to the encoding of the entire sequence. At the time of starting the decoding of a certain sequence, the SPS including the identification information of that sequence is necessary. A PPS (Picture Parameter Set) is the header information of a picture, including information relating to the encoding of the entire picture.
(3) A maximum of 30 PPSs can be encoded in the first access unit of a GOP. If multiple PPSs are encoded in the first access unit, the id (pic_parameter_set_id) of each PPS must not be the same.
(4) A maximum of 1 PPS can be encoded in an access unit other than the first access unit of a GOP.
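Constraints (3) and (4) above can be expressed as a simple validity check (a sketch only; the representation of a GOP as a list of per-access-unit PPS id lists is invented for this illustration):

```python
# Hedged sketch of constraints (3)/(4): at most 30 PPSs with distinct ids
# in the first access unit of a GOP, and at most 1 PPS in every other
# access unit of that GOP.
def gop_pps_ok(gop_access_units):
    """gop_access_units: list of lists of pic_parameter_set_id values,
    one inner list per access unit, in stream order."""
    first, rest = gop_access_units[0], gop_access_units[1:]
    first_ok = len(first) <= 30 and len(set(first)) == len(first)
    rest_ok = all(len(ids) <= 1 for ids in rest)
    return first_ok and rest_ok

ok = gop_pps_ok([[0, 1, 2], [], [3], []])
```

A duplicated pic_parameter_set_id in the first access unit, or two PPSs in any later access unit, makes the check fail.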
2. Features regarding the reference structure
(1) It is required that I, P, and B pictures be pictures configured of only I, P, and B slices, respectively.
(2) It is required that a B picture immediately before a reference picture (I or P picture) in display order be reliably encoded immediately after that reference picture in encoding order.
(3) It is required that the encoding order and display order of reference pictures (I or P pictures) be maintained (be the same).
(4) Referencing a B picture from a P picture is prohibited.
(5) If a non-reference B picture (B1) is before a reference B picture (B2) in encoding order, it is also required that B1 be before B2 in display order.
A non-reference B picture is a B picture that is not referenced by another, subsequent picture in encoding order.
(6) A reference B picture can reference the previous or next reference picture (I or P picture) in display order.
(7) A non-reference B picture can reference the previous or next reference picture (I or P picture) in display order, or a reference B picture.
(8) It is required that the number of consecutive B pictures be 3 at the maximum.
3. Features regarding the maximum number of frames or fields in a GOP
The maximum number of frames or fields in a GOP is specified according to the frame rate of the video, as shown in Figure 38.
As shown in Figure 38, for example, in the case of performing interlaced display at a frame rate of 29.97 frames per second, the maximum number of fields that can be displayed with the pictures of 1 GOP is 60. Also, in the case of performing progressive display at a frame rate of 59.94 frames per second, the maximum number of frames that can be displayed with the pictures of 1 GOP is 60.
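Only the two entries of the Figure 38 table that are stated in the text are reproduced in the sketch below; the remaining rows of that table are not available here, so the lookup is deliberately partial.

```python
# Partial sketch of the Figure 38 limits: maximum displayable fields or
# frames per GOP, for the two frame-rate/scan combinations stated above.
MAX_PER_GOP = {
    (29.97, "interlaced"): ("fields", 60),
    (59.94, "progressive"): ("frames", 60),
}

def max_gop_display(frame_rate, scan_mode):
    unit, limit = MAX_PER_GOP[(frame_rate, scan_mode)]
    return unit, limit
```

An encoder conforming to this constraint would close a GOP before exceeding the looked-up limit for the stream's frame rate and scan mode.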
A GOP structure having the features described above is also defined as the GOP structure of the dependent view video stream.
Furthermore, it is defined as a constraint that the structure of a certain GOP of the base view video stream and the structure of the corresponding GOP of the dependent view video stream match.
Figure 39 shows the closed GOP structure of the base view video stream or the dependent view video stream defined in the manner described above.
As shown in Figure 39, in the current GOP, which is a closed GOP, the pictures of the current GOP that precede (are in the past of) the IDR picture or anchor picture in display order are prohibited from referencing the pictures of the previous GOP. The anchor picture will be described later.
Also, among the pictures of the current GOP, the pictures that follow (are in the future of) the IDR picture or anchor picture in display order are prohibited from referencing pictures beyond the IDR picture or anchor picture into the previous GOP.
Figure 40 shows the open GOP structure of the base view video stream or the dependent view video stream.
As shown in Figure 40, in the current GOP, which is an open GOP, the pictures of the current GOP that precede the non-IDR anchor picture (an anchor picture that is not an IDR picture) in display order are allowed to reference the pictures of the previous GOP.
Also, among the pictures of the current GOP, the pictures after the non-IDR anchor picture in display order are prohibited from referencing pictures beyond the non-IDR anchor picture into the previous GOP.
By defining the GOP structures in this way, a certain GOP of the base view video stream and the corresponding GOP of the dependent view video stream match in the features of the stream structure, such as being an open GOP or a closed GOP.
Also, multiple features of the picture reference structure match, so that the picture of the dependent view video corresponding to a non-reference B picture of the base view video reliably becomes a non-reference B picture.
Furthermore, between a certain GOP of the base view video stream and the corresponding GOP of the dependent view video stream, the number of frames and the number of fields also match.
In this way, the GOP structure of the dependent view video stream is defined as the same structure as the GOP structure of the base view video stream, whereby the GOPs of the streams that correspond to each other can have the same features.
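A checker that this matching constraint implies might look like the following (GOPs are modeled as dictionaries purely for illustration; the field names are invented):

```python
# Hedged sketch: corresponding GOPs of the base and dependent view streams
# must agree in open/closed type and in frame and field counts.
def gops_match(base_gop, dependent_gop):
    return (base_gop["closed"] == dependent_gop["closed"]
            and base_gop["num_frames"] == dependent_gop["num_frames"]
            and base_gop["num_fields"] == dependent_gop["num_fields"])

base = {"closed": True, "num_frames": 24, "num_fields": 48}
dep = {"closed": True, "num_frames": 24, "num_fields": 48}
matched = gops_match(base, dep)
```

If any of the three properties differs between a pair of corresponding GOPs, decoding started mid-stream could succeed for one stream and fail for the other, which is exactly the situation the constraint prevents.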
Also, even if decoding is performed starting from the middle of a stream, it can be performed without problems. Decoding starting from the middle of a stream is performed, for example, at the time of trick play or random access.
If the structures of the GOPs that correspond to each other differed between the streams, for example in the number of frames, a situation could occur in which one stream can be played normally but the other stream cannot. This can be prevented.
If decoding were started from the middle of one stream while the structures of the GOPs that correspond to each other differed between the streams, a situation could occur in which the base view video picture necessary for decoding the dependent view video has not yet been decoded. In that case, as a result, the dependent view video picture cannot be decoded, and 3D display cannot be performed. Also, depending on the installation method, the image of the base view video might not be output either, but these inconveniences can be prevented.
About EP_map
By using the GOP structures of the base view video stream and the dependent view video stream, the decoding start positions at the time of random access or trick play can be set in an EP_map. An EP_map is included in a Clip information file.
The following two constraints are defined as constraints on the pictures that can be set to an EP_map as decoding start positions.
1. The positions that can be set for the dependent view video stream are the positions of anchor pictures arranged after a SubsetSPS, or the positions of IDR pictures arranged after a SubsetSPS.
An anchor picture is a picture specified by H.264 AVC/MVC, and is a picture of the dependent view video stream encoded by performing inter-view reference rather than reference in the time direction.
2. If a certain picture of the dependent view video stream is set to an EP_map as a decoding start position, the corresponding picture of the base view video stream is also set to an EP_map as a decoding start position.
Figure 41 shows an example of decoding start positions, set to EP_maps, that satisfy the two constraints described above.
In Figure 41, the pictures making up the base view video stream and the pictures making up the dependent view video stream are shown in decoding order.
Among the pictures of the dependent view video stream, the picture P1 shown in color is an anchor picture or an IDR picture. A SubsetSPS is included in the access unit immediately before the access unit including the data of picture P1.
In the example of Figure 41, as shown by white arrow #11, picture P1 is set to the EP_map of the dependent view video stream as a decoding start position.
Picture P11, which is the picture of the base view video stream corresponding to picture P1, is an IDR picture. As shown by white arrow #12, picture P11, being an IDR picture, is also set to the EP_map of the base view video stream as a decoding start position.
If decoding is started from pictures P1 and P11 in response to an instruction for random access or trick play, the decoding of picture P11 is performed first. Picture P11 is an IDR picture, and therefore picture P11 can be decoded without referencing another picture.
When the decoding of picture P11 is completed, next, picture P1 is decoded. The decoded picture P11 is referenced at the time of decoding picture P1. Picture P1 is an IDR picture or anchor picture, and therefore the decoding of picture P1 can be performed once the decoding of picture P11 is completed.
Subsequently, decoding is performed in order, for example, the picture of the base view video after picture P11, then the picture of the dependent view video after picture P1, and so on.
The structures of the corresponding GOPs are the same, and decoding is also started from corresponding positions, so the pictures set to the EP_maps and the subsequent pictures can be decoded without problems for both the base view video and the dependent view video. Random access can thereby be realized.
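Random access via the EP_map can be sketched as a lookup of the nearest preceding entry (the entry format `(PTS, source packet number)` and the values below are invented for this sketch):

```python
# Hedged sketch: find the EP_map entry at or before the requested time,
# then decode the base view picture at that entry first and the
# corresponding dependent view picture next (constraint 2 above guarantees
# that both streams have an entry at corresponding pictures).
def find_entry(ep_map, target_pts):
    """ep_map: list of (pts, source_packet_number), sorted by pts."""
    best = ep_map[0]
    for entry in ep_map:
        if entry[0] <= target_pts:
            best = entry
        else:
            break
    return best

base_ep_map = [(0, 0), (90000, 1200), (180000, 2600)]  # 90 kHz PTS values
entry = find_entry(base_ep_map, 100000)  # jump to roughly 1.1 s
```

Because both EP_maps carry entries at corresponding pictures, the same lookup on the dependent view EP_map yields the matching dependent view start position without any extra calculation.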
The pictures arranged on the left side of the vertically extending dotted line in Figure 41 are pictures that are not decoded.
Figure 42 shows the problem that would be caused if the GOP structure of the dependent view video were not defined.
In the example of Figure 42, picture P21, shown in color, which is an IDR picture of the base view video, is set to an EP_map as a decoding start position.
Consider a case where decoding is started from picture P21 of the base view video, and picture P31, which is the picture of the dependent view video corresponding to picture P21, is not an anchor picture. If no GOP structure is defined, there is no guarantee that the picture of the dependent view video corresponding to an IDR picture of the base view video is an IDR picture or an anchor picture.
In this case, even after the decoding of picture P21 of the base view video, picture P31 cannot be decoded. Reference in the time direction is also necessary for decoding picture P31, but the pictures on the left side of the vertically extending dotted line (the preceding pictures in decoding order) are not decoded.
Picture P31 cannot be decoded, and consequently the other pictures of the dependent view video that reference picture P31 cannot be decoded either.
This situation can be avoided by defining the GOP structure of the dependent view video stream.
By setting decoding start positions in the EP_map for both the base view video and the dependent view video, the playback apparatus 1 can easily determine the decoding start positions.
If only a picture of the base view video were set in the EP_map as a decoding start position, the playback apparatus 1 would have to determine, by calculation, the picture of the dependent view video corresponding to the picture at that decoding start position, which complicates processing.
Even when corresponding pictures of the base view video and the dependent view video have the same DTS/PTS, the byte alignment within the TS does not match if the bit rates of the two videos differ, which also complicates processing.
Figure 43 illustrates the concept of the picture search required to perform random access or trick play on an MVC stream made up of a base view video stream and a dependent view video stream.
As shown in Figure 43, when random access or trick play is performed, a non-IDR anchor picture or an IDR picture is searched for, and the decoding start position is determined.
Next, the EP_map will be described. The description covers the case where a decoding start position of the base view video is set in the EP_map; decoding start positions of the dependent view video are likewise set in the EP_map of the dependent view video.
Figure 44 shows the structure of an AV stream recorded on the optical disc 2.
The TS including the base view video stream is formed from an integer number of aligned units (Aligned Units), each 6144 bytes in size.
An aligned unit is made up of 32 source packets. A source packet is 192 bytes long and consists of a 4-byte transport packet extra header (TP_extra_header) and a 188-byte transport packet (Transport packet).
The data of the base view video is packetized into MPEG2 PES packets. A PES packet is formed by adding a PES packet header to the data portion. The PES packet header includes a stream ID that identifies the type of elementary stream carried by the PES packet.
The PES packets are further packetized into transport packets. That is, a PES packet is divided into pieces the size of a transport packet payload, a transport packet header is added to each payload, and transport packets are formed. The transport packet header includes a PID, which serves as identification information for the data stored in the payload.
Note that each source packet is given a source packet number, which, for example, starts at 0 at the beginning of the Clip AV stream and increments by 1 for each source packet. Also, an aligned unit starts from the first byte of a source packet.
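The arithmetic implied by this layout can be sketched as follows. The constants (192-byte source packets, 32 packets per 6144-byte aligned unit) come from the text above; the helper names are our own, not from the patent:

```python
SOURCE_PACKET_SIZE = 192          # 4-byte TP_extra_header + 188-byte transport packet
PACKETS_PER_ALIGNED_UNIT = 32     # one aligned unit = 32 source packets
ALIGNED_UNIT_SIZE = SOURCE_PACKET_SIZE * PACKETS_PER_ALIGNED_UNIT  # 6144 bytes

def source_packet_offset(spn):
    """Byte offset of source packet number `spn` in the Clip AV stream file
    (numbering starts at 0 at the beginning of the stream)."""
    return spn * SOURCE_PACKET_SIZE

def starts_aligned_unit(spn):
    """True if source packet `spn` is the first packet of an aligned unit."""
    return spn % PACKETS_PER_ALIGNED_UNIT == 0

print(ALIGNED_UNIT_SIZE)          # 6144
print(source_packet_offset(32))   # 6144: packet 32 opens the second aligned unit
print(starts_aligned_unit(32))    # True
```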
In the time having provided the timestamp of accessing points of Clip, EP_map is used to search for the data address that should start reading out data in ClipAV stream file. EP_map lists stomion (alistofentrypoints) in from one of elementary streams and MPTS extraction.
EP_map has the address information that should start the entrance of decoding in AV stream for searching for. What one section of EP data in EP_map were made up of the address corresponding with this PTS in the AV stream of PTS and addressed location forms configuration. In AVC/H.264, the data of a picture are stored in an addressed location.
Figure 45 shows an example of a Clip AV stream.
The Clip AV stream in Figure 45 is a video stream (the base view video stream) made up of source packets identified by PID=x. The video stream is distinguished, for each source packet, by the PID included in the header of the transport packet within that source packet.
In Figure 45, the source packets of the video stream that contain the first byte of an IDR picture are marked with color. The uncolored squares indicate source packets containing data that is not a random access point, or source packets containing data of another stream.
For example, the source packet with source packet number X1, which contains the first byte of a randomly accessible IDR picture of the video stream distinguished by PID=x, is located at the position of PTS=pts(x1) on the time axis of the Clip AV stream.
Similarly, the source packet containing the first byte of the next randomly accessible IDR picture is the source packet with source packet number X2, and is located at the position of PTS=pts(x2).
Figure 46 conceptually shows an example of the EP_map corresponding to the Clip AV stream in Figure 45.
As shown in Figure 46, the EP_map is made up of stream_PID, PTS_EP_start, and SPN_EP_start.
stream_PID represents the PID of the transport packets that carry the video stream.
PTS_EP_start represents the PTS of an access unit that starts from a randomly accessible IDR picture.
SPN_EP_start represents the address of the source packet that contains the first byte of the access unit referred to by the value of PTS_EP_start.
The PID of the video stream is stored in stream_PID, and EP_map_for_one_stream_PID(), table information representing the correspondence between PTS_EP_start and SPN_EP_start, is generated.
For example, in EP_map_for_one_stream_PID[0] of the video stream with PID=x, PTS=pts(x1) and source packet number X1, PTS=pts(x2) and source packet number X2, ..., and PTS=pts(xk) and source packet number Xk are described as corresponding pairs.
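A random-access lookup over such a table amounts to finding the last entry whose PTS_EP_start does not exceed the requested timestamp, then converting SPN_EP_start to a byte address. A minimal sketch follows; the entry layout and field order follow the description above, while the sample values and the search helper are illustrative, not from the patent:

```python
import bisect

# EP_map_for_one_stream_PID: (PTS_EP_start, SPN_EP_start) pairs in ascending PTS order.
# The numeric values are made-up examples standing in for pts(x1)/X1, pts(x2)/X2, ...
ep_map_for_pid_x = [
    (900000, 100),
    (1800000, 523),
    (2700000, 1040),
]

def find_decoding_start(ep_map, pts):
    """Return the SPN_EP_start of the last entry with PTS_EP_start <= pts."""
    keys = [entry[0] for entry in ep_map]
    i = bisect.bisect_right(keys, pts) - 1
    if i < 0:
        raise ValueError("requested PTS precedes the first entry point")
    return ep_map[i][1]

spn = find_decoding_start(ep_map_for_pid_x, 2000000)
print(spn)          # 523
print(spn * 192)    # byte address: source packet number times 192 bytes per packet
```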
Such a table is generated for every video stream multiplexed into the same Clip AV stream. The EP_map containing the generated tables is stored in the Clip information file corresponding to that Clip AV stream.
Figure 47 shows an example of the data structure of the source packet specified by SPN_EP_start.
As described above, a source packet is configured by adding a 4-byte header to a 188-byte transport packet. The transport packet portion consists of a header portion (TP header) and a payload portion. SPN_EP_start represents the source packet number of the source packet containing the first byte of the access unit that starts from the IDR picture.
In AVC/H.264, an access unit (that is, a picture) starts with an AU delimiter (access unit delimiter). The AU delimiter is followed by an SPS and a PPS, after which the beginning portion or the whole of the slice data of the IDR picture is stored.
A value of 1 in the payload_unit_start_indicator in the TP header of a transport packet indicates that a new PES packet starts from the payload of that transport packet. The access unit starts from this source packet.
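Extracting the PID and payload_unit_start_indicator from a source packet follows directly from the layout above (a 4-byte TP_extra_header, then a 188-byte transport packet whose header begins with the MPEG-2 TS sync byte). The helper below is a hedged sketch of that header parse, not a full demultiplexer:

```python
def parse_source_packet_header(source_packet: bytes):
    """Parse PID and payload_unit_start_indicator from a 192-byte source packet."""
    assert len(source_packet) == 192
    tp = source_packet[4:]            # skip the 4-byte TP_extra_header
    assert tp[0] == 0x47              # MPEG-2 TS sync byte
    pusi = (tp[1] >> 6) & 0x1         # payload_unit_start_indicator (bit 6 of byte 1)
    pid = ((tp[1] & 0x1F) << 8) | tp[2]  # 13-bit PID spans bytes 1-2
    return pid, pusi

# A synthetic packet: PID=0x1011, payload_unit_start_indicator=1
pkt = bytes(4) + bytes([0x47, 0x40 | 0x10, 0x11, 0x10]) + bytes(184)
print(parse_source_packet_header(pkt))   # (4113, 1)
```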
Such an EP_map is prepared for each of the base view video stream and the dependent view video stream.
[Operation 3]
At the time of encoding, a POC (Picture Order Count) is set for each picture making up the base view video stream and the dependent view video stream. The POC is a value representing the display order of a picture.
In H.264/AVC, the POC is defined as follows: "a variable having a value that is non-decreasing with increasing picture position in output order relative to the previous IDR picture in decoding order or relative to the previous picture containing the memory management control operation that marks all reference pictures as 'unused for reference'".
At encoding time, the POCs set for the pictures of the base view video stream and the POCs set for the pictures of the dependent view video stream are assigned in a unified manner.
For example, POC=1 is set for the first picture in display order of the base view video stream. Thereafter, the POC set for each picture is incremented by 1.
The same POC=1 as is set for the first picture of the base view video stream is also set for the first picture in display order of the dependent view video stream. Thereafter, the POC set for each picture is likewise incremented by 1.
As described above, since the GOP structure of the base view video stream is identical to that of the dependent view video stream, the same POC is set for the pictures of the base view video stream and the dependent view video stream that correspond to each other in display order.
The playback apparatus 1 can therefore process view components for which the same POC has been set as view components that correspond to each other in display order.
For example, the playback apparatus 1 can process the picture with POC=1 among the pictures of the base view video stream and the picture with POC=1 among the pictures of the dependent view video stream as mutually corresponding pictures.
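Pairing view components by POC, as the playback apparatus does here, can be sketched as a simple dictionary join. The picture records and field names below are illustrative, not from the patent:

```python
# Toy view components: POC plus a placeholder payload; the dependent view
# is deliberately listed out of display order to show the match is by POC.
base_view = [{"poc": 1, "data": "B1"}, {"poc": 2, "data": "B2"}, {"poc": 3, "data": "B3"}]
dep_view  = [{"poc": 2, "data": "D2"}, {"poc": 1, "data": "D1"}, {"poc": 3, "data": "D3"}]

def pair_by_poc(base_pictures, dependent_pictures):
    """Match base and dependent view components that carry the same POC."""
    dep_by_poc = {p["poc"]: p for p in dependent_pictures}
    return [(b, dep_by_poc[b["poc"]]) for b in base_pictures if b["poc"] in dep_by_poc]

pairs = pair_by_poc(base_view, dep_view)
print([(b["data"], d["data"]) for b, d in pairs])   # [('B1', 'D1'), ('B2', 'D2'), ('B3', 'D3')]
```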
In addition, SEI (Supplemental Enhancement Information) is set for each picture making up the base view video stream and the dependent view video stream. SEI is additional information defined in H.264/AVC that contains auxiliary information relating to decoding.
Picture timing SEI, one type of SEI, includes timing information such as the read time from the CPB (Coded Picture Buffer) at encoding time and the read time from the DPB (the DPB 151 in Figure 22) at decoding time. Picture timing SEI also includes information about display time and about picture structure.
At encoding time, the picture timing SEI set for the pictures of the base view video stream and the picture timing SEI set for the pictures of the dependent view video stream are assigned in a unified manner.
For example, if T1 is set as the CPB read time for the first picture in encoding order of the base view video stream, T1 is also set as the CPB read time for the first picture in encoding order of the dependent view video stream.
That is, picture timing SEI with identical content is set for the pictures of the base view video stream and the dependent view video stream that correspond to each other in encoding order or decoding order.
The playback apparatus 1 can thus process view components for which the same picture timing SEI has been set as view components that correspond to each other in decoding order.
The POC and the picture timing SEI are included in the elementary streams of the base view video and the dependent view video, and are referenced by the video decoder 110 in the playback apparatus 1.
The video decoder 110 can identify corresponding view components based on the information included in the elementary streams. The video decoder 110 can also perform decoding processing in the correct decoding order based on the picture timing SEI, and in the correct display order based on the POC.
Since there is no need to refer to a PlayList or the like in order to identify corresponding view components, problems occurring in the System Layer or a higher layer can be handled. A decoder implementation that is independent of the layer in which a problem occurred is also possible.
The series of processing described above can be executed by hardware or by software. When the series of processing is executed by software, the program making up the software is installed from a program recording medium into a computer built into dedicated hardware, a general-purpose personal computer, or the like.
The block diagram of Figure 48 shows a configuration example of the hardware of a computer that executes the series of processing described above according to a program.
A CPU (Central Processing Unit) 501, a ROM (Read Only Memory) 502, and a RAM (Random Access Memory) 503 are interconnected by a bus 504.
The bus 504 is further connected to an input/output interface 505. An input unit 506 made up of a keyboard, a mouse, and the like, and an output unit 507 made up of a display, speakers, and the like, are connected to the input/output interface 505. In addition, a storage unit 508 made up of a hard disk, nonvolatile memory, and the like, a communication unit 509 made up of a network interface and the like, and a drive 510 for driving a removable medium 511 are also connected.
In the computer configured in this way, the CPU 501 performs the series of processing described above by, for example, loading a program stored in the storage unit 508 into the RAM 503 via the input/output interface 505 and the bus 504 and executing it.
The program executed by the CPU 501 is, for example, recorded on the removable medium 511, or provided via a wired or wireless transmission medium such as a LAN, the Internet, or digital broadcasting, and is installed in the storage unit 508.
The program executed by the computer may be a program in which the processing is performed chronologically in the order described in this specification, or a program in which the processing is performed in parallel or at a necessary timing, such as when a call is made.
Embodiments of the present invention are not limited to the embodiments described above; various modifications can be made without departing from the essence of the present invention.
Label list
1 playback apparatus, 2 optical disc, 3 display device, 11 MVC encoder, 21 H.264/AVC encoder, 22 H.264/AVC decoder, 23 depth calculation unit, 24 dependent view video encoder, 25 multiplexer, 51 controller, 52 disk drive, 53 memory, 54 local storage, 55 internet interface, 56 decoding unit, 57 operation input unit.

Claims (6)

1. A recording apparatus for recording data on a 3D-compatible BD, comprising:
encoding means for encoding multi-view video data using a predetermined encoding method and outputting a base image stream used for 2D and 3D playback and an extended image stream used for 3D playback, wherein the data making up the base image stream has no data header containing viewpoint identification information, so that during decoding the data header of the base image stream is regarded as having a view_id with a value of zero, and the data making up the extended image stream has a data header containing the identification information, the identification information having a view_id with a nonzero value and indicating that the data is data of an extended viewpoint; and
recording means for recording a transport stream output from a multiplexer, together with other management data, on the 3D-compatible BD in the recording apparatus.
2. The recording apparatus according to claim 1, wherein the encoding means removes the data header from a base image stream, made up of data having the data header, that is obtained by encoding the multi-view video data using the predetermined encoding method, and outputs a base image stream made up of data without the data header.
3. The recording apparatus according to claim 1, wherein the encoding means sets in the data header a value equal to or greater than 1, the value serving as identification information indicating that the data is data of an extended viewpoint, and outputs the extended image stream.
4. A recording method for recording data on a 3D-compatible BD, comprising the steps of:
encoding multi-view video data using a predetermined encoding method, and outputting a base image stream used for 2D and 3D playback and an extended image stream used for 3D playback, wherein the data making up the base image stream has no data header containing viewpoint identification information, so that during decoding the data header of the base image stream is regarded as having a view_id with a value of zero, and the data making up the extended image stream has a data header containing identification information, the identification information having a view_id with a nonzero value and indicating that the data is data of an extended viewpoint; and
recording a transport stream output from a multiplexer, together with other management data, on the 3D-compatible BD.
5. A playback apparatus for 3D-compatible BD playback, comprising:
reading means for reading, from a 3D-compatible BD recording medium, a base image stream used for 2D and 3D playback and an extended image stream used for 3D playback, the streams being obtained by encoding multi-view video data using a predetermined encoding method, wherein the data making up the base image stream has no data header containing viewpoint identification information, and the data making up the extended image stream has a data header containing the identification information with a value equal to or greater than 1, the identification information with a value equal to or greater than 1 indicating that the data is data of an extended viewpoint; and
decoding means for performing processing sequentially starting from the data of each viewpoint, regarding the data in the base image stream that has no data header as having a view_id value of zero of the identification information in the data header, and decoding the data of the base image stream first and then decoding the data of the extended image stream.
6. A playback method for 3D-compatible BD playback, comprising the steps of:
reading, from a 3D-compatible BD recording medium, a base image stream used for 2D and 3D playback and an extended image stream used for 3D playback, the streams being obtained by encoding multi-view video data using a predetermined encoding method, wherein the data making up the base image stream has no data header containing viewpoint identification information, and the data making up the extended image stream has a data header containing the identification information with a value equal to or greater than 1, the identification information with a value equal to or greater than 1 indicating that the data is data of an extended viewpoint;
performing processing sequentially starting from the data of each viewpoint, regarding the data in the base image stream that has no data header as having a view_id value of zero of the identification information in the data header; and
decoding the data of the base image stream first and then decoding the data of the extended image stream.
CN201210045765.8A 2009-04-08 2010-03-25 Recording equipment, recording method, playback apparatus and back method Expired - Fee Related CN102625122B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2009094254A JP4962525B2 (en) 2009-04-08 2009-04-08 REPRODUCTION DEVICE, REPRODUCTION METHOD, AND PROGRAM
JP2009-094254 2009-04-08
CN201080001730.3A CN102047673B (en) 2009-04-08 2010-03-25 Recording device, recording method, playback device, playback method, recording medium and program

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201080001730.3A Division CN102047673B (en) 2009-04-08 2010-03-25 Recording device, recording method, playback device, playback method, recording medium and program

Publications (2)

Publication Number Publication Date
CN102625122A CN102625122A (en) 2012-08-01
CN102625122B true CN102625122B (en) 2016-05-04

Family

ID=42936181

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201210045765.8A Expired - Fee Related CN102625122B (en) 2009-04-08 2010-03-25 Recording equipment, recording method, playback apparatus and back method
CN201080001730.3A Expired - Fee Related CN102047673B (en) 2009-04-08 2010-03-25 Recording device, recording method, playback device, playback method, recording medium and program

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201080001730.3A Expired - Fee Related CN102047673B (en) 2009-04-08 2010-03-25 Recording device, recording method, playback device, playback method, recording medium and program

Country Status (14)

Country Link
US (1) US9088775B2 (en)
EP (1) EP2285127B1 (en)
JP (1) JP4962525B2 (en)
KR (1) KR20120006431A (en)
CN (2) CN102625122B (en)
AU (1) AU2010235599B2 (en)
BR (1) BRPI1002816A2 (en)
CA (1) CA2724974C (en)
ES (1) ES2526578T3 (en)
MX (1) MX2010013210A (en)
MY (1) MY156159A (en)
RU (1) RU2525483C2 (en)
TW (1) TWI444042B (en)
WO (1) WO2010116895A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2326092A4 (en) * 2008-09-18 2012-11-21 Panasonic Corp IMAGE DECODING DEVICE, IMAGE ENCODING DEVICE, IMAGE DECODING METHOD, IMAGE ENCODING METHOD, AND PROGRAM
JP2011199396A (en) * 2010-03-17 2011-10-06 Ntt Docomo Inc Moving image prediction encoding device, moving image prediction encoding method, moving image prediction encoding program, moving image prediction decoding device, moving image prediction decoding method, and moving image prediction decoding program
WO2012077981A2 (en) 2010-12-07 2012-06-14 삼성전자 주식회사 Transmitter for transmitting data for constituting content, receiver for receiving and processing the data, and method therefor
KR101831775B1 (en) 2010-12-07 2018-02-26 삼성전자주식회사 Transmitter and receiver for transmitting and receiving multimedia content, and reproducing method thereof
JPWO2012169204A1 (en) * 2011-06-08 2015-02-23 パナソニック株式会社 Transmission device, reception device, transmission method, and reception method
EP2739043A4 (en) * 2011-07-29 2015-03-18 Korea Electronics Telecomm TRANSMISSION APPARATUS AND METHOD AND RECEIVER APPARATUS AND METHOD FOR PROVIDING 3D SERVICE THROUGH LINKING WITH A REAL-TIME REALIZED REFERENCE IMAGE AND WITH ADDITIONAL IMAGE AND CONTENT ISSUED SEPARATELY
US10447990B2 (en) 2012-02-28 2019-10-15 Qualcomm Incorporated Network abstraction layer (NAL) unit header design for three-dimensional video coding
US9014255B2 (en) * 2012-04-03 2015-04-21 Xerox Corporation System and method for identifying unique portions of videos with validation and predictive scene changes
CA2870989C (en) 2012-04-23 2018-11-20 Panasonic Intellectual Property Corporation Of America Encoding method, decoding method, encoding apparatus, decoding apparatus, and encoding and decoding apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1269941A (en) * 1997-08-29 2000-10-11 松下电器产业株式会社 Optical disc for recording high-resolution video images and general video images, optical disc reproduction device, optical disc recording device, and reproduction control information generation device
EP1501316A1 (en) * 2002-04-25 2005-01-26 Sharp Kabushiki Kaisha Multimedia information generation method and multimedia information reproduction device
CN101180884A (en) * 2005-04-13 2008-05-14 诺基亚公司 Method, apparatus and system for efficient fine-grained scaling (FGS) encoding and decoding of video data
CN101356822A (en) * 2006-01-10 2009-01-28 汤姆逊许可公司 Method and apparatus for constructing a reference picture list for scalable video

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US353144A (en) * 1886-11-23 Table-corner
US429072A (en) * 1890-05-27 Machine for cutting circular plates
US5619256A (en) * 1995-05-26 1997-04-08 Lucent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions
EP2259585B1 (en) * 1996-12-04 2013-10-16 Panasonic Corporation Optical disk for high resolution and three dimensional video recording, optical disk reproduction apparatus, and optical disk recording apparatus
JP4051841B2 (en) * 1999-12-01 2008-02-27 ソニー株式会社 Image recording apparatus and method
KR100397511B1 (en) * 2001-11-21 2003-09-13 한국전자통신연구원 The processing system and it's method for the stereoscopic/multiview Video
JP3707685B2 (en) * 2002-05-08 2005-10-19 ソニー株式会社 Optical disc apparatus, optical disc recording method, optical disc recording method program, and recording medium recording optical disc recording method program
JP2003333505A (en) * 2002-05-14 2003-11-21 Pioneer Electronic Corp Recording medium, video recording apparatus, video reproducing apparatus, and video recording and reproducing apparatus
KR100751422B1 (en) * 2002-12-27 2007-08-23 한국전자통신연구원 Stereoscopic video encoding and decoding method, encoding and decoding device
CA2553708C (en) * 2004-02-06 2014-04-08 Sony Corporation Information processing device, information processing method, program, and data structure
JP4608953B2 (en) 2004-06-07 2011-01-12 ソニー株式会社 Data recording apparatus, method and program, data reproducing apparatus, method and program, and recording medium
JP2006238373A (en) * 2005-02-28 2006-09-07 Sony Corp Coder and coding method, decoder and decoding method, image processing system and image processing method, recording medium, and program
WO2007010779A1 (en) * 2005-07-15 2007-01-25 Matsushita Electric Industrial Co., Ltd. Packet transmitter
JP4638784B2 (en) * 2005-07-19 2011-02-23 オリンパスイメージング株式会社 Image output apparatus and program
US7817866B2 (en) * 2006-01-12 2010-10-19 Lg Electronics Inc. Processing multiview video
US8767836B2 (en) * 2006-03-27 2014-07-01 Nokia Corporation Picture delimiter in scalable video coding
JP5255558B2 (en) * 2006-03-29 2013-08-07 トムソン ライセンシング Multi-view video encoding method and apparatus
MX2008012437A (en) * 2006-03-30 2008-11-18 Lg Electronics Inc A method and apparatus for decoding/encoding a video signal.
KR101137347B1 (en) * 2006-05-11 2012-04-19 엘지전자 주식회사 apparatus for mobile telecommunication and method for displaying an image using the apparatus
CN101578884B (en) * 2007-01-08 2015-03-04 诺基亚公司 System and method for providing and using predetermined signaling of interoperability points for transcoded media streams
KR101396948B1 (en) * 2007-03-05 2014-05-20 경희대학교 산학협력단 Method and Equipment for hybrid multiview and scalable video coding
JP2008252740A (en) 2007-03-30 2008-10-16 Sony Corp Remote commander and command generating method, playback apparatus and playback method, program, and recording medium
KR101393169B1 (en) * 2007-04-18 2014-05-09 톰슨 라이센싱 Coding systems
WO2008140190A1 (en) * 2007-05-14 2008-11-20 Samsung Electronics Co, . Ltd. Method and apparatus for encoding and decoding multi-view image
JP4720785B2 (en) * 2007-05-21 2011-07-13 富士フイルム株式会社 Imaging apparatus, image reproducing apparatus, imaging method, and program
KR101388265B1 (en) * 2007-06-11 2014-04-22 삼성전자주식회사 System and method for generating and playing three dimensional image files based on two dimensional image media standards
US20080317124A1 (en) * 2007-06-25 2008-12-25 Sukhee Cho Multi-view video coding system, decoding system, bitstream extraction system for decoding base view and supporting view random access
CN101222630B (en) * 2007-11-30 2010-08-18 武汉大学 Time-domain gradable video encoding method for implementing real-time double-frame reference
KR101506217B1 (en) * 2008-01-31 2015-03-26 삼성전자주식회사 A method and apparatus for generating a stereoscopic image data stream for reproducing a partial data section of a stereoscopic image, and a method and an apparatus for reproducing a partial data section of a stereoscopic image
US8565576B2 (en) * 2008-05-01 2013-10-22 Panasonic Corporation Optical disc for reproducing stereoscopic video image
JP4564107B2 (en) 2008-09-30 2010-10-20 パナソニック株式会社 Recording medium, reproducing apparatus, system LSI, reproducing method, recording method, recording medium reproducing system
KR101626486B1 (en) * 2009-01-28 2016-06-01 엘지전자 주식회사 Broadcast receiver and video data processing method thereof
JP2010244630A (en) * 2009-04-07 2010-10-28 Sony Corp Information processing device, information processing method, program, and data structure
RU2477009C2 (en) * 2009-04-28 2013-02-27 Панасоник Корпорэйшн Image decoding method and image coding apparatus
WO2011049519A1 (en) * 2009-10-20 2011-04-28 Telefonaktiebolaget Lm Ericsson (Publ) Method and arrangement for multi-view video compression
CN105847781B (en) * 2010-07-21 2018-03-20 杜比实验室特许公司 Coding/decoding method for the transmission of multilayer frame compatible video
US20120219069A1 (en) * 2011-02-28 2012-08-30 Chong Soon Lim Methods and apparatuses for encoding and decoding images of a plurality of views using multiview video coding standard and mpeg-2 video standard

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1269941A (en) * 1997-08-29 2000-10-11 松下电器产业株式会社 Optical disc for recording high-resolution video images and general video images, optical disc reproduction device, optical disc recording device, and reproduction control information generation device
EP1501316A1 (en) * 2002-04-25 2005-01-26 Sharp Kabushiki Kaisha Multimedia information generation method and multimedia information reproduction device
CN101180884A (en) * 2005-04-13 2008-05-14 诺基亚公司 Method, apparatus and system for efficient fine-grained scaling (FGS) encoding and decoding of video data
CN101356822A (en) * 2006-01-10 2009-01-28 汤姆逊许可公司 Method and apparatus for constructing a reference picture list for scalable video

Also Published As

Publication number Publication date
KR20120006431A (en) 2012-01-18
WO2010116895A1 (en) 2010-10-14
AU2010235599A1 (en) 2010-10-14
ES2526578T3 (en) 2015-01-13
RU2010149262A (en) 2012-06-10
AU2010235599B2 (en) 2015-03-26
EP2285127A1 (en) 2011-02-16
TWI444042B (en) 2014-07-01
MY156159A (en) 2016-01-15
MX2010013210A (en) 2010-12-21
EP2285127A4 (en) 2013-03-13
TW201041390A (en) 2010-11-16
RU2525483C2 (en) 2014-08-20
HK1157551A1 (en) 2012-06-29
JP2010245968A (en) 2010-10-28
JP4962525B2 (en) 2012-06-27
CN102047673B (en) 2016-04-13
US20110081131A1 (en) 2011-04-07
CA2724974C (en) 2014-08-19
CN102625122A (en) 2012-08-01
BRPI1002816A2 (en) 2016-02-23
CA2724974A1 (en) 2010-10-14
US9088775B2 (en) 2015-07-21
EP2285127B1 (en) 2014-11-19
CN102047673A (en) 2011-05-04

Similar Documents

Publication Publication Date Title
CN103079082B (en) Recording method
CN103179420B (en) Recording equipment, recording method, playback apparatus, back method, program and recording medium
CN102625122B (en) Recording equipment, recording method, playback apparatus and back method
CN102292992B (en) Information processing device, information processing method, playback device, playback method, and recording medium
EP2288173B1 (en) Playback device, playback method, and program
WO2010116955A1 (en) Information processing device, information processing method, reproduction device, reproduction method, and program
EP2285129A1 (en) Information processing device, information processing method, program, and recording medium
HK1157551B (en) Recording device, recording method, reproduction device, reproduction method, recording medium, and program
HK1157546B (en) Recording device, recording method, reproduction device, reproduction method, program, and recording medium
HK1164003B (en) Information processing device, information processing method, playback device, playback method, and recording medium
HK1163999A (en) Playback device, playback method, and program
HK1157547B (en) Recording device, recording method, reproduction device, and reproduction method
JP2012135001A (en) Recording method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160504

Termination date: 20210325

CF01 Termination of patent right due to non-payment of annual fee