HK1220057B - Guided color grading for an extended dynamic range image
- Publication number
- HK1220057B, HK16107865.9A
- Authority
- HK
- Hong Kong
- Prior art keywords
- image
- color
- dynamic range
- computer
- unrated
- Prior art date
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority from U.S. provisional patent application No. 61/894,382, filed on October 22, 2013, which is hereby incorporated by reference in its entirety.
Technical Field
The present disclosure relates to color grading of video and images. More particularly, it relates to guided color grading for an extended dynamic range.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments of the disclosure and, together with the description of the exemplary embodiments, serve to explain the principles and implementations of the disclosure.
FIG. 1 illustrates an overview of one embodiment of an SDR-guided VDR grading system.
FIG. 2 shows an overview of one embodiment of spatial alignment.
Fig. 3 shows an overview of the main steps of one embodiment of automatic re-mapping.
Fig. 4 illustrates an example of inverse tone mapping and gamut mapping according to one embodiment of the present invention.
Fig. 5 illustrates an example of handling special regions according to an embodiment of the present invention.
FIG. 6 depicts an exemplary embodiment of target hardware for implementing one embodiment of the present disclosure.
Disclosure of Invention
In a first aspect of the disclosure, a method for performing color grading is described, the method comprising: providing, by a computer, an unrated image; providing, by a computer, a color graded lower dynamic range image; and color grading, by a computer, the unrated image based on the color graded lower dynamic range image, thereby obtaining a color graded image of a higher dynamic range.
In a second aspect of the disclosure, a method for performing color grading is described, the method comprising: providing, by a computer, an unrated image; providing, by a computer, a color graded lower dynamic range image; applying, by a computer, an inverse tone and gamut mapping transformation to the color graded lower dynamic range image, thereby obtaining an inverse mapped lower dynamic range image; providing, by a computer, a higher dynamic range edit based on the unrated image; and applying, by a computer, a color transformation to the unrated image based on the higher dynamic range edit and the inverse mapped lower dynamic range image, thereby obtaining a higher dynamic range primary image.
In a third aspect of the disclosure, a method for performing color grading is described, the method comprising: removing, by a computer, at least one first special handling region from an unrated image, thereby obtaining the at least one first special handling region and first remaining data; removing, by the computer, at least one second special handling region from a color graded lower dynamic range image, thereby obtaining second remaining data; estimating, by the computer, a global color transform based on the first and second remaining data, wherein estimating the global color transform comprises minimizing a difference between the unrated image and the color graded lower dynamic range image; processing, by the computer, the at least one first special handling region, thereby obtaining at least one processed special region; estimating, by the computer, a local color transform for at least one local region; and combining, by the computer, the global color transform and the local color transform based on the at least one processed special region.
Detailed Description
The term 'dynamic range' (DR) as used herein may relate to the ability of the Human Visual System (HVS) to perceive a range of intensities (e.g., luminance, brightness) in an image, for example, from darkest darks to brightest brights. In this sense, DR relates to 'scene-referred' intensity. DR may also relate to the ability of a display device to adequately or approximately render an intensity range of a particular breadth. In this sense, DR relates to 'display-referred' intensity. Unless a particular sense is explicitly specified to have particular significance at any point in this description, it should be understood that the term may be used in either sense, e.g., interchangeably.
The term High Dynamic Range (HDR) as used herein relates to a DR breadth that spans some 14-15 orders of magnitude of the HVS. For example, well-adapted humans with essentially normal vision (e.g., in one or more of a statistical, biometric, or ophthalmological sense) have an intensity range that spans about 15 orders of magnitude. Adapted humans may perceive dim light sources of as few as a mere handful of photons. Yet, these same humans may perceive the near painfully brilliant intensity of the noonday sun in the desert, sea, or snow (or even glance at the sun, however briefly, to prevent damage). This span, though, is available to 'adapted' humans, e.g., those whose HVS has a time period in which to reset and adjust.
In contrast, the DR over which a human may simultaneously perceive an extensive breadth in intensity range may be somewhat truncated in relation to HDR. The term 'visual dynamic range' (VDR) as used herein may relate to the DR that is simultaneously perceivable by the HVS. VDR as used herein may relate to a DR that spans 5-6 orders of magnitude; however, it is not intended to limit any span of dynamic range, and VDR may be narrower than or equal to HDR.
Until fairly recently, displays have had a DR significantly narrower than HDR or VDR. Television (TV) and computer monitor apparatus that use typical cathode ray tube (CRT), liquid crystal display (LCD) with constant fluorescent white backlighting, or plasma screen technology may be constrained in their DR rendering capability to approximately three orders of magnitude. Such conventional displays thus typify a Low Dynamic Range (LDR), also referred to as a Standard Dynamic Range (SDR), in relation to VDR and HDR. Digital cinema systems exhibit some of the same limitations as other display devices. In the present application, 'Visual Dynamic Range (VDR)' is intended to indicate any extended dynamic range that is wider than LDR or SDR, and narrower than or equal to HDR. An SDR image may, for example, be 48-nit cinema content or 100-nit Blu-ray content. VDR may also be expressed interchangeably as EDR (Enhanced Dynamic Range). In general, the methods of the present disclosure relate to any dynamic range higher than SDR.
Advances in their underlying technology will allow future digital cinema systems to render images and video content with significant improvements in various quality characteristics over the same content rendered on today's digital cinema systems. For example, future digital cinema systems may have a higher DR (e.g., VDR) than the SDR/LDR of conventional digital cinema systems, as well as a larger color gamut than the color gamut of conventional digital cinema systems.
Here and in the following, 'signal' means a physical, electrical signal, for example, a digital bitstream representing an image or parameters related to image processing. Such signal (or image) processing can introduce a loss in image quality, one reason being a difference between the input signals used at the transmitting end and at the receiving end of the processing, respectively.
This disclosure describes systems and methods for generating Visual Dynamic Range (VDR) graded video based on raw data and a Standard Dynamic Range (SDR) grade of the same material. By 'grading' is meant herein color grading, or a process for changing and/or enhancing the color of a video or image.
By "raw data" is meant herein video or images that have not been color graded.
In particular, the present disclosure describes systems and methods for color grading original (unrated) content into a VDR video sequence by taking into account a previously graded SDR version.
Those skilled in the art will understand that a method may be referred to as automatic if it is performed by a computer and no detail of its execution is manually supervised. In contrast, a manual process (hand pass) is also carried out by a computer, but it includes some direct, live human supervision that may guide, inspect, or otherwise steer the operation of the computer.
The systems and methods of the present disclosure may be used, for example, for: automatic temporal and spatial alignment between SDR images and original (unrated) images; inverse tone mapping; estimation of color transforms and secondary color grading; and automatic re-mapping. The possible applications listed here are not intended as a limitation of the possible uses.
In embodiments of the present disclosure, color re-grading comprises primary and secondary color grading, as well as allowing for special handling regions of the video or image. The meaning of primary and secondary color grading processes will be understood by those skilled in the art.
The creation of VDR content may typically comprise having raw footage edited and color graded by a colorist into the VDR range to reflect the director's intent, while the resulting VDR master is monitored on a high dynamic range display. The VDR range may refer, for example, to a luminance range comprising 0.005 nits to 10,000 nits. The resulting VDR master can then generally be archived and mapped into different versions (different dynamic ranges and color gamuts) by a trim pass that is relatively cost-effective (computationally speaking) compared to a conventional color grading process. However, the color grading step from the edited raw footage to the VDR master can still typically take a lot of effort and time.
Color grading of the original footage can be a necessary step in post-production for a new film undergoing a VDR workflow. However, for existing films that have been color graded into one or more SDR versions, but not a VDR version, it is possible to use systems and methods as described in the present disclosure for accelerating the color grading of the original images into a VDR master.
In several embodiments, the SDR version provides guidance not only on temporal and spatial editing decisions, but also on how to perform color correction.
FIG. 1 illustrates an overview of one embodiment of an SDR-guided VDR grading system. The unrated original (raw) image or video (105) may undergo automatic spatial and/or temporal alignment (110), followed by a manual trim pass (115) and automatic re-mapping (120). A graded SDR image (125) corresponding to the original image (105) may be used in steps (110) and (120).
The next step may comprise a manual trim pass for color grading (130) before the VDR master image (135) is obtained.
In fig. 1, the unrated original (105) may refer to different versions of unrated material, including film scans, digital intermediates, raw camera data, etc., while the graded SDR image (125) may refer to different Standard Dynamic Range (SDR) versions, such as a 48-nit P3 cinema version or a 100-nit Rec709 Blu-ray version. The method of fig. 1 may operate on a scene-by-scene basis.
In several embodiments, for each scene in the graded SDR (125), the corresponding scene in the unrated raw material (105) is known.
In step (110), automatic spatial and temporal alignment may be applied to the original image (105) to obtain an aligned unrated original version of the content that is spatially and temporally aligned with the graded SDR (125).
In other words, the temporal cut locations and the pan-and-scan boxes may be found so that the aligned original image matches the graded SDR image pixel by pixel.
A manual trim pass (115) may be provided for the spatial and temporal alignment step. Step (115) may serve multiple purposes; for example, the director may want to cut a VDR scene spatially and temporally differently from the SDR scene, and step (115) may provide the flexibility required for this purpose. In another example, the manual trim pass (115) may provide a quality control step, so that any errors occurring in the automatic alignment (110) can be corrected before entering the automatic re-mapping step (120).
After step (115), the spatially and temporally aligned content is color graded to the VDR range by the automatic re-mapping step (120), with the same intent as in the graded SDR version. A manual trim pass (130) may be required here to obtain the director's approval, as the results of the automatic re-mapping (120) may need to be refined to meet the director's intent.
The above process may be scene-based, where a scene may comprise one frame or several frames. In each of the spatial/temporal alignment (110) and automatic re-mapping (120) steps, the method of fig. 1 may fall back to frame-based processing if there are differences between frames within a scene.
Automatic spatial and temporal alignment
An exemplary embodiment of step (110) will be described in more detail below.
In several embodiments, given a scene in the graded SDR, temporal alignment may be performed first. If the frame rate has not changed, the method may comprise finding one frame in the unrated original that corresponds to one frame in the graded SDR content. If the frame rates are different, it can be advantageous to find more than one corresponding frame, from which it may be possible to determine the frame rate ratio between the unrated original and the graded SDR content.
In both cases, i.e., with and without a frame rate change, it can be advantageous for robustness to select multiple candidate frames in the graded SDR scene and find their corresponding frames in the unrated original. Motion analysis may be required to determine whether there are static frames in the scene, and to remove duplicate frames so that a one-to-one mapping can be found for the candidate frames.
For temporal alignment, metadata-based approaches can handle, with relatively low complexity, certain situations in which there is not a large amount of spatial and color variation between the two frame sequences to be aligned.
In situations of increased complexity, due to large spatial differences (e.g., due to cropping) and color differences (e.g., due to grading) between the unrated original and the graded SDR frames, it may be desirable to apply a more robust method based on features that are invariant to spatial and color transforms.
In particular, in several embodiments of the present disclosure, invariant feature points may be extracted from each selected candidate frame in the graded SDR image. With the same method, feature points can be extracted from each non-repeated frame of the unrated original. Detecting invariant feature points within an image is a common task in computer vision. A comparison of detection methods is described, for example, in Mikolajczyk et al., "A comparison of affine region detectors," IJCV 65(1/2):43-72, 2005, which is incorporated herein by reference in its entirety.
Those skilled in the art will appreciate that many different methods may be used for feature point detection. After detecting local feature points, a number of different descriptors can be used to characterize the local regions, i.e., to extract their discriminative features, including the scale-invariant feature transform (SIFT), the gradient location and orientation histogram (GLOH), Speeded Up Robust Features (SURF), etc. SIFT is described, for example, in Lowe, "Distinctive image features from scale-invariant keypoints," IJCV 60(2):91-110, November 2004, which is incorporated by reference in its entirety. GLOH is described in Mikolajczyk et al., "A performance evaluation of local descriptors," PAMI 27(10):1615-1630, 2005, which is incorporated by reference in its entirety. SURF is described in Bay et al., "SURF: Speeded Up Robust Features," ECCV, pp. 404-417, 2006, which is incorporated by reference in its entirety.
Once two sets of descriptors have been extracted from two different images, matches can be computed based on the similarity of the descriptors. Thus, it may be possible to find, for each selected frame in the graded SDR, the best matching frame in the unrated original. This process can also be run in the other direction, i.e., selecting candidate frames in the unrated original and finding their best matches in the graded SDR.
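By way of illustration only, the following is a minimal sketch of this detection-and-matching step, assuming OpenCV's SIFT implementation and Lowe's ratio test; the function name, frame variables, and the 0.75 threshold are illustrative assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def match_features(sdr_frame: np.ndarray, raw_frame: np.ndarray):
    """Detect SIFT keypoints in two frames and return the good matches."""
    sift = cv2.SIFT_create()
    gray_sdr = cv2.cvtColor(sdr_frame, cv2.COLOR_BGR2GRAY)
    gray_raw = cv2.cvtColor(raw_frame, cv2.COLOR_BGR2GRAY)
    kp_sdr, desc_sdr = sift.detectAndCompute(gray_sdr, None)
    kp_raw, desc_raw = sift.detectAndCompute(gray_raw, None)

    # Match descriptors by L2 similarity; Lowe's ratio test discards
    # ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_sdr, desc_raw, k=2)
    good = [m for m, n in (c for c in candidates if len(c) == 2)
            if m.distance < 0.75 * n.distance]
    return kp_sdr, kp_raw, good
```

Any of the other detectors and descriptors named above could be substituted; SIFT is used here only because it is named in the text and is readily available.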
After the best one-to-one matches are found, the time offset can be estimated. Random sample consensus (RANSAC), described for example in Fischler and Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," CACM 24(6):381-395, June 1981, which is incorporated by reference in its entirety, can be used to estimate the time offset between the unrated original and the graded SDR, along with the frame rate ratio. RANSAC can provide robust results as long as a certain percentage of the matches are true and the erroneous matches are random.
In some embodiments, the method of estimating the time offset and frame ratio using RANSAC comprises the following steps (illustrated by the sketch after the list):
i. Randomly pick two pairs of corresponding frames, (S1, U1) and (S2, U2), where frames S1 and S2 in the graded SDR match frames U1 and U2, respectively, in the unrated original. If (U1-U2) * (S1-S2) <= 0, repeat step i.
ii. Estimate a time offset and a frame ratio, where the time offset equals S1-U1 and the frame ratio equals (U2-U1)/(S2-S1).
iii. Check, over all corresponding frame-number pairs, how many are consistent with the model computed in step ii, and record the maximum consistency ratio together with its corresponding offset and ratio.
iv. If the above steps have been repeated a given number of times, end the process and output the best offset, ratio, and anchor position S1; otherwise, repeat from step i.
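The following is a hedged sketch of steps i-iv, assuming `matches` is a list of (SDR frame number, original frame number) pairs produced by the descriptor matching above; the iteration count and inlier tolerance are illustrative choices.

```python
import random

def ransac_temporal_alignment(matches, iters=1000, tol=0.5):
    """Estimate (time offset, frame ratio, anchor S1) per steps i-iv."""
    best_inliers, best_model = 0, None
    for _ in range(iters):
        (s1, u1), (s2, u2) = random.sample(matches, 2)
        # Step i: reject inconsistent (or degenerate) pair picks.
        if (u1 - u2) * (s1 - s2) <= 0:
            continue
        # Step ii: model -- anchor S1 maps to U1 with a constant frame ratio.
        ratio = (u2 - u1) / (s2 - s1)
        offset = s1 - u1
        # Step iii: count frame pairs consistent with this model.
        inliers = sum(
            1 for s, u in matches if abs(u1 + ratio * (s - s1) - u) <= tol
        )
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (offset, ratio, s1)
    # Step iv: after a fixed number of trials, return the best model.
    return best_model
```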
After the unrated original has been temporally aligned with the graded SDR, spatial alignment may be performed. Spatial alignment may be performed by matching the extracted feature points of the matched frames and estimating a spatial transform between the two frames. This step may be required because, in many cases, there can be changes in resolution and in panning and scanning from the original image to the graded SDR image. For example, the original may have a 4K resolution while the graded SDR is 1080p.
Those skilled in the art will appreciate that, if the spatial transform is time invariant within the scene, other methods may be applied to obtain a parametric spatial transform model from the coordinates of the matched feature points over the whole scene, in a manner similar to the RANSAC method described above. In some embodiments, a 6-parameter affine transformation may be estimated between the unrated original and the graded SDR image.
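As one possible realization, the 6-parameter affine estimate could be obtained with OpenCV's RANSAC-based `estimateAffine2D`, reusing the keypoints and matches from the sketch above; the reprojection threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def estimate_spatial_transform(kp_sdr, kp_raw, good_matches):
    """Fit a 2x3 (6-parameter) affine map from original to graded SDR."""
    if len(good_matches) < 3:
        return None, None                  # an affine needs >= 3 point pairs
    src = np.float32([kp_raw[m.trainIdx].pt for m in good_matches])
    dst = np.float32([kp_sdr[m.queryIdx].pt for m in good_matches])
    # RANSAC rejects mismatched keypoints while fitting the transform.
    affine, inlier_mask = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0
    )
    return affine, inlier_mask
```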
FIG. 2 shows an overview of one embodiment of spatial alignment. The original content (210) and the graded SDR content (205) undergo a feature detection step (215). Feature descriptors (220) may be extracted and then matched (225). Transform estimation is performed, for example using RANSAC (230), to find the optimal spatial transform (235).
In some cases, the panning and scanning processes may be time varying within a scene. In this case, the spatial transform may be estimated on a frame-by-frame basis, from which the temporal variations of panning and scanning may also be modeled.
After the content has been aligned temporally and spatially, aligned content can be obtained that differs from the graded SDR only in color. This aligned content can then facilitate the automatic re-mapping step, so that the colors in the graded SDR can be used to guide the color grading of the unrated original.
After the automatic alignment has been performed, a manual trim pass for alignment may be required. The manual trim pass can correct any possible errors in the automatic alignment, but also allows different editing decisions to be used for the VDR image, so that the VDR graded image can differ from the SDR version both spatially and temporally. In that case, the automatic VDR re-mapping can still use the aligned content to help estimate the color transform, and apply the estimated transform to the edited VDR version of the original to obtain the VDR master.
Automatic re-mapping
One underlying concept of automatic re-mapping (e.g., 120) is that all original information is available in the unrated original, while the graded SDR can provide a reference for how to perform color correction when the target display is an SDR display. While SDR and VDR displays may have different dynamic ranges and gamuts, the director's intent can remain the same for those colors inside the 3D gamut. Where the graded SDR image exhibits color clipping due to its limited dynamic range and color gamut, the missing information can be recovered from the unrated original. Another possible advantage is that the same director's intent can be applied to new content when there are different editing decisions between SDR and VDR. For example, the VDR images may have different edits to better tell the story, or different cropping or spatial formats.
In some embodiments, given information about the dynamic range and gamut of the target VDR master, inverse tone mapping and gamut mapping are first applied to the graded SDR. After this mapping step, a color transform can be modeled between the unrated original and the inverse mapping result, where clipped areas will become outliers of the estimated transform because of the clipping. The modeled color transform can then be applied back to the unrated original to obtain the re-mapping result.
Fig. 3 shows an overview of the main steps of one embodiment of automatic re-mapping for the special case where the SDR content is a 100-nit Rec709 version and the output VDR content is a 4000-nit P3 version.
A graded SDR image, 100-nit Rec709 (305), is provided. Inverse tone mapping and gamut mapping (310) are applied to the image (305) to obtain an inverse mapped image (315).
The unrated original image (320) then undergoes spatial alignment (325). A global color transform estimate (330) is obtained from the inverse mapped image (315) and the aligned unrated original (325). VDR-specific editing (340) may also be performed on the unrated original (320). The VDR-specific edited image may be the spatially and temporally aligned unrated original, or a version with different edits or different cropping.
A color transform function (335) is obtained after the estimation (330), and a color transfer (345) may then be applied. A VDR master (350), in this example 4000-nit P3, is obtained as the output of the process in FIG. 3.
In other embodiments, inverse tone mapping and gamut mapping may be further performed as shown in fig. 4.
The input in the example of fig. 4 is an SDR image (405): 100-nit Rec709 with gamma 2.4. Gamma decoding (410) is performed in a first step to obtain a linear RGB image. A color space conversion to the IPT space is then applied (415). In some embodiments, PQ encoding may be used for the values in IPT space.
An inverse tone mapping curve is then applied to the SDR content (425). Thereafter, saturation correction may be applied (430), and the colors may be converted back to RGB space with P3 gamma 2.4 (435), thereby obtaining a VDR graded image (440).
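As an illustrative sketch only, the chain of fig. 4 might be condensed as below, with the SMPTE ST 2084 (PQ) constants, a placeholder power-law expansion standing in for whatever inverse tone curve an implementation actually uses, and the IPT conversion and saturation correction omitted; the 100-nit and 4000-nit anchors follow the example of fig. 3, and the function names are assumptions.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.84
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.85
C3 = 2392 / 4096 * 32      # ~18.69

def pq_encode(nits: np.ndarray) -> np.ndarray:
    """Map absolute luminance (0..10000 nits) to PQ code values in [0, 1]."""
    y = np.clip(nits / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def inverse_map_sdr(sdr_code: np.ndarray) -> np.ndarray:
    """100-nit gamma-2.4 SDR code values -> PQ-encoded VDR values."""
    linear = np.clip(sdr_code, 0.0, 1.0) ** 2.4   # gamma decoding (410)
    nits = 100.0 * linear                         # assume 100-nit SDR peak
    # Placeholder inverse tone curve (425): expand toward a 4000-nit peak.
    expanded = 4000.0 * (nits / 100.0) ** 1.5
    return pq_encode(expanded)                    # PQ encoding of the result
```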
Those skilled in the art will appreciate that any inverse tone mapping and gamut mapping may be applied in the systems and methods of the present disclosure to obtain an inverse mapped image version from the graded SDR image. The inverse mapping will not be able to recover the information lost to clipping of the gamut and dynamic range. However, since the appearance of the inverse-mapped version of an image will be similar to the graded SDR version, the inverse-mapped image can clarify the director's intent for color grading. The director's intent can then be used to guide the global color grading of the unrated original, in which the missing information is still available.
Estimating color transforms
Given a set of corresponding colors between the unrated original image and the inverse mapped image, the RANSAC method can be used to estimate the global color transform. Before RANSAC is applied, the corresponding colors to be used need to be determined, as does the color transform model. In one embodiment, an adaptive sampling mechanism may be applied to obtain a subset of the corresponding colors. Since the images are spatially and temporally aligned, each pair of pixels within the entire scene can provide a corresponding pair of colors. However, there may be too many pixel pairs, and using all of them can be redundant; sampling may therefore be advantageous. At the same time, the distribution of intensity is content dependent. It is advantageous to model the global color transform accurately across all luminance levels, so several embodiments of the present disclosure apply an adaptive sampling scheme that samples the corresponding colors such that the resulting samples have a nearly uniform intensity distribution. The estimated curve will then perform better in regions where the original distribution is sparse, e.g., at the high and low ends of the intensity range.
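A minimal sketch of such an adaptive sampling scheme is given below, assuming the corresponding colors are given as flat (N, 3) arrays; the bin count, per-bin quota, and the mean-of-channels luma proxy are illustrative assumptions.

```python
import numpy as np

def adaptive_sample(raw_vals, sdr_vals, n_bins=64, per_bin=200, seed=0):
    """raw_vals, sdr_vals: (N, 3) arrays of corresponding colors."""
    rng = np.random.default_rng(seed)
    intensity = raw_vals.mean(axis=1)      # simple luma proxy
    edges = np.linspace(intensity.min(), intensity.max(), n_bins + 1)
    bins = np.digitize(intensity, edges[1:-1])
    keep = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if idx.size:
            # Draw (up to) the same number of samples from every intensity
            # bin, so sparse luminance regions are not under-represented.
            take = min(per_bin, idx.size)
            keep.append(rng.choice(idx, size=take, replace=False))
    keep = np.concatenate(keep)
    return raw_vals[keep], sdr_vals[keep]
```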
After the sample pairs of corresponding colors have been obtained, a global transform model needs to be determined. The selected model should fit the data in an optimal way. In one embodiment, three one-dimensional curves are used to model the color transform in three different color components. In this embodiment, each of the three curves fits one of the three color components. Each one-dimensional function may be parameterized; for example, a polynomial, a piecewise polynomial, or other functions may be used, as will be appreciated by those skilled in the art. A look-up-table-based model may also be used, for example with cubic interpolation between look-up entries for each one-dimensional fitting function.
In some cases, the one-dimensional functions may not provide the desired accuracy. In such cases, more complex models may be used. For example, a 3x3 linear matrix transform may be used to model the global color transform, following the three one-dimensional curves. In several embodiments, once the data set and the transform model have been determined, RANSAC may be used to estimate the global color transform.
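The following sketch fits three one-dimensional polynomial curves followed by an optional 3x3 matrix, as described above; it uses a plain least-squares fit where an implementation would use a robust (e.g., RANSAC) fit, and the polynomial degree and function names are illustrative choices.

```python
import numpy as np

def fit_global_transform(raw_s, sdr_s, degree=5):
    """raw_s, sdr_s: sampled (N, 3) color pairs from adaptive_sample()."""
    # Three independent 1-D curves, one per color component.
    curves = [np.polyfit(raw_s[:, c], sdr_s[:, c], degree) for c in range(3)]
    mapped = np.stack(
        [np.polyval(curves[c], raw_s[:, c]) for c in range(3)], axis=1
    )
    # Optional refinement: a 3x3 matrix modeling channel cross-talk.
    matrix, *_ = np.linalg.lstsq(mapped, sdr_s, rcond=None)
    return curves, matrix

def apply_global_transform(raw_img, curves, matrix):
    """Apply the estimated transform back to the unrated original."""
    flat = raw_img.reshape(-1, 3)
    mapped = np.stack(
        [np.polyval(curves[c], flat[:, c]) for c in range(3)], axis=1
    )
    return (mapped @ matrix).reshape(raw_img.shape)
```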
One advantage of applying RANSAC to fit the model is its robustness to outliers. This matters because outliers can be expected when fitting the model; for example, colors that are clipped due to the limited dynamic range and gamut may be outliers of the model. These outliers can be correctly color graded once the estimated color transform is applied back to the unrated original image.
Considering secondary grading and special handling regions
In some cases, local color grading, i.e., secondary color grading, may have been applied to the SDR-graded content. In those cases, both the clipped pixels of the image and the pixels subject to local color grading will become outliers of the global color transform modeling step described above.
Thus, if there are a large number of outliers, it has to be determined whether these outliers are due to clipping or to secondary color grading. Outlier colors in the graded SDR content may be compared to the 3D boundaries of the color gamut and dynamic range. If a color is close to the boundary, it may be determined that the outlier originated from clipping; otherwise, the outlier may result from secondary grading. If a certain number of outliers stem from secondary grading, the first step may be to estimate a mask identifying the local regions subject to secondary grading, and then to estimate the secondary grading of these pixels using the same method used for estimating the global color transform. The color transform for the secondary grading can then be applied back to the masked pixels in the unrated original. The secondary graded regions may then be blended into the primary graded regions.
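A minimal sketch of this clipping-versus-secondary-grading test follows, assuming graded SDR colors normalized to [0, 1]; the boundary threshold and the per-channel proximity test are illustrative stand-ins for a proper 3D gamut-boundary distance.

```python
import numpy as np

def classify_outliers(sdr_vals, outlier_mask, boundary_eps=0.02):
    """sdr_vals: (N, 3) graded SDR colors in [0, 1].
    outlier_mask: (N,) bool array of RANSAC outliers from the global fit.
    """
    # Colors near the signal floor/ceiling are attributed to clipping.
    near_low = (sdr_vals <= boundary_eps).any(axis=1)
    near_high = (sdr_vals >= 1.0 - boundary_eps).any(axis=1)
    clipped = outlier_mask & (near_low | near_high)
    # Remaining outliers are attributed to secondary (local) grading and
    # seed the mask for the local color transform estimation.
    secondary = outlier_mask & ~clipped
    return clipped, secondary
```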
In some cases, some regions may be graded differently for the SDR image and for the VDR image, because the target dynamic ranges are different. For example, highlight regions in the SDR image, such as some light sources, may be compressed into a small dynamic range at the high end, while they may occupy a larger dynamic range in the VDR image. For this example, such regions may be identified, and special handling may then be applied to them, while the other regions undergo the same 'standard' processing described above in this disclosure. The special handling regions can be added back after the processing.
Fig. 5 illustrates an example of handling special regions.
The special handling regions (515, 525) may be removed from the graded SDR image (505) or from the unrated original (510). For the SDR image (505), the data remaining (540) after the removal (525) is used together with the data remaining (535) after the removal (515) to estimate a global color transform (550) that minimizes the difference between the unrated original image (510) and the inverse mapped SDR image.
Local color transforms (555) are then estimated for regions with large errors, i.e., regions with large differences between the unrated original image (510) and the inverse mapped SDR image. The special regions (530) undergo special handling (545), thereby yielding processed special handling regions (560). The regions (560) are then combined with the content transformed in step (555) to obtain the final output (570).
Although several embodiments of the methods described in this disclosure include an alignment step, the methods may also be performed without such alignment. Color grading is then applied to the unrated original image based on the estimated color transform.
Fig. 6 is an exemplary embodiment of target hardware (10) (e.g., a computer system) for implementing the embodiments of figs. 1-5. The target hardware comprises a processor (15), a memory bank (20), a local interface bus (35), and one or more input/output devices (40). The processor may execute one or more instructions related to the implementations of figs. 1-5, as provided by the operating system (25) based on some executable program stored in the memory (20). These instructions are carried to the processor (15) via the local interface (35), as determined by some data interface protocol specific to the local interface and the processor (15). It should be noted that the local interface (35) is a symbolic representation of several elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, that are generally directed at providing address, control, and/or data connections between multiple elements of a processor-based system. In some embodiments, the processor (15) may be fitted with some local memory (cache) where it can store some of the instructions to be executed, for an increase in execution speed. Execution of the instructions by the processor may require use of some input/output devices (40), such as inputting data from a file stored on a hard disk, inputting commands from a keyboard, outputting data to a display, or outputting data to a USB flash drive. In some embodiments, the operating system (25) facilitates these tasks by acting as the central element that gathers the various data and instructions required for the execution of the program and provides them to the microprocessor. In some embodiments, the operating system may not exist, and all the tasks are under direct control of the processor (15), although the basic architecture of the target hardware device (10) will remain the same as depicted in FIG. 6. In some embodiments, a plurality of processors may be used in a parallel configuration for added execution speed. In such a case, the executable program may be specifically tailored to a parallel execution. Also, in some embodiments, the processor (15) may execute part of the implementations of figs. 1-5, and some other parts may be implemented using dedicated hardware/firmware placed at an input/output location accessible by the target hardware (10) via the local interface (35). The target hardware (10) may include a plurality of executable programs (30), wherein each may run independently or in combination with one another.
The methods and systems described in the present disclosure may be implemented in hardware, software, firmware, or any combination thereof. Features described as blocks, modules, or components may be implemented together (e.g., in a logic device such as an integrated logic device) or separately (e.g., as separate connected logic devices). The software portion of the methods of the present disclosure may comprise a computer-readable medium which comprises instructions that, when executed, perform, at least in part, the described methods. The computer-readable medium may comprise, for example, a random access memory (RAM) and/or a read-only memory (ROM). The instructions may be executed by a processor, such as a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA).
Having described various embodiments of the present disclosure, it will be understood, however, that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other embodiments are within the scope of the following claims.
The examples set forth above are provided to give those of ordinary skill in the art a complete disclosure and description of how to make and use the embodiments of the present disclosure, and are not intended to limit the scope of what the inventors regard as their disclosure.
Modifications of the above-described modes for carrying out the methods and systems disclosed herein, which are obvious to those skilled in the art, are intended to be within the scope of the appended claims. All patents and publications mentioned in the specification are indicative of the levels of skill of those skilled in the art to which the disclosure pertains. All references cited in this disclosure are incorporated by reference to the same extent as if each reference had been individually incorporated by reference in its entirety.
It is to be understood that this disclosure is not limited to particular methods and systems, and thus, the disclosure may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting. As used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the content clearly dictates otherwise. The term "plurality" includes two or more of the referenced item unless the content clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
Claims (7)
1. A method for performing color grading to obtain a higher dynamic range color graded image, the method comprising:
providing, by a computer, an unrated image (105);
providing, by a computer, a color-graded lower dynamic range image (125) representing the same material as the unrated image; and
color grading (120), by a computer, the unrated image based on the color graded lower dynamic range image, thereby obtaining a color graded image of higher dynamic range,
the method further comprises the following steps: spatially and temporally aligning (110), by a computer, the unrated image (105) with the color graded lower dynamic range image (125) to obtain an aligned unrated image, wherein the color grading is applied, by a computer, to the aligned unrated image based on the color graded lower dynamic range image to obtain the color graded image (135) of the higher dynamic range.
2. The method of claim 1, wherein spatially and temporally aligning by a computer comprises applying a manual trim pass for alignment.
3. The method of claim 1, wherein color grading by a computer comprises applying a manual trim pass for color grading.
4. The method of claim 1, wherein spatially and temporally aligning comprises:
detecting, by a computer, feature points in the unrated image and in the color-graded lower dynamic range image;
matching, by a computer, the feature points; and
estimating, by a computer, a transformation between the unrated image and the color-graded lower dynamic range image based on the matching of the feature points.
5. The method of claim 4, wherein the detecting further comprises extracting, by a computer, feature descriptors in at least one local region of the unrated image and at least one local region of the color graded lower dynamic range image, wherein the matching further comprises matching, by a computer, the feature descriptors.
6. The method of claim 1, wherein the image is a frame in a video.
7. The method of claim 4, wherein the image is a frame in a video.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201361894382P | 2013-10-22 | 2013-10-22 | |
| US61/894,382 | 2013-10-22 | ||
| PCT/US2014/061600 WO2015061335A2 (en) | 2013-10-22 | 2014-10-21 | Guided color grading for extended dynamic range |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1220057A1 (en) | 2017-04-21 |
| HK1220057B (en) | 2018-05-04 |