WO2006033235A1 - Image processing method, image processing apparatus, imaging apparatus, and image processing program - Google Patents
- Publication number
- WO2006033235A1 (PCT/JP2005/016383)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- brightness
- hue
- image processing
- area
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6083—Colour correction or control controlled by factors external to the apparatus
- H04N1/6086—Colour correction or control controlled by factors external to the apparatus by scene illuminant, i.e. conditions at the time of picture capture, e.g. flash, optical filter used, evening, cloud, daylight, artificial lighting, white point measurement, colour temperature
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
Definitions
- Image processing method, image processing apparatus, imaging apparatus, and image processing program
- The present invention relates to an image processing method, an image processing apparatus, an imaging apparatus, and an image processing program.
- Patent Document 1 discloses a method of calculating an additional correction value in place of discriminant and regression analysis.
- In the method described in Patent Document 1, the high-luminance and low-luminance regions are deleted from a luminance histogram indicating the cumulative number of pixels (frequency) at each luminance, and the frequency of the remaining luminance values is further limited.
- An average luminance value is then calculated, and the difference between this average value and a reference luminance is obtained as the correction value.
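The Patent Document 1 procedure described above can be sketched as follows. The cutoff points, the frequency limit, and the reference luminance are illustrative assumptions, not values from the patent.

```python
def correction_value(histogram, low_cut, high_cut, freq_limit, reference):
    """Sketch of the Patent Document 1 procedure: drop the low- and
    high-luminance ends of a 256-bin luminance histogram, clip each
    remaining bin's frequency to `freq_limit`, take the average luminance
    of what is left, and return its difference from the reference
    luminance. All parameter values are illustrative."""
    total = weighted = 0
    for luminance, count in enumerate(histogram):
        if luminance < low_cut or luminance > high_cut:
            continue  # delete the low/high luminance regions
        count = min(count, freq_limit)  # limit the frequency
        total += count
        weighted += luminance * count
    average = weighted / total if total else reference
    return average - reference
```

A positive result would indicate the retained pixels are brighter than the reference, a negative one that they are darker.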
- Patent Document 2 describes a method of determining the light source state at the time of shooting in order to compensate for the extraction accuracy of the face region.
- A face candidate region is extracted, the average brightness of the extracted face candidate region relative to the entire image is calculated, the light source state at the time of shooting (backlight or close-up flash) is determined from the result, and the tolerance of the criteria for judging the face region is adjusted accordingly.
- As methods for extracting a face candidate region, Patent Document 2 cites the method using a two-dimensional histogram of hue and saturation described in JP-A-6-67320, and the pattern matching and pattern search methods described in JP-A-8-122944, JP-A-8-184925, and JP-A-9-138471.
- As methods for removing background regions other than the face, Patent Document 2 cites the methods described in JP-A-8-122944 and JP-A-8-184925, which discriminate the background using the ratio of straight-line portions, line symmetry, the contact ratio with the outer edge of the screen, density contrast, and the pattern and periodicity of density changes. Patent Document 2 also describes a method that uses a one-dimensional density histogram to determine the shooting conditions. This method is based on the empirical rule that in backlit scenes the face region is dark and the background region is bright, whereas in close-up flash photography the face region is bright and the background region is dark.
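The empirical rule above can be sketched as a simple discriminator. The margin threshold and the assumption that face/background average luminances are already available are illustrative; the patent does not give these specifics.

```python
def classify_light_source(face_mean, background_mean, margin=30):
    """Classify the light-source state from the average luminance (0-255)
    of the face region and the background region, following the empirical
    rule: backlight -> dark face / bright background; close-up flash ->
    bright face / dark background. `margin` is an assumed threshold."""
    if background_mean - face_mean > margin:
        return "backlight"
    if face_mean - background_mean > margin:
        return "close-up flash"
    return "normal"
```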
- Patent Document 1 JP 2002-247393 A
- Patent Document 2 JP 2000-148980 A
- Patent Document 1 reduces the influence of regions with large luminance deviations in backlit and strobe scenes, but in shooting scenes in which a person is the main subject there was a problem that the brightness of the face was sometimes inappropriate.
- Patent Document 2 can compensate for the identification of the face region in typical backlight or close-up flash photography, but when a scene does not match the typical composition, there was a problem that the effect could not be obtained.
- An object of the present invention is to improve the reproducibility of the brightness of the subject by determining the gradation adjustment method based on an index that quantitatively represents the shooting conditions (light source conditions and exposure conditions) of the captured image data and on a discrimination map that evaluates the reliability of that index.
- In the image processing method of the present invention, the captured image data is divided into regions each consisting of at least one of a combination of predetermined brightness and hue and a combination of brightness and the distance from the outer edge of the screen of the captured image data.
- The method includes an occupancy ratio calculating step of calculating, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data, and a bias amount calculating step of calculating a bias amount indicating the bias of the gradation distribution of the captured image data,
- and a gradation conversion step of applying gradation conversion processing to the captured image data using the determined gradation adjustment method.
- FIG. 1 is a perspective view showing an external configuration of an image processing apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing an internal configuration of the image processing apparatus according to the present embodiment.
- FIG. 3 is a block diagram showing a main part configuration of the image processing unit in FIG.
- FIG. 4 is a diagram showing the internal configuration of the scene determination unit (a), the internal configuration of the ratio calculation unit (b), and the internal configuration of the image processing condition calculation unit (c).
- FIG. 5 is a flowchart showing a flow of processing executed in an image adjustment processing unit.
- FIG. 6 is a flowchart showing index calculation processing executed in the scene determination unit.
- FIG. 7 is a flowchart showing a first occupancy ratio calculation process for calculating a first occupancy ratio for each brightness and hue region.
- FIG. 8 is a diagram showing an example of a program for converting RGB values into the HSV color system.
- FIG. 9 is a diagram showing the brightness (V) —hue (H) plane and the region rl and region r2 on the V—H plane.
- FIG. 10 is a diagram showing the lightness (V) —hue (H) plane, and regions r3 and r4 on the V—H plane.
- FIG. 11 is a diagram showing a curve representing the first coefficient by which the first occupancy ratio is multiplied to calculate index 1.
- FIG. 12 is a diagram showing a curve representing the second coefficient by which the first occupancy ratio is multiplied to calculate index 2.
- FIG. 13 is a flowchart showing a second occupancy ratio calculation process for calculating a second occupancy ratio based on the composition of captured image data.
- FIG. 14 is a diagram showing areas nl to n4 determined according to the distance from the outer edge of the screen of captured image data.
- FIG. 15 is a diagram showing, by region (n1 to n4), curves representing a third coefficient for multiplying the second occupancy ratio for calculating index 3;
- FIG. 16 is a flowchart showing a bias amount calculation process executed in a bias calculator.
- FIG. 17 is a flowchart showing tone adjustment condition determination processing executed in the image processing condition calculation unit.
- FIG. 18 is a plot of indexes 4 to 6 calculated according to the shooting conditions.
- FIG. 19 is a diagram showing a discrimination map for discriminating shooting conditions.
- FIG. 20 is a diagram showing the relationship between the index for specifying shooting conditions, parameters A to C, and gradation adjustment methods A to C.
- FIG. 21 is a diagram showing a gradation conversion curve corresponding to each gradation adjustment method.
- FIG. 22 is a diagram showing a luminance frequency distribution (histogram) (a), a normalized histogram (b), and a block-divided histogram (c).
- FIG. 23 is a diagram for explaining the deletion of the low-luminance and high-luminance regions from the luminance histogram ((a) and (b)), and the limiting of the luminance frequency ((c) and (d)).
- FIG. 24 is a diagram showing a gradation conversion curve representing gradation adjustment conditions when the photographing condition is backlight or under.
- FIG. 25 is a block diagram showing a configuration of a digital camera to which the imaging apparatus of the present invention is applied.
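FIG. 8 above refers to a program that converts RGB values into the HSV color system. A minimal sketch of such a conversion (the standard HSV formulas, with S and V scaled to 0-255 to match the brightness ranges quoted later in the text) might look like:

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB to HSV: H in degrees 0-359, S and V in 0-255.
    Standard conversion; the 0-255 scaling is chosen to match the value
    ranges used in the text (e.g. brightness 170-224 for the
    high-brightness skin color region)."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                      # value = max channel
    s = 0 if mx == 0 else round(255 * (mx - mn) / mx)
    if mx == mn:                                # achromatic: hue undefined
        h = 0
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return round(h) % 360, s, v
```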
- In the occupancy ratio calculating step, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- an adjustment amount determination step for determining a gradation adjustment amount for the captured image data based on the calculated index.
- Using the gradation adjustment method determined in the adjustment method determination step, gradation conversion processing of the gradation adjustment amount determined in the adjustment amount determination step is applied to the captured image data.
- In the occupancy ratio calculating step, the captured image data is divided into predetermined regions consisting of combinations of brightness and the distance from the outer edge of the screen of the captured image data, and for each divided region an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- an adjustment amount determination step for determining a gradation adjustment amount for the captured image data based on the calculated index.
- Using the gradation adjustment method determined in the adjustment method determination step, gradation conversion processing of the gradation adjustment amount determined in the adjustment amount determination step is applied to the captured image data.
- In the occupancy ratio calculating step, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region a first occupancy ratio indicating its proportion of the entire captured image data is calculated; the captured image data is further divided into predetermined regions consisting of combinations of brightness and the distance from the outer edge of the screen, and for each divided region a second occupancy ratio indicating its proportion of the entire captured image data is calculated.
- The index for specifying the shooting conditions is calculated by multiplying the first occupancy ratio and the second occupancy ratio calculated in the occupancy ratio calculating step and the bias amount calculated in the bias amount calculating step by coefficients set in advance according to the shooting conditions.
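The index calculation described above amounts to a weighted sum. A minimal sketch, assuming the coefficient vectors are predetermined per shooting condition (the placeholder values below are not from the patent):

```python
def compute_index(first_occupancy, second_occupancy, bias, coef1, coef2, coef_bias):
    """Compute a shooting-condition index as the weighted sum of the
    per-region occupancy ratios and the bias amount. The coefficient
    vectors (`coef1`, `coef2`, `coef_bias`) would be set in advance for
    each shooting condition (backlight, strobe, under, ...); the values
    used in any call are placeholders."""
    index = sum(c * o for c, o in zip(coef1, first_occupancy))
    index += sum(c * o for c, o in zip(coef2, second_occupancy))
    index += coef_bias * bias
    return index
```

Positive coefficients would reward regions characteristic of a condition (e.g. bright background in backlight), negative ones penalize regions that argue against it.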
- The method further includes an adjustment amount determination step of determining a gradation adjustment amount for the captured image data based on the calculated index; in the gradation conversion step, gradation conversion processing of the gradation adjustment amount determined in the adjustment amount determination step is applied to the captured image data using the gradation adjustment method determined in the adjustment method determination step.
- The method includes a step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each combination of brightness and distance from the outer edge of the screen of the captured image data,
- and in the occupancy ratio calculating step the occupancy ratio is calculated based on the created two-dimensional histogram.
- the cumulative number of pixels is calculated for each distance and brightness from the outer edge of the screen of the captured image data.
- a two-dimensional histogram is created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- the occupation rate is calculated based on the created two-dimensional histogram.
- A two-dimensional histogram is created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data, and in the occupancy ratio calculating step the first occupancy ratio is calculated based on the created two-dimensional histogram.
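The two-dimensional histogram and occupancy ratio steps can be sketched together. The region boundaries are passed in as inclusive ranges; the particular partition used in any call is an assumption, though the test below reuses ranges quoted later in the text.

```python
def occupancy_ratios(pixels, hue_bins, value_bins):
    """Build a two-dimensional histogram over (hue region, brightness
    region) and return each cell's occupancy ratio: its cumulative pixel
    count divided by the total pixel count. `pixels` is an iterable of
    (hue, brightness) pairs; `hue_bins` and `value_bins` are lists of
    inclusive (lo, hi) ranges assumed to partition the pixels of interest."""
    hist = [[0] * len(value_bins) for _ in hue_bins]
    total = 0
    for h, v in pixels:
        total += 1
        for i, (hlo, hhi) in enumerate(hue_bins):
            if hlo <= h <= hhi:
                for j, (vlo, vhi) in enumerate(value_bins):
                    if vlo <= v <= vhi:
                        hist[i][j] += 1  # accumulate the 2D histogram
    # occupancy ratio = cell count / total pixel count
    return [[count / total for count in row] for row in hist]
```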
- In the form described in Item 12, in the index calculation step, coefficients of different signs are used for the skin color hue region of predetermined high brightness and for hue regions other than the high-brightness skin color hue region.
- In the form described in Item 13, in the index calculation step, coefficients of different signs are used for the intermediate lightness region of the skin color hue region and for lightness regions other than the intermediate lightness region.
- the brightness area of the hue area other than the high brightness skin color hue area is a predetermined high brightness area.
- the brightness area other than the intermediate brightness area is a brightness area in the flesh-color hue area.
- The high-brightness skin color hue region includes a region in the range of 170 to 224 in terms of the brightness value of the HSV color system.
- the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
- The form described in Item 18 is the image processing method according to any one of Items 12, 14, and 16, in which the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
- the lightness region other than the intermediate lightness region is a shadow region.
- the form described in Item 20 is the image processing method described in Item 18, wherein the hue value of the blue hue region is in the range of 161 to 250 in terms of the hue value of the HSV color system.
- The hue value of the green hue region is in the range of 40 to 160 in terms of the hue value of the HSV color system.
- the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
- The forms described in Items 22 and 23 are the image processing method described in any one of Items 12 to 21, in which the hue value of the skin color hue region is in the ranges of 0 to 39 and 330 to 359 in terms of the hue value of the HSV color system.
- The forms described in Items 24 and 25 are the image processing method described in any one of Items 12 to 23, in which the skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
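The HSV ranges quoted in the items above can be collected into a small classifier. The ranges are taken from the text; the combined labels returned are illustrative, and pixels outside every quoted range are simply labeled "other".

```python
def classify_region(h, v):
    """Assign a pixel (HSV hue 0-359, brightness 0-255) to the regions
    quoted in the text: skin color hue 0-39 and 330-359, green hue
    40-160, blue hue 161-250; brightness 170-224 is the high-brightness
    band, 85-169 the intermediate band, 26-84 the shadow band."""
    if 0 <= h <= 39 or 330 <= h <= 359:
        hue = "skin"
    elif 40 <= h <= 160:
        hue = "green"
    elif 161 <= h <= 250:
        hue = "blue"
    else:
        hue = "other"
    if 170 <= v <= 224:
        band = "high"
    elif 85 <= v <= 169:
        band = "intermediate"
    elif 26 <= v <= 84:
        band = "shadow"
    else:
        band = "other"
    return hue, band
```

Under this partition, ("skin", "high") corresponds to the high-brightness skin color hue region and ("skin", "intermediate") to its intermediate lightness region.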
- The bias amount includes at least one of the brightness deviation amount of the captured image data, the average brightness value at the center of the screen of the captured image data, and brightness difference values calculated under different conditions.
- the brightness deviation amount of the photographed image data is, for example, the standard deviation or variance of the brightness of the photographed image data.
- The average brightness value of the captured image data at the center of the screen is, for example, the average luminance value at the center of the screen or the average luminance value of the skin color region at the center of the screen.
- the brightness difference value calculated under different conditions is, for example, a difference value between the maximum brightness value and the minimum brightness value of the captured image data.
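The three example bias amounts just listed can be sketched directly. The choice of which pixels count as the "center of the screen" is left to the caller here, since the patent does not fix it.

```python
from statistics import mean, pstdev

def bias_amounts(values, center_values):
    """Compute the example bias amounts named in the text: the standard
    deviation of brightness over the whole image, the average brightness
    at the screen center, and the difference between the maximum and
    minimum brightness values. `values` and `center_values` are flat
    lists of brightness values (0-255); how the center region is chosen
    is an assumption of the caller."""
    return {
        "deviation": pstdev(values),            # brightness deviation amount
        "center_average": mean(center_values),  # average brightness at center
        "max_min_difference": max(values) - min(values),
    }
```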
- the form according to Item 27 is the image processing method according to any one of Items 1 to 26, wherein the photographing condition indicates a light source condition and an exposure condition at the time of photographing,
- the light source condition includes a flash photographing state
- the exposure condition includes an under photographing state derived from a shutter speed and an aperture value at the time of photographing.
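One common way to derive an exposure figure from the shutter speed and aperture value is the standard photographic exposure value, EV = log2(N^2 / t). This formula is general photographic practice, not a definition given in the patent; how an EV maps to an "under shooting state" would depend on the scene brightness and sensitivity.

```python
import math

def exposure_value(aperture_f, shutter_seconds):
    """Standard exposure value EV = log2(N^2 / t), with N the f-number
    and t the shutter time in seconds. Higher EV means less light
    admitted for a given scene."""
    return math.log2(aperture_f ** 2 / shutter_seconds)
```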
- The image processing apparatus of the present invention includes an occupancy ratio calculation unit that divides the captured image data into regions each consisting of at least one of a combination of predetermined brightness and hue and a combination of brightness and the distance from the outer edge of the screen of the captured image data,
- and that calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data;
- a bias amount calculation unit that calculates a bias amount indicating the bias of the gradation distribution of the captured image data;
- an index calculation unit that calculates an index for specifying the shooting conditions by multiplying the occupancy ratio of each region calculated by the occupancy ratio calculation unit and the bias amount calculated by the bias amount calculation unit by coefficients set in advance according to the shooting conditions;
- An adjustment method determining unit that determines a gradation adjustment method for the captured image data according to the determined shooting condition
- An image processing apparatus including a gradation conversion unit that performs gradation conversion processing on the captured image data using the determined gradation adjustment method.
- In the occupancy ratio calculation unit, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- An adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index is provided, and the gradation conversion unit applies gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit to the captured image data using the gradation adjustment method determined by the adjustment method determination unit.
- In the occupancy ratio calculation unit, the captured image data is divided into predetermined regions consisting of combinations of brightness and the distance from the outer edge of the screen of the captured image data,
- and for each divided region an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- an adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index
- Gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit is applied to the captured image data using the gradation adjustment method determined by the adjustment method determination unit.
- In the occupancy ratio calculation unit, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region a first occupancy ratio indicating its proportion of the entire captured image data is calculated; the captured image data is further divided into predetermined regions consisting of combinations of brightness and the distance from the outer edge of the screen, and for each divided region a second occupancy ratio indicating its proportion of the entire captured image data is calculated. The index calculation unit calculates the index for specifying the shooting conditions by multiplying the first occupancy ratio and the second occupancy ratio calculated by the occupancy ratio calculation unit and the bias amount calculated by the bias amount calculation unit by the coefficients set in advance according to the shooting conditions.
- Item 34 is the image processing apparatus described in Item 33,
- an adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index is provided, and in the gradation conversion unit, the gradation adjustment method determined by the adjustment method determination unit Is used to perform gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit on the captured image data.
- A histogram creation unit is provided that creates a two-dimensional histogram by calculating the cumulative number of pixels for each combination of brightness and distance from the outer edge of the screen of the captured image data,
- and the occupancy ratio calculation unit calculates the occupancy ratio based on the created two-dimensional histogram.
- a two-dimensional histogram is obtained by calculating the cumulative number of pixels for each distance and brightness from the outer edge of the screen of the captured image data.
- a histogram creation unit is provided, and the occupation rate calculation unit calculates the second occupation rate based on the created two-dimensional histogram.
- a two-dimensional histogram is created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- a histogram creation unit is provided, and the occupation rate calculation unit calculates the occupation rate based on the created two-dimensional histogram.
- a two-dimensional histogram is created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- a histogram creation unit is provided, and the occupation rate calculation unit calculates the first occupation rate based on the created two-dimensional histogram.
- In the index calculation unit, coefficients of different signs are used for the skin color hue region of predetermined high brightness and for hue regions other than the high-brightness skin color hue region.
- In the index calculation unit, coefficients of different signs are used for the intermediate lightness region of the skin color hue region and for lightness regions other than the intermediate lightness region.
- the lightness region of the hue region other than the high-brightness skin color hue region is a predetermined high-lightness region.
- the lightness area other than the intermediate lightness area is a lightness area in the flesh-color hue area.
- The high-brightness skin color hue region includes a region in the range of 170 to 224 in terms of the brightness value of the HSV color system.
- the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
- The hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
- the lightness region other than the intermediate lightness region is a shadow region.
- the hue value of the blue hue region is in the range of 161 to 250 in terms of the hue value of the HSV color system.
- the hue value of the green hue region is in the range of 40 to 160 as the hue value of the HSV color system.
- the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
- The hue value of the skin color hue region is in the ranges of 0 to 39 and 330 to 359 in terms of the hue value of the HSV color system.
- the skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation. It is divided into.
- the deviation amount includes a deviation amount of brightness of photographed image data, and a screen of the photographed image data. It contains at least one of the average brightness value in the center and the brightness difference value calculated under different conditions.
- the photographing condition indicates a light source condition and an exposure condition at the time of photographing.
- The light source condition includes a strobe shooting state, and the exposure condition includes an under shooting state derived from the shutter speed and aperture value at the time of shooting.
- The imaging apparatus of the present invention includes an imaging unit that obtains captured image data by photographing a subject, and an occupancy ratio calculation unit that divides the captured image data into regions each consisting of at least one of a combination of predetermined brightness and hue and a combination of brightness and the distance from the outer edge of the screen of the captured image data,
- and that calculates, for each divided region, an occupancy ratio indicating its proportion of the entire captured image data;
- a bias amount calculation unit that calculates a bias amount indicating the bias of the gradation distribution of the captured image data;
- an index calculation unit that calculates an index for specifying the shooting conditions by multiplying the occupancy ratio of each region calculated by the occupancy ratio calculation unit and the bias amount calculated by the bias amount calculation unit by coefficients set in advance according to the shooting conditions;
- An adjustment method determining unit that determines a gradation adjustment method for the captured image data according to the determined shooting condition
- a gradation conversion unit that performs gradation conversion processing on the photographed image data using the determined gradation adjustment method.
- In the occupancy ratio calculation unit, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- the form according to Item 57 is the imaging apparatus according to Item 56,
- An adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index is provided, and the gradation conversion unit applies gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit to the captured image data using the gradation adjustment method determined by the adjustment method determination unit.
- the occupancy ratio calculation unit includes a combination of a distance from the outer edge of the screen of the captured image data and brightness. The area is divided into predetermined areas, and for each of the divided areas, an occupation ratio indicating a ratio of the entire captured image data is calculated.
- An adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index is provided, and the gradation conversion unit applies gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit to the captured image data using the gradation adjustment method determined by the adjustment method determination unit.
- In the occupancy ratio calculation unit, the captured image data is divided into regions consisting of combinations of predetermined brightness and hue, and for each divided region a first occupancy ratio indicating its proportion of the entire captured image data is calculated; the captured image data is further divided into predetermined regions consisting of combinations of brightness and the distance from the outer edge of the screen, and for each divided region a second occupancy ratio indicating its proportion of the entire captured image data is calculated.
- The index calculation unit calculates the index for specifying the shooting conditions by multiplying the first occupancy ratio and the second occupancy ratio and the bias amount calculated by the bias amount calculation unit by coefficients set in advance according to the shooting conditions.
- an adjustment amount determination unit that determines a gradation adjustment amount for the captured image data based on the calculated index is provided, and in the gradation conversion unit, the gradation adjustment method determined by the adjustment method determination unit Is used to perform gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination unit on the captured image data.
- a two-dimensional histogram is calculated by calculating a cumulative pixel count for each distance and brightness from the outer edge of the screen of the captured image data.
- a histogram creation unit is provided, and the occupancy rate calculation unit calculates the occupancy rate based on the created two-dimensional histogram.
- a two-dimensional histogram is obtained by calculating the cumulative number of pixels for each distance and brightness from the outer edge of the screen of the captured image data. It has a histogram creation part to create,
- the occupation ratio calculation unit calculates the second occupation ratio based on the created two-dimensional histogram.
- the two-dimensional histogram is calculated by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- a histogram creation unit is provided, and the occupancy rate calculation unit calculates the occupancy rate based on the created 2D histogram.
- In the imaging device described in Item 60 or 61, a two-dimensional histogram is created by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- a histogram creation unit is provided, and the occupancy rate calculation unit calculates the first occupancy rate based on the created 2D histogram.
- In the index calculation unit, coefficients of different signs are used for the skin color hue region of predetermined high brightness and for hue regions other than the high-brightness skin color hue region.
- In the index calculation unit, coefficients of different signs are used for the intermediate lightness region of the skin color hue region and for lightness regions other than the intermediate lightness region.
- the brightness region of the hue region other than the high-brightness skin color hue region is a predetermined high brightness region.
- the brightness area other than the intermediate brightness area is a brightness area in the flesh-color hue area.
- The high-brightness skin color hue region includes a region in the range of 170 to 224 in terms of the brightness value of the HSV color system.
- the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
- In the form according to Item 72, the hue regions other than the high-brightness skin color hue region include at least one of a blue hue region and a green hue region.
- the lightness region other than the intermediate lightness region is a shadow region.
- the hue value of the blue hue region is in a range of 161 to 250 in terms of the hue value of the HSV color system
- the hue value in the green hue region is in the range of 40 to 160 in the HSV color system.
- the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
- In the forms described in Items 76 and 77, the hue value of the skin color hue region is in the ranges of 0 to 39 and 330 to 359 in terms of the hue value of the HSV color system.
- In the forms described in Items 78 and 79, the skin color hue region is divided into two regions according to a predetermined conditional expression based on brightness and saturation.
- the deviation amount includes a brightness deviation amount of captured image data, a screen of the captured image data. It includes at least one of the average brightness value in the center and the brightness difference value calculated under different conditions.
- the imaging condition indicates a light source condition and an exposure condition at the time of imaging, and the light source condition includes a strobe shooting state.
- the exposure condition includes an under shooting state derived from the shutter speed and aperture value at the time of shooting.
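- as one illustration of how an under shooting state could be derived from the shutter speed and aperture value, the standard exposure value formula EV = log2(N²/t) combines the two; the patent does not give its exact criterion, so the function below is a hypothetical sketch and any threshold applied to it would be illustrative only.

```python
from math import log2

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t).

    N is the aperture (f-number), t the shutter speed in seconds.
    How the apparatus maps this to an 'under' state is not specified
    in the text, so this is only a sketch of the combination.
    """
    return log2(f_number ** 2 / shutter_s)

# f/2.8 at 1/60 s gives an EV of roughly 8.9.
print(round(exposure_value(2.8, 1 / 60), 1))
```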
- the form according to Item 82 is an image processing program provided in a computer for executing image processing.
- an occupancy ratio calculating step of dividing the photographed image data into regions that are at least combinations of predetermined brightness and hue, and combinations of brightness and distance from the outer edge of the screen of the photographed image data, and calculating, for each of the divided regions, an occupancy ratio indicating its proportion of the entire captured image data;
- the captured image data is divided into regions that are combinations of predetermined brightness and hue, and for each of the divided regions, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination step is applied to the captured image data using the gradation adjustment method determined by the adjustment method determination step.
- the captured image data is divided into predetermined regions that are combinations of brightness and distance from the outer edge of the screen of the captured image data, and for each of the divided regions, an occupancy ratio indicating its proportion of the entire captured image data is calculated.
- the method further includes an adjustment amount determination step of determining a gradation adjustment amount for the captured image data based on the calculated index, and in the gradation conversion step, gradation conversion processing of the gradation adjustment amount determined by the adjustment amount determination step is performed on the captured image data using the gradation adjustment method determined by the adjustment method determination step.
- the captured image data is divided into regions that are combinations of predetermined brightness and hue, and for each of the divided regions, a first occupancy ratio indicating its proportion of the entire captured image data is calculated; the captured image data is also divided into regions that are combinations of brightness and distance from the outer edge of the screen of the captured image data, and a second occupancy ratio indicating its proportion of the entire captured image data is calculated for each of the divided regions.
- the second occupancy ratio is calculated by the occupancy ratio calculation step.
- an index for specifying the shooting condition is calculated by multiplying the first occupancy ratio, the second occupancy ratio, and the deviation amount calculated in the bias amount calculation step by coefficients set in advance according to the shooting condition, and taking the sum.
- the method further includes an adjustment amount determination step of determining a gradation adjustment amount for the captured image data based on the calculated index, and in the gradation conversion step, the gradation adjustment method determined by the adjustment method determination step is used to perform gradation conversion processing of the gradation adjustment amount determined in the adjustment amount determination step.
- the cumulative number of pixels is calculated for each distance and brightness from the outer edge of the screen of the captured image data.
- a histogram creation step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data, wherein the occupancy ratio calculating step calculates the occupancy ratio based on the created two-dimensional histogram.
- the image processing program described in Item 87 or 88 includes a histogram creation step of creating a two-dimensional histogram by calculating the cumulative number of pixels for each predetermined hue and brightness of the captured image data.
- the brightness region of the hue region other than the high-brightness skin color hue region is a predetermined high brightness region.
- the brightness region other than the intermediate brightness region is a brightness region in the flesh-color hue region.
- the high-brightness skin color hue region includes a region in the range of 170 to 224 in terms of the brightness value of the HSV color system.
- the intermediate brightness region includes a region in the range of 85 to 169 in terms of the brightness value of the HSV color system.
- the form described in Item 99 includes at least one of a blue hue region and a green hue region in the hue regions other than the high brightness skin color hue region.
- the brightness area other than the intermediate brightness area is a shadow area.
- the hue value of the blue hue region is in the range of 161 to 250, and the hue value of the green hue region is in the range of 40 to 160, in terms of the hue value of the HSV color system.
- the brightness value of the shadow area is in the range of 26 to 84 as the brightness value of the HSV color system.
- in the form described in Items 103 and 104, in the image processing program according to any one of Items 93 to 102, the hue value of the flesh color hue region is in the ranges of 0 to 39 and 330 to 359 in terms of the hue value of the HSV color system.
- in the form described in Items 105 and 106, the skin color hue region is divided into two regions according to a predetermined conditional expression based on lightness and saturation.
- the deviation amount includes at least one of the brightness deviation amount of the photographed image data, the average brightness value at the center of the screen of the photographed image data, and a brightness difference value calculated under different conditions.
- the photographing condition indicates a light source condition and an exposure condition at the time of photographing.
- the light source condition includes a flash shooting state.
- the exposure condition includes an under shooting state derived from the shutter speed and aperture value at the time of shooting.
- FIG. 1 is a perspective view showing an external configuration of the image processing apparatus 1 according to the embodiment of the present invention.
- the image processing apparatus 1 is provided with a magazine loading section 3 for loading a photosensitive material on one side surface of a housing 2. Inside the housing 2 are provided an exposure processing unit 4 for exposing the photosensitive material and a print creating unit 5 for developing and drying the exposed photosensitive material to create a print. On the other side of the casing 2, a tray 6 for discharging the prints produced by the print creation unit 5 is provided.
- a CRT (Cathode Ray Tube) 8 serving as a display device, a film scanner unit 9 that is a device for reading a transparent document, a reflective document input device 10, and an operation unit 11 are provided at the top of the housing 2.
- the CRT 8 constitutes display means that displays, on its screen, the image of the image information of the print to be created.
- the housing 2 is provided with an image reading unit 14 that can read image information recorded on various digital recording media, and an image writing unit 15 that can write (output) image signals to various digital recording media.
- a control unit 7 that centrally controls each of these units is provided inside the housing 2.
- the image reading unit 14 includes a PC card adapter 14a and a floppy (registered trademark) disk adapter 14b, and a PC card 13a and a floppy (registered trademark) disk 13b can be inserted therein.
- the PC card 13a has a memory in which a plurality of frame image data captured by a digital camera is recorded.
- a plurality of frame image data captured by a digital camera is recorded on the floppy (registered trademark) disk 13b.
- recording media on which frame image data is recorded, in addition to the PC card 13a and the floppy (registered trademark) disk 13b, include, for example, Multimedia Card (registered trademark), Memory Stick (registered trademark), MD Data, and CD-ROM.
- the image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c.
- Examples of the optical disk 16c include CD-R, DVD-R, and the like.
- the operation unit 11, the CRT 8, the film scanner unit 9, the reflective document input device 10, and the image reading unit 14 are provided integrally with the housing 2, but any one or more of them may be provided as separate bodies.
- the print creation method is exemplified here by exposing a photosensitive material to light and developing it to create a print, but is not limited to this.
- a method such as an inkjet method, an electrophotographic method, a heat-sensitive method, or a sublimation method may be used.
- FIG. 2 shows a main part configuration of the image processing apparatus 1.
- the image processing apparatus 1 includes the control unit 7, the exposure processing unit 4, the print creation unit 5, the film scanner unit 9, the reflective original input device 10, the image reading unit 14, the communication means (input) 32, the image writing unit 15, the data storage means 71, the template storage means 72, the operation unit 11, the CRT 8, and the communication means (output) 33.
- the control unit 7 is constituted by a microcomputer and operates in accordance with programs stored in a storage unit (not shown) such as a ROM (Read Only Memory).
- the control unit 7 includes an image processing unit 70 according to the image processing apparatus of the present invention. Based on an input signal (command information) from the operation unit 11, the image processing unit 70 performs image processing on the image signal read by the film scanner unit 9 or the reflective original input device 10, the image signal read by the image reading unit 14, and the image signal input from an external device via the communication means (input) 32, forms image information for exposure, and outputs it to the exposure processing unit 4. Further, the image processing unit 70 performs conversion processing corresponding to the output form on the image signal subjected to the image processing, and outputs the result. Output destinations of the image processing unit 70 include the CRT 8, the image writing unit 15, and the communication means (output) 33.
- the exposure processing unit 4 performs image exposure on the photosensitive material and outputs the photosensitive material to the print creating unit 5.
- the print creating unit 5 develops the exposed photosensitive material and dries it to create prints Pl, P2, and P3.
- Print P1 is a service size, high-definition size, panorama size, etc.
- print P2 is an A4 size print
- print P3 is a business card size print.
- the film scanner unit 9 reads a frame image recorded on a transparent original such as a developed negative film N or a reversal film imaged by an analog camera, and obtains a digital image signal of the frame image.
- the reflective original input device 10 reads an image on the print P (photo print, document, various printed materials) by a flat bed scanner, and obtains a digital image signal.
- the image reading unit 14 reads frame image information recorded on the PC card 13a and the floppy (registered trademark) disk 13b and transfers the frame image information to the control unit 7.
- the image reading unit 14 includes, as image transfer means 30, a PC card adapter 14a, a floppy (registered trademark) disk adapter 14b, and the like.
- the image reading unit 14 reads frame image information recorded on the PC card 13a inserted into the PC card adapter 14a or the floppy disk 13b inserted into the floppy disk adapter 14b. And transfer to the control unit 7.
- a PC card reader or a PC card slot is used as the PC card adapter 14a.
- the communication means (input) 32 receives an image signal representing a captured image and a print command signal from another computer in the facility where the image processing apparatus 1 is installed, or from a distant computer via the Internet or the like.
- the image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c as the image transport unit 31.
- the image writing unit 15 writes the image signal generated by the image processing method of the present invention to the floppy (registered trademark) disk 16a inserted into the floppy disk adapter 15a, the MO 16b inserted into the MO adapter 15b, or the optical disc 16c inserted into the optical disc adapter 15c.
- the data storage means 71 stores image information and order information corresponding to the image information (information on how many prints are to be created from which image, information on the print size, etc.) and sequentially accumulates them.
- the template storage means 72 stores sample image data (background images, illustration images, etc.) corresponding to the sample identification information D1, D2, and D3, and data of at least one template that sets a synthesis region.
- a predetermined template is selected by the operator's operation from a plurality of templates stored in advance in the template storage means 72, the frame image information is synthesized with the selected template, and the sample image data selected based on the designated sample identification information D1, D2, and D3 is combined with the image data and/or character data based on the order to create a print based on the specified sample.
- the synthesis using this template is performed by the well-known chroma key method.
- the sample identification information D1, D2, and D3 for designating the print sample is configured to be input from the operation unit 11.
- since this sample identification information is recorded on the print sample or the order sheet, it can be read by reading means such as OCR, or it can also be input by an operator's keyboard operation.
- sample image data is recorded in correspondence with sample identification information D1 that specifies a print sample; sample identification information D1 that specifies a print sample is input; sample image data is selected based on the input sample identification information D1; and the selected sample image data is combined with the image data and/or character data based on the order to create a print based on the specified sample. This allows users to order prints based on samples they have actually seen, and can meet the diverse requirements of a wide range of users.
- the first sample identification information D2 designating the first sample and the image data of the first sample are stored, and the second sample identification information D3 designating the second sample and the image data of the second sample are stored; the sample image data selected based on the designated first and second sample identification information D2 and D3 is combined with the image data and/or character data based on the order to create a print based on the specified samples. Therefore, it is possible to synthesize a wider variety of images and to create prints that meet a wide variety of user requirements.
- the operation unit 11 includes information input means 12.
- the information input means 12 is composed of, for example, a touch panel and outputs a pressing signal from the information input means 12 to the control unit 7 as an input signal.
- the operation unit 11 may be configured with a keyboard, a mouse, and the like.
- the CRT 8 displays image information and the like according to the display control signal input from the control unit 7.
- the communication means (output) 33 transmits an image signal representing a photographed image after the image processing of the present invention, together with its attached order information, to another computer in the facility where the image processing apparatus 1 is installed, or to a distant computer via the Internet or the like.
- the image processing apparatus 1 includes image input means for capturing image information read from various digital media and image originals, image processing means, image output means for displaying processed images, printing them out, and writing them to image recording media, and means for transmitting image data and its attached order information to a remote computer via a communication line.
- FIG. 3 shows the internal configuration of the image processing unit 70.
- the image processing unit 70 includes an image adjustment processing unit 701, a film scan data processing unit 702, a reflective original scan data processing unit 703, an image data format decoding processing unit 704, a template processing unit 705, a CRT specific processing unit 706, a printer specific processing unit A707, a printer specific processing unit B708, and an image data format creation processing unit 709.
- the film scan data processing unit 702 performs a calibration operation specific to the film scanner unit 9, negative / positive reversal (in the case of a negative document), dust scratch removal, contrast adjustment, and the like on the image data input from the film scanner unit 9. It performs processing such as granular noise removal and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701.
- the reflection document scan data processing unit 703 performs a calibration operation specific to the reflection document input device 10, negative / positive reversal (in the case of a negative document), dust flaw removal, and contrast adjustment for the image data input from the reflection document input device 10. Then, processing such as noise removal and sharpening enhancement is performed, and the processed image data is output to the image adjustment processing unit 701.
- the image data format decoding processing unit 704 performs processing such as decompression of compression codes and conversion of the color data representation method on the image data input from the image transfer means 30 and/or the communication means (input) 32, according to the data format of the image data as necessary, converts the data into a data format suitable for computation in the image processing unit 70, and outputs it to the image adjustment processing unit 701. In addition, when the size of the output image is specified from any of the operation unit 11, the communication means (input) 32, and the image transfer means 30, the image data format decoding processing unit 704 detects the specified information and outputs it to the image adjustment processing unit 701. Information about the size of the output image specified by the image transfer means 30 is embedded in the header information and tag information of the image data acquired by the image transfer means 30.
- based on a command from the operation unit 11 or the control unit 7, the image adjustment processing unit 701 subjects the image data received from the film scanner unit 9, the reflective original input device 10, the image transfer means 30, the communication means (input) 32, and the template processing unit 705 to image processing described later (see FIGS. 6, 7, 13, and 17) for forming an image optimized for viewing on the output medium, and outputs the generated digital image data to the CRT specific processing unit 706, the printer specific processing unit A707, the printer specific processing unit B708, the image data format creation processing unit 709, and the data storage means 71.
- as shown in FIG., the image adjustment processing unit 701 includes a scene discrimination unit 710 that determines the shooting condition of the shot image data and the tone adjustment conditions (tone adjustment method and tone adjustment amount), and a tone conversion unit 711 that performs tone conversion processing according to the determined tone adjustment conditions.
- the photographing conditions are classified into light source conditions and exposure conditions.
- the light source condition is determined by the light source at the time of photographing and the positional relationship between the main subject (mainly a person) and the photographer. In the broader sense, it also includes the type of light source (sunlight, strobe light, tungsten lighting, and fluorescent lamps).
- Backlit scenes occur when the sun is located behind the main subject.
- a strobe (close-up) scene occurs when the main subject is strongly irradiated with strobe light. Both scene types have a similar light/dark ratio; the brightness relationship between the main subject and the background is merely reversed.
- the exposure conditions are derived from settings such as the camera shutter speed and aperture value; underexposure is "under", proper exposure is "normal", and overexposure is "over". In a broad sense, so-called "blown highlights" and "crushed shadows" are also included. Under any light source condition, the exposure condition can be under or over. Especially in a DSC (digital still camera) with a narrow dynamic range, even when the automatic exposure adjustment function is used, the frequency of under exposure conditions is high because of setting conditions aimed at suppressing blown highlights.
- FIG. 4 (a) shows the internal configuration of the scene discrimination unit 710.
- the scene discriminating unit 710 includes a ratio calculating unit 712, a bias calculating unit 722, an index calculating unit 713, and an image processing condition calculating unit 714.
- the ratio calculation unit 712 includes a color system conversion unit 715, a histogram creation unit 716, and an occupation rate calculation unit 717.
- the color system conversion unit 715 converts the RGB (Red, Green, Blue) value of the captured image data into the HSV color system.
- the HSV color system represents image data with three elements, Hue, Saturation, and Value (brightness), and was devised based on the color system proposed by Munsell.
- in the following, unless otherwise noted, "brightness" means brightness in the generally used sense.
- V (0 to 255) of the HSV color system is used as "brightness" here, but a unit system expressing the brightness of any other color system may be used.
- in that case, numerical values such as the various coefficients described in the present embodiment are recalculated accordingly.
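- a minimal sketch of the RGB-to-HSV conversion performed by the color system conversion unit 715, using Python's standard `colorsys` module; the 0-359 hue scale and 0-255 value scale are chosen to match the ranges quoted in this description, and the helper name is illustrative, not the apparatus's actual implementation.

```python
import colorsys

def rgb_to_hsv_255(r: int, g: int, b: int):
    """Convert 8-bit RGB to HSV on the scales used in this description:
    hue 0-359, saturation 0-255, value (brightness) 0-255.
    Illustrative helper; the apparatus's exact conversion is not shown."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 359), round(s * 255), round(v * 255)

# Pure green falls at hue 120, inside the green hue region (40-160);
# pure blue falls at hue 239, inside the blue hue region (161-250).
print(rgb_to_hsv_255(0, 255, 0))
print(rgb_to_hsv_255(0, 0, 255))
```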
- the photographed image data in the present embodiment is assumed to be image data having a person as a main subject.
- the histogram creation unit 716 creates a two-dimensional histogram by dividing the captured image data into regions composed of predetermined combinations of hue and brightness, and calculating the cumulative number of pixels for each of the divided regions. In addition, the histogram creation unit 716 divides the captured image data into predetermined regions that are combinations of brightness and distance from the outer edge of the screen of the captured image data, and creates a two-dimensional histogram by calculating the cumulative number of pixels for each of the divided regions. Note that the captured image data may instead be divided into regions that are combinations of distance from the outer edge of the screen, brightness, and hue, and a three-dimensional histogram may be created by calculating the cumulative number of pixels for each divided region. In the following, the method of creating two-dimensional histograms is adopted.
- the occupancy calculation unit 717 calculates, for each region divided by the combination of brightness and hue, a first occupancy ratio (see Table 1) indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels (the entire captured image data). The occupancy calculation unit 717 also calculates, for each region divided by the combination of brightness and distance from the outer edge of the screen of the captured image data, a second occupancy ratio (see Table 4) indicating the ratio of the cumulative number of pixels calculated by the histogram creation unit 716 to the total number of pixels.
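- the two-dimensional histogram and first occupancy ratio can be sketched as follows. The region boundaries below are taken from the hue and brightness ranges quoted in this description; the actual regions of Tables 1 and 4 are not reproduced here, and the "low"/"highest" names for the brightness ranges outside those quoted are hypothetical.

```python
from collections import Counter

def hue_region(h: int) -> str:
    """Hue regions from the ranges quoted in this description (HSV hue 0-359)."""
    if h <= 39 or h >= 330:
        return "flesh"
    if h <= 160:
        return "green"
    if h <= 250:
        return "blue"
    return "other"

def value_region(v: int) -> str:
    """Brightness regions (HSV value 0-255); names outside the quoted
    ranges (below 26, above 224) are hypothetical placeholders."""
    if v <= 25:
        return "low"
    if v <= 84:
        return "shadow"        # 26-84
    if v <= 169:
        return "intermediate"  # 85-169
    if v <= 224:
        return "high"          # 170-224
    return "highest"

def first_occupancy(pixels_hv):
    """Two-dimensional (hue region x brightness region) histogram, normalised
    by the total pixel count: each entry is that region's share of the image."""
    counts = Counter((hue_region(h), value_region(v)) for h, v in pixels_hv)
    n = len(pixels_hv)
    return {region: c / n for region, c in counts.items()}

pixels = [(10, 200), (10, 200), (100, 50), (200, 120)]
print(first_occupancy(pixels))
```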
- the bias calculation unit 722 calculates a bias amount indicating the bias of the gradation distribution of the captured image data.
- the deviation amounts are the standard deviation of the luminance values of the photographed image data, a luminance difference value, the skin color average luminance value at the center of the screen, the average luminance value at the center of the screen, and a skin color luminance distribution value.
- the processing for calculating these deviation amounts will be described in detail later with reference to FIG.
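- two of the deviation amounts listed above can be sketched as follows; the exact formulas are described in detail later, so these are illustrative stand-ins, and the remaining quantities (skin color averages, luminance difference value) follow the same pattern.

```python
from statistics import pstdev

def deviation_amounts(luma, center_luma):
    """Sketch of two of the deviation amounts named above: the standard
    deviation of the luminance values of the whole image, and the average
    luminance at the center of the screen."""
    return {
        "luma_stdev": pstdev(luma),                        # spread of the tone distribution
        "center_avg": sum(center_luma) / len(center_luma), # centre-of-screen mean
    }

# A high-contrast tone distribution yields a large standard deviation.
print(deviation_amounts([10, 10, 240, 240], [100, 150]))
```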
- the index calculation unit 713 calculates index 1 for specifying the shooting condition by multiplying the first occupancy ratio calculated for each region by the occupancy ratio calculation unit 717 by a first coefficient (see Table 2) set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum. Index 1 indicates characteristics of flash photography such as indoor photography, close-up photography, and high brightness of the face color, and is used to separate images that should be identified as flash shots from other shooting conditions. When calculating index 1, the index calculation unit 713 uses coefficients of different signs for a predetermined high-lightness skin color hue region and for hue regions other than the high-lightness skin color hue region.
- the skin color hue region of a predetermined high lightness includes a region of 170 to 224 in the lightness value of the HSV color system.
- the hue area other than the predetermined high brightness skin color hue area includes at least one of the high brightness areas of the blue hue area (hue values 161 to 250) and the green hue area (hue values 40 to 160).
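- the sign-flip structure of index 1 can be sketched as follows; the coefficient values are hypothetical (the real first coefficients come from Table 2 via discriminant analysis and are not reproduced here), and only the sign relationship described above is modeled.

```python
# Hypothetical coefficients: high-brightness flesh tones push index 1 up
# (strobe-like), high-brightness blue/green push it down.
COEFF_INDEX1 = {
    ("flesh", "high"): +2.0,
    ("blue", "high"): -1.5,
    ("green", "high"): -1.0,
}

def index1(occupancy):
    """Weighted sum of the first occupancy ratios; regions without a listed
    coefficient contribute nothing in this sketch."""
    return sum(occupancy.get(region, 0.0) * c for region, c in COEFF_INDEX1.items())

# Bright skin dominates -> positive (strobe-like); bright sky -> negative.
print(index1({("flesh", "high"): 0.6, ("blue", "high"): 0.1}) > 0)
print(index1({("flesh", "high"): 0.1, ("blue", "high"): 0.6}) < 0)
```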
- the index calculation unit 713 calculates index 2 for specifying the shooting condition by multiplying the first occupancy ratio calculated for each region by the occupancy ratio calculation unit 717 by a second coefficient (see Table 3) set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum.
- index 2 compositely indicates characteristics of backlit shots such as outdoor shooting, high brightness of sky blue, and low brightness of the face color, and is used to separate images that should be identified as backlit from other shooting conditions.
- when calculating index 2, the index calculation unit 713 uses coefficients of different signs for the intermediate brightness region of the flesh color hue region (hue values 0 to 39 and 330 to 359) and for brightness regions other than the intermediate brightness region.
- the intermediate brightness area of the flesh tone hue area includes areas with brightness values of 85 to 169.
- the brightness area other than the intermediate brightness area includes, for example, a shadow area (brightness value 26-84).
- the index calculation unit 713 calculates index 3 for specifying the shooting condition by multiplying the second occupancy ratio calculated for each region by the occupancy calculation unit 717 by a third coefficient (see Table 5) set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum. Index 3 indicates the difference in contrast between the center and the periphery of the screen of the captured image data between backlit and strobe shots, and quantitatively identifies only images that should be judged as backlit or strobe. When calculating index 3, the index calculation unit 713 uses different coefficient values depending on the distance from the outer edge of the screen of the captured image data.
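- index 3's use of distance-dependent coefficients can be sketched with the second occupancy ratio. The weights below are hypothetical stand-ins for Table 5, chosen only to show how a bright periphery with a dark center (backlit) and a bright center with a dark periphery (strobe) push the index in opposite directions.

```python
# Hypothetical distance-dependent weights: the same brightness region gets a
# different coefficient depending on its distance from the screen's outer edge.
COEFF_INDEX3 = {
    ("edge", "high"): +1.0,
    ("center", "shadow"): +1.0,
    ("edge", "shadow"): -1.0,
    ("center", "high"): -1.0,
}

def index3(second_occupancy):
    """Weighted sum of the second occupancy ratios (distance x brightness)."""
    return sum(second_occupancy.get(k, 0.0) * c for k, c in COEFF_INDEX3.items())

backlit_like = {("edge", "high"): 0.4, ("center", "shadow"): 0.3}  # bright rim, dark center
strobe_like = {("center", "high"): 0.4, ("edge", "shadow"): 0.3}   # bright center, dark rim
print(index3(backlit_like) > 0, index3(strobe_like) < 0)
```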
- the index calculation unit 713 calculates index 4 by multiplying index 1, index 3, and the average luminance value of the skin color region at the center of the screen of the captured image data by coefficients set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum.
- the index calculation unit 713 calculates index 5 by multiplying index 2, index 3, and the average luminance value of the skin color region at the center of the screen by coefficients set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum.
- the index calculation unit 713 calculates index 6 by multiplying the deviation amounts calculated by the bias calculation unit 722 by a fourth coefficient (see Table 6) set in advance according to the shooting conditions (for example, by discriminant analysis) and taking the sum.
- FIG. 4C shows the internal configuration of the image processing condition calculation unit 714.
- the image processing condition calculation unit 714 includes a scene determination unit 718, a gradation adjustment method determination unit 719, a gradation adjustment parameter calculation unit 720, and a gradation adjustment amount calculation unit 721.
- the scene discrimination unit 718 discriminates the shooting condition of the shot image data based on the values of index 4, index 5, and index 6 calculated by the index calculation unit 713 and on a discrimination map (see FIG. 19) that is divided in advance into regions according to the accuracy (reliability) of the shooting conditions.
- the gradation adjustment method determination unit 719 determines a gradation adjustment method for the captured image data according to the shooting condition determined by the scene discrimination unit 718. For example, when the shooting condition is direct light or strobe, as shown in FIG. 21 (a), a method of correcting the pixel values of the input captured image data by translation (offset) (tone adjustment method A) is applied. When the shooting condition is backlit or under, as shown in FIG. 21 (b), a method of gamma-correcting the pixel values of the input captured image data (tone adjustment method B) is applied. When the shooting condition is between backlit and direct light (low accuracy region (1)), or between strobe and under (low accuracy region (2)), as shown in FIG. 21 (c), both gamma correction and translation (offset) correction (tone adjustment method C) are applied to the pixel values of the input captured image data.
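- the three gradation adjustment shapes can be sketched on 8-bit pixel values as follows; the gamma and offset parameter values are placeholders for the values produced by the gradation adjustment parameter calculation unit 720, and the function names are illustrative.

```python
def clamp8(x: float) -> int:
    """Clip to the 8-bit pixel range."""
    return max(0, min(255, round(x)))

def method_a(p: int, offset: float) -> int:
    """Tone adjustment method A (direct light / strobe): translation (offset)."""
    return clamp8(p + offset)

def method_b(p: int, gamma: float) -> int:
    """Tone adjustment method B (backlit / under): gamma correction."""
    return clamp8(255 * (p / 255) ** gamma)

def method_c(p: int, gamma: float, offset: float) -> int:
    """Tone adjustment method C (low accuracy regions): gamma plus offset."""
    return clamp8(255 * (p / 255) ** gamma + offset)

# A gamma below 1 lifts mid-tones, as needed for backlit/under scenes.
p = 64
print(method_a(p, 20), method_b(p, 0.7), method_c(p, 0.7, 10))
```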
- the gradation adjustment parameter calculation unit 720 calculates the parameters (key correction values, etc.) necessary for gradation adjustment based on the values of index 4, index 5, and index 6 calculated by the index calculation unit 713.
- the gradation adjustment amount calculation unit 721 calculates (determines) the gradation adjustment amount for the captured image data based on the gradation adjustment parameter calculated by the gradation adjustment parameter calculation unit 720. Specifically, the gradation adjustment amount calculation unit 721 selects, from a plurality of gradation conversion curves set in advance corresponding to the gradation adjustment method determined by the gradation adjustment method determination unit 719, the gradation conversion curve corresponding to the gradation adjustment parameter calculated by the gradation adjustment parameter calculation unit 720. Alternatively, the tone conversion curve (tone adjustment amount) may be calculated based on the tone adjustment parameter calculated by the tone adjustment parameter calculation unit 720.
- the tone conversion unit 711 performs tone conversion of the captured image data according to the tone conversion curve determined by the tone adjustment amount calculation unit 721.
- based on a command from the image adjustment processing unit 701, the template processing unit 705 reads predetermined image data (a template) from the template storage unit 72, performs template processing to synthesize the image data to be processed with the template, and outputs the image data after template processing to the image adjustment processing unit 701.
- the CRT specific processing unit 706 performs processing such as changing the number of pixels and color matching on the image data input from the image adjustment processing unit 701 as necessary, combines it with control information and other information that needs to be displayed, and outputs the combined display image data to the CRT 8.
- the printer-specific processing unit A707 performs printer-specific calibration processing, color matching, pixel number change processing, and the like as necessary, and outputs processed image data to the exposure processing unit 4.
- a printer specific processing unit B708 is provided for each printer apparatus to be connected.
- the printer-specific processing unit B708 performs printer-specific calibration processing, color matching, pixel number change, and the like, and outputs processed image data to the external printer 51.
- the image data format creation processing unit 709 converts the image data input from the image adjustment processing unit 701 into various general-purpose image formats represented by JPEG, TIFF, Exif, and the like as necessary, and outputs the processed image data to the image transport unit 31 and the communication means (output) 33.
- the divisions such as the printer specific processing unit A707, printer specific processing unit B708, and image data format creation processing unit 709 are provided to aid understanding of the functions of the image processing unit 70, and need not be realized as physically independent devices; for example, they may be realized as categories of software processing performed by a single CPU.
- the size of the captured image data is reduced (step T1).
- a known method, for example the bilinear method, bicubic method, or nearest-neighbor method, can be used for the reduction. The reduction ratio is not particularly limited, but about 1/2 to 1/10 of the original image is preferable from the viewpoint of processing speed and the accuracy of determining the shooting conditions.
- DSC white balance adjustment correction processing is performed on the reduced captured image data (step T2), and an index calculation process for calculating the indices (index 1 to index 6) used to specify the shooting conditions is performed based on the corrected captured image data (step T3). The index calculation process of step T3 will be described in detail later with reference to the corresponding flowchart.
- based on the indices calculated in step T3 and the discrimination map, the shooting conditions of the captured image data are determined, and a gradation adjustment condition determination process for determining the gradation adjustment conditions (gradation adjustment method and gradation adjustment amount) for the captured image data is performed (step T4).
- the gradation adjustment condition determination processing in step T4 will be described in detail later with reference to the corresponding flowchart.
- gradation conversion processing is performed on the original captured image data in accordance with the gradation adjustment conditions determined in step T4 (step T5).
- sharpness adjustment processing is then performed on the captured image data after the gradation conversion processing (step T6). In step T6 it is preferable to adjust the processing amount according to the shooting conditions and the output print size. In step T7, processing is performed to remove the noise enhanced by the gradation adjustment and the sharpness enhancement.
- color correction processing is performed to convert the color space according to the type of media to which the captured image data is output (step T8), and the captured image data after image processing is output to the designated media.
- next, the index calculation process (step T3 in FIG. 5) executed in the scene determination unit 710 will be described. The captured image data used here is the image data reduced in step T1 of FIG. 5.
- first, the captured image data is divided into predetermined image areas, and an occupation ratio indicating the ratio each divided area occupies in the entire captured image data (the first occupation ratio and the second occupation ratio) is calculated (step S1). Details of the occupation ratio calculation processes will be described later with reference to FIGS.
- step S2 a bias amount calculation process for calculating a bias amount indicating a bias in the gradation distribution of the photographed image data is performed (step S2).
- the bias amount calculation processing in step S2 will be described in detail later with reference to FIG.
- an index for specifying the light source condition is calculated based on the occupation ratio calculated by the ratio calculation unit 712 and coefficients set in advance according to the shooting conditions (step S3). Similarly, an index for specifying the exposure condition is calculated based on the occupation ratio calculated by the ratio calculation unit 712 and coefficients set in advance according to the shooting conditions (step S4), and the index calculation process then ends.
- the method for calculating the indices in steps S3 and S4 will be described in detail later.
- the RGB values of the photographed image data are converted into the HSV color system (step S10).
- Figure 8 shows an example of a conversion program (HSV conversion program) that obtains hue, saturation, and lightness values by converting from the RGB color system to the HSV color system, written in program code (C language).
- in the HSV conversion program shown in Fig. 8, the digital image data values that are the input image data are defined as InR, InG, and InB; the calculated hue value is defined as OutH, with a scale of 0 to 360; the saturation value is OutS and the lightness value is OutV, each with units of 0 to 255.
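As a rough illustration of the conversion the Fig. 8 program performs, the following C sketch maps InR/InG/InB (0 to 255) to OutH (0 to 360 scale) and OutS/OutV (0 to 255). It is an assumed implementation using the standard RGB-to-HSV formulas, not a reproduction of the patent's actual program code, and the function name `rgb_to_hsv` is illustrative.

```c
/* Assumed HSV conversion in the spirit of the Fig. 8 program.
   InR/InG/InB: 0-255; OutH: 0-359 (degrees); OutS, OutV: 0-255. */
void rgb_to_hsv(int InR, int InG, int InB, int *OutH, int *OutS, int *OutV)
{
    int max = InR, min = InR;
    if (InG > max) max = InG;
    if (InB > max) max = InB;
    if (InG < min) min = InG;
    if (InB < min) min = InB;

    *OutV = max;                                       /* lightness = max channel */
    *OutS = (max == 0) ? 0 : (max - min) * 255 / max;  /* saturation, 0-255 */

    if (max == min) { *OutH = 0; return; }             /* achromatic: hue undefined */
    double h;
    if (max == InR)      h = 60.0 * (InG - InB) / (max - min);
    else if (max == InG) h = 60.0 * (InB - InR) / (max - min) + 120.0;
    else                 h = 60.0 * (InR - InG) / (max - min) + 240.0;
    if (h < 0.0) h += 360.0;
    *OutH = (int)h;
}
```

For instance, pure red (255, 0, 0) yields OutH = 0 and OutS = OutV = 255.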
- the captured image data is divided into areas having predetermined combinations of lightness and hue, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided area (step S11). Next, the area division of the captured image data in step S11 will be described.
- lightness (V) is divided into seven areas by lightness value: 0 to 25 (v1), 26 to 50 (v2), 51 to 84 (v3), 85 to 169 (v4), 170 to 199 (v5), 200 to 224 (v6), and 225 to 255 (v7).
- hue (H) is divided into four areas: a flesh-color hue area (H1 and H2) with hue values of 0 to 39 and 330 to 359, a green hue area (H3) with hue values of 40 to 160, a blue hue area (H4) with hue values of 161 to 250, and a red hue area (H5).
- the flesh-color hue area is further divided into a flesh-color area (H1) and other areas (H2). The hue' (H) satisfying the following formula (1) defines the flesh-color area (H1), and the region not satisfying formula (1) is (H2).
- Hue'(H) = Hue(H) + 60 (when 0 ≤ Hue(H) < 300),
- Hue'(H) = Hue(H) − 300 (when 300 ≤ Hue(H) < 360),
- Luminance(Y) = InR × 0.30 + InG × 0.59 + InB × 0.11 (A)
- Hue'(H) / Luminance(Y) < 3.0 × (Saturation(S) / 255) + 0.7 (1)
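The flesh-color test above can be sketched directly in C. The comparison operator in formula (1) is reconstructed from context (the OCR is ambiguous), so treat the `<` as an assumption; the function names are illustrative.

```c
/* Equation (A): luminance from 8-bit RGB values. */
double luminance(int r, int g, int b)
{
    return 0.30 * r + 0.59 * g + 0.11 * b;
}

/* Formula (1) with the Hue'(H) rotation: returns 1 for the flesh-color
   area (H1), 0 otherwise.  hue: 0-359, sat: 0-255, lum: luminance Y. */
int is_flesh_color(double hue, double sat, double lum)
{
    double hue_p = (hue < 300.0) ? hue + 60.0 : hue - 300.0;  /* Hue'(H) */
    return hue_p / lum < 3.0 * (sat / 255.0) + 0.7;           /* formula (1) */
}
```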
- a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S12).
- the occupation ratio calculation process then ends. Letting Rij denote the first occupation ratio calculated in the divided area defined by the combination of lightness area vi and hue area Hj, the first occupation ratio in each divided area is expressed as shown in Table 1.
- Table 2 shows, for each divided area, the first coefficient necessary for calculating the index 1 that quantitatively indicates the accuracy of strobe shooting, that is, the brightness state of the face area at the time of strobe shooting.
- the coefficient of each divided area shown in Table 2 is a weighting coefficient by which the first occupancy Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the photographing conditions.
- Figure 9 shows these coefficients on the lightness (V)–hue (H) plane. A positive (+) coefficient is used for the first occupation ratio calculated from the area (r1) distributed in the high-lightness part of the flesh-color hue area in Fig. 9, and a negative (−) coefficient is used for the first occupation ratio calculated from the other hue, the blue hue area (r2).
- Figure 11 shows the first coefficient in the flesh-color area (H1) and the first coefficient in the other area (the green hue area (H3)) as curves (coefficient curves) that change continuously over the entire lightness range; as shown, the sign of the first coefficient in the green hue area (H3) is negative (−).
- index 1 is defined by equation (3) using the sums over the H1 to H4 areas shown in equations (2-1) to (2-4):
- Index 1 = (sum of H1 areas) + (sum of H2 areas) + (sum of H3 areas) + (sum of H4 areas) + 4.424 (3)
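The index computations of equations (2-x)/(3) reduce to an occupancy-weighted sum over the 7 lightness × 4 hue cells plus a constant. The following generic C sketch shows that structure; the actual first-coefficient values of Table 2 are not reproduced in this text, so the coefficient table passed in must be filled from the patent's tables.

```c
#define NV 7  /* lightness areas v1..v7 */
#define NH 4  /* hue areas H1..H4 */

/* Weighted sum of occupancies Rij with per-cell coefficients, plus the
   index's constant term (e.g. +4.424 for index 1, +1.554 for index 2). */
double compute_index(double R[NV][NH], double coef[NV][NH], double constant)
{
    double sum = constant;
    for (int i = 0; i < NV; i++)
        for (int j = 0; j < NH; j++)
            sum += R[i][j] * coef[i][j];
    return sum;
}
```

With all occupancies zero, the result is simply the constant term, which is a convenient sanity check.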
- Table 3 shows the second coefficient required for each divided area to calculate the index 2 that quantitatively shows the accuracy of backlighting, that is, the brightness state of the face area during backlighting.
- the coefficient of each divided area shown in Table 3 is a weighting coefficient by which the first occupancy ratio Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the shooting conditions.
- FIG. 10 shows the brightness (V) —hue (H) plane.
- a negative (−) coefficient is used for the occupation ratio calculated from the area (r4) distributed in the intermediate lightness of the flesh-color hue area in Fig. 10.
- a positive (+) coefficient is used for the occupation ratio calculated from the low brightness (shadow) area (r3) of the flesh tone hue area.
- Fig. 12 shows the second coefficient in the flesh color region (HI) as a curve (coefficient curve) that continuously changes over the entire brightness.
- the sign of the second coefficient in the intermediate-lightness area of the flesh-color hue area, with lightness values 85 to 169 (v4), is negative (−), the sign of the second coefficient in the low-lightness (shadow) area with lightness values 26 to 84 (v2, v3) is positive (+), and the signs of the coefficients in the two areas clearly differ.
- index 2 is defined by equation (5) using the sums over the H1 to H4 areas shown in equations (4-1) to (4-4):
- Index 2 = (sum of H1 areas) + (sum of H2 areas) + (sum of H3 areas) + (sum of H4 areas) + 1.554 (5)
- since index 1 and index 2 are calculated based on the lightness and hue distribution of the captured image data, they are effective for determining the shooting conditions when the captured image data is a color image.
- next, the second occupation ratio calculation process executed in the ratio calculation unit 712 to calculate index 3 will be described in detail with reference to the corresponding flowchart.
- the RGB values of the photographed image data are converted into the HSV color system (step S20).
- the captured image data is divided into areas determined by combinations of the distance from the outer edge of the captured image screen and the lightness, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided area (step S21).
- the area division of the captured image data will be described in detail.
- Figs. 14 (a) to 14 (d) show four regions nl to n4 divided according to the distance from the outer edge of the screen of the captured image data.
- the area n1 shown in Fig. 14(a) is the outer frame, the area n2 shown in Fig. 14(b) is the area inside the outer frame, the area n3 shown in Fig. 14(c) is the area inside n2, and the area n4 shown in Fig. 14(d) is the area at the center of the captured image screen.
- a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S22).
- the occupation ratio calculation process then ends. Letting Qij denote the second occupation ratio calculated in the divided area defined by the combination of lightness area vi and screen area nj, the second occupation ratio in each divided area is expressed as shown in Table 4.
- Table 5 shows the third coefficient necessary for calculating the index 3 for each divided region.
- the coefficient of each divided area shown in Table 5 is a weighting coefficient by which the second occupancy Qij of each divided area shown in Table 4 is multiplied, and is set in advance according to the photographing conditions.
- FIG. 15 shows the third coefficient in the screen areas nl to n4 as a curve (coefficient curve) that continuously changes over the entire brightness.
- sum of n3 areas = Q13 × 24.6 + Q23 × 12.1 + (omitted)
- sum of n4 areas = Q14 × 1.5 + Q24 × (−32.9) + (omitted)
- index 3 is defined by equation (7) using the sums over the n1 to n4 areas shown in equations (6-1) to (6-4):
- Index 3 = (sum of n1 areas) + (sum of n2 areas) + (sum of n3 areas) + (sum of n4 areas) − 12.6201 (7)
- since index 3 is calculated from compositional characteristics (the distance from the outer edge of the screen of the captured image data) based on the lightness distribution position of the captured image data, it is effective for determining the shooting conditions not only of color images but also of monochrome images.
- the luminance Y (brightness) of each pixel is calculated from the RGB (Red, Green, Blue) values of the captured image data using equation (A), and the standard deviation (x1) of the luminance is calculated (step S23).
- the standard deviation (x1) of the luminance is expressed as shown in equation (8): x1 = √( Σ (pixel luminance value − average luminance value)² / total number of pixels ), where the pixel luminance value is the luminance of each pixel of the captured image data, the average luminance value is the average luminance of the captured image data, and the total number of pixels is the number of pixels of the entire captured image data.
- next, a luminance difference value (x2) is calculated according to equation (9) (step S24):
- luminance difference value (x2) = (maximum luminance value − average luminance value) / 255 (9)
- where the maximum luminance value is the maximum value of the luminance of the captured image data.
- next, the average luminance value (x3) of the flesh-color area at the center of the screen of the captured image data is calculated (step S25), and the average luminance value (x4) of the center of the screen is then calculated (step S26).
- the center of the screen is, for example, an area composed of an area n3 and an area n4 in FIG.
- the flesh-color luminance distribution value (x5) is then calculated (step S27), and this bias amount calculation process ends. The flesh-color luminance distribution value (x5) is expressed as shown in equation (10).
- let x6 be the average luminance value of the flesh-color area at the center of the screen of the captured image data.
- the central portion of the screen is, for example, a region composed of region n2, region n3, and region n4 in FIG.
- index 4 is defined as in equation (11) using index 1, index 3, and x6, and index 5 is defined as in equation (12) using index 2, index 3, and x6.
- Index 4 = 0.46 × Index 1 + 0.61 × Index 3 + 0.01 × x6 − 0.79 (11)
- Index 5 = 0.58 × Index 2 + 0.18 × Index 3 + (−0.03) × x6 + 3.34 (12)
- the weighting coefficients by which each index is multiplied in equations (11) and (12) are set in advance according to the shooting conditions.
- index 6 is obtained by multiplying the bias amounts (x1) to (x5) calculated in the bias amount calculation process by fourth coefficients set in advance according to the shooting conditions, and summing the results.
- Table 6 shows the fourth coefficient, which is a weighting coefficient by which each deviation is multiplied.
- Index 6 = x1 × 0.02 + x2 × 1.13 + x3 × 0.06 + x4 × (−0.01) + x5 × 0.03 − 6.49 (13)
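Equation (13) transcribes directly to C, taking the five bias amounts and the fourth coefficients of Table 6 as fixed weights:

```c
/* Index 6, equation (13): weighted sum of bias amounts x1..x5. */
double index6(double x1, double x2, double x3, double x4, double x5)
{
    return x1 * 0.02 + x2 * 1.13 + x3 * 0.06
         + x4 * (-0.01) + x5 * 0.03 - 6.49;
}
```

With all bias amounts zero, index 6 equals the constant −6.49.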
- index 6 adds luminance histogram distribution information to the compositional characteristics of the captured image screen, and is particularly effective in distinguishing between a strobe shooting scene and an under shooting scene.
- next, the gradation adjustment condition determination process (step T4 in FIG. 5) executed in the image processing condition calculation unit 714 will be described with reference to the corresponding flowchart.
- first, the shooting condition of the captured image data is determined based on the indices (index 4 to index 6) calculated by the index calculation unit 713 and a discrimination map divided in advance according to the shooting conditions (step S30).
- Fig. 18(a) plots the values of index 4 and index 5 calculated for a total of 180 digital image data, 60 images taken under each of the direct light, backlight, and strobe conditions.
- Fig. 18(b) plots the values of index 4 and index 6 for images taken 60 each under the strobe and under shooting conditions and for which index 4 was greater than 0.5.
- the discrimination map is used to evaluate the reliability of the indices. As shown in Fig. 19, it consists of basic areas for direct light, backlight, strobe, and under, a low accuracy area (1) between backlight and direct light, and a low accuracy area (2) between strobe and under. Note that other low-accuracy regions, such as one between backlight and strobe, also exist on the discrimination map, but they are omitted in this embodiment.
- Table 7 shows a plot of each index value shown in Fig. 18 and the shooting conditions discriminated based on the discrimination map of Fig. 19.
- in this way, the light source conditions can be quantitatively determined based on the values of index 4 and index 5, and the exposure conditions can be quantitatively determined based on the values of index 4 and index 6. The low accuracy region (1) between direct light and backlight can be discriminated from the values of index 4 and index 5, and the low accuracy region (2) between strobe and under can be discriminated from the values of index 4 and index 6.
- a gradation adjustment method for the shot image data is determined in accordance with the determined shooting conditions (step S31).
- when the shooting condition is direct light or strobe, gradation adjustment method A (Fig. 21(a)) is applied; when it is backlight or under, gradation adjustment method B (Fig. 21(b)) is applied; and in the low accuracy regions, gradation adjustment method C (Fig. 21(c)) is applied.
- since the gradation adjustment method for one of the adjacent shooting conditions is A and for the other is B in each low accuracy area, it is preferable to apply gradation adjustment method C, which combines both. By setting the low-accuracy regions in this way, the processing result transitions smoothly even when different gradation adjustment methods are used, and variations in density among multiple photographic prints of the same subject can be reduced.
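The mapping from discriminated shooting condition to gradation adjustment method A/B/C described above can be sketched as a simple C switch. The enum labels are illustrative stand-ins for whatever internal representation the scene determination produces.

```c
/* Illustrative shooting-condition labels. */
typedef enum {
    DIRECT_LIGHT, BACKLIGHT, STROBE, UNDER,
    LOW_ACCURACY_1,   /* between backlight and direct light */
    LOW_ACCURACY_2    /* between strobe and under */
} shooting_condition;

/* Direct light / strobe -> method A (offset), backlight / under -> method B
   (gamma), low-accuracy regions -> method C (gamma + offset). */
char select_method(shooting_condition c)
{
    switch (c) {
    case DIRECT_LIGHT:
    case STROBE:    return 'A';
    case BACKLIGHT:
    case UNDER:     return 'B';
    default:        return 'C';
    }
}
```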
- the tone conversion curve shown in FIG. 21 (b) is convex upward, but may be convex downward.
- the tone conversion curve shown in FIG. 21 (c) is convex downward, but may be convex upward.
- next, the parameters necessary for gradation adjustment are calculated based on the indices calculated by the index calculation unit 713 (step S32).
- the calculation method of the gradation adjustment parameters calculated in step S32 will now be described. In the following, it is assumed that 8-bit captured image data has been converted to 16-bit data in advance, and that the unit of the captured image data values is 16 bits.
- reproduction target correction value = lightness reproduction target value (30360) − P4
- a CDF (cumulative density function) is calculated for each RGB plane, and the maximum and minimum values are obtained from the CDF. The maximum and minimum values are obtained for each of R, G, and B, and are denoted Rmax, Rmin, Gmax, Gmin, Bmax, and Bmin, respectively. For arbitrary pixel values Rx in the R plane, Gx in the G plane, and Bx in the B plane, the corresponding normalized data are expressed as equations (14) to (16), respectively.
- N = (B + G + R) / 3 (17)
- Figure 22(a) shows the frequency distribution (histogram) of the luminance of RGB pixels before normalization; the horizontal axis represents luminance and the vertical axis represents pixel frequency. This histogram is created for each of R, G, and B.
- normalization is applied to the captured image data for each plane according to equations (14) to (16).
- Fig. 22(b) shows a histogram of the luminance calculated by equation (17). Since the captured image data is normalized to the 16-bit range, each pixel takes a value between the minimum 0 and the maximum 65535.
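Equations (14) to (16) are not reproduced in this text, but the description (per-plane min/max from the CDF, values stretched to the 16-bit range) suggests a linear stretch of the following form; treat the linear formula as an assumption. Equation (17) is shown as stated.

```c
/* Assumed form of equations (14)-(16): stretch a plane's pixel value x
   to 0..65535 using the CDF-derived minimum and maximum of that plane. */
int normalize_plane(int x, int min, int max)
{
    if (max == min) return 0;   /* degenerate plane: avoid division by zero */
    return (int)((double)(x - min) / (max - min) * 65535.0);
}

/* Equation (17): combined luminance N of the normalized planes. */
int combined_luminance(int r, int g, int b)
{
    return (b + g + r) / 3;
}
```

A pixel at the plane minimum maps to 0 and one at the maximum maps to 65535, matching the stated 16-bit range.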
- when the luminance histogram shown in Fig. 22(b) is divided into blocks of a predetermined range, a frequency distribution as shown in Fig. 22(c) is obtained; the horizontal axis is the block number (luminance) and the vertical axis is the frequency.
- parameter P2 is the average luminance value calculated based on each block number and its frequency in the luminance histogram (Fig. 23(d)) obtained by deleting the high luminance region and the low luminance region from the normalized luminance histogram and further limiting the cumulative number of pixels.
- parameter P1 is the average luminance value of the entire captured image data, and parameter P3 is the average luminance value of the flesh-color area (H1) in the captured image data.
- parameter P7 (key correction value), parameter P7' (key correction value 2), and parameter P8 (luminance correction value 2) are defined as shown in equations (18), (19), and (20), respectively.
- P7 (key correction value) = [P3 − ((Index 5 / 6) × 18000 + 22000)] / 24.78 (18)
- parameter P10 (offset value 3) is the gradation adjustment parameter used when the shooting condition falls in the low accuracy region (1) or (2) on the discrimination map. The calculation method of parameter P10 is described below.
- first, a reference index is determined. For example, in the low accuracy region (1), index 5 is taken as the reference index, and in the low accuracy region (2), index 6 is taken as the reference index. The value of the reference index is then normalized to the range 0 to 1 to obtain the normalized index, defined as in equation (21).
- normalized index = (reference index − minimum index value) / (maximum index value − minimum index value) (21)
- here, the maximum index value and minimum index value are the maximum and minimum values that the reference index takes within the corresponding low accuracy region. The correction amounts at the boundaries between the corresponding low accuracy region and the two regions adjacent to it are denoted α and β, respectively; α and β are fixed values calculated in advance using the reproduction target values defined at the boundaries of each region on the discrimination map.
- parameter P10 is expressed as in equation (22) using the normalized index of equation (21) and the correction amounts α and β. In equation (22) the correlation between the normalized index and the correction amount is linear, but it may instead be a curved relationship in which the correction amount shifts more gradually.
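Since equation (22) itself is not reproduced in this text, the following C sketch assumes the simplest form consistent with the stated linear relationship: P10 blends linearly from α at one boundary of the low accuracy region to β at the other.

```c
/* Equation (21): normalize the reference index to 0..1 within the
   low accuracy region's index range. */
double normalized_index(double ref, double min, double max)
{
    return (ref - min) / (max - min);
}

/* Assumed form of equation (22): linear blend between the boundary
   correction amounts alpha and beta. */
double p10(double ref, double min, double max, double alpha, double beta)
{
    double t = normalized_index(ref, min, max);
    return alpha + t * (beta - alpha);   /* t = 0 -> alpha, t = 1 -> beta */
}
```

Replacing the linear blend with a smoothstep-style curve would give the "more gradual" shift mentioned above without changing the boundary values.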
- next, the gradation adjustment amount for the captured image data is determined based on the calculated gradation adjustment parameters (step S33), and the gradation adjustment condition determination process ends. Specifically, in step S33, the gradation conversion curve corresponding to the gradation adjustment parameters calculated in step S32 is selected (determined) from the plurality of gradation conversion curves set in advance for the gradation adjustment method determined in step S31. Note that the gradation conversion curve (gradation adjustment amount) may instead be calculated based on the gradation adjustment parameters calculated in step S32.
- the photographed image data is gradation converted according to the determined gradation conversion curve.
- offset correction (a parallel shift of the 8-bit values) that matches parameter P1 to P5 is performed by the following equation (23):
- RGB value of output image = RGB value of input image + P6 (23)
- a gradation conversion curve corresponding to equation (23) is selected from the plurality of gradation conversion curves shown in Fig. 21(a); alternatively, the gradation conversion curve may be calculated (determined) based on equation (23).
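The offset corrections of equations (23) to (25) all have the same per-channel form. A minimal C sketch, with clipping to the valid 8-bit range added as an implementation assumption (the patent text does not spell out the clipping step):

```c
/* Offset (translation) correction, equation (23)/(24)/(25):
   output = input + offset, clipped to the 8-bit range. */
int apply_offset(int value, int offset)
{
    int out = value + offset;
    if (out < 0)   out = 0;
    if (out > 255) out = 255;
    return out;
}
```

Applying this to each of the R, G, and B channels with P6, P9, or P10 as the offset realizes the respective equation.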
- a gradation conversion curve corresponding to the parameter P7 (key correction value) shown in Expression (18) is selected from the plurality of gradation conversion curves shown in FIG. 21 (b).
- a specific example of the gradation conversion curve in Fig. 21 (b) is shown in Fig. 24.
- the correspondence between the value of parameter P7 and the selected gradation conversion curve is shown below.
- when the photographing condition is backlight, offset correction is performed by the following equation (24):
- RGB value of output image = RGB value of input image + P9 (24)
- a gradation conversion curve corresponding to equation (24) is selected from the plurality of gradation conversion curves shown in Fig. 21(b); alternatively, the gradation conversion curve may be calculated (determined) based on equation (24).
- in the low accuracy regions, offset correction is performed by the following equation (25):
- RGB value of output image = RGB value of input image + P10 (25)
- a gradation conversion curve corresponding to equation (25) is selected from the plurality of gradation conversion curves shown in Fig. 21(c); alternatively, the gradation conversion curve may be calculated (determined) based on equation (25).
- as described above, an index that quantitatively indicates the shooting condition of the captured image data is calculated, the shooting condition is determined according to the calculated index and the accuracy of the shooting condition, the gradation adjustment method for the captured image data is selected according to the determination result, and the gradation adjustment amount (gradation conversion curve) of the captured image data is determined. Accordingly, the brightness of the subject can be corrected appropriately. In addition, since index 3 is derived from the compositional elements of the captured image data, it contributes to the reliability of the shooting condition determination.
- FIG. 25 shows a configuration of a digital camera 200 to which the imaging apparatus of the present invention is applied.
- as shown in Fig. 25, the digital camera 200 includes a CPU 201, an optical system 202, an imaging sensor unit 203, an AF calculation unit 204, a WB calculation unit 205, an AE calculation unit 206, a lens control unit 207, an image processing unit 208, a display unit 209, a recording data creation unit 210, a recording medium 211, a scene mode setting key 212, a color space setting key 213, a release button 214, and other operation keys 215.
- the CPU 201 comprehensively controls the operation of the digital camera 200.
- the optical system 202 is a zoom lens, and forms a subject image on a charge-coupled device (CCD) image sensor in the imaging sensor unit 203.
- the imaging sensor unit 203 photoelectrically converts the optical image with the CCD image sensor, converts it into a digital signal (A/D conversion), and outputs it.
- the image data output from the imaging sensor unit 203 is input to the AF calculation unit 204, the WB calculation unit 205, the AE calculation unit 206, and the image processing unit 208.
- the AF calculation unit 204 calculates and outputs the distances of the AF areas provided at nine places in the screen. The distance is determined from the contrast of the image, and the CPU 201 selects the closest of these values and sets it as the subject distance.
- the WB calculation unit 205 calculates and outputs a white balance evaluation value of the image.
- the white balance evaluation value is the gain value required to match the RGB output values of a neutral subject under the light source at the time of shooting, and is calculated as the ratios R/G and B/G with reference to the G channel. The calculated evaluation value is input to the image processing unit 208, where the white balance of the image is adjusted.
- the AE calculation unit 206 calculates and outputs an appropriate exposure value for the image data, and the CPU 201 calculates an aperture value and a shutter speed value so that the calculated appropriate exposure value matches the current exposure value.
- the aperture value is output to the lens control unit 207, and the corresponding aperture diameter is set.
- the shutter speed value is output to the image sensor unit 203, and the corresponding CCD integration time is set.
- the image processing unit 208 performs processing such as white balance processing, CCD filter array interpolation, color conversion, primary gradation conversion, and sharpness correction on the captured image data; then, as in the embodiment described above, it calculates the indices (index 1 to index 6) for specifying the shooting conditions, determines the shooting conditions based on the calculated indices, and performs the gradation conversion processing determined from the determination result, thereby converting the data into a preferable image. It then performs JPEG compression and the like, and the JPEG-compressed image data is output to the display unit 209 and the recording data creation unit 210.
- the display unit 209 displays the captured image data on the liquid crystal display and various information according to instructions from the CPU 201.
- the recording data creation unit 210 formats the JPEG-compressed image data and various shooting information input from the CPU 201 into an Exif (Exchangeable Image File Format) file, and records the data on the recording medium 211. In the Exif file there is a section called the maker note, a space in which each manufacturer can write free-form information, and it is also possible to record the judgment result of the shooting conditions and the values of index 4, index 5, and index 6 there.
- the shooting scene mode can be switched by user setting; three modes can be selected: normal mode, portrait mode, and landscape mode. When the user operates the scene mode setting key 212, the camera is switched to portrait mode when the subject is a person and to landscape mode when the subject is a landscape, so that primary gradation conversion suitable for the subject is performed.
- the digital camera 200 records the selected shooting scene mode information by adding it to the maker note portion of the image data file. The digital camera 200 also records the position information of the AF area selected as the subject in the image file in the same manner.
- the user can set the output color space using the color space setting key 213.
- sRGB (IEC 61966-2-1) or RAW can be selected. When sRGB is selected, the image processing of this embodiment is executed. When RAW is selected, the image processing of this embodiment is not performed, and the image is output in the color space unique to the CCD.
- as described above, also in the digital camera 200, an index that quantitatively indicates the shooting condition of the captured image data is calculated, the shooting conditions are determined based on the calculated index, the gradation adjustment method for the captured image data is determined according to the determination result, and the gradation adjustment amount (gradation conversion curve) of the captured image data is determined, so the brightness of the subject can be corrected appropriately. Since appropriate gradation conversion processing according to the shooting conditions is performed inside the digital camera 200, a preferable image can be output even when the digital camera 200 and a printer are directly connected without using a personal computer.
- Alternatively, a face image may be detected from the captured image data, and the shooting conditions and the gradation adjustment conditions may be determined based on the detected face image.
- Exif information may also be used to determine the shooting conditions; using Exif information can further improve the accuracy of that determination.
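One way Exif information could refine the index-based judgment is sketched below. The specific rules are assumptions for illustration, not the patent's algorithm; however, the Exif tag semantics used (`Flash` bit 0 set when the flash fired, `SceneCaptureType` value 1 meaning landscape) follow the Exif specification.

```python
def refine_condition_with_exif(condition: str, exif: dict) -> str:
    """Refine an index-based judgment using Exif tags.

    The refinement rules here are illustrative assumptions.
    """
    # A fired flash suggests a strobe/under-exposure scene even when
    # the indices alone judged the scene as normal.
    if exif.get("Flash", 0) & 0x01 and condition == "normal":
        return "under"
    # A landscape SceneCaptureType argues against a borderline
    # backlight judgment for a person subject.
    if exif.get("SceneCaptureType") == 1 and condition == "backlight":
        return "normal"
    return condition
```

Combining the calculated indices with camera-recorded metadata in this way is one plausible reading of how Exif data "improves the accuracy" of the determination.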
- By calculating an index that quantitatively indicates the shooting conditions of the captured image data, determining the shooting conditions based on the calculated index, and determining the gradation adjustment method for the captured image data according to the determination result, it becomes possible to correct the brightness of the subject appropriately.
- In addition, the reliability of the determination results can be increased, and appropriate gradation conversion according to the shooting conditions becomes possible.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-275212 | 2004-09-22 | ||
JP2004275212A JP2006092133A (ja) | 2004-09-22 | 2004-09-22 | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006033235A1 true WO2006033235A1 (ja) | 2006-03-30 |
Family
ID=36089998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/016383 WO2006033235A1 (ja) | 2004-09-22 | 2005-09-07 | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2006092133A (ja) |
WO (1) | WO2006033235A1 (ja) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006325015A (ja) * | 2005-05-19 | 2006-11-30 | Konica Minolta Photo Imaging Inc | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
KR101323731B1 (ko) | 2006-01-04 | 2013-10-31 | 삼성전자주식회사 | 노출 영역 표시 방법 |
JP5955133B2 (ja) * | 2012-06-29 | 2016-07-20 | セコム株式会社 | 顔画像認証装置 |
KR102052723B1 (ko) * | 2017-11-30 | 2019-12-13 | 주식회사 룰루랩 | 포터블 피부 상태 측정 장치, 및 피부 상태 진단 및 관리 시스템 |
KR102052722B1 (ko) * | 2017-11-30 | 2019-12-11 | 주식회사 룰루랩 | 편광 필름을 포함하는 포터블 피부 상태 측정 장치, 및 피부 상태 진단 및 관리 시스템 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000148980A (ja) * | 1998-11-12 | 2000-05-30 | Fuji Photo Film Co Ltd | 画像処理方法、画像処理装置及び記録媒体 |
JP2002199221A (ja) * | 2000-12-27 | 2002-07-12 | Fuji Photo Film Co Ltd | 濃度補正曲線生成装置および方法 |
JP2002247393A (ja) * | 2001-02-14 | 2002-08-30 | Konica Corp | 画像処理方法 |
- 2004
  - 2004-09-22 JP JP2004275212A patent/JP2006092133A/ja active Pending
- 2005
  - 2005-09-07 WO PCT/JP2005/016383 patent/WO2006033235A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2006092133A (ja) | 2006-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006123492A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
WO2006120839A1 (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
US20050141002A1 (en) | Image-processing method, image-processing apparatus and image-recording apparatus | |
US20040095478A1 (en) | Image-capturing apparatus, image-processing apparatus, image-recording apparatus, image-processing method, program of the same and recording medium of the program | |
JPWO2005079056A1 (ja) | 画像処理装置、撮影装置、画像処理システム、画像処理方法及びプログラム | |
JP2007184888A (ja) | 撮像装置、画像処理装置、画像処理方法、及び画像処理プログラム | |
JP2005192162A (ja) | 画像処理方法、画像処理装置及び画像記録装置 | |
US20050259282A1 (en) | Image processing method, image processing apparatus, image recording apparatus, and image processing program | |
WO2006033235A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
JP2007311895A (ja) | 撮像装置、画像処理装置、画像処理方法及び画像処理プログラム | |
JP2007221678A (ja) | 撮像装置、画像処理装置、画像処理方法及び画像処理プログラム | |
US6801296B2 (en) | Image processing method, image processing apparatus and image recording apparatus | |
JP2007293686A (ja) | 撮像装置、画像処理装置、画像処理方法及び画像処理プログラム | |
WO2006033236A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
JP2004096508A (ja) | 画像処理方法、画像処理装置、画像記録装置、プログラム及び記録媒体 | |
WO2006033234A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
WO2006132067A1 (ja) | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム | |
JP2006318255A (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
JP4449619B2 (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
WO2006077702A1 (ja) | 撮像装置、画像処理装置及び画像処理方法 | |
JP2005332054A (ja) | 画像処理方法、画像処理装置、画像記録装置及び画像処理プログラム | |
JP2007312125A (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
JP2006094000A (ja) | 画像処理方法、画像処理装置及び画像処理プログラム | |
WO2006077703A1 (ja) | 撮像装置、画像処理装置及び画像記録装置 | |
JP2006092168A (ja) | 画像処理方法、画像処理装置及び画像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 05778255 Country of ref document: EP Kind code of ref document: A1 |