WO2006132067A1 - Image processing method, image processing apparatus, imaging apparatus, and image processing program - Google Patents
- Publication number
- WO2006132067A1 (PCT/JP2006/310003)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- condition
- image data
- brightness
- hue
- shooting
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/6083—Colour correction or control controlled by factors external to the apparatus
- H04N1/6086—Colour correction or control controlled by factors external to the apparatus by scene illuminant, i.e. conditions at the time of picture capture, e.g. flash, optical filter used, evening, cloud, daylight, artificial lighting, white point measurement, colour temperature
Definitions
- Image processing method, image processing apparatus, imaging apparatus, and image processing program
- The present invention relates to an image processing method, an image processing apparatus, an imaging apparatus, and an image processing program.
- Negative film has a wide range of recordable brightness (dynamic range). Even for film photographed with an inexpensive camera that has no exposure control, density correction performed on the minilab (small-scale photofinishing lab, the equipment that creates the photographic print) side makes it possible to create a photographic print that is not inferior. Improving the efficiency of density correction in minilabs is therefore indispensable to offering inexpensive cameras and high-value-added prints, and various improvements, such as automation through digitization, have been made.
- Patent Document 1 discloses a method for calculating an additional correction value in place of the discriminant and regression analysis method.
- The method described in Patent Document 1 deletes the high-luminance and low-luminance regions from the luminance histogram, which records the cumulative number of pixels at each luminance (the frequency), and calculates an average luminance from the remaining frequencies.
- The difference between this average value and a reference luminance is then obtained as the correction value.
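The histogram-trimming correction described above can be sketched as follows. The quantile cut-off fractions and the reference luminance of 118 are illustrative assumptions, not values taken from Patent Document 1.

```python
import numpy as np

def correction_value(luminance, ref_luminance=118.0, low_cut=0.05, high_cut=0.95):
    """Trim the low- and high-luminance tails of the histogram, average the
    remaining pixels, and return the offset from a reference luminance.
    Cut-off fractions and the reference value are illustrative only."""
    lum = np.asarray(luminance, dtype=float).ravel()
    lo, hi = np.quantile(lum, [low_cut, high_cut])
    trimmed = lum[(lum > lo) & (lum < hi)]  # delete extreme-luminance regions
    return ref_luminance - trimmed.mean()   # positive value -> brighten
```

A positive result means the image should be brightened toward the reference; a negative result means it should be darkened.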
- Patent Document 2 describes a method of determining the light source state at the time of photographing in order to compensate for the extraction accuracy of a face region.
- A face candidate area is extracted, the average brightness of the extracted face candidate area relative to the entire image is calculated, the light source state (backlight or close-up flash) is determined from it, and the tolerance of the judgment criteria for the face area is adjusted accordingly.
- As methods for extracting a face candidate region, Patent Document 2 cites the methods using a two-dimensional histogram of hue and saturation described in JP-A-6-67320, JP-A-8-122944, and JP-A-8-184925, and the pattern matching and pattern search methods described in JP-A-9-138471.
- As methods for removing a background region other than the face, Patent Document 2 cites the methods described in JP-A-8-122944 and JP-A-8-184925, which use the ratio of contact with the image edge, density contrast, and density change patterns and periodicity. It also describes a method that uses a one-dimensional histogram of density to determine the shooting conditions. This method is based on the empirical rule that in the case of backlight the face area is dark and the background area is bright, whereas in the case of close-up flash photography the face area is bright and the background area is dark. As described above, advances in shooting condition discrimination technology lead to improved accuracy of automatic density correction, and the correction effect for images captured by digital cameras has been steadily improving. Patent Document 1: JP 2002-247393 A
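The empirical rule just cited can be stated as a toy discriminator. The face and background brightness inputs and the margin threshold below are invented placeholders for illustration, not values from Patent Document 2.

```python
def classify_light_source(face_mean, background_mean, margin=30.0):
    """Backlight: face darker than background; close-up flash: face brighter.
    'margin' is an arbitrary dead band for the ambiguous front-light case."""
    if background_mean - face_mean > margin:
        return "backlight"
    if face_mean - background_mean > margin:
        return "close-up flash"
    return "front light"
```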
- Patent Document 2: JP 2000-148980 A
- The applicant of the present invention investigated in detail the difference in image quality correction level for each shooting condition and the subjective effect of its variation.
- The subjective tolerance for variations in density correction is very large for underexposed scenes and backlit scenes, whereas for scenes where the exposure is appropriate or excessive, such as when the shooting condition is front light or close-up flash photography, viewers responded sensitively to even slight darkening (density increase) and color differences.
- This is especially true for photographs taken by professional photographers. For images taken casually with various cameras, on the other hand, limiting the image quality correction processing to brightening (density reduction) alone gave satisfactory, indeed surprisingly good, results.
- The problem addressed by the present invention is to realize image quality correction processing that achieves continuity and stability for scenes where the shooting condition is determined to be front light or close-up flash photography and the exposure is appropriate to excessive.
- The image processing method of the present invention includes a shooting condition analysis step of analyzing a shooting condition of captured image data; a processing condition calculation step of calculating an image quality correction processing condition for the captured image data based on the analysis value of the shooting condition obtained in the shooting condition analysis step; a transition condition calculation step of, with a predetermined image quality correction processing condition as a reference condition, calculating a transition condition for transitioning from the reference condition to an uncorrected condition in which no image quality correction processing is performed, based on the image quality correction processing condition calculated in the processing condition calculation step; and an image quality correction processing step of performing image quality correction processing on captured image data of a predetermined shooting condition in accordance with the calculated transition condition.
- Alternatively, the image processing method of the present invention includes a shooting condition analysis step of analyzing a shooting condition of the captured image data; a transition condition calculation step of, with a predetermined image quality correction processing condition for the captured image data as a reference condition, calculating a transition condition for transitioning from the reference condition to an uncorrected condition without performing image quality correction processing, based on the analysis value of the shooting condition obtained in the shooting condition analysis step; and an image quality correction processing step of performing image quality correction processing on the captured image data of a predetermined shooting condition in accordance with the transition condition calculated in the transition condition calculation step.
- The image processing apparatus of the present invention includes a shooting condition analysis unit that analyzes a shooting condition of captured image data; a processing condition calculation unit that calculates an image quality correction processing condition for the captured image data based on the analysis value of the shooting condition obtained by the shooting condition analysis unit; a transition condition calculation unit that, with a predetermined image quality correction processing condition as a reference condition, calculates a transition condition for transitioning from the reference condition to an uncorrected condition in which no image quality correction processing is performed, based on the image quality correction processing condition calculated by the processing condition calculation unit; and an image quality correction processing unit that performs image quality correction processing on captured image data of a predetermined shooting condition in accordance with the transition condition calculated by the transition condition calculation unit.
- Alternatively, the image processing apparatus of the present invention includes a shooting condition analysis unit that analyzes a shooting condition of the captured image data; a transition condition calculation unit that, with a predetermined image quality correction processing condition for the captured image data as a reference condition, calculates a transition condition for transitioning from the reference condition to an uncorrected condition without performing image quality correction processing, based on the analysis value of the shooting condition obtained by the shooting condition analysis unit; and an image quality correction processing unit that performs image quality correction processing on the captured image data of a predetermined shooting condition in accordance with the transition condition calculated by the transition condition calculation unit.
- The imaging apparatus of the present invention includes an imaging unit that obtains captured image data by photographing a subject; a shooting condition analysis unit that analyzes the shooting condition of the captured image data; a processing condition calculation unit that calculates an image quality correction processing condition for the captured image data based on the analysis value of the shooting condition obtained by the shooting condition analysis unit; a transition condition calculation unit that, with a predetermined image quality correction processing condition as a reference condition, calculates a transition condition for transitioning from the reference condition to an uncorrected condition in which no image quality correction processing is performed; and an image quality correction processing unit that performs image quality correction processing on the captured image data of a predetermined shooting condition in accordance with the transition condition calculated by the transition condition calculation unit.
- Alternatively, the imaging apparatus of the present invention includes an imaging unit that obtains captured image data by photographing a subject; a shooting condition analysis unit that analyzes a shooting condition of the captured image data; a transition condition calculation unit that, with a predetermined image quality correction processing condition for the captured image data as a reference condition, calculates a transition condition for transitioning from the reference condition to an uncorrected condition without image quality correction processing, based on the analysis value of the shooting condition obtained by the shooting condition analysis unit; and an image quality correction processing unit that performs image quality correction processing on captured image data of a predetermined shooting condition in accordance with the transition condition calculated by the transition condition calculation unit.
- The image processing program of the present invention causes a computer that executes image processing to realize: a shooting condition analysis function for analyzing shooting conditions of captured image data; a processing condition calculation function for calculating image quality correction processing conditions for the captured image data based on the analysis values of the shooting conditions obtained by the shooting condition analysis function; a transition condition calculation function for, with a predetermined image quality correction processing condition as a reference condition, calculating a transition condition for transitioning from the reference condition to an uncorrected condition that is not subjected to image quality correction processing, based on the image quality correction processing condition calculated by the processing condition calculation function; and an image quality correction processing function for performing image quality correction processing on captured image data of a predetermined shooting condition in accordance with the transition condition calculated by the transition condition calculation function.
- Alternatively, the image processing program of the present invention causes a computer that executes image processing to realize: a shooting condition analysis function for analyzing shooting conditions of captured image data; a transition condition calculation function for, with a predetermined image quality correction processing condition for the captured image data as a reference condition, calculating a transition condition for transitioning from the reference condition to an uncorrected condition without image quality correction processing, based on the analysis value of the shooting condition obtained by the shooting condition analysis function; and an image quality correction processing function for performing image quality correction processing on captured image data of a predetermined shooting condition in accordance with the calculated transition condition.
- According to the present invention, while the correction amount for an image whose shooting condition is determined to be underexposed or backlit is maintained, the correction amount for captured image data whose shooting condition is normal (appropriate exposure) or overexposed can be transitioned toward the uncorrected condition.
- As a result, the continuity and stability of the image quality correction can be improved.
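The transition described above can be sketched as a blend between the reference correction and no correction at all. The linear interpolation and the thresholds used to derive the transition coefficient here are assumptions for illustration; the patent defines the transition through its own conversion curves.

```python
def transition_coefficient(analysis_value, start=0.5, end=1.0):
    """Map a shooting-condition analysis value to a coefficient in [0, 1]:
    1.0 at or below 'start' (full reference correction), falling linearly
    to 0.0 at 'end' (uncorrected). 'start'/'end' are illustrative thresholds."""
    if analysis_value <= start:
        return 1.0
    if analysis_value >= end:
        return 0.0
    return (end - analysis_value) / (end - start)

def apply_correction(pixel, reference_offset, coeff):
    """coeff = 1 applies the full reference correction, coeff = 0 leaves the
    pixel unchanged, and intermediate values move smoothly between the two."""
    return pixel + coeff * reference_offset
```

Because the coefficient changes continuously with the analysis value, neighboring scenes whose shooting conditions are judged slightly differently receive similar corrections, which is the continuity the claims aim for.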
- FIG. 1 is a perspective view showing an external configuration of an image processing apparatus according to Embodiments 1 and 2 of the present invention.
- FIG. 2 is a block diagram showing an internal configuration of the image processing apparatus according to the first and second embodiments.
- FIG. 3 is a block diagram showing a main part configuration of the image processing unit in FIG.
- FIG. 4 is a flowchart showing the flow of processing executed in the image adjustment processing unit of the first embodiment.
- FIG. 5 is a flowchart showing an imaging condition analysis process executed in the scene determination unit.
- FIG. 6 is a flowchart showing a first occupancy ratio calculation process for calculating a first occupancy ratio for each brightness and hue area.
- FIG. 7 is a diagram showing an example of a program for converting RGB into the HSV color system.
- FIG. 8 is a diagram showing the brightness (V) —hue (H) plane and the region rl and region r2 on the V—H plane.
- FIG. 9 is a diagram showing the lightness (V) —hue (H) plane, and regions r3 and r4 on the V—H plane.
- FIG. 10 is a diagram showing a curve representing a first coefficient by which the first occupancy is multiplied to calculate index 1.
- FIG. 11 is a diagram showing a curve representing a second coefficient by which the first occupancy is multiplied to calculate index 2.
- FIG. 12 is a flowchart showing a second occupancy ratio calculation process for calculating a second occupancy ratio based on the composition of captured image data.
- FIG. 13 is a diagram showing regions nl to n4 determined according to the distance from the outer edge of the screen of captured image data.
- FIG. 14 is a diagram showing, for each region, a curve representing the third coefficient by which the second occupancy is multiplied to calculate index 3.
- FIG. 17 is a diagram showing a discrimination map for discriminating shooting conditions.
- FIG. 20 is a diagram showing the relationship between the input key correction conversion value and the output key correction conversion value when the shooting condition is direct light.
- FIG. 21 is a diagram showing the relationship between the input key correction conversion value and the transition coefficient when the shooting condition is direct light.
- FIG. 22 is a flowchart showing image quality correction processing executed in the image quality correction unit.
- FIG. 23 is a diagram showing the relationship between an index for specifying shooting conditions and gradation adjustment methods A to C.
- FIG. 24 is a diagram showing a gradation conversion curve corresponding to each gradation adjustment method.
- FIG. 25 is a flowchart showing the flow of processing executed in the image adjustment processing unit of the second embodiment.
- FIG. 26 is a diagram showing the relationship between the input key correction conversion value and the output key correction conversion value when the shooting condition is in the low accuracy region (2).
- FIG. 28 is a diagram showing a transition start point and a transition end point in the low accuracy region (2).
- FIG. 29 is a flowchart showing image quality correction processing according to the first embodiment.
- FIG. 30 is a diagram showing the relationship (solid line) between the input key correction conversion value and the output key correction conversion value when the shooting condition is front light.
- FIG. 31 is a diagram showing a gradation conversion curve when the shooting condition is backlight or underexposure.
- FIG. 32 is a flowchart showing image quality correction processing according to the second embodiment.
- FIG. 34 is a diagram showing a gradation conversion curve defined for each transition coefficient in the second embodiment.
- FIG. 35 is a flowchart showing image quality correction processing according to the third embodiment.
- FIG. 36 is a diagram showing the relationship between transition coefficients and color matrix coefficient values.
- FIG. 37 is a flowchart showing image quality correction processing according to the fourth embodiment.
- FIG. 38 is a block diagram showing the configuration of a digital camera to which the imaging apparatus of the present invention is applied.
- FIG. 39 is a diagram showing a setting screen when manually setting transition conditions.
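FIG. 7 above is described as showing an example program for converting RGB into the HSV color system. As a stand-in for that figure (whose code is not reproduced here), a minimal version can be written with Python's standard colorsys module, taking 8-bit RGB and returning hue in degrees with S and V in [0, 1].

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB to HSV: H in degrees [0, 360), S and V in [0, 1].
    Uses the standard-library colorsys routine, not the figure's own code."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```

For example, pure red maps to a hue of 0 degrees with full saturation and value, and pure green maps to a hue of 120 degrees.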
- FIG. 1 is a perspective view showing an external configuration of an image processing apparatus 1 according to Embodiments 1 and 2 of the present invention.
- The image processing apparatus 1 is provided with a magazine loading unit 3 for loading a photosensitive material on one side surface of a housing 2. Inside the housing 2 are provided an exposure processing unit 4 for exposing the photosensitive material, and a print creating unit 5 for developing the exposed photosensitive material, drying it, and creating a print.
- A tray 6 for discharging the prints created by the print creating unit 5 is provided on the other side of the housing 2.
- A CRT (Cathode Ray Tube) 8 as a display device, a film scanner unit 9 that reads transparent originals, a reflective original input device 10, and an operation unit 11 are provided at the top of the housing 2.
- The CRT 8 constitutes display means for displaying, on its screen, the image represented by the image information to be printed.
- The housing 2 is provided with an image reading unit 14 that can read image information recorded on various digital recording media, and an image writing unit 15 that can write (output) image signals to various digital recording media.
- A control unit 7 that centrally controls these units is provided inside the housing 2.
- The image reading unit 14 includes a PC card adapter 14a and a floppy (registered trademark) disk adapter 14b, into which a PC card 13a and a floppy (registered trademark) disk 13b can be inserted.
- The PC card 13a has a memory in which a plurality of frame image data captured by a digital camera is recorded.
- A plurality of frame image data captured by a digital camera is likewise recorded on the floppy (registered trademark) disk 13b.
- Recording media that can record frame image data other than the PC card 13a and the floppy disk 13b include, for example, a MultiMediaCard (registered trademark), a Memory Stick (registered trademark), MD Data, and a CD-ROM.
- The image writing unit 15 includes a floppy (registered trademark) disk adapter 15a, an MO adapter 15b, and an optical disk adapter 15c.
- In this embodiment, the operation unit 11, the CRT 8, the film scanner unit 9, the reflective original input device 10, and the image reading unit 14 are provided integrally in the housing 2, but any one or more of them may be provided as separate bodies.
- Here, a print creation method in which a photosensitive material is exposed to light and developed to create a print is exemplified, but the method is not limited to this; an inkjet method, an electrophotographic method, a thermal method, or a dye-sublimation method may also be used.
- FIG. 2 shows a main part configuration of the image processing apparatus 1.
- The image processing apparatus 1 includes a control unit 7, an exposure processing unit 4, a print creating unit 5, a film scanner unit 9, a reflective original input device 10, an image reading unit 14, communication means (input) 32, an image writing unit 15, a data storage unit 71, a template storage unit 72, an operation unit 11, a CRT 8, and communication means (output) 33.
- The control unit 7 includes a microcomputer, and controls the operation of each part constituting the image processing apparatus 1 through the cooperation of various control programs stored in a storage unit (not shown) such as a ROM (Read Only Memory) and a CPU (Central Processing Unit) (not shown).
- The control unit 7 includes an image processing unit 70 according to the image processing apparatus of the present invention. Based on an input signal (command information) from the operation unit 11, the image processing unit 70 performs image processing on the image signals read by the film scanner unit 9 and the reflective original input device 10, the image signal read by the image reading unit 14, and the image signal input from an external device via the communication means (input) 32, forms image information for exposure, and outputs it to the exposure processing unit 4. Further, the image processing unit 70 performs conversion processing corresponding to the output form on the processed image signal and outputs the result. Output destinations of the image processing unit 70 include the CRT 8, the image writing unit 15, and the communication means (output) 33.
- The exposure processing unit 4 exposes the image onto the photosensitive material and outputs this photosensitive material to the print creating unit 5.
- The print creating unit 5 develops the exposed photosensitive material and dries it to create prints P1, P2, and P3.
- Print P1 is a service size, high-definition size, or panorama size print, print P2 is an A4 size print, and print P3 is a business card size print.
- The film scanner unit 9 reads a frame image recorded on a transparent original, such as a developed negative film N or a reversal film captured by an analog camera, and obtains a digital image signal of the frame image.
- The reflective original input device 10 reads an image on a print P (photographic print, document, or various printed materials) with a flatbed scanner and obtains a digital image signal.
- The image reading unit 14 reads frame image information recorded on the PC card 13a or the floppy (registered trademark) disk 13b and transfers it to the control unit 7.
- The image reading unit 14 includes, as image transfer means 30, the PC card adapter 14a, the floppy (registered trademark) disk adapter 14b, and the like.
- The image reading unit 14 reads the frame image information recorded on the PC card 13a inserted into the PC card adapter 14a, or on the floppy disk 13b inserted into the floppy disk adapter 14b, and transfers it to the control unit 7.
- A PC card reader or a PC card slot, for example, can be used as the PC card adapter 14a.
- The communication means (input) 32 receives an image signal representing a captured image and a print command signal from another computer in the facility where the image processing apparatus 1 is installed, or from a distant computer via the Internet or the like.
- The image writing unit 15 includes, as image conveying means 31, the floppy (registered trademark) disk adapter 15a, the MO adapter 15b, and the optical disk adapter 15c.
- The image writing unit 15 writes the image signal generated by the image processing method of the present invention to the floppy disk 16a inserted into the floppy disk adapter 15a, the MO 16b inserted into the MO adapter 15b, or the optical disk 16c inserted into the optical disk adapter 15c.
- The data storage means 71 stores and sequentially accumulates image information and the order information corresponding to it (information on how many prints are to be created from each frame, print size information, and the like).
- The template storage means 72 stores sample image data (background images, illustration images, and the like) corresponding to sample identification information D1, D2, and D3, and data of at least one template for setting a synthesis region.
- When a predetermined template is selected by the operator's operation from the plurality of templates stored in advance in the template storage means 72, the frame image information is synthesized with the selected template, and the sample image data selected based on the designated sample identification information D1, D2, and D3 is combined with the image data and/or character data based on the order, to create a print based on the designated sample.
- The synthesis using this template is performed by the well-known chroma-key method.
- The sample identification information D1, D2, and D3 for designating print samples is configured to be input from the operation unit 11.
- Since this sample identification information is recorded on the print samples or on the order sheet, it can be read by reading means such as OCR, or it can be input by an operator's keyboard operation.
- Sample image data is recorded in correspondence with sample identification information D1 for designating a print sample; when the sample identification information D1 is input, the sample image data is selected based on it, and the selected sample image data is combined with the image data and/or character data based on the order, to create a print based on the designated sample.
- Further, first sample identification information D2 designating a first sample and the image data of the first sample are stored, and second sample identification information D3 designating a second sample and the image data of the second sample are stored; the sample image data selected based on the designated first and second sample identification information D2 and D3 is combined with the image data and/or character data based on the order to create a print based on the designated samples. Therefore, a wider variety of images can be synthesized, and prints that meet a wider variety of user requirements can be created.
- The operation unit 11 includes information input means 12.
- The information input means 12 is composed of, for example, a touch panel, and outputs a pressed-position signal to the control unit 7 as an input signal.
- The operation unit 11 may also be configured with a keyboard, a mouse, and the like.
- The CRT 8 displays image information and the like according to the display control signal input from the control unit 7.
- the communication means (output) 33 transmits an image signal representing a photographed image after the image processing of the present invention, together with the order information attached to it, to another computer in the facility where the image processing apparatus 1 is installed, or to a distant computer via the Internet or the like.
- the image processing apparatus 1 includes image input means for capturing image information obtained by dividing and photometrically measuring images on various digital media and image originals, an image processing unit, image output means for displaying processed images, printing them out, and writing them to image recording media, and means for transmitting image data and the order information attached to it to a remote computer via a communication line.
- FIG. 3 shows the internal configuration of the image processing unit 70.
- the image processing unit 70 includes an image adjustment processing unit 701, a film scan data processing unit 702, a reflection original scan data processing unit 703, an image data format decoding processing unit 704, a template processing unit 705, a CRT specific processing unit 706, a printer specific processing unit A707, a printer specific processing unit B708, and an image data format creation processing unit 709.
- the film scan data processing unit 702 performs processing specific to the film scanner unit 9, such as calibration operations, negative/positive reversal (in the case of a negative original), dust and flaw removal, contrast adjustment, granular noise removal, and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701.
- the film size, the negative/positive type, information on the main subject optically or magnetically recorded on the film, information on the shooting conditions (for example, the information content described in APS), and the like are also output to the image adjustment processing unit 701.
- the reflection document scan data processing unit 703 performs processing on the image data input from the reflection document input device 10, such as calibration operations specific to the reflection document input device 10, negative/positive reversal (in the case of a negative original), dust and flaw removal, contrast adjustment, noise removal, and sharpening enhancement, and outputs the processed image data to the image adjustment processing unit 701.
- the image data format decoding processing unit 704 performs processing such as decompression of compressed code and conversion of the color data expression method on the image data input from the image transfer means 30 and/or the communication means (input) 32, as necessary according to the data format of that image data, converts the data into a data format suitable for computation in the image processing unit 70, and outputs it to the image adjustment processing unit 701. In addition, when the size of the output image is designated from any of the operation unit 11, the communication means (input) 32, and the image transfer means 30, the image data format decoding processing unit 704 detects the designated information and outputs it to the image adjustment processing unit 701. Information about the size of the output image designated via the image transfer means 30 is embedded in the header information and tag information of the image data acquired by the image transfer means 30.
- based on a command from the operation unit 11 or the control unit 7, the image adjustment processing unit 701 subjects the image data received from the film scanner unit 9, the reflective document input device 10, the image transfer means 30, the communication means (input) 32, and the template processing unit 705 to image processing (see Fig. 4) described later, generates digital image data for image formation optimized for viewing on the output medium, and outputs the data to the CRT specific processing unit 706, the printer specific processing unit A707, the printer specific processing unit B708, the image data format creation processing unit 709, and the data storage means 71.
- as for the optimization process, if display on a CRT display monitor compliant with the sRGB standard is assumed, for example, the data is processed so as to obtain the optimum color reproduction within the color gamut of the sRGB standard. If output to silver halide photographic paper is assumed, processing is performed so as to obtain the optimum color reproduction within the color gamut of silver halide photographic paper. In addition to color gamut compression, gradation compression from 16 bits to 8 bits, reduction of the number of output pixels, and processing to handle the output characteristics (LUT) of the output devices are also included. Furthermore, it goes without saying that tone compression processing such as noise suppression, sharpening, gray balance adjustment, saturation adjustment, or dodging and burning processing is performed.
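- the gradation compression from 16 bits to 8 bits mentioned above can be sketched as follows (a minimal Python illustration, not the apparatus's actual implementation; a simple truncation of the low-order byte is assumed):

```python
def compress_16bit_to_8bit(pixels16):
    """Gradation compression from 16 bits to 8 bits: keep the high-order
    byte of each value (0-65535 -> 0-255). A simple truncation sketch."""
    return [p >> 8 for p in pixels16]
```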
- the image adjustment processing unit 701 includes a scene determination unit 710 that determines the shooting conditions of the captured image data, and an image quality correction unit 711 that performs image quality correction processing on the captured image data.
- the shooting conditions are classified into light source conditions and exposure conditions.
- the light source condition is derived from the light source at the time of photographing and the positional relationship between the main subject (mainly a person) and the photographer. In the broader sense, it also includes the type of light source (sunlight, strobe light, tungsten lighting, and fluorescent lamps).
- Backlit scenes occur when the sun is located behind the main subject.
- a strobe (close-up) scene occurs when the main subject is strongly irradiated with strobe light. Both scenes have the same contrast (light/dark ratio); merely the relationship between the brightness of the foreground and background of the main subject is reversed.
- the exposure condition is derived from the settings of the camera shutter speed, aperture value, etc., and the underexposure state is referred to as under, the proper exposure state is referred to as normal, and the overexposure state is referred to as over.
- so-called “blown-out highlights” and “crushed shadows” are also included.
- under or over exposure conditions can occur under any light source condition. In particular, in a DSC (digital still camera) with a narrow dynamic range, even when the automatic exposure adjustment function is used, the frequency of underexposure is high due to setting conditions aimed at suppressing blown-out highlights.
- in Embodiments 1 and 2, “brightness” means the generally used brightness unless otherwise noted. Although V (0 to 255) of the HSV color system is used as “brightness”, any other unit system that represents the brightness of another color system may be used.
- in that case, numerical values such as the various coefficients described in Embodiments 1 and 2 are recalculated.
- the captured image data in Embodiments 1 and 2 is assumed to be image data with a person as the main subject.
- the template processing unit 705 reads predetermined image data (template) from the template storage unit 72 based on a command from the image adjustment processing unit 701, and synthesizes the image data to be processed and the template. The template processing is performed, and the image data after the template processing is output to the image adjustment processing unit 701.
- the CRT specific processing unit 706 performs processing such as changing the number of pixels and color matching on the image data input from the image adjustment processing unit 701 as necessary, combines it with information such as control information that needs to be displayed, and outputs the resulting display image data to the CRT 8.
- the printer-specific processing unit A707 performs printer-specific calibration processing, color matching, and pixel number change processing as necessary, and outputs processed image data to the exposure processing unit 4.
- a printer specific processing unit B708 is provided for each printer apparatus to be connected.
- the printer-specific processing unit B708 performs printer-specific calibration processing, color matching, pixel number change, and the like, and outputs processed image data to the external printer 51.
- the image data format creation processing unit 709 converts the image data input from the image adjustment processing unit 701 into various general-purpose image formats such as JPEG (Joint Photographic Coding Experts Group), TIFF (Tagged Image File Format), and Exif (Exchangeable Image File Format) as necessary, and outputs the processed image data to the image transport unit 31 and the communication means (output) 33.
- the divisions into the printer specific processing unit A707, the printer specific processing unit B708, the image data format creation processing unit 709, and so on are provided to help understand the functions of the image processing unit 70; they need not be realized as physically independent devices and may, for example, be realized as forms of software processing by a single CPU.
- in step T1, photographing condition analysis processing for analyzing the photographing conditions of the photographed image data is performed.
- the imaging condition analysis process in step T1 will be described in detail later with reference to FIG.
- next, conditions for image quality correction processing of the shot image data are calculated (step T2). The condition of the image quality correction processing in step T2 is, for example, a parameter necessary for tone adjustment of the captured image data (for example, a key correction conversion value in density correction).
- next, predetermined image quality correction processing conditions for the captured image data are set as reference conditions (step T3). The reference condition is a condition of image quality correction processing that is optimal for photographed image data whose photographing condition is determined to be backlit or under (slightly dark).
- the contents of the reference conditions set in step T3 include predetermined density correction processing conditions, conditions for tone conversion processing consisting of predetermined brightness enhancement processing and high contrast processing (see Figs. 33 and 34), predetermined color matrix processing conditions, predetermined contrast correction processing conditions, and the like.
- the density correction processing condition or brightness enhancement processing condition as the reference condition is defined, for example, as applying −2D brightening (density-lowering) processing when the input key correction conversion value (the key correction conversion value input by the operator's operation) is 0 (no correction) (see Figure 20).
- “D” is the unit of the density correction buttons operated by an operator in a minilab (small-scale photofinishing laboratory).
- the high contrast processing as the reference condition is defined as performing, for example, a 25% high contrast processing (see S-shaped curve L5-B in FIG. 34). Further, the contrast correction process as the reference condition is expressed by a predetermined linear function.
- the color matrix processing as the reference condition is defined as a condition for slightly reducing the saturation, for example.
- the reference condition is thought to be due to the fact that the brightness distribution center of captured image data obtained from an imaging apparatus (camera), that is, the AE exposure control center of the imaging apparatus, is slightly lower than the appropriate brightness; it is therefore expected to differ depending on the type of imaging device (identifiable by the Exif information recorded in the header of the captured image data) and the type of photographer (amateur, professional). It is accordingly desirable to define the reference conditions according to the type of imaging device and the type of photographer. If a higher correction level is to be applied under shooting conditions such as under or backlight, it is preferable to move the reference point (reference condition) to the minus side.
- when the reference condition is set in step T3, a transition coefficient for shifting the reference condition set in step T3 toward the uncorrected side, where no image quality correction processing is performed (suppression), is calculated based on the image quality correction processing conditions calculated in step T2 (step T4), and the transition condition (image quality correction processing condition) is calculated based on the calculated transition coefficient (step T5).
- in step T6, image quality correction processing is performed on the captured image data, and the image data after the image quality correction processing is output to the designated output destination.
- in step T6, when the imaging condition is normal light based on the analysis result in step T1, the image quality correction processing according to the transition condition calculated in step T5 is performed. When the shooting condition is under or backlit, the transition condition is not calculated; therefore, the image quality correction processing without the transition processing is performed in step T6.
- the shooting condition analysis processing executed in the scene determination unit 710 in step T1 of FIG. 4 will be described with reference to the flowchart in FIG. 5.
- first, the captured image data is divided into predetermined image areas, and occupancy ratio calculation processing for calculating occupancy ratios (a first occupancy ratio and a second occupancy ratio) indicating the ratio each divided area occupies in the entire captured image data is performed (step S1). The occupancy ratio calculation processing in step S1 will be described in detail later with reference to FIGS.
- step S2 a bias amount calculation process for calculating a bias amount indicating a bias of the gradation distribution of the photographed image data is performed (step S2).
- the bias amount calculation processing in step S2 will be described in detail later with reference to FIG. 15.
- next, an index for specifying the light source condition is calculated based on the occupancy ratio calculated in step S1 and coefficients set in advance according to the shooting conditions (step S3). Further, an index for specifying the exposure condition is calculated based on the occupancy ratio calculated in step S1 and coefficients set in advance according to the shooting conditions (step S4).
- the index calculation method in steps S3 and S4 will be described in detail later.
- the shooting conditions (light source condition, exposure condition) of the captured image data are determined based on the indexes (indexes 4 to 6) calculated in steps S3 and S4 and a discrimination map divided in advance according to the accuracy of the shooting conditions (step S5), and the shooting condition analysis processing ends.
- the method for determining the shooting condition in step S5 will be described in detail later with reference to FIGS. 16, 17, and Table 7.
- first, the RGB values of the photographed image data are converted into the HSV color system (step S10).
- the HSV color system represents image data with three elements, hue, saturation, and lightness (value or brightness), and was devised based on the color system proposed by Munsell.
- Figure 7 shows an example of a conversion program (HSV conversion program), written in program code (C language), that obtains hue values, saturation values, and brightness values by conversion from RGB to the HSV color system. In the HSV conversion program shown in Fig. 7, the input image data values are defined as InR, InG, and InB; the calculated hue value is OutH, on a scale of 0 to 360; and the saturation value is OutS and the brightness value is OutV, each on a scale of 0 to 255.
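- the same conversion can be sketched in Python (the program in Fig. 7 is in C and is not reproduced here; this hedged re-expression only matches the stated scales, OutH in 0 to 360 and OutS, OutV in 0 to 255):

```python
import colorsys

def rgb_to_hsv_patent_scale(in_r, in_g, in_b):
    """Convert 8-bit RGB (InR, InG, InB) to HSV with OutH on a 0-360
    scale and OutS, OutV on a 0-255 scale, as described for Fig. 7."""
    h, s, v = colorsys.rgb_to_hsv(in_r / 255.0, in_g / 255.0, in_b / 255.0)
    return h * 360.0, s * 255.0, v * 255.0
```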
- next, the captured image data is divided into regions each consisting of a combination of predetermined brightness and hue, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided region (step S11).
- the area division of the captured image data will be described in detail.
- lightness (V) is divided into seven regions by lightness value: 0 to 25 (v1), 26 to 50 (v2), 51 to 84 (v3), 85 to 169 (v4), 170 to 199 (v5), 200 to 224 (v6), and 225 to 255 (v7).
- hue (H) is divided into four regions: a flesh-color hue region (H1 and H2) with hue values 0 to 39 and 330 to 359, a green hue region (H3) with hue values 40 to 160, a blue hue region (H4) with hue values 161 to 250, and a red hue region (H5). Note that the red hue region (H5) is not used in the following calculations because of the finding that it contributes little to the determination of the shooting conditions.
- the flesh color hue area is further divided into a flesh color area (HI) and other areas (H2).
- the hue region whose Hue′(H) satisfies the following formula (1) is defined as the flesh-color region (H1), and the region that does not satisfy formula (1) as (H2):
- Hue′(H) = Hue(H) + 60 (when 0 ≤ Hue(H) < 300),
- Hue′(H) = Hue(H) − 300 (when 300 ≤ Hue(H) < 360)
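- the hue shift above can be sketched as follows (a minimal illustration of the Hue′(H) definition only; the full flesh-color condition of formula (1) involves further terms not given in this excerpt):

```python
def hue_prime(hue):
    """The Hue'(H) shift: makes the flesh-colored hue range (0-39 and
    330-359) contiguous on the shifted axis."""
    if 0 <= hue < 300:
        return hue + 60
    if 300 <= hue < 360:
        return hue - 300
    raise ValueError("hue must be in [0, 360)")
```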
- luminance (Y) = InR × 0.30 + InG × 0.59 + InB × 0.11 (A)
- a first occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S12).
- then the occupancy ratio calculation processing ends. Assuming that Rij is the first occupancy ratio calculated in the divided region consisting of the combination of the lightness region vi and the hue region Hj, the first occupancy ratio in each divided region is expressed as shown in Table 1.
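- the first occupancy ratio Rij can be sketched as follows (a minimal illustration; the brightness boundaries follow the seven regions v1 to v7 given above, and the hue classification is assumed to be done separately):

```python
def brightness_region(v):
    """Map a brightness value (0-255) to one of the seven regions v1-v7
    with boundaries 0-25, 26-50, 51-84, 85-169, 170-199, 200-224, 225-255."""
    bounds = [25, 50, 84, 169, 199, 224, 255]
    for i, upper in enumerate(bounds, start=1):
        if v <= upper:
            return i
    raise ValueError("brightness out of range")

def first_occupancy(pixels):
    """pixels: iterable of (vi, Hj) region pairs, one per image pixel.
    Returns Rij, the fraction of all pixels falling in each divided region."""
    counts = {}
    for region in pixels:
        counts[region] = counts.get(region, 0) + 1
    total = sum(counts.values())
    return {region: c / total for region, c in counts.items()}
```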
- Table 2 shows, for each divided region, the first coefficient necessary for calculating the index 1 that quantitatively indicates the accuracy of strobe shooting, that is, the lightness state of the face area during strobe shooting.
- the coefficient of each divided region shown in Table 2 is a weighting coefficient by which the first occupancy ratio Rij of each divided region shown in Table 1 is multiplied, and is set in advance according to the shooting conditions.
- FIG. 8 shows the brightness (v) —hue (H) plane.
- a positive (+) coefficient is used for the first occupancy ratio calculated from the region (r1) distributed in the high-brightness flesh-color hue region in Fig. 8, and a negative (−) coefficient is used for the first occupancy ratio calculated from the blue hue region (r2), which belongs to the other hues.
- Figure 10 shows the first coefficient in the flesh-color region (H1) and the first coefficient in the other regions (green hue region (H3)) as curves (coefficient curves) that change continuously over the entire brightness range.
- the sign of the first coefficient in the flesh-color region (H1) is positive (+), and the sign of the first coefficient in the other regions (for example, the green hue region (H3)) is negative (−), indicating that the signs of the two differ.
- index 1 is defined as in equation (3), using the sums of the H1 to H4 regions shown in equations (2-1) to (2-4):
- Index 1 = (sum of H1 region) + (sum of H2 region) + (sum of H3 region) + (sum of H4 region) + 4.424 (3)
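- the general form of index 1, a weighted sum of occupancy ratios plus a constant (+4.424), can be sketched as follows (the Table 2 coefficients themselves are not reproduced in this excerpt, so the values used in any call are hypothetical):

```python
def index_from_occupancy(occupancy, coefficients, offset):
    """General form of index 1: each occupancy ratio Rij is multiplied by
    its region's (hypothetical) coefficient, the products are summed, and
    a constant offset (+4.424 for index 1) is added."""
    total = sum(coefficients.get(region, 0.0) * r
                for region, r in occupancy.items())
    return total + offset
```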
- Table 3 shows, for each divided region, the second coefficient necessary for calculating index 2, which quantitatively indicates the accuracy of backlighting, that is, the brightness state of the face region during backlit shooting.
- the coefficient of each divided area shown in Table 3 is a weighting coefficient by which the first occupancy ratio Rij of each divided area shown in Table 1 is multiplied, and is set in advance according to the shooting conditions.
- FIG. 9 shows the brightness (v) —hue (H) plane.
- a negative (−) coefficient is used for the occupancy ratio calculated from the region (r4) in Fig. 9, distributed in the intermediate lightness of the flesh-color hue region, and a positive (+) coefficient is used for the occupancy ratio calculated from the low-lightness (shadow) region (r3).
- Fig. 11 shows the second coefficient in the flesh-color region (H1) as a curve (coefficient curve) that changes continuously over the entire brightness range. According to Table 3 and Fig. 11, the sign of the second coefficient in the lightness value range 85 to 169 (v4) of the flesh-color hue region is negative (−), the sign of the second coefficient in the low-lightness (shadow) region with lightness values 26 to 84 (v2, v3) is positive (+), and it can be seen that the signs of the coefficients in the two regions differ.
- index 2 is defined as in equation (5), using the sums of the H1 to H4 regions shown in equations (4-1) to (4-4).
- since index 1 and index 2 are calculated based on the brightness of the captured image data and the distribution amounts of its hues, they are effective for determining the shooting conditions when the captured image data is a color image.
- the RGB values of the photographed image data are converted into the HSV color system (step S20).
- next, the captured image data is divided into regions each determined by a combination of the distance from the outer edge of the captured image screen and the brightness, and a two-dimensional histogram is created by calculating the cumulative number of pixels for each divided region (step S21).
- the area division of the captured image data will be described in detail.
- FIGS. 13A to 13D show four regions n1 to n4 divided according to the distance from the outer edge of the screen of the captured image data. The region n1 shown in FIG. 13(a) is the outer frame; the region n2 shown in FIG. 13(b) is the region inside the outer frame; the region n3 shown in FIG. 13(c) is the region inside the region n2; and the region n4 shown in FIG. 13(d) is the region at the center of the captured image screen.
- a second occupancy ratio indicating the ratio of the cumulative number of pixels calculated for each divided region to the total number of pixels (the entire captured image) is calculated (step S22).
- the occupation rate calculation process ends.
- Table 4 shows the second occupancy ratio in each divided region, where Qij is the second occupancy ratio calculated for the divided region consisting of the combination of the brightness region vi and the screen region nj.
- Table 5 shows the third coefficient necessary for calculating the index 3 for each divided region.
- the coefficient of each divided area shown in Table 5 is a weighting coefficient by which the second occupancy Qij of each divided area shown in Table 4 is multiplied, and is set in advance according to the photographing conditions.
- FIG. 14 shows the third coefficient in the screen regions n1 to n4 as curves (coefficient curves) that change continuously over the entire brightness range.
- Index 3 = (sum of n1 region) + (sum of n2 region) + (sum of n3 region) + (sum of n4 region) − 12.6201 (7)
- since index 3 indicates compositional features based on the brightness distribution position of the captured image data, it is effective for determining not only color images but also monochrome image shooting conditions.
- next, the bias amount calculation processing (step S2 in FIG. 5) will be described with reference to FIG. 15.
- first, the luminance Y (brightness) of each pixel is calculated from the RGB (Red, Green, Blue) values of the captured image data using equation (A), and the standard deviation (x1) of the luminance is calculated (step S23).
- the standard deviation (x1) of luminance is expressed as shown in equation (8), where the pixel luminance value is the luminance of each pixel of the captured image data, the average luminance value is the average luminance of the captured image data, and the total number of pixels is the number of pixels of the entire captured image data.
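- equation (A) and the standard deviation (x1) can be sketched as follows (a minimal illustration; equation (8) is assumed to be the ordinary population standard deviation of the per-pixel luminance):

```python
import math

def luminance(r, g, b):
    """Equation (A): Y = R x 0.30 + G x 0.59 + B x 0.11."""
    return r * 0.30 + g * 0.59 + b * 0.11

def luminance_std(pixels):
    """Standard deviation (x1) of the per-pixel luminance."""
    ys = [luminance(r, g, b) for r, g, b in pixels]
    mean = sum(ys) / len(ys)
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / len(ys))
```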
- next, a luminance difference value (x2) is calculated (step S24), where the maximum luminance value is the maximum luminance value of the captured image data.
- next, the average luminance value (x3) of the flesh-color region at the center of the screen of the captured image data is calculated (step S25), and further, the average luminance value (x4) at the center of the screen is calculated (step S26).
- the center of the screen is, for example, an area composed of an area n3 and an area n4 in FIG.
- the flesh-color luminance distribution value (x5) is calculated (step S27), and this bias amount calculation processing ends.
- where Yskin_max is the maximum luminance value of the flesh-color region of the captured image data, Yskin_min the minimum luminance value of the flesh-color region, and Yskin_ave the average luminance value of the flesh-color region, the flesh-color luminance distribution value (x5) is expressed as shown in equation (10):
- x5 = (Yskin_max − Yskin_min) / 2 − Yskin_ave (10)
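- equation (10) can be sketched directly:

```python
def skin_tone_distribution_value(yskin_max, yskin_min, yskin_ave):
    """Equation (10): x5 = (Yskin_max - Yskin_min) / 2 - Yskin_ave."""
    return (yskin_max - yskin_min) / 2 - yskin_ave
```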
- let x6 be the average luminance value of the flesh-color region at the center of the screen of the captured image data.
- the center of the screen is, for example, an area composed of the area n2, the area n3, and the area n4 in FIG.
- index 4 is defined as in equation (11) using index 1, index 3, and x6, and index 5 is defined as in equation (12) using index 2, index 3, and x6.
- Index 4 = 0.46 × Index 1 + 0.61 × Index 3 + 0.01 × x6 − 0.79 (11)
- index 6 is obtained by multiplying the bias amounts (x1) to (x5) calculated in the bias amount calculation processing by fourth coefficients set in advance according to the shooting conditions. Table 6 shows the fourth coefficients, which are the weighting coefficients by which each bias amount is multiplied.
- index 6 is expressed as in equation (13):
- Index 6 = x1 × 0.02 + x2 × 1.13 + x3 × 0.06 + x4 × (−0.01) + x5 × 0.03 − 6.49 (13)
- this index 6 consists only of luminance histogram distribution information, without the compositional features of the captured image screen, and is particularly effective for distinguishing between strobe shooting scenes and under shooting scenes.
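- equation (13) can be sketched as follows (coefficients as recoverable from the text and Table 6):

```python
def index6(x1, x2, x3, x4, x5):
    """Equation (13): weighted sum of the bias amounts x1-x5 with the
    Table 6 coefficients, minus the constant 6.49."""
    return (x1 * 0.02 + x2 * 1.13 + x3 * 0.06
            + x4 * (-0.01) + x5 * 0.03 - 6.49)
```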
- the shooting conditions (light source condition, exposure condition) of the captured image data are determined based on these indexes and a discrimination map divided in advance according to the shooting conditions. Hereinafter, the method for determining the shooting conditions will be described.
- Figure 16(a) plots the values of index 4 and index 5 calculated for a total of 180 items of digital image data: 60 images taken under each of the conditions of forward light, backlight, and strobe.
- Figure 16(b) shows the results of plotting the values of index 4 and index 6 for images with index 4 greater than 0.5, taken under the strobe and under shooting conditions.
- the discrimination map is used to evaluate the reliability of the indexes. As shown in Fig. 17, it consists of the basic regions of forward light, backlight, strobe, and under, a low-accuracy region (1) between backlight and forward light, and a low-accuracy region (2) between strobe and under. Note that other low-accuracy regions, such as one between backlight and strobe, also exist on the discrimination map, but they are omitted in this embodiment.
- Table 7 shows details of the shooting conditions determined from the plots of the index values shown in Fig. 16 and the discrimination map of Fig. 17.
- the light source condition can be quantitatively determined based on the values of the index 4 and the index 5
- the exposure condition can be quantitatively determined based on the values of the index 4 and the index 6.
- the low-accuracy region (1) between forward light and backlight can be discriminated from the values of indexes 4 and 5, and the low-accuracy region (2) between strobe and under can be discriminated from the values of indexes 4 and 6.
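- the discrimination itself can only be sketched hypothetically, since the numerical boundaries live in the discrimination map of Fig. 17 and are not given in the text; the thresholds below are placeholders:

```python
def judge_light_source(index4, index5, t4=0.5, t5=0.0):
    """Hypothetical sketch only: the real boundaries are the regions of
    the discrimination map (Fig. 17), which are not given numerically in
    the text; t4 and t5 are placeholder thresholds."""
    if index4 <= t4:
        return "forward light"
    return "backlight" if index5 > t5 else "strobe"
```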
- next, the method for calculating the conditions of the image quality correction processing in step T2 will be described in detail.
- the conditions for image quality correction processing are calculated based on the details of the shooting conditions and indexes 4 to 6.
- calculating the image quality correction processing condition is calculating a parameter (tone adjustment parameter) necessary for tone adjustment with respect to captured image data.
- the calculation method of the gradation adjustment parameters will be described in detail. In the following, it is assumed that 8-bit captured image data has been converted to 16 bits in advance, and that the unit of the captured image data values is 16 bits.
- P1: average brightness of the entire shooting screen
- P2: block division average brightness
- reproduction target correction value = luminance reproduction target value (30360) − P4
- CDF: cumulative density function
- the maximum and minimum values are obtained from the CDF.
- the maximum and minimum values are obtained for each of R, G, and B; let the obtained values be Rmax, Rmin, Gmax, Gmin, Bmax, and Bmin, respectively.
- let Rx be the pre-normalization data in the R plane, Gx in the G plane, and Bx in the B plane; the normalized data R′, G′, and B′ are expressed as in equations (14) to (16), and the luminance N of each pixel as in equation (17):
- R′ = {(Rx − Rmin) / (Rmax − Rmin)} × 65535 (14);
- G′ = {(Gx − Gmin) / (Gmax − Gmin)} × 65535 (15);
- B′ = {(Bx − Bmin) / (Bmax − Bmin)} × 65535 (16);
- N = (B′ + G′ + R′) / 3 (17)
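- equations (14) to (17) can be sketched as follows (equation (16) for the B plane is assumed to mirror (14) and (15)):

```python
def normalize_plane(values, vmin, vmax):
    """Equations (14)-(16): stretch one colour plane so that its minimum
    maps to 0 and its maximum to 65535."""
    return [(v - vmin) / (vmax - vmin) * 65535 for v in values]

def luminance_n(r_norm, g_norm, b_norm):
    """Equation (17): N = (B' + G' + R') / 3 for one normalized pixel."""
    return (b_norm + g_norm + r_norm) / 3
```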
- Figure 18 (a) shows the frequency distribution (histogram) of the brightness of RGB pixels before normalization.
- the horizontal axis represents luminance, and the vertical axis represents pixel frequency. This histogram is created for each of R, G, and B.
- normalization is performed for each plane of the captured image data using equations (14) to (16).
- Figure 18(b) shows a luminance histogram calculated using equation (17). Since the captured image data has been normalized to 65535, each pixel takes a value between the minimum value 0 and the maximum value 65535.
- when the luminance histogram shown in FIG. 18(b) is divided into blocks of a predetermined range, a frequency distribution as shown in FIG. 18(c) is obtained.
- the horizontal axis is the block number (luminance) and the vertical axis is the frequency.
- next, a region having a frequency greater than a predetermined threshold is deleted from the luminance histogram. This is because, if there is a part with an extremely high frequency, the data in that part strongly influences the average luminance of the entire photographed image, so that erroneous correction is likely to occur. Therefore, as shown in FIG. 19(c), the number of pixels above the threshold is limited in the luminance histogram.
- Figure 19 (d) shows the luminance histogram after the pixel number limiting process.
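- the pixel-number limiting step can be sketched as a simple clipping of histogram frequencies at the threshold:

```python
def limit_histogram(freqs, threshold):
    """Clip luminance-histogram bins whose frequency exceeds the threshold,
    so extremely frequent values cannot dominate the average luminance."""
    return [min(f, threshold) for f in freqs]
```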
- the parameter P2 is the average luminance value calculated based on each block number and each frequency of the luminance histogram (Fig. 19(d)) obtained by deleting the high-luminance region and the low-luminance region from the normalized luminance histogram and further limiting the cumulative number of pixels.
- the parameter P1 is the average value of the brightness of the entire captured image data, and the parameter P3 is the average value of the brightness of the flesh-color region (H1) in the captured image data.
- The parameter P7 (key correction conversion value), the parameter P7' (key correction conversion value 2), and the parameter P8 (brightness correction value 2) are defined as shown in equations (18), (19), and (20), respectively.
- The offset value 3 of the parameter P10 is a gradation adjustment parameter for shooting conditions corresponding to the low accuracy region (1) or (2) on the discrimination map.
- The calculation method of the parameter P10 is described below.
- First, a reference index is determined among the indices in the corresponding low accuracy region. For example, in the low accuracy region (1), index 5 is used as the reference index, and in the low accuracy region (2), index 6 is used. The reference index is then converted into a normalized index by normalizing its value into the range 0 to 1.
- The normalized index is defined as in equation (21):
- Normalized index = (Reference index - Minimum index value) / (Maximum index value - Minimum index value) (21)
- The maximum index value and minimum index value are the maximum and minimum values of the reference index within the corresponding low accuracy region.
- Let the correction amounts at the boundaries between the corresponding low accuracy region and the two regions adjacent to it be α and β, respectively.
- The correction amounts α and β are fixed values calculated in advance using the reproduction target values defined at the boundaries of each region on the discrimination map.
- The parameter P10 is expressed as in equation (22) using the normalized index of equation (21) and the correction amounts α and β.
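- Since equation (22) itself is not reproduced above, the following sketch assumes the stated linear relationship between the normalized index and the correction amount (Python; the function names are our own):

```python
def normalized_index(ref_index, idx_min, idx_max):
    """Equation (21): map the reference index into the range 0..1."""
    return (ref_index - idx_min) / (idx_max - idx_min)

def offset_value_3(ref_index, idx_min, idx_max, alpha, beta):
    """Parameter P10: a linear blend between the boundary correction
    amounts alpha and beta (assumed form of equation (22))."""
    t = normalized_index(ref_index, idx_min, idx_max)
    return alpha * (1.0 - t) + beta * t
```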
- Here, the correlation between the normalized index and the correction amount is linear, but it may instead be a curve along which the correction amount shifts more gradually.
- Next, the transition coefficient calculation method (step T4 in Fig. 4) will be described.
- A transition coefficient is calculated as shown in equation (23), and the transition process of the reference condition is performed, using the following values (with D = 1.50):
- Transition start point: -1D
- Transition end point: +2D
- Output key correction conversion value at the reference point (reference condition): -2D
- The input key correction conversion value in the case of front light is the value obtained by dividing P6 (offset value 1), calculated as the image quality correction processing condition (tone adjustment parameter), by 24.78 (P6/24.78).
- Fig. 20 shows the relationship between the input key correction conversion value (the key correction conversion value input by the operator's operation) and the output key correction conversion value in front light.
- Figure 21 shows the relationship between the input key correction conversion value and the transition coefficient in front light.
- As shown in Fig. 20, the output key correction conversion value is defined as in equation (24):
- Output key correction conversion value = (Input key correction conversion value + Output key correction conversion value at the reference point) × Transition coefficient (24)
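- The P6/24.78 conversion and equation (24) can be sketched as follows (Python; the function names are our own):

```python
def input_key_from_offset(p6, scale=24.78):
    """Input key correction conversion value: P6 (offset value 1) / 24.78."""
    return p6 / scale

def output_key_correction(input_key, ref_output_key, transition_coeff):
    """Equation (24): (input key correction conversion value + output key
    correction conversion value at the reference point) * transition
    coefficient, suppressing the correction toward the no-correction side."""
    return (input_key + ref_output_key) * transition_coeff
```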
- That is, the transition (suppression) from the reference point (reference condition) toward the no-correction side is realized by shifting, between the transition start point -1D and the transition end point +2D, from a linear function passing through the reference point to a gentle arc-shaped curve.
- In this case, the image quality correction condition based on the transition coefficient calculated by equation (23) (the curve in Fig. 20) is calculated as the transition condition.
- Otherwise, the transition condition is not calculated, and the output key correction conversion value is determined by the linear function passing through the reference point (reference condition) in Fig. 20.
- Next, the image quality correction process (step T6 in Fig. 4) will be described with reference to the flowchart in Fig. 22.
- First, a gradation adjustment method for the photographed image data is determined in accordance with the shooting condition determined in step T1 of Fig. 4 (step S30).
- Gradation adjustment method A (Fig. 24(a)) is selected when the shooting condition is front light or strobe.
- Gradation adjustment method B (Fig. 24(b)) is selected when the shooting condition is backlight or under.
- Gradation adjustment method C (Fig. 24(c)) is selected when the shooting condition falls in a low accuracy region.
- Since the correction amount is relatively small when the shooting condition is front light, it is preferable to apply gradation adjustment method A, in which the pixel values of the captured image data are shifted in parallel (offset), also from the viewpoint of suppressing gamma fluctuation.
- When the shooting condition is backlight or under, the amount of correction is relatively large; with a simple parallel shift, gradations that did not originally exist increase significantly, resulting in muddied blacks and a decrease in white brightness. Therefore, when the shooting condition is backlight or under, it is preferable to apply gradation adjustment method B, in which the pixel values of the shot image data are gamma corrected.
- In any low accuracy region, the gradation adjustment methods of the two adjacent shooting conditions are A and B, so it is preferable to apply gradation adjustment method C, which mixes both methods. By setting the low accuracy regions in this way, the processing result transitions smoothly even between different gradation adjustment methods. In addition, variations in density between multiple photographic prints of the same subject can be reduced.
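- The method selection of step S30 can be sketched as follows (Python; the condition strings are placeholders for the discrimination result, not identifiers from the specification):

```python
def select_gradation_method(shooting_condition):
    """Step S30: offset correction (A) for front light / strobe, gamma
    correction (B) for backlight / under, and a mix of both (C) in the
    low accuracy regions between adjacent shooting conditions."""
    if shooting_condition in ("front light", "strobe"):
        return "A"
    if shooting_condition in ("backlight", "under"):
        return "B"
    return "C"  # low accuracy regions
```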
- the gradation conversion curve shown in FIG. 24 (b) is convex upward, but may be convex downward.
- the gradation conversion curve shown in FIG. 24 (c) is convex downward, but may be convex upward.
- the gradation adjustment amount for the captured image data is determined based on the calculated gradation adjustment parameter (step S31).
- Specifically, in step S31, the gradation conversion curve corresponding to the gradation adjustment parameter calculated as described above is selected (determined) from a plurality of gradation conversion curves set in advance for the gradation adjustment method determined in step S30. Alternatively, a gradation conversion curve (gradation adjustment amount) may be calculated based on the calculated gradation adjustment parameters.
- In step S32, gradation conversion processing is performed on the captured image data in accordance with the determined gradation conversion curve, and the image quality correction processing ends. Specific examples of the image quality correction processing in Fig. 22 will be described later in Examples 1 to 4.
- As described above, by calculating the transition amount (transition coefficient) for shifting from the reference condition to the no-correction condition based on the image quality correction processing condition (input key correction conversion value), the correction amount for images whose shooting condition is determined to be under or backlight is maintained, while the continuity and stability of the correction amount for front-light or over (somewhat brighter) captured image data are improved.
- image quality correction processing can be optimized by setting the reference conditions (and transition conditions) for each type of imaging device (camera) and photographer.
- In the first embodiment, the transition coefficient is calculated based on the image quality correction processing condition; in the second embodiment, the transition coefficient is calculated based on the analysis value of the shooting condition analysis process. Since the image processing apparatus in the second embodiment has the same configuration as the image processing apparatus 1 of the first embodiment, its illustration is omitted and the same reference numerals are used.
- Steps T10 to T12 are the same as steps T1 to T3 in Fig. 4.
- Step T13 will now be described.
- When the reference condition has been set in step T12, the transition coefficient necessary for the transition process of shifting (suppressing) the reference condition toward the no-correction condition, in which no image quality correction is performed, is calculated based on the analysis value of the shooting condition analysis process in step T10 (step T13), and a transition condition (image quality correction processing condition) is calculated based on the calculated transition coefficient (step T14).
- Then, the image quality correction process is performed on the captured image data (step T15), and the image data after the image quality correction process is output to the specified output destination.
- In step T15, when the shooting condition is the low accuracy region (2) based on the analysis result of step T10, image quality correction processing according to the transition condition calculated in step T14 is performed.
- Otherwise, the transition condition is not calculated, and the image quality correction process without the transition process is performed in step T15.
- In step T13, when the shooting condition is determined to be the low accuracy region (2), the transition coefficient is calculated using index 6 as shown in equation (25), and the transition processing of the reference condition is performed.
- Figure 26 shows the relationship between the input key correction conversion value and the output key correction conversion value in the low accuracy region (2).
- Figure 27 shows the relationship between index 6 and the transition coefficient in the low accuracy region (2).
- As shown in Fig. 26, the output key correction conversion value in the low accuracy region (2) is defined as in equation (26):
- Output key correction conversion value = (Input key correction conversion value + Output key correction conversion value at the reference point) × Transition coefficient (26)
- The input key correction conversion value in equation (26) is the value obtained by dividing P10 (offset value 3), calculated as the image quality correction processing condition (gradation adjustment parameter), by 24.78 (P10/24.78).
- Fig. 28 shows the transition start point and transition end point in the low accuracy region (2).
- Up to the transition start point, the density correction value at the reference point is applied, so a high correction amount is maintained.
- Beyond the transition start point, the reference point shifts toward the no-correction condition.
- As described above, by calculating the transition amount (transition coefficient) for shifting from the reference condition to the no-correction condition, in which no image quality correction is performed, based on the analysis value of the shooting conditions (index 6), the correction amount for images whose shooting conditions are determined to be under or backlight is maintained, while the continuity and stability of the correction amount for front-light or over (slightly brighter) captured image data are improved.
- Next, specific examples of the image quality correction process (step T6 in Fig. 4 or step T15 in Fig. 25) will be described. In the following, the transition condition (image quality correction processing condition) calculation of step T5 or T14 is described as one step of the image quality correction processing.
- Example 1 will be described with reference to FIGS.
- image quality correction processing when a predetermined density correction processing condition is set as a reference condition will be described.
- First, an output key correction conversion value, which is a density correction value, is calculated (step S40), and image quality correction processing is performed on the captured image data based on the calculated output key correction conversion value (step S41).
- Fig. 30 shows the relationship (solid line) between the input key correction conversion value and the output key correction conversion value when the shooting condition is front light.
- The output key correction conversion value in Fig. 30 is defined as in equation (24). Letting P6" be the value obtained by converting the output key correction conversion value of equation (24) back into an offset value, the offset correction (a parallel shift of the 8-bit values) that brings parameter P1 to P5 is performed in the image quality correction processing of step S41 according to equation (27):
- RGB value of output image = RGB value of input image + P6" (27)
- a gradation conversion curve corresponding to Expression (27) is selected from the plurality of gradation conversion curves shown in FIG.
- the gradation conversion curve may be calculated (determined) based on Expression (27).
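- The offset correction of equation (27) is a per-channel addition to the 8-bit values; a sketch follows (Python; clipping the result to 0..255 is our assumption, since 8-bit output must stay in range):

```python
def offset_correct(rgb_image, offset):
    """Equation (27): RGB value of output image = RGB value of input
    image + offset, applied to each channel of each pixel."""
    clip = lambda v: max(0, min(255, v))
    return [tuple(clip(c + offset) for c in px) for px in rgb_image]
```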
- a gradation conversion curve whose output key correction conversion value corresponds to P71 of Expression (28) is selected from the plurality of gradation conversion curves shown in FIG. 24 (b).
- a specific example of the gradation conversion curve in FIG. 24 (b) is shown in FIG. The correspondence between the P71 value and the selected gradation conversion curve is shown below.
- When the photographing condition is backlight, the output key correction conversion value P72 is defined as in equation (29) using the parameter P7' (key correction conversion value 2) shown in equation (19).
- Next, offset correction (a parallel shift of the 8-bit values) is performed according to equation (30):
- RGB value of output image = RGB value of input image + P9 (30)
- A gradation conversion curve corresponding to equation (30) is selected from the plurality of gradation conversion curves shown in FIG. Alternatively, the gradation conversion curve may be calculated (determined) based on equation (30).
- RGB value of output image = RGB value of input image + P10 (31)
- A gradation conversion curve corresponding to equation (31) is selected from the plurality of gradation conversion curves shown in FIG. Alternatively, the gradation conversion curve may be calculated (determined) based on equation (31).
- In this case, the image quality correction processing in step S41 is performed according to equation (32):
- RGB value of output image = RGB value of input image + P10" (32)
- A gradation conversion curve corresponding to equation (32) is selected from the plurality of gradation conversion curves shown in Fig. 24(c). Alternatively, the gradation conversion curve may be calculated (determined) based on equation (32).
- a photographic print with a high degree of satisfaction can be obtained by setting the reference condition as a condition for the predetermined density correction process.
- Example 2 will be described with reference to FIGS.
- In Example 2, the image quality correction processing when a gradation conversion processing condition combining predetermined brightness enhancement processing and contrast enhancement processing is set as the reference condition will be described.
- In Fig. 33, the degree of brightness enhancement and contrast enhancement is varied at regular intervals between the reference point (reference condition), which consists of the predefined brightness enhancement and contrast enhancement, and the uncorrected condition.
- Fig. 33 shows the layout of the ring-around print created in this way, centered on the reference point (reference condition).
- The horizontal axis represents the change in contrast when the interval between the uncorrected condition and the reference point is divided equally in 25% steps, and the vertical axis represents the change in brightness when that interval is divided equally in the same way.
- Fig. 34 shows gradation conversion curves set in advance according to the degree of contrast enhancement. The correspondence between the transition coefficient of equation (23) and Fig. 21 and the gradation conversion curves L5-A to L5-E shown in Fig. 34 is shown below.
- For example, the transition coefficient when the input key correction conversion value is 0 is approximately 0.7, so the curve L5-B in Fig. 34 can be set as the reference condition.
- First, a brightness enhancement condition is calculated based on the calculated input key correction conversion value and the transition coefficient (step S50).
- In step S50, for example, when the shooting condition is the low accuracy region (2), the gradation conversion curve corresponding to equation (32) is selected from the plurality of gradation conversion curves shown in FIG.
- Next, a contrast enhancement condition is calculated based on the calculated transition coefficient (step S51).
- In step S51, for example, when the shooting condition is the low accuracy region (2), the gradation conversion curve corresponding to the transition coefficient calculated by equation (25) is selected from the gradation conversion curves shown in FIG.
- Then, image quality correction processing is performed on the photographed image data based on the brightness enhancement condition calculated in step S50 and the contrast enhancement condition calculated in step S51 (step S52).
- As described above, by setting the reference condition to a gradation conversion processing condition consisting of predetermined brightness enhancement processing and contrast enhancement processing, a photographic print with a high degree of satisfaction can be obtained.
- Furthermore, by optimizing the reference condition and the transition condition, the image quality correction process can be optimized regardless of differences in the model of the imaging apparatus or the user's preferences.
- Example 3 will be described with reference to Figs. 35 and 36.
- In Example 3, the image quality correction processing when a predetermined color matrix processing condition is set as the reference condition will be described.
- The color matrix processing serving as the reference condition is defined, for example, as a condition that slightly reduces the saturation.
- The image quality correction processing in Fig. 35 is performed on captured image data under front light or in the low accuracy region (2), after the gradation conversion processing and the contrast adjustment.
- First, color matrix coefficients are calculated based on the calculated transition coefficient (step S60).
- The calculation of the color matrix coefficients in step S60 is performed by changing the matrix components of the color matrix processing shown in equation (33) based on the value of the transition coefficient.
- Here, B, G, and R are the RGB values after the color matrix processing.
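- A sketch of the color matrix application of equation (33) follows (Python; the desaturation matrix below is purely illustrative, since the actual matrix components depend on the transition coefficient and are not reproduced here):

```python
def apply_color_matrix(rgb, matrix):
    """Multiply one (R, G, B) pixel by a 3x3 color matrix (equation (33))."""
    r, g, b = rgb
    return tuple(m[0] * r + m[1] * g + m[2] * b for m in matrix)

# Illustrative matrix: blend each channel 10% toward the channel mean,
# which slightly reduces saturation while leaving neutral grays unchanged.
desaturate_10 = [
    [0.9 + 0.1 / 3, 0.1 / 3, 0.1 / 3],
    [0.1 / 3, 0.9 + 0.1 / 3, 0.1 / 3],
    [0.1 / 3, 0.1 / 3, 0.9 + 0.1 / 3],
]
```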
- As described above, by setting a predetermined color matrix processing condition as the reference condition and optimizing the transition condition (image quality correction processing condition), a photographic print with a high degree of satisfaction can be obtained.
- Example 4 will be described with reference to the flowchart of FIG.
- In Example 4, the image quality correction processing when a predetermined contrast correction processing condition is set as the reference condition will be described.
- First, the final contrast correction value is calculated as shown in equation (38), using the calculated transition coefficient and the scene discrimination contrast correction value calculated based on the determined shooting condition (step S70):
- Final contrast correction value = 100 + {(Scene discrimination contrast correction value - 100) × Transition coefficient} (38)
- Any known method can be used to calculate the scene discrimination contrast correction value in equation (38); typically, a method of examining the width of the tone distribution by histogram analysis and expanding it to a predetermined width is used.
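- Equation (38) blends the scene discrimination value toward the neutral value 100 according to the transition coefficient; as a sketch (Python; the function name is our own):

```python
def final_contrast_correction(scene_value, transition_coeff):
    """Equation (38): 100 + (scene discrimination contrast correction
    value - 100) * transition coefficient."""
    return 100 + (scene_value - 100) * transition_coeff
```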
- image quality correction processing is performed on the captured image data (step S71).
- When the gradation conversion curve is S-shaped, its shape is determined by decomposing it into the two elements of brightness and contrast.
- When it is not S-shaped, the image quality correction process is a linear conversion using a linear function with the gradation adjustment parameter P3 (average brightness of the skin color area) as a fulcrum (fixed point).
- a photographic print with a high degree of satisfaction can be obtained by setting the reference condition as a condition for the predetermined contrast correction process.
- FIG. 38 shows a configuration of a digital camera 200 to which the imaging apparatus of the present invention is applied.
- The digital camera 200 includes a CPU 201, an optical system 202, an imaging sensor unit 203, an AF calculation unit 204, a WB calculation unit 205, an AE calculation unit 206, a lens control unit 207, an image processing unit 208, a display unit 209, a recording data creation unit 210, and the like.
- the CPU 201 comprehensively controls the operation of the digital camera 200.
- the optical system 202 is a zoom lens, and forms an object image on a charge-coupled device (CCD) image sensor in the imaging sensor unit 203.
- The imaging sensor unit 203 photoelectrically converts the optical image with the CCD image sensor, converts it into a digital signal (A/D conversion), and outputs it.
- the image data output from the imaging sensor unit 203 is input to the AF calculation unit 204, the WB calculation unit 205, the AE calculation unit 206, and the image processing unit 208.
- the AF calculation unit 204 calculates and outputs the distances of the AF areas provided at nine places in the screen. The determination of the distance is performed by determining the contrast of the image, and the CPU 201 selects a value at the closest distance among them and sets it as the subject distance.
- The WB calculation unit 205 calculates and outputs a white balance evaluation value from the image data.
- The white balance evaluation value is the gain value required to match the RGB output values of a neutral subject under the light source at the time of shooting, and is calculated as the ratios R/G and B/G with the G channel as the reference.
- the calculated evaluation value is input to the image processing unit 208, and the white balance of the image is adjusted.
- the AE calculation unit 206 calculates and outputs an appropriate exposure value from the image data, and the CPU 201 calculates an aperture value and a shutter speed value so that the calculated appropriate exposure value matches the current exposure value.
- The aperture value is output to the lens control unit 207, and the corresponding aperture diameter is set.
- the shutter speed value is output to the image sensor unit 203, and the corresponding CCD integration time is set.
- The image processing unit 208 performs processing such as white balance processing, CCD filter array interpolation processing, color conversion, primary gradation conversion, and sharpness correction on the captured image data, and then performs the shooting condition analysis process, the image quality correction processing condition calculation, the transition coefficient calculation, the transition condition calculation, and the image quality correction process (any one of Examples 1 to 4) described in the above embodiments. Finally, conversion such as JPEG compression is performed.
- the JPEG-compressed image data is output to the display unit 209 and the recording data creation unit 210.
- Display unit 209 displays captured image data on a liquid crystal display and displays various types of information according to instructions from CPU 201.
- The recording data creation unit 210 formats the JPEG-compressed image data and the various shooting information input from the CPU 201 into an Exif (Exchangeable Image File Format) file, and records it on the recording medium 211.
- The Exif file includes an area called the maker note in which each manufacturer can record arbitrary information; the result of the shooting condition discrimination and the values of index 4, index 5, and index 6 are recorded there.
- In the digital camera 200, the shooting scene mode can be switched by user setting; three shooting scene modes can be selected: normal mode, portrait mode, and landscape mode.
- When the user operates the scene mode setting key 212, the camera is switched to portrait mode when the subject is a person and to landscape mode when the subject is a landscape, so that primary gradation conversion suited to the subject is performed.
- The digital camera 200 records the selected shooting scene mode information in the maker note part of the image data file, and likewise records the position information of the AF area selected as the subject in the image file.
- The user can set the output color space using the color space setting key 213.
- The output color space can be selected from sRGB (IEC 61966-2-1) and Raw.
- When sRGB is selected, the image quality correction process of the above-described embodiments is executed.
- When Raw is selected, the image quality correction process of the above-described embodiments is not performed, and the image is output in a color space unique to the CCD.
- As a result, as with the image processing apparatus 1 described above, the correction amount for an image whose shooting condition is determined to be under or backlit is maintained, and a preferable image can be output.
- a face image may be detected from the captured image data, the shooting condition may be determined based on the detected face image, and the gradation adjustment condition may be determined.
- Exif information may be used to determine the shooting conditions. By using Exif information, it is possible to further improve the accuracy of determining the shooting conditions.
- In the above embodiments, the case where the transition amount from the reference condition to the uncorrected condition is calculated automatically has been described; however, a manual setting screen for transition conditions may be displayed on the CRT 8 of the image processing apparatus 1 (or the display unit 209 of the digital camera 200), and the brightness, contrast, color matrix, and the like may be specified via the operation unit 11 of the image processing apparatus 1 (or the operation keys 215 of the digital camera 200).
Abstract
The present invention relates to image quality correction that is optimal in terms of continuity and stability, enabling a print highly satisfactory to the customer from a scene whose shooting condition is determined to be front-light shooting or close-up strobe shooting and whose exposure is adequate or over. The image processing apparatus (1) executes the steps of analyzing the shooting condition of captured image data (step T1), calculating the image quality correction condition for the captured image data from the obtained analysis value of the shooting condition (step T2), calculating a transition condition for shifting a reference condition of a predetermined image quality correction toward the no-correction condition, in which no image quality correction is performed, based on the image quality correction condition calculated in step T2 (step T5), and performing image quality correction of the captured image data under a predetermined shooting condition (e.g., front-light shooting) according to the calculated transition condition (step T6).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2005-166619 | 2005-06-07 | ||
| JP2005166619A JP2006345037A (ja) | 2005-06-07 | 2005-06-07 | 画像処理方法、画像処理装置、撮像装置及び画像処理プログラム |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2006132067A1 true WO2006132067A1 (fr) | 2006-12-14 |
Family
ID=37498272
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2006/310003 WO2006132067A1 (fr) | 2005-06-07 | 2006-05-19 | Procédé et dispositif de traitement d’image, dispositif d’imagerie et programme de traitement d’image |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2006345037A (fr) |
| WO (1) | WO2006132067A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112345548A (zh) * | 2020-11-18 | 2021-02-09 | 上海神力科技有限公司 | 一种燃料电池石墨板表面成型光洁程度检测方法及装置 |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5414846B2 (ja) * | 2012-06-28 | 2014-02-12 | キヤノン株式会社 | 画像表示システム、画像表示装置及びその制御方法 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002369075A (ja) * | 2001-06-04 | 2002-12-20 | Olympus Optical Co Ltd | 撮像装置および画像処理装置並びに撮像プログラムおよび画像処理プログラム |
| JP2003244622A (ja) * | 2002-02-19 | 2003-08-29 | Konica Corp | 画像形成方法、画像処理装置、プリント作成装置及び記憶媒体 |
| JP2004088345A (ja) * | 2002-08-26 | 2004-03-18 | Konica Minolta Holdings Inc | 画像形成方法、画像処理装置、プリント作成装置及び記憶媒体 |
| JP2004282570A (ja) * | 2003-03-18 | 2004-10-07 | Fuji Xerox Co Ltd | 画像処理装置 |
| JP2005062993A (ja) * | 2003-08-08 | 2005-03-10 | Seiko Epson Corp | 画像データの出力画像調整 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2006345037A (ja) | 2006-12-21 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 06756359 Country of ref document: EP Kind code of ref document: A1 |