
CN101271196B - Lens shade correction index confirming method, lens shade emendation method and device - Google Patents


Info

Publication number
CN101271196B
CN101271196B, CN2008101048750A, CN200810104875A
Authority
CN
China
Prior art keywords
pixel point
correction coefficient
pixel
lens
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008101048750A
Other languages
Chinese (zh)
Other versions
CN101271196A (en)
Inventor
谌安军
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongxingtianshi Technology Co ltd
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp
Priority to CN2008101048750A
Publication of CN101271196A
Application granted
Publication of CN101271196B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a lens shading correction coefficient determining method, a lens shading correction method, and corresponding devices. The method comprises: obtaining a reference correction coefficient for each pixel from a training image; forming a two-dimensional correction surface over the pixel coordinate plane from the reference correction coefficients; fitting the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates; calculating the lens shading correction coefficient corresponding to each pixel coordinate from the polynomial; and performing lens shading correction on images captured by the lens according to the determined coefficients. The method is suitable for lens shading correction of images captured by various types of lenses, gives a good correction effect, and is simple to implement.

Description

Lens shading correction coefficient determining method, lens shading correction method and device
Technical Field
The present invention relates to image processing, and in particular to a method for determining lens shading correction coefficients in an image, and to a method and apparatus for correcting lens shading.
Background
Lens shading is the phenomenon in which exposure across the imaging plane is non-uniform because the light transmittance of the lens is non-uniform; it typically appears as an image that is bright in the center and dark toward the edges. Existing methods generally assume that the center of the lens is the brightest point and that the darkening toward the edges is symmetric. When lens shading is processed, the ratio of the peripheral brightness of the image to the central brightness is assumed to follow the fourth power of the cosine of the angle between the optical axis and the line joining the imaging point to the lens center, and the image is corrected according to this shading model. Correcting images with this assumption and shading model has some effect for lenses with mild shading. Judging from the results on some real images, however, many lenses do not fully conform to the assumption, so the cosine-fourth-power shading model is not universal, and it performs poorly for lenses with severe shading or whose brightest point is not at the center of the imaging plane.
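For reference, the cosine-fourth-power fall-off mentioned above is conventionally written as follows (a standard textbook form added for illustration, not a formula quoted from this patent):
\[
\frac{I(\theta)}{I(0)} = \cos^{4}\theta
\]
where θ is the angle between the optical axis and the line joining the imaging point to the lens center, I(θ) is the brightness at the corresponding off-axis imaging point, and I(0) is the central brightness.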
Disclosure of Invention
The invention provides a lens shading correction coefficient determining method, a lens shading correction method, and a lens shading correction device that are suitable for various types of lenses and provide a good correction effect.
The invention provides a method for determining a lens shading correction coefficient, which comprises the following steps:
obtaining a reference correction coefficient for each pixel from a training image, specifically: acquiring the standard pixel value of each pixel of the training image, acquiring the real pixel value of each pixel of the image captured when the training image is photographed through the lens, and calculating the ratio of the real pixel value of each pixel to its standard pixel value to obtain the reference correction coefficient of that pixel;
forming a two-dimensional correction surface over the pixel coordinate plane from the reference correction coefficients, specifically: generating the two-dimensional correction surface on the pixel coordinate plane from the abscissa and ordinate of each pixel and its reference correction coefficient;
fitting the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates, specifically: setting up a bivariate high-order polynomial whose variables are the pixel abscissa and ordinate, and determining the weight coefficients of the polynomial to obtain the polynomial;
calculating the lens shading correction coefficient corresponding to each pixel coordinate from the polynomial, specifically: substituting each pixel coordinate into the bivariate high-order polynomial to calculate the lens shading correction coefficient corresponding to that coordinate.
The fitting of the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates includes:
fitting the two-dimensional correction surface with a polynomial of degree 2 to 10 in the abscissa and ordinate of the pixels.
In particular, the fitting of the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates may include:
fitting the two-dimensional correction surface with a polynomial of degree 6 in the abscissa and ordinate of the pixels.
The invention also provides a lens shading correction method, which comprises the following steps:
inputting an image to be corrected;
acquiring, in pixel coordinate order, the pixel value of the pixel corresponding to the current pixel coordinate, and acquiring the lens shading correction coefficient corresponding to the current pixel coordinate, the lens shading correction coefficient being determined as follows: obtaining a reference correction coefficient for each pixel from a training image, specifically, acquiring the standard pixel value of each pixel of the training image, acquiring the real pixel value of each pixel of the image captured when the training image is photographed through the lens, and calculating the ratio of the real pixel value of each pixel to its standard pixel value to obtain the reference correction coefficient of that pixel; forming a two-dimensional correction surface over the pixel coordinate plane from the reference correction coefficients, specifically, generating the surface from the abscissa and ordinate of each pixel and its reference correction coefficient; fitting the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates, specifically, setting up a bivariate high-order polynomial whose variables are the pixel abscissa and ordinate and determining its weight coefficients; and calculating the lens shading correction coefficient corresponding to each pixel coordinate from the polynomial, specifically, substituting each pixel coordinate into the bivariate high-order polynomial;
and multiplying the acquired pixel value by the acquired lens shading correction coefficient to obtain a corrected image and outputting the corrected image.
The lens shading correction method of the present invention further includes: extracting and storing, in advance and according to a set extraction rule, the lens shading correction coefficients corresponding to some of the pixel coordinates from the calculated lens shading correction coefficients of all pixel coordinates;
the acquiring of the lens shading correction coefficient corresponding to the current pixel coordinate then includes:
calculating the lens shading correction coefficient corresponding to the current pixel coordinate from the stored coefficients using a set interpolation algorithm.
The extracting and storing of the lens shading correction coefficients corresponding to some of the pixel coordinates according to a set extraction rule includes: extracting and storing the lens shading correction coefficient of one pixel coordinate every several pixels;
the set interpolation algorithm is a cubic interpolation algorithm.
The present invention also provides a lens shading correction coefficient determining apparatus, including:
the reference correction coefficient obtaining unit is used for obtaining and outputting a reference correction coefficient of each pixel point according to the training image, and comprises:
the acquisition subunit is used for acquiring a standard pixel value of each pixel point of a training image and obtaining a real pixel value of each pixel point of a shot image after the training image is shot by a lens;
the calculating subunit is used for calculating the ratio of the real pixel value of each pixel point to the standard pixel value thereof, obtaining and outputting the reference correction coefficient of each pixel point;
the two-dimensional correction curved surface generating unit is used for receiving the reference correction coefficient output by the reference correction coefficient acquiring unit and generating a two-dimensional correction curved surface on a pixel point coordinate plane, and specifically comprises: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient;
the fitting unit is used for fitting the two-dimensional correction curved surface generated by the two-dimensional correction curved surface generating unit by adopting a binary high power polynomial of a pixel point coordinate, and specifically comprises the following steps: setting a binary high-order power polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order power polynomial;
the lens shadow correction coefficient calculation unit is used for calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial determined by the fitting unit, and specifically comprises the following steps: and substituting the coordinates of each pixel point into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to the coordinates of each pixel point.
The present invention also provides a lens shading correction apparatus, comprising:
an image input unit for receiving an image to be corrected;
the pixel value acquisition unit is used for acquiring and outputting the pixel values of the corresponding pixels of the current pixel coordinates according to the pixel coordinate sequence according to the image to be corrected received by the image input unit;
the lens shading correction coefficient determining unit is used for obtaining a reference correction coefficient of each pixel point according to the training image, and specifically comprises the following steps: acquiring a standard pixel value of each pixel point of a training image, acquiring a real pixel value of each pixel point of a shot image obtained after the training image is shot by a lens, and calculating a ratio of the real pixel value of each pixel point to the standard pixel value of each pixel point to obtain a reference correction coefficient of each pixel point; generating a two-dimensional correction curved surface on a pixel point coordinate plane by the reference correction coefficient, specifically: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient; adopting a binary high power polynomial of a pixel point coordinate to fit the two-dimensional correction curved surface, which specifically comprises the following steps: setting a binary high-order polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order polynomial; calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial, specifically: substituting each pixel point coordinate into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to each pixel point coordinate;
the lens shading correction coefficient output unit is used for outputting the lens shading correction coefficient corresponding to the current pixel point coordinate according to the lens shading correction coefficient corresponding to each pixel point coordinate calculated by the lens shading correction coefficient determining unit;
the correction unit is used for multiplying the pixel value output by the pixel value acquisition unit by a lens shading correction coefficient of the corresponding pixel point coordinate output by the lens shading correction coefficient output unit to generate a corrected image;
and an image output unit for outputting the corrected image.
The lens shading correction coefficient output unit includes:
the extraction storage subunit is used for extracting and storing the lens shading correction coefficients corresponding to part of the pixel coordinates according to a set extraction rule from the lens shading correction coefficients corresponding to each pixel coordinate calculated by the lens shading correction coefficient determining unit;
and the interpolation calculation output subunit is used for calculating and outputting the lens shading correction coefficient corresponding to the current pixel point coordinate according to the stored lens shading correction coefficient and the set interpolation algorithm.
The invention has the following beneficial effects:
With the invention, a corresponding lens shading correction coefficient is determined for every pixel of an image captured by the lens. That is: a reference correction coefficient is first determined for each pixel from a training image and the image captured when the training image is photographed through the lens; the reference correction coefficients of all pixels form a two-dimensional correction surface over the pixel coordinate plane; this surface is fitted with a bivariate high-order polynomial in the pixel coordinates; and the lens shading correction coefficient corresponding to each pixel coordinate is calculated from the determined polynomial. Because the two-dimensional correction surface formed by the reference correction coefficients is a noisy surface and not a true reflection of the lens shading, fitting it with the bivariate high-order polynomial when determining the lens shading correction coefficients yields a smooth two-dimensional surface, so the determined coefficients are close to the true values. And because every pixel coordinate has a uniquely determined correction coefficient, the image correction effect is better. The method suits different types of lenses: when images captured by a certain type of lens need lens shading processing, it is only necessary to capture a training image with that lens in advance; the method of the invention then determines the lens shading correction coefficients for images captured by that lens, and images captured by the lens are corrected according to the determined coefficients.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for determining lens shading correction coefficients according to an embodiment of the present invention;
FIG. 2 is a flowchart of a lens shading correction method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a lens shading correction coefficient determining apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a reference correction coefficient obtaining unit in the lens shading correction coefficient determining apparatus of FIG. 3;
FIG. 5 is a schematic structural diagram of a lens shading correction apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a lens shading correction coefficient output unit in the lens shading correction apparatus of fig. 5.
Detailed Description
The method and apparatus of the present invention will be described in detail below with reference to the accompanying drawings using specific embodiments.
The first embodiment is as follows:
referring to fig. 1, a method for determining a lens shading correction coefficient is provided in an embodiment of the present invention, including the following steps:
and S101, obtaining a reference correction coefficient of each pixel point according to the training image.
The method specifically comprises the following steps:
shooting a training image by using a lens to obtain a shot image of the training image; wherein the standard pixel value of each pixel point in the training image is known. In practice, for simplicity, a shot may be taken through a lens to obtain a uniformly-textured white paper, for example, a shot image with a size of 640 × 480 is obtained, and assuming that the standard pixel value of each pixel in the training image is 255, the pixel value of each pixel in the shot image may change due to the existence of the lens shadow (i.e., the true pixel value of each pixel may not be equal to 255).
After the real pixel value of each pixel point in the corresponding shot image aiming at the training image is obtained, the ratio of the real pixel value of each pixel point to the standard pixel value (255) of each pixel point is calculated, and the reference correction coefficient of each pixel point is obtained. As described above, when the image size is 640 × 480, 640 × 480 reference correction coefficients are obtained.
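Purely as an illustration of this step, a minimal sketch in Python/NumPy is given below. The function and variable names are ours rather than the patent's, and the sketch takes the reference correction coefficient as the factor that, when multiplied by the captured pixel value, restores the standard value of the flat training target.

```python
import numpy as np

def reference_correction_coefficients(captured, standard=255.0):
    """Compute one reference correction coefficient per pixel.

    captured: 2-D array (H x W) of real pixel values obtained by photographing
              a uniformly lit white target (the training image) through the lens.
    standard: the known standard pixel value of the training image.
    """
    captured = np.clip(captured.astype(np.float64), 1.0, None)  # guard against zeros
    return standard / captured

# Synthetic 640 x 480 flat-field capture that darkens toward the corners.
if __name__ == "__main__":
    h, w = 480, 640
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    captured = 255.0 * (1.0 - 0.4 * r2)              # bright center, darker edges
    coeffs = reference_correction_coefficients(captured)
    print(coeffs.shape, float(coeffs.min()), float(coeffs.max()))
```

The resulting array contains one reference correction coefficient per pixel of the 640 × 480 image; arranged on the pixel coordinate plane, these coefficients are exactly the two-dimensional correction surface built in the next step.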
Step S102: generate a two-dimensional correction surface over the pixel coordinate plane from the obtained reference correction coefficients.
Specifically:
Each reference correction coefficient corresponds to one pixel, and each pixel has a unique coordinate; the pixels are ordered in the image from top to bottom and from left to right. Still taking the 640 × 480 image as an example:
the pixel coordinate corresponding to the 1st pixel is (1, 1);
the pixel coordinate corresponding to the 2nd pixel is (2, 1);
the pixel coordinate corresponding to the 3rd pixel is (3, 1);
……
the pixel coordinate corresponding to the 640th pixel is (640, 1);
the pixel coordinate corresponding to the 641st pixel is (1, 2);
the pixel coordinate corresponding to the 642nd pixel is (2, 2);
……
the pixel coordinate corresponding to the 307200th pixel (the last pixel of the image) is (640, 480).
Assuming the abscissa of a pixel is X and the ordinate is Y, a pixel coordinate plane with abscissa X and ordinate Y can be constructed from the pixel coordinates of the 640 × 480 image. Since the obtained reference correction coefficients correspond one-to-one with the pixels of this coordinate plane, the 640 × 480 reference correction coefficients define a two-dimensional correction surface over the pixel coordinate plane.
The shading represented by the two-dimensional correction surface generated in step S102 is not the true shading of the lens: it contains noise, and a denoising step is required to obtain a smooth two-dimensional surface that does represent the true lens shading.
Step S103: fit the two-dimensional correction surface with a bivariate high-order polynomial in the pixel coordinates.
Specifically:
Set up a bivariate high-order polynomial whose variables are the pixel abscissa x and ordinate y. Taking degree 6 as an example, the weight coefficients of the polynomial are \(\beta_{00},\beta_{01},\ldots,\beta_{66}\), and they are calculated as follows:
\[
A=\begin{bmatrix} a(1,1)\\ a(1,2)\\ \vdots\\ a(x,y)\\ \vdots\\ a(M,N)\end{bmatrix}=\psi\beta
\]
wherein
\[
\psi=\begin{bmatrix}
\phi_{00}(1,1) & \phi_{01}(1,1) & \cdots & \phi_{ij}(1,1) & \cdots & \phi_{66}(1,1)\\
\phi_{00}(1,2) & \phi_{01}(1,2) & \cdots & \phi_{ij}(1,2) & \cdots & \phi_{66}(1,2)\\
\vdots & \vdots & & \vdots & & \vdots\\
\phi_{00}(x,y) & \phi_{01}(x,y) & \cdots & \phi_{ij}(x,y) & \cdots & \phi_{66}(x,y)\\
\vdots & \vdots & & \vdots & & \vdots\\
\phi_{00}(M,N) & \phi_{01}(M,N) & \cdots & \phi_{ij}(M,N) & \cdots & \phi_{66}(M,N)
\end{bmatrix},
\qquad
\beta=\begin{bmatrix}\beta_{00}\\ \beta_{01}\\ \vdots\\ \beta_{ij}\\ \vdots\\ \beta_{66}\end{bmatrix}
\]
In the above formula the image size is assumed to be M × N and \(\phi_{ij}(x,y)=x^{i}y^{j}\). The vector A holds the reference correction coefficient of each pixel obtained from the training image. Taking the influence of noise into account, β is obtained by the least squares rule as
\[
\beta=(\psi^{T}\psi)^{-1}\psi^{T}A
\]
After β is determined, the bivariate high-order polynomial Im_shading(x, y) that fits the two-dimensional correction surface is determined, that is:
\[
\begin{aligned}
\mathrm{Im\_shading}(x,y)={}&\beta_{00}+\beta_{01}x+\beta_{02}x^{2}+\beta_{03}x^{3}+\beta_{04}x^{4}+\beta_{05}x^{5}+\beta_{06}x^{6}\\
&+\beta_{10}y+\beta_{11}xy+\beta_{12}x^{2}y+\beta_{13}x^{3}y+\beta_{14}x^{4}y+\beta_{15}x^{5}y+\beta_{16}x^{6}y\\
&+\beta_{20}y^{2}+\beta_{21}xy^{2}+\beta_{22}x^{2}y^{2}+\beta_{23}x^{3}y^{2}+\beta_{24}x^{4}y^{2}+\beta_{25}x^{5}y^{2}+\beta_{26}x^{6}y^{2}\\
&+\beta_{30}y^{3}+\beta_{31}xy^{3}+\beta_{32}x^{2}y^{3}+\beta_{33}x^{3}y^{3}+\beta_{34}x^{4}y^{3}+\beta_{35}x^{5}y^{3}+\beta_{36}x^{6}y^{3}\\
&+\beta_{40}y^{4}+\beta_{41}xy^{4}+\beta_{42}x^{2}y^{4}+\beta_{43}x^{3}y^{4}+\beta_{44}x^{4}y^{4}+\beta_{45}x^{5}y^{4}+\beta_{46}x^{6}y^{4}\\
&+\beta_{50}y^{5}+\beta_{51}xy^{5}+\beta_{52}x^{2}y^{5}+\beta_{53}x^{3}y^{5}+\beta_{54}x^{4}y^{5}+\beta_{55}x^{5}y^{5}+\beta_{56}x^{6}y^{5}\\
&+\beta_{60}y^{6}+\beta_{61}xy^{6}+\beta_{62}x^{2}y^{6}+\beta_{63}x^{3}y^{6}+\beta_{64}x^{4}y^{6}+\beta_{65}x^{5}y^{6}+\beta_{66}x^{6}y^{6}
\end{aligned}
\]
Step S103 above takes a bivariate polynomial of degree 6 as the example for fitting the generated two-dimensional correction surface; in practical applications, a polynomial of degree 2 to 10 in the pixel abscissa and ordinate can be used for the fitting, depending on the situation.
Step S104: calculate the lens shading correction coefficient corresponding to each pixel coordinate from the determined bivariate high-order polynomial.
Specifically:
Substitute each pixel coordinate (x, y) into the polynomial Im_shading(x, y) determined in step S103; the result is the lens shading correction coefficient corresponding to that pixel coordinate.
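As an illustrative sketch only, the fitting of step S103 and the evaluation of step S104 can be carried out as below in Python/NumPy. The function names are ours, degree 6 is chosen to match the example above, the coordinates are normalised to [0, 1] purely for numerical stability (the patent does not prescribe this), and the explicit formula β = (ψᵀψ)⁻¹ψᵀA is computed with a least-squares solver, which is mathematically equivalent but numerically more robust.

```python
import numpy as np
from itertools import product

def fit_shading_polynomial(ref_coeffs, degree=6):
    """Fit Im_shading(x, y) = sum_ij beta_ij * x**i * y**j to the reference
    correction surface by least squares, then evaluate it at every pixel."""
    h, w = ref_coeffs.shape
    # Pixel coordinates, normalised to [0, 1] (a numerical-stability choice of ours).
    xs = np.arange(1, w + 1) / w
    ys = np.arange(1, h + 1) / h
    X, Y = np.meshgrid(xs, ys)                 # each is H x W
    x, y = X.ravel(), Y.ravel()

    exponents = list(product(range(degree + 1), repeat=2))        # all (i, j) pairs
    psi = np.stack([x**i * y**j for i, j in exponents], axis=1)   # basis matrix psi
    A = ref_coeffs.ravel()                                        # reference coefficients

    beta, *_ = np.linalg.lstsq(psi, A, rcond=None)                # least-squares beta

    shading = (psi @ beta).reshape(h, w)       # step S104: per-pixel correction coefficients
    return beta, shading
```

For a 640 × 480 reference surface and degree 6 this produces 49 weight coefficients, and the returned `shading` array is the smooth per-pixel lens shading correction surface.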
The lens shading correction coefficients obtained by the steps of the above embodiment are determined for one type of lens using an image of a given resolution. That is, for the lens type that captured the training image, any other image captured at the same resolution as the training image can be corrected for lens shading using the determined coefficients.
To perform lens shading correction for other types of lenses, or for images of other resolutions, the same procedure of the first embodiment can be applied: capture a training image of the specified resolution with that type of lens and obtain the lens shading correction coefficient for each pixel coordinate of images taken by that lens at that resolution.
Once the lens shading correction coefficients are obtained, subsequent images acquired by the lens can be corrected with them to remove the lens shading from the images.
Example two:
the second embodiment of the present invention provides a lens shading correction method, a specific flow is shown in fig. 2, and the method includes:
step S201, an image to be corrected is input.
Step S202, obtaining the pixel value of the pixel point corresponding to the current pixel point coordinate according to the pixel point coordinate sequence, and obtaining the lens shadow correction coefficient corresponding to the current pixel point coordinate.
Specifically: the pixel values of the pixels are acquired sequentially in the order of the pixels in the image, from top to bottom and from left to right, and the lens shading correction coefficient is then obtained according to the coordinate of the current pixel. The lens shading correction coefficient corresponding to each pixel coordinate is determined by the method described in the first embodiment.
Step S203: multiply each acquired pixel value by its lens shading correction coefficient to obtain the corrected image, and output it.
Specifically: the pixel value of each pixel of the image to be corrected is multiplied by the corresponding lens shading correction coefficient to obtain the corrected pixel value; outputting the corrected pixel values of all pixels yields the lens-shading-corrected output image.
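A minimal sketch of step S203, assuming the full-resolution coefficient map from the first embodiment is already available as an array with the same height and width as the image (the names below are ours):

```python
import numpy as np

def apply_lens_shading_correction(image, coeff_map):
    """Multiply each pixel of the image to be corrected by its lens shading
    correction coefficient and clip the result back to the 8-bit range."""
    corrected = image.astype(np.float64) * coeff_map
    return np.clip(corrected + 0.5, 0, 255).astype(np.uint8)   # +0.5 rounds to nearest
```

For multi-channel images the same multiplication would simply be applied to each channel; this is our assumption, since the patent text speaks of a single pixel value per coordinate.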
In step S202 of the second embodiment, the lens shading correction coefficient corresponding to the current pixel coordinate can be obtained in at least the following two ways:
Mode one: the lens shading correction coefficient of every pixel coordinate is stored in software, and the coefficient corresponding to each pixel coordinate is acquired in pixel coordinate order.
For example, software control can be used to output the lens shading correction coefficient of each pixel coordinate sequentially, in pixel coordinate order.
Mode two: to save hardware storage space, only the lens shading correction coefficients of some pixel coordinates are extracted, according to a set extraction rule, from the calculated coefficients of all pixel coordinates and stored. For example, one coefficient is extracted and stored every several pixels; a preferred extraction is a grid of (2^n + 1) × (2^n + 1) coefficients, for example 17 × 17 coefficients are extracted and stored.
When the lens shading correction coefficients are stored in the second mode, the coefficient for each pixel coordinate must be calculated and output by an interpolation algorithm before step S203 is executed. In principle any existing interpolation algorithm can be used; to guarantee interpolation accuracy, linear interpolation is generally not considered, and a cubic interpolation algorithm is preferred.
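A sketch of this second mode, under the assumptions that a 17 × 17 grid of coefficients is stored and that SciPy's bicubic spline stands in for the cubic interpolation algorithm (the patent names no particular library or implementation):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def subsample_coefficients(coeff_map, grid=17):
    """Keep only a grid x grid subset of the full coefficient map (mode two)."""
    h, w = coeff_map.shape
    ys = np.linspace(0, h - 1, grid).round().astype(int)   # stored row positions
    xs = np.linspace(0, w - 1, grid).round().astype(int)   # stored column positions
    return coeff_map[np.ix_(ys, xs)], ys, xs

def interpolate_coefficients(stored, ys, xs, h, w):
    """Rebuild a full-resolution coefficient map by cubic spline interpolation."""
    spline = RectBivariateSpline(ys, xs, stored, kx=3, ky=3)
    return spline(np.arange(h), np.arange(w))
```

In a hardware implementation the interpolation would of course be realised in logic rather than with a library call; the sketch only illustrates the data flow of storing a sparse coefficient grid and rebuilding the full-resolution map from it.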
The second mode is the preferred implementation: it performs lens shading correction in hardware and has stable performance; storing only a sub-sampled grid of coefficients saves hardware storage space and keeps the implementation cost low; and producing the coefficient for each pixel coordinate synchronously with an algorithm such as cubic interpolation keeps the computational complexity low and the control simple.
Example three:
according to the method for determining lens shading correction coefficient provided by the first embodiment of the present invention, the third embodiment provides a corresponding device for determining lens shading correction coefficient, a schematic structural diagram of which is shown in fig. 3, and the method includes: a reference correction coefficient acquisition unit 31, a two-dimensional correction curved surface generation unit 32, a fitting unit 33, and a lens shading correction coefficient calculation unit 34. Wherein:
a reference correction coefficient obtaining unit 31, configured to obtain a reference correction coefficient of each pixel according to the training image and output the reference correction coefficient;
a two-dimensional correction curved surface generating unit 32, configured to receive the reference correction coefficient output by the reference correction coefficient obtaining unit 31, and generate a two-dimensional correction curved surface on the pixel coordinate plane;
a fitting unit 33 configured to fit the two-dimensional correction curved surface generated by the two-dimensional correction curved surface generation unit 32 with a binary high power polynomial of the pixel point coordinates;
and a lens shading correction coefficient calculation unit 34, configured to calculate a lens shading correction coefficient corresponding to each pixel coordinate according to the polynomial determined by the fitting unit 33.
In a preferred embodiment, a schematic structural diagram of the reference correction coefficient obtaining unit 31 is shown in fig. 4, and includes:
an obtaining subunit 311, configured to obtain a standard pixel value of each pixel of a training image, and obtain a real pixel value of each pixel of a captured image obtained after the training image is captured by a lens;
and the calculating subunit 312 is configured to calculate a ratio of the real pixel value of each pixel to the standard pixel value thereof, obtain a reference correction coefficient of each pixel, and output the reference correction coefficient.
Example four:
according to the lens shading correction method provided by the second embodiment of the present invention, the fourth embodiment provides a corresponding lens shading correction device, a schematic structural diagram of which is shown in fig. 5, and the method includes:
an image input unit 41 for receiving an image to be corrected;
a pixel value obtaining unit 42, configured to obtain and output a pixel value of a pixel point corresponding to a current pixel point coordinate according to the to-be-corrected image received by the image input unit 41 and according to a pixel point coordinate sequence;
a lens shading correction coefficient determining unit 43, configured to obtain a reference correction coefficient of each pixel point according to the training image, generate a two-dimensional correction curved surface on a pixel point coordinate plane from the reference correction coefficient, fit the generated two-dimensional correction curved surface with a binary high power polynomial of a pixel point coordinate, and calculate a lens shading correction coefficient corresponding to each pixel point coordinate according to the determined polynomial;
a lens shading correction coefficient output unit 44, configured to output a lens shading correction coefficient corresponding to the current pixel coordinate according to the lens shading correction coefficient corresponding to each pixel coordinate calculated by the lens shading correction coefficient determination unit 43;
a correction unit 45, configured to multiply the pixel value output by the pixel value acquisition unit 42 by a lens shading correction coefficient corresponding to the pixel point coordinate output by the lens shading correction coefficient output unit 44, and generate a corrected image;
an image output unit 46 for outputting the corrected image.
In a preferred embodiment, a schematic structural diagram of the lens shading correction coefficient output unit 44 is shown in fig. 6, and includes:
an extraction storage subunit 441, configured to extract and store the lens shading correction coefficients corresponding to part of the pixel coordinates according to a set extraction rule from the lens shading correction coefficients corresponding to each pixel coordinate calculated by the lens shading correction coefficient determining unit 43;
the interpolation calculation output subunit 442 is configured to calculate a lens shading correction coefficient corresponding to the current pixel coordinate according to the lens shading correction coefficient stored in the extraction storage subunit 441 and a set interpolation algorithm, and output the calculated lens shading correction coefficient to the correction unit 45.
According to the embodiments of the invention, a reference correction coefficient is first determined for each pixel from a training image and the image captured when the training image is photographed through the lens, and the reference correction coefficients of all pixels form a two-dimensional correction surface over the pixel coordinate plane. This surface is a noisy two-dimensional surface and is not a true reflection of the lens shading, so it is fitted with a bivariate high-order polynomial in the pixel coordinates to obtain a smooth surface; the lens shading correction coefficients are then calculated from the fitted polynomial and are therefore close to the true values. The lens shading correction coefficients determined in this way give a better lens shading correction effect on images subsequently captured by the lens. The method applies generally to different types of lenses: when images captured by a certain type of lens need lens shading processing, it is only necessary to capture a training image with that lens in advance; the method of the invention then determines the lens shading correction coefficients for images captured by that lens, and images captured by the lens are corrected according to the determined coefficients.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A method for determining lens shading correction coefficients, comprising:
obtaining a reference correction coefficient of each pixel point according to the training image, specifically: acquiring a standard pixel value of each pixel point of a training image, acquiring a real pixel value of each pixel point of a shot image obtained after the training image is shot by a lens, and calculating a ratio of the real pixel value of each pixel point to the standard pixel value of each pixel point to obtain a reference correction coefficient of each pixel point;
generating a two-dimensional correction curved surface on a pixel point coordinate plane by the reference correction coefficient, specifically: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient;
adopting a binary high power polynomial of a pixel point coordinate to fit the two-dimensional correction curved surface, which specifically comprises the following steps: setting a binary high-order power polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order power polynomial;
calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial, specifically: and substituting the coordinates of each pixel point into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to the coordinates of each pixel point.
2. The method of claim 1, wherein said fitting the two-dimensional surface with a binary higher power polynomial of pixel point coordinates comprises:
and fitting the two-dimensional correction curved surface by adopting a 2-10 power polynomial of the abscissa and the ordinate of the pixel point.
3. The method of claim 2, wherein said fitting the two-dimensional surface with a binary higher power polynomial of pixel point coordinates comprises:
and fitting the two-dimensional correction curved surface by adopting a 6 th power polynomial of the abscissa and the ordinate of the pixel point.
4. A lens shading correction method, comprising:
inputting an image to be corrected;
acquiring pixel values of pixels corresponding to the current pixel coordinates according to the pixel coordinate sequence, and acquiring a lens shadow correction coefficient corresponding to the current pixel coordinates; the method for determining the lens shading correction coefficient comprises the following steps: obtaining a reference correction coefficient of each pixel point according to the training image, specifically: acquiring a standard pixel value of each pixel point of a training image, acquiring a real pixel value of each pixel point of a shot image obtained after the training image is shot by a lens, and calculating a ratio of the real pixel value of each pixel point to the standard pixel value of each pixel point to obtain a reference correction coefficient of each pixel point; generating a two-dimensional correction curved surface on a pixel point coordinate plane by the reference correction coefficient, specifically: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient; adopting a binary high power polynomial of a pixel point coordinate to fit the two-dimensional correction curved surface, which specifically comprises the following steps: setting a binary high-order power polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order power polynomial; calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial, specifically: substituting each pixel point coordinate into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to each pixel point coordinate;
and multiplying the acquired pixel value by the acquired lens shading correction coefficient to obtain a corrected image and outputting the corrected image.
5. The method of claim 4, further comprising: extracting and storing lens shadow correction coefficients corresponding to part of pixel point coordinates according to a set extraction rule from the calculated lens shadow correction coefficients corresponding to each pixel point coordinate in advance;
the acquiring of the lens shading correction coefficient corresponding to the current pixel point coordinate includes:
and calculating the lens shadow correction coefficient corresponding to the current pixel point coordinate according to the stored lens shadow correction coefficient and a set interpolation algorithm.
6. The method as claimed in claim 5, wherein said extracting and storing lens shading correction coefficients corresponding to part of pixel coordinates according to a set extraction rule comprises: extracting and storing a lens shadow correction coefficient corresponding to a pixel coordinate every other plurality of pixels;
the set interpolation algorithm is a cubic interpolation algorithm.
7. A lens shading correction coefficient determination device, characterized by comprising:
the reference correction coefficient obtaining unit is used for obtaining and outputting a reference correction coefficient of each pixel point according to the training image, and comprises:
the acquisition subunit is used for acquiring a standard pixel value of each pixel point of a training image and obtaining a real pixel value of each pixel point of a shot image after the training image is shot by a lens;
the calculating subunit is used for calculating the ratio of the real pixel value of each pixel point to the standard pixel value thereof, obtaining and outputting the reference correction coefficient of each pixel point;
the two-dimensional correction curved surface generating unit is used for receiving the reference correction coefficient output by the reference correction coefficient acquiring unit and generating a two-dimensional correction curved surface on a pixel point coordinate plane, and specifically comprises: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient;
the fitting unit is used for fitting the two-dimensional correction curved surface generated by the two-dimensional correction curved surface generating unit by adopting a binary high power polynomial of a pixel point coordinate, and specifically comprises the following steps: setting a binary high-order power polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order power polynomial;
the lens shadow correction coefficient calculation unit is used for calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial determined by the fitting unit, and specifically comprises the following steps: and substituting the coordinates of each pixel point into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to the coordinates of each pixel point.
8. A lens shading correction apparatus, comprising:
an image input unit for receiving an image to be corrected;
the pixel value acquisition unit is used for acquiring and outputting the pixel values of the corresponding pixels of the current pixel coordinates according to the pixel coordinate sequence according to the image to be corrected received by the image input unit;
the lens shading correction coefficient determining unit is used for obtaining a reference correction coefficient of each pixel point according to the training image, and specifically comprises the following steps: acquiring a standard pixel value of each pixel point of a training image, acquiring a real pixel value of each pixel point of a shot image obtained after the training image is shot by a lens, and calculating a ratio of the real pixel value of each pixel point to the standard pixel value of each pixel point to obtain a reference correction coefficient of each pixel point; generating a two-dimensional correction curved surface on a pixel point coordinate plane by the reference correction coefficient, specifically: generating a two-dimensional correction curved surface on a pixel coordinate plane according to the abscissa and the ordinate of each pixel and a reference correction coefficient; adopting a binary high power polynomial of a pixel point coordinate to fit the two-dimensional correction curved surface, which specifically comprises the following steps: setting a binary high-order power polynomial with the horizontal and vertical coordinates of a pixel point as variables, and determining the weight coefficient of the polynomial to obtain the binary high-order power polynomial; calculating a lens shadow correction coefficient corresponding to each pixel point coordinate according to the polynomial, specifically: substituting each pixel point coordinate into the binary high power polynomial to calculate a lens shadow correction coefficient corresponding to each pixel point coordinate;
the lens shading correction coefficient output unit is used for outputting the lens shading correction coefficient corresponding to the current pixel point coordinate according to the lens shading correction coefficient corresponding to each pixel point coordinate calculated by the lens shading correction coefficient determining unit;
the correction unit is used for multiplying the pixel value output by the pixel value acquisition unit by a lens shading correction coefficient of the corresponding pixel point coordinate output by the lens shading correction coefficient output unit to generate a corrected image;
and an image output unit for outputting the corrected image.
9. The apparatus of claim 8, wherein the lens shading correction coefficient output unit comprises:
the extraction storage subunit is used for extracting and storing the lens shading correction coefficients corresponding to part of the pixel coordinates according to a set extraction rule from the lens shading correction coefficients corresponding to each pixel coordinate calculated by the lens shading correction coefficient determining unit;
and the interpolation calculation output subunit is used for calculating and outputting the lens shading correction coefficient corresponding to the current pixel point coordinate according to the stored lens shading correction coefficient and the set interpolation algorithm.
CN2008101048750A 2008-04-24 2008-04-24 Lens shade correction index confirming method, lens shade emendation method and device Active CN101271196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101048750A CN101271196B (en) 2008-04-24 2008-04-24 Lens shade correction index confirming method, lens shade emendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101048750A CN101271196B (en) 2008-04-24 2008-04-24 Lens shade correction index confirming method, lens shade emendation method and device

Publications (2)

Publication Number Publication Date
CN101271196A CN101271196A (en) 2008-09-24
CN101271196B true CN101271196B (en) 2010-12-08

Family

ID=40005265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101048750A Active CN101271196B (en) 2008-04-24 2008-04-24 Lens shade correction index confirming method, lens shade emendation method and device

Country Status (1)

Country Link
CN (1) CN101271196B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510962B (en) * 2008-12-31 2012-03-21 昆山锐芯微电子有限公司 Method and apparatus for correcting lens shadow
CN101505431B (en) * 2009-03-18 2014-06-25 北京中星微电子有限公司 Shadow compensation method and apparatus for image sensor
CN103377467B (en) * 2012-04-25 2017-03-29 华晶科技股份有限公司 Image Compensation Method
CN103377474B (en) * 2012-04-27 2016-08-03 比亚迪股份有限公司 Camera lens shadow correction coefficient determines method, camera lens shadow correction method and device
US9186909B1 (en) 2014-09-26 2015-11-17 Intel Corporation Method and system of lens shading color correction using block matching
CN105681680A (en) * 2016-02-22 2016-06-15 信利光电股份有限公司 Image vignetting correction method, device and system
WO2018150685A1 (en) * 2017-02-20 2018-08-23 ソニー株式会社 Image processing device, image processing method, and program
CN107590840B (en) * 2017-09-21 2019-12-27 长沙全度影像科技有限公司 Color shadow correction method based on grid division and correction system thereof
CN109461126B (en) * 2018-10-16 2020-06-30 重庆金山科技(集团)有限公司 Image distortion correction method and system
CN111818239B (en) * 2020-03-12 2023-05-02 成都微光集电科技有限公司 Lens shading correction method in image sensor
CN111556228B (en) * 2020-05-15 2022-07-22 展讯通信(上海)有限公司 Method and system for correcting lens shadow
CN112202986B (en) * 2020-09-30 2023-04-28 安谋科技(中国)有限公司 Image processing method, image processing apparatus, readable medium and electronic device thereof
CN113362253B (en) * 2021-06-30 2023-10-13 成都纵横自动化技术股份有限公司 Image shading correction method, system and device
CN113747066B (en) * 2021-09-07 2023-09-15 汇顶科技(成都)有限责任公司 Image correction method, image correction device, electronic equipment and computer readable storage medium
CN114078426B (en) * 2021-09-30 2022-12-27 卡莱特云科技股份有限公司 Method for eliminating influence of camera curve on lamp point image and display screen correction method
CN115866241A (en) * 2022-11-29 2023-03-28 昆山丘钛光电科技有限公司 Method, device, medium and equipment for detecting shadow correction effect of PD lens
CN116342435B (en) * 2023-05-30 2023-08-22 合肥埃科光电科技股份有限公司 A line scan camera distortion correction method, computing equipment and storage medium

Also Published As

Publication number Publication date
CN101271196A (en) 2008-09-24

Similar Documents

Publication Publication Date Title
CN101271196B (en) Lens shade correction index confirming method, lens shade emendation method and device
CN103377474B (en) Camera lens shadow correction coefficient determines method, camera lens shadow correction method and device
CN111160298B (en) Robot and pose estimation method and device thereof
KR100850931B1 (en) Rectification System ? Method of Stereo Image in Real Time
CN107633536A (en) A kind of camera calibration method and system based on two-dimensional planar template
CN111311632B (en) Object pose tracking method, device and equipment
CN114697623B (en) Projection plane selection and projection image correction method, device, projector and medium
JP2011045059A (en) Method and control unit for correcting distortion of camera image
US11734789B2 (en) Systems and methods for image distortion correction
CN107833186A (en) A kind of simple lens spatial variations image recovery method based on Encoder Decoder deep learning models
CN110599586A (en) Semi-dense scene reconstruction method and device, electronic equipment and storage medium
CN116777769A (en) Method and device for correcting distorted image, electronic equipment and storage medium
US8542919B2 (en) Method and system for correcting lens shading
CN116295114B (en) High-reflection surface structured light three-dimensional measurement method based on main and auxiliary double-view multi-gray level projection
EP3225024A1 (en) Method and system for determining parameters of an image processing pipeline of a digital camera
CN117974512A (en) Image processing method and device
CN119224743A (en) A method for calibrating extrinsic parameters of laser radar and camera
CN105956530A (en) Image correction method and image correction device
CN109446945A (en) Three-dimensional model processing method and device, electronic equipment and computer readable storage medium
KR101165810B1 (en) A method and an apparatus extracting a depth image information using a stereo camera
CN111489313B (en) CFA image demosaicing method and device
CN117575912A (en) Image super-resolution reconstruction method, system, equipment and medium
US8970744B2 (en) Two-dimensional lens shading correction
CN116245968A (en) Method for generating HDR image based on LDR image of transducer
CN106408499B (en) Method and device for acquiring reverse mapping table for image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210201

Address after: No. 602, 6th floor, shining building, 35 Xueyuan Road, Haidian District, Beijing 100083

Patentee after: BEIJING ZHONGXINGTIANSHI TECHNOLOGY Co.,Ltd.

Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor

Patentee before: Vimicro Corp.

TR01 Transfer of patent right