
CN119255101B - Focusing method, electronic device and storage medium - Google Patents

Focusing method, electronic device and storage medium

Info

Publication number
CN119255101B
CN119255101B (application CN202410301998.2A)
Authority
CN
China
Prior art keywords
focusing
pixel
nxn
sensitivity
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410301998.2A
Other languages
Chinese (zh)
Other versions
CN119255101A (en)
Inventor
眭新雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202410301998.2A
Publication of CN119255101A
Application granted
Publication of CN119255101B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a focusing method, an electronic device and a storage medium, and relates to the technical field of image processing. First, the difference in sensitivity of the NxN pixels at the focal plane position corresponding to an NxN OCL sensor pre-configured on a photographing device is calibrated, so as to reduce that difference. Then, K calibration images are acquired at K preset focusing positions by using the calibrated NxN OCL sensor, the difference values of the NxN pixel sensitivity at the K focusing positions are calibrated by using the K calibration images and a focusing response function, and the focusing parameters are fitted during the calibration. Next, the calibrated NxN OCL sensor is used to acquire P target images at P focusing positions, and the difference values of the NxN pixel sensitivity corresponding to the P target images are calculated. Finally, the quasi-focus position is accurately fitted by using the obtained difference values and the focusing parameters, and accurate focusing is performed according to the lens position of the photographing device determined from the quasi-focus position.

Description

Focusing method, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a focusing method, an electronic device, and a storage medium.
Background
With the continuous development of technology, people can use photographing devices such as mobile phones, tablet computers and cameras to capture and record the little moments of life anytime and anywhere. In the photographing process, whether focusing can be achieved and whether focusing is accurate often affect the clarity of the photograph.
The existing common focusing modes include on-chip phase focusing and contrast focusing. Both show good focus/defocus recognition capability for richly textured areas and can accurately judge the quasi-focus position. However, when shooting solid-color scenes, such as the sky or a white wall, the recognition ability of both methods is weak, and problems such as repeated pushing and pulling of the focus, hysteresis, and focusing errors easily occur. To improve this situation, the currently adopted approach is to add a laser focusing module to the photographing device so as to improve the focusing effect when shooting a solid-color scene by sensing the absolute distance. However, this approach suffers from a limited ranging distance and increased cost.
Disclosure of Invention
In order to solve the above problems, the application provides a focusing method, an electronic device and a storage medium, which aim to realize accurate quasi-focus estimation when shooting a solid-color scene by utilizing the multi-pixel phase focusing sensor already configured on the photographing device. Accurate focusing can thus be realized without adding any additional focusing device, which not only solves the problem of poor focusing effect in solid-color scenes but also reduces the focusing cost.
In a first aspect, the present application provides a focusing method. The method includes: calibrating the difference in sensitivity of the NxN pixels at the focal plane position corresponding to an NxN pixel phase focusing sensor pre-configured on a photographing device, so as to calibrate the difference values of the NxN pixel sensitivity at the focal plane position and minimize that difference; then acquiring K calibration images at K preset focusing positions by using the calibrated NxN pixel phase focusing sensor, so as to calibrate, by using the K calibration images and a focusing response function (whose specific source is not limited; for example, it may be pre-constructed at the factory end of the NxN pixel phase focusing sensor), the different difference values of the NxN pixel sensitivity at the K preset focusing positions, and fitting the focusing parameters of the focusing response function during the calibration; then acquiring P target images at P focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference values of the NxN pixel sensitivity corresponding to the P target images. The quasi-focus position can then be accurately fitted by using the focusing parameters of the focusing response function and the difference values of the NxN pixel sensitivity corresponding to the P target images, and accurate focusing is performed according to the lens position of the photographing device determined from the quasi-focus position, so as to obtain an image with a better focusing effect.
Therefore, in this focusing method, no new focusing device needs to be added to the photographing device (such as a mobile phone). Only the pixel sensitivity difference corresponding to the NxN pixel phase focusing sensor (referring to the image sensor used for acquiring image data) already configured in the photographing device is utilized, so that accurate quasi-focus estimation and distance recognition can be realized when shooting a solid-color scene, solving the problem of poor focusing effect in solid-color scenes and reducing the focusing cost.
In one possible implementation, the value of N is 2 and the NxN pixel phase focusing sensor is a 2x2 pixel phase focusing sensor, which comprises a microlens with four pixels under it. Calibrating the difference in sensitivity of the NxN pixels at the focal plane position corresponding to the NxN pixel phase focusing sensor, so as to calibrate the difference values of the NxN pixel sensitivity at the focal plane position and reduce that difference, may include: calibrating the difference in the four-pixel sensitivity at the focal plane position corresponding to the 2x2 pixel phase focusing sensor, so as to calibrate the difference values of the four-pixel sensitivity at the focal plane position and reduce that difference, thereby improving the focusing effect.
In one possible implementation, calibrating the difference in the four-pixel sensitivity at the focal plane position corresponding to the 2x2 pixel phase focusing sensor, so as to calibrate the difference values of the four-pixel sensitivity at the focal plane position and reduce that difference, may include: shooting a uniformly illuminated light panel with the photographing device, and setting half of the sum of the lens focusing position at infinity and the lens focusing position at the closest focusing distance as the calibration focusing position; acquiring a calibration image at the calibration focusing position by using the 2x2 pixel phase focusing sensor, where the calibration image includes four color channels and the sub-channels under each color channel share the color filter of that color channel; performing pixel reduction on the calibration image by using a preset pixel scaling algorithm to obtain a reduced calibration image; merging the sub-channels under each color channel contained in the reduced calibration image, and calculating the difference value between each merged channel and the pixel mean of its color channel; and calculating the compensation value of each merged channel from that difference value, and compensating the corresponding channel pixels with the compensation value. In this way the difference values of the four-pixel sensitivity at the focal plane position are calibrated more accurately, and the difference in the four-pixel sensitivity at the focal plane position is minimized.
In one possible implementation, the preset pixel scaling algorithm is a bilinear interpolation algorithm.
In one possible implementation, merging the sub-channels under each color channel included in the reduced calibration image and calculating the difference value between each merged channel and the pixel mean of its color channel may include: calculating the mean of the pixels of each color channel contained in the reduced calibration image, calculating the ratio of each merged channel's pixels to the mean of the corresponding color channel's pixels, and using the difference between that ratio and the number 1 as the difference value of the corresponding channel, so as to improve the calculation efficiency and accuracy of the difference value.
In one possible implementation, calculating the compensation value of each merged channel from the difference value between that channel and the pixel mean of its color channel may include: calculating, for each merged channel, the sum of its difference value and the number 1, and taking the ratio of the number 1 to that sum as the compensation value of the corresponding channel, so as to improve the calculation accuracy of the compensation value.
In one possible implementation, the value of K is 5, and the K preset focusing positions are 5 equally spaced step positions of the lens. Acquiring K calibration images at the K preset focusing positions by using the calibrated NxN pixel phase focusing sensor may include: acquiring one calibration image at each of the 5 equally spaced lens step positions by using the calibrated NxN pixel phase focusing sensor, obtaining 5 calibration images, so as to improve the calibration efficiency.
In one possible implementation, calibrating the different difference values of the NxN pixel sensitivity at the K preset focusing positions by using the K calibration images and the focusing response function, and fitting the focusing parameters of the focusing response function during the calibration, may include: calculating the average pixel values of the color channels contained in the K calibration images, calculating the difference values of the color channels by using the average pixel values, and calibrating the different difference values of the NxN pixel sensitivity at the K preset focusing positions by using those difference values and the focusing response function, fitting the focusing parameters of the focusing response function during the calibration. This can improve the fitting accuracy of the focusing parameters.
In one possible implementation, the value of N is 2 and the NxN pixel phase focusing sensor is a 2x2 pixel phase focusing sensor comprising a microlens with four pixels under it. Calibrating the different difference values of the NxN pixel sensitivity at the K preset focusing positions by using the K calibration images and the focusing response function, and fitting the focusing parameters during the calibration, may include: calculating the average pixel values of the four color channels contained in the 5 calibration images, calculating the difference values of the four color channels by using the average pixel values, and calibrating the different difference values of the four-pixel sensitivity at the 5 preset focusing positions by using those difference values and the focusing response function, so that the focusing parameters of the focusing response function are fitted more quickly during the calibration, improving the fitting efficiency.
In one possible implementation, the value of P is 2. Acquiring P target images at P focusing positions by using the calibrated NxN pixel phase focusing sensor and calculating the difference values of the NxN pixel sensitivity corresponding to the P target images may include: acquiring 2 target images at 2 focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference values of the NxN pixel sensitivity corresponding to the 2 target images, so as to improve the calculation efficiency.
In one possible implementation, fitting the quasi-focus position by using the focusing parameters of the focusing response function and the difference values of the NxN pixel sensitivity corresponding to the P target images, and focusing according to the lens position of the photographing device determined from the quasi-focus position, may include: fitting the quasi-focus position by a least squares method using the focusing parameters of the focusing response function and the difference values of the NxN pixel sensitivity corresponding to the 2 target images, and focusing according to the lens position of the photographing device determined from the quasi-focus position, thereby improving the focusing effect and reducing the focusing cost.
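For illustration only, the following minimal sketch (in Python, not part of the claimed method) shows how a quasi-focus position could be recovered by least squares from a handful of (lens position, sensitivity difference) samples. The exact form of the focusing response function is not disclosed here; the quadratic model used below is only an assumption motivated by the behavior described later (the difference is small at the quasi-focus and large at the front-focus and back-focus positions), and all names and sample values are hypothetical.

import numpy as np

def fit_quasi_focus(lens_positions, sensitivity_diffs):
    # Fit diff(z) ~ a*z^2 + b*z + c by least squares and return the vertex
    # z0 = -b / (2a), taken as the estimated quasi-focus position.
    a, b, _c = np.polyfit(lens_positions, sensitivity_diffs, deg=2)
    return -b / (2.0 * a)

# Five (position, diff) calibration samples, as in the K = 5 case above;
# the numbers are made up for illustration.
positions = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
diffs = np.array([0.09, 0.03, 0.01, 0.04, 0.10])
print(fit_quasi_focus(positions, diffs))  # prints a value near 0.5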
In a second aspect, the present application provides an electronic device comprising a memory storing a computer program and a processor for invoking and executing the computer program to implement the focusing method according to any one of the first aspects above.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program for implementing the focusing method of any one of the above first aspects when being executed by a processor of an electronic device.
In a fourth aspect, the present application provides a computer program product for, when run on a computer, causing the computer to perform the focusing method as claimed in any one of the first aspects.
Drawings
FIG. 1 is a schematic view of a scene provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a software structure block diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a flowchart of a focusing method according to an embodiment of the present application;
FIG. 5 is a diagram showing an example of the brightness distribution of pixels at the front-focus, quasi-focus, and back-focus positions according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the R, Gr, Gb and B color channels included in a calibration image according to an embodiment of the present application;
FIG. 7 is an exemplary diagram of a fitting process of a focusing response function according to an embodiment of the present application;
FIG. 8 is an exemplary view of a focusing scene of a photographing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the application and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
"A plurality of" in the embodiments of the present application means two or more. It should be noted that, in the description of the embodiments of the present application, the terms "first," "second," and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance or a sequential order.
For clarity and conciseness in the description of the embodiments below, a brief introduction to related concepts or technologies is first given:
Contrast focus (Contrast Detection Auto Focus, CDAF) is a technique that achieves focus by analyzing contrast information of an image. The principle is that the definition of the image is determined based on the contrast of the image, so that the focal length of the camera lens is adjusted to achieve the effect of definition. In contrast focus, a photographing device (e.g., a camera) photographs several images at different focal lengths, and then determines which focal length image is clearer by analyzing the contrast of objects in the images. An image with higher contrast generally indicates that the image is more sharp. The focusing method is to judge the definition of the image by using the contrast of the edge and detail of the object in the image.
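To make the contrast criterion concrete, the following generic sketch (Python) computes one commonly used sharpness score, the variance of the image Laplacian; it is an illustrative example of the contrast-focusing technique in general, not the metric of any particular device described in this application.

import cv2
import numpy as np

def contrast_score(gray: np.ndarray) -> float:
    # Stronger edges give a larger Laplacian variance, i.e. a sharper image;
    # the lens position whose frame scores highest is taken as best focus.
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())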
Phase Focus (Phase Detection Autofocus, PDAF), a technique used for Auto Focus (AF), uses pixels on an image sensor to divide into different phases for comparison to determine the Focus position. The technology is generally applied to photographing equipment such as cameras and the like, and can be focused rapidly and accurately. The PDAF uses the difference in optical phase between specific pixels on the image sensor to determine the focus position. On an image sensor, light is split into two or more different paths after passing through a lens to reach a pixel. The optical path differences of these paths cause optical phase differences between different pixels. When the image is not at the focus position, the phase-focused image sensor can detect the focus deviation and quantify the deviation by analyzing the optical phase difference between the two pixels. By continuously adjusting the lens position and detecting the change of the phase difference, the optimal focusing position can be rapidly positioned.
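As a concrete illustration of the phase comparison, the generic sketch below estimates the shift between two pixel-phase signals by cross-correlation; real PDAF pipelines are more elaborate, but the idea of mapping a signal shift to a defocus direction and amount is the same. The function is illustrative and not taken from this application.

import numpy as np

def phase_shift(left: np.ndarray, right: np.ndarray) -> int:
    # Lag at which the two (mean-removed) phase signals overlap best; its
    # sign gives the defocus direction and its magnitude the defocus amount.
    corr = np.correlate(left - left.mean(), right - right.mean(), mode="full")
    return int(np.argmax(corr) - (len(right) - 1))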
In order to make those skilled in the art understand the scheme of the application more clearly, the application scenario of the technical scheme of the application is explained next.
Referring to fig. 1, a schematic view of a scenario provided by an embodiment of the present application is shown.
In this example scenario, when a user uses a photographing device such as a mobile phone or a camera to shoot a solid-color scene, whether contrast focusing or phase focusing is adopted, the lens position needs to be pushed and pulled repeatedly, and the corresponding phase difference (PD value) or contrast value is a flat line as shown in fig. 1, so that the quasi-focus and the accurate lens position cannot be found. This is because contrast focusing generally relies on contrast differences present in the image, whereas when photographing a solid-color scene as shown in fig. 1, the image lacks significant features or contrast, which makes it difficult for contrast focusing to determine the quasi-focus position. In a solid-color scene, all areas of the image look similar and lack obvious edges or details, so the contrast is very low. Contrast focusing needs to find contrast differences between regions of the image to determine the quasi-focus position; in a solid-color scene such differences are small or almost non-existent, making the quasi-focus difficult to identify accurately. In this case, therefore, problems such as repeated pushing and pulling of the focus, hysteresis, and focusing errors occur. Phase focusing likewise has difficulty acquiring enough phase-difference information from a solid-color scene, because the whole scene or region lacks obvious features or details, so phase focusing also cannot accurately judge the quasi-focus position.
In order to improve this situation, the currently adopted approach is to add a laser focusing module to photographing devices such as mobile phones and cameras, so as to improve the focusing effect when shooting a solid-color scene by sensing the absolute distance. However, this approach suffers from a limited ranging distance and increased cost.
In order to overcome the above technical problems, the application provides a focusing method. It uses the pixel sensitivity difference corresponding to the NxN pixel phase focusing sensor (referring to the image sensor used for acquiring image data, which may also be expressed as an NxN On-Chip Lens sensor, abbreviated NxN OCL sensor) pre-installed on the photographing device to realize accurate quasi-focus estimation and distance recognition when shooting a solid-color scene. Accurate focusing can thus be realized without adding any new focusing device, which solves the problem of poor focusing effect in solid-color scenes and reduces the focusing cost.
The application does not limit the value of N, which can be set according to actual conditions and empirical values, as long as N is a positive integer greater than 1. For example, when N is 2, the NxN pixel phase focusing sensor is a 2x2 pixel phase focusing sensor (i.e., a QPD (Quadrant Photodiode) sensor, or 2x2 OCL sensor). Thus, accurate quasi-focus estimation and distance recognition when shooting a solid-color scene can be realized by utilizing the sensitivity difference of the four pixels corresponding to the 2x2 OCL sensor, and accurate focusing can further be realized.
It should be noted that the focusing method provided by the embodiments of the application can be applied to electronic devices (i.e., photographing devices) with a photographing function, such as mobile phones, tablet computers, cameras, personal digital assistants (Personal Digital Assistant, PDA), desktop computers, laptop computers, notebook computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, wearable devices, and the like.
In order to make those skilled in the art understand the focusing method provided by the present application more clearly, the hardware architecture and the software system architecture of an electronic device implementing the focusing method will first be described in detail.
Referring to fig. 2, a schematic diagram of an electronic device (taking a photographing device such as a mobile phone as an example) according to an embodiment of the present application is shown.
As shown in fig. 2, the electronic device 200 may include a processor 210, a mobile communication module 220, a wireless communication module 230, a sensor module 240, a display 250, an internal memory 260, a camera 270, an audio module 280, a speaker 280A, a receiver 280B, a microphone 280C, an earphone interface 280D, an antenna group 1, and an antenna group 2.
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 200. In other embodiments of the application, electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be separated, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, for example, processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to complete instruction fetching and instruction execution control. For example, accurate quasi-focus estimation and distance recognition when shooting a solid-color scene can be realized according to the sensitivity difference of each pixel corresponding to the NxN pixel phase focusing sensor 240 pre-installed on the electronic device (photographing device) 200, so that accurate focusing on the solid-color scene is realized and the focusing effect is improved.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The sensor module 240 may be used to obtain data signals of various aspects related to the electronic device 200 as a basis for implementing corresponding functions. In some embodiments, the sensor module 240 may include, but is not limited to, a pressure sensor, a gyroscope sensor, an image sensor, and the like. The image sensor (an NxN OCL sensor in the present application) may be used to acquire image data (such as a Raw image) around the electronic device and transmit the acquired image data to the electronic device 200, so that the electronic device 200 can calculate the difference values of the NxN pixel sensitivity under each color channel corresponding to the NxN pixel phase focusing sensor from the image data, accurately fit the quasi-focus position from those difference values, and thereby realize accurate focusing; see the related description of the subsequent embodiments for details.
Internal memory 260 may be used to store computer-executable program code that includes instructions. The internal memory 260 may include a storage program area and a storage data area. The storage program area may store an operating system, an application program (such as a sound collection function, an image capturing function, etc.) required for at least one function, and the like. The storage data area may store data created during use of the electronic device 200 (e.g., audio data, image data, etc.), and so on. In addition, the internal memory 260 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash memory (universal flash storage, UFS), and the like. The processor 210 performs various functional applications of the electronic device 200 and data processing by executing instructions stored in the internal memory 260 and/or instructions stored in a memory provided in the processor.
In some embodiments, the internal memory 260 stores instructions for performing the focusing method. The processor 210 may execute the instructions stored in the internal memory 260 to perform the following functions: first, calibrating the difference in sensitivity of the NxN pixels at the focal plane position corresponding to the NxN OCL sensor pre-configured in the electronic device (photographing device) 200, so as to calibrate the difference values of the NxN pixel sensitivity at the focal plane position and minimize that difference; then acquiring K calibration images at K preset focusing positions by using the calibrated NxN pixel phase focusing sensor, calibrating the different difference values of the NxN pixel sensitivity at the K preset focusing positions by using the K calibration images and a focusing response function (whose specific source is not limited; for example, it may be pre-constructed at the factory of the NxN OCL sensor module), and fitting the focusing parameters of the focusing response function during the calibration; then acquiring P target images at P focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference values of the NxN pixel sensitivity corresponding to the P target images. The quasi-focus position can then be accurately fitted by using the focusing parameters of the focusing response function and those difference values, and accurate focusing is performed according to the lens position of the photographing device determined from the quasi-focus position, so as to obtain an image with a better focusing effect.
The display 250 is used for displaying images, videos, etc., such as displaying captured sky or white wall solid-color scenes. The display 250 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), MiniLED, MicroLED, Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 200 may include 1 or more display screens 250.
The camera 270 is used to capture still images or video; the object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or similar format. In some embodiments, the electronic device 200 may include 1 or more cameras 270.
The electronic device 200 implements display functions through a GPU, a display screen 250, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 250 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The electronic device 200 may implement audio functions through an audio module 280, a speaker 280A, a receiver 280B, a microphone 280C, an earphone interface 280D, an application processor, and the like. Such as input and output of speech, e.g., music playing, recording, etc.
The audio module 280 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 280 may also be used to encode and decode audio signals. In some embodiments, the audio module 280 may be disposed in the processor 210, or some functional modules of the audio module 280 may be disposed in the processor 210.
Speaker 280A, also known as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 200 may listen to music, or to hands-free conversations, through the speaker 280A.
A receiver 280B, also known as a "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 200 is answering a telephone call or voice message, the voice can be received by placing the receiver 280B close to the human ear.
Microphone 280C, also known as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can speak near the microphone 280C, inputting a sound signal into the microphone 280C.
The earphone interface 280D is used to connect a wired earphone, and does not limit the standard attributes of the interface.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only illustrative, and does not limit the structure of the electronic device 200.
The wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 220, the wireless communication module 230, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 220 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200. The mobile communication module 220 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 220 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 220 may amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate the electromagnetic waves. In some embodiments, at least some of the functional modules of the mobile communication module 220 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 220 may be provided in the same device as at least some of the modules of the processor 210.
The wireless communication module 230 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc., applied to the electronic device 200. The wireless communication module 230 may be one or more devices that integrate at least one communication processing module. The wireless communication module 230 receives electromagnetic waves via the antenna 2, modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 230 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it into electromagnetic waves for radiation via the antenna 2.
In addition, on top of the above components, the electronic device 200 runs an operating system, such as the iOS operating system, the Android operating system, or the Windows operating system. Applications may be installed and run on the operating system.
Referring to fig. 3, a software structure diagram of an electronic device (taking a photographing device such as a mobile phone as an example) according to an embodiment of the present application is shown.
The software system of the electronic device (for example, a mobile phone) 200 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of the electronic device 200 is illustrated.
The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into five layers, from top to bottom, an application layer (APK), an application Framework layer (Framework), a Hardware Abstraction Layer (HAL), a driver layer, and a hardware layer, respectively.
The application layer may include a series of application packages (APK). As shown in fig. 3, the application packages may include camera, navigation, WLAN, Bluetooth, gallery and other applications. When the user holds the electronic device 200 to shoot (for example, the solid-color scene shown in fig. 1), the camera application can communicate with the camera-related devices through the camera access interface of the framework layer to request camera functions, acquire image data, and the like.
The application framework layer (framework layer) provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 3, the application framework layer may include a window manager, notification manager, resource manager, camera access interface (including but not limited to camera management, camera device, etc.), and the like. An application framework layer may be used to enable interaction of camera services and camera APIs. I.e. it provides a unified interface enabling different camera hardware to interact with different camera applications. The frame layer is also used to handle many general aspects of camera functions such as auto focus, motion focus, etc.
Wherein the window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, take screenshots, and the like.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
In this embodiment, a Hardware Abstraction Layer (HAL) may be provided with a standard set of interfaces that enable the framework layer to communicate with camera hardware of a variety of different manufacturers without knowledge of underlying hardware details. The Hardware Abstraction Layer (HAL) stores therein a hardware abstraction layer, a camera algorithm library, and the like. It should be noted that, the focusing algorithm provided by the present application may be stored in a camera algorithm library of the HAL layer, as shown in fig. 3.
The focusing algorithm provided by the application accurately fits the quasi-focus position by calculating the difference values of the NxN pixel sensitivity under each color channel corresponding to the NxN pixel phase focusing sensor 240, thereby realizing accurate focusing. Specifically, first, the difference in sensitivity of the NxN pixels at the focal plane position corresponding to the NxN OCL sensor pre-configured in the electronic device (for example, a mobile phone) 200 is calibrated, so as to calibrate the difference values of the NxN pixel sensitivity at the focal plane position and minimize that difference. Then K calibration images are acquired at K preset focusing positions by using the calibrated NxN pixel phase focusing sensor, the different difference values of the NxN pixel sensitivity at the K preset focusing positions are calibrated by using the K calibration images and the focusing response function, and the focusing parameters of the focusing response function are fitted during the calibration. Then the calibrated NxN pixel phase focusing sensor is used to acquire P target images at P focusing positions, and the difference values of the NxN pixel sensitivity corresponding to the P target images are calculated. The quasi-focus position can then be accurately fitted by using the focusing parameters of the focusing response function and those difference values, and accurate focusing is performed according to the lens position of the photographing device determined from the quasi-focus position, so as to obtain an image with a better focusing effect.
The hardware layer may include the hardware components of the electronic device (e.g., a cell phone) 200 set forth above. By way of example, fig. 3 shows an NxN pixel phase focus sensor, an image signal processor, a digital signal processor, a graphics processor.
The NxN pixel phase focusing sensor is used for performing image exposure, focusing processing and the like.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 200 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Therefore, through the cooperation of the application layer, the framework layer, the HAL layer, the driver layer and the hardware layer, interoperability of different camera applications and hardware devices on the Android platform can be realized, and the quasi-focus can be accurately estimated when shooting a solid-color scene, so that accurate focusing can be realized without adding any focusing device, improving the focusing effect for solid-color scenes and reducing the focusing cost.
The technical solutions involved in the following embodiments may be implemented in an electronic device having the above hardware architecture and software architecture.
Next, a detailed description will be given of a specific implementation procedure of the focusing method provided by the present application:
As shown in fig. 4, the specific implementation procedure of the focusing method may include the following steps S401 to S405:
S401, calibrating the difference of the sensitivity of the NxN pixels at the focal plane position corresponding to the NxN pixel phase focusing sensor to calibrate the difference of the sensitivity of the NxN pixels at the focal plane position and reduce the difference of the sensitivity of the NxN pixels at the focal plane position, wherein N is a positive integer greater than 1.
It should be noted that, in order to achieve accurate focusing of a photographing device (such as a mobile phone or tablet computer) on a solid-color scene without adding any new focusing device, the application sets the image sensor pre-installed in the photographing device as an NxN pixel phase focusing sensor (NxN OCL sensor), and accurately fits the quasi-focus position by calculating the difference values of the NxN pixel sensitivity under each color channel corresponding to this sensor, thereby realizing accurate focusing. In this way, accurate focusing can be realized without adding any new focusing device, the problem of poor focusing effect in solid-color scenes is solved, and the focusing cost is reduced.
The specific composition structure of the NxN OCL sensor is not limited, and the structure of the NxN OCL sensor can be characterized by comprising micro lenses, wherein 1 micro lens corresponds to NxN pixels, and N is a positive integer larger than 1. For example, when the value of N is 2, the NxN OCL sensor is a 2x2 OCL sensor, and four pixels are corresponding to one microlens. Next, the construction of the NxN OCL sensor will be described in detail taking the 2x2 OCL sensor as an example, and specifically may include:
(1) Microlenses: in a 2x2 OCL sensor, each microlens covers four adjacent pixels, forming a 2x2 pixel area. The purpose of the microlenses is to focus light from different directions onto the corresponding pixels. Such a layout helps to improve the light receiving efficiency and light utilization of the sensor.
(2) Color filters (CFA): in a 2x2 OCL sensor, a four-pixel Bayer array (Bayer pattern) is typically used for the arrangement of the color filters. With this arrangement, each 2x2 pixel region contains one red filter, one blue filter, and two green filters, so that the pixel array captures information of the three colors red (R), green (G) and blue (B), realizing full-color image acquisition.
(3) Pixel arrangement: the four pixels under each microlens are distributed in a 2x2 array, each pixel corresponding to one color filter. This layout enables each microlens to focus and distribute light to its corresponding pixels, thereby capturing the light signal more effectively.
Therefore, based on the structural characteristics of the NxN OCL sensor (taking the 2x2 OCL sensor as an example), the application can use the NxN pixels under one microlens to perceive the matching between the lens chief ray angle and the sensor chief ray angle, calculate the NxN pixel sensitivity difference, and realize accurate quasi-focus estimation and distance recognition when shooting a solid-color scene, thereby realizing accurate focusing.
In particular, still taking a 2x2 OCL sensor as an example, when the chief ray angle (Chief Ray Angle, CRA) is aligned with the four-pixel microlens, i.e., with a strict CRA match, the rays are distributed more uniformly over the four pixels, making their sensitivities relatively uniform. If the CRA is not exactly matched to the four-pixel microlens, the angle of the incident light causes slight differences among the pixels. Here, the chief ray angle (CRA) refers to the angle between the line from the optical center of the optical system to a certain pixel and the normal of the pixel plane.
The difference in sensitivity of the four pixels is minimal when the chief ray angle of the lens and the acceptance chief ray angle of the 2x2 OCL sensor are closely adapted (e.g., within plus or minus 2° to 3°). As the CRA adaptation relationship between the lens and the sensor becomes mismatched, the four-pixel sensitivity difference gradually increases.
Therefore, in a camera module such as that of a mobile phone, a change of the focusing position causes the chief ray angle (CRA) of the lens to change, which means that the CRA adaptation relationship between the lens and the sensor changes at different focusing positions, and thus the four-pixel sensitivity difference also changes with the focusing position.
Based on the above, the application proposes that quasi-focus judgment when shooting a solid-color scene can be realized by utilizing the quantitative relation between the 2x2 OCL sensitivity difference and the lens position of the photographing device. As shown in fig. 5, at the front-focus, quasi-focus and back-focus positions, the angle of the light perceived by a single pixel point changes, and after the light is focused and refracted by the microlens, the brightness distribution over the four pixels also changes. As can be seen from fig. 5, the pixel sensitivity difference is large at the front-focus and back-focus positions and small at the quasi-focus position. When the matching relation between the lens and the sensor is fixed, the relation between the sensitivity difference of the NxN pixels under one microlens and the focusing position is also fixed; this characteristic forms the basis on which the application perceives the depth of the photographed image. Therefore, the quasi-focus position can be determined from the pre-calibrated relation between the NxN pixel sensitivity difference and the lens focusing position: data points for a small number of focusing positions and the corresponding NxN pixel sensitivity differences are acquired under camera control, and the quasi-focus position is determined by fitting or similar means.
In a specific implementation process, the difference in sensitivity of the NxN pixels at the focal plane position corresponding to the NxN pixel phase focusing sensor (NxN OCL sensor) configured in the photographing device first needs to be calibrated, so as to calibrate the difference values of the NxN pixel sensitivity at the focal plane position (which may be denoted channel diff) and reduce that difference, thereby obtaining the calibrated NxN OCL sensor for the subsequent step S402.
In some implementations, the value of N may be 2, and the NxN pixel phase focusing sensor is a 2x2 pixel phase focusing sensor (2x2 OCL sensor), which includes a microlens with four pixels under it. Thus, in order to improve the focusing effect, four-pixel Bayer sensitivity correction (QSC) first needs to be performed on the 2x2 OCL sensor configured in the photographing apparatus to correct the uniformity of the 2x2 pixel sensitivity, that is, to calibrate the difference values (channel diff) of the four-pixel sensitivity at the focal plane position so that this difference is minimized. The specific implementation process may include the following steps S4011 to S4015:
S4011, shooting a light plate with uniform illumination by using shooting equipment, and setting half of the sum value of the focusing position of the lens at infinity and the focusing position of the lens at the closest focusing distance as the calibrated focusing position.
In this implementation, when performing QSC calibration, the camera module of the photographing device (mobile phone, camera, etc.) may be used to photograph a uniform light panel (whose specific structure is not limited; for example, it may feature large-area light emission, a single color temperature, a D65 light source, and 3000 lux brightness). For example, when the focal length of the lens is 8.6 mm, the object distance may be defined as 1 focal length, so as to improve calibration accuracy. The rule for setting the focusing position of the camera module of the photographing device (mobile phone, camera, etc.) is: define the focusing position of the lens at infinity as z1, define the focusing position of the lens at the closest focusing distance as z2, and set the calibration focusing position to (z1+z2)/2.
S4012, acquiring a calibration image at a calibration focusing position by using a 2x2 pixel phase focusing sensor, wherein the calibration image comprises four color channels, and each sub-channel under each color channel shares a color filter of the corresponding color channel.
In this implementation, after the calibration focusing position is determined in step S4011, a calibration image may further be acquired at that position by using the 2x2 OCL sensor (e.g., a full-pixel-resolution Raw image is output at the calibration focusing position). The calibration image (e.g., Raw image) includes four color channels, such as the R, Gr, Gb and B color channels shown in fig. 6, and the sub-channels under each color channel share the color filter of the corresponding color channel. For example, sub-channels numbered 0, 1, 2 and 3 exist for each color channel shown in fig. 6: under the R channel, the four sub-channels R0, R1, R2 and R3 share the color filter of the R channel; under the Gr channel, the four sub-channels Gr0, Gr1, Gr2 and Gr3 share the color filter of the Gr channel; under the Gb channel, the four sub-channels Gb0, Gb1, Gb2 and Gb3 share the color filter of the Gb channel; and under the B channel, the four sub-channels B0, B1, B2 and B3 share the color filter of the B channel.
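As an illustration of this sub-channel layout, the sketch below extracts the 16 sub-channels of fig. 6 from a quad-Bayer Raw frame with NumPy slicing. The 4x4 tile offsets assumed here (R block top-left, Gr top-right, Gb bottom-left, B bottom-right, with sub-pixels numbered row-major inside each 2x2 block) are hypothetical; the actual registration depends on the sensor.

import numpy as np

def split_subchannels(raw: np.ndarray) -> dict:
    # Assumed 4x4 quad-Bayer tile: top-left pixel of each color's 2x2 block.
    blocks = {"R": (0, 0), "Gr": (0, 2), "Gb": (2, 0), "B": (2, 2)}
    subs = {}
    for name, (r0, c0) in blocks.items():
        for i in range(2):        # sub-pixel row inside the 2x2 block
            for j in range(2):    # sub-pixel column inside the 2x2 block
                subs[name + str(2 * i + j)] = raw[r0 + i::4, c0 + j::4]
    return subs

raw = np.zeros((224, 512), dtype=np.uint16)  # stand-in full-resolution frame
assert split_subchannels(raw)["R0"].shape == (56, 128)  # H/4 x W/4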
S4013, performing pixel reduction processing on the calibration image by using a preset pixel scaling algorithm to obtain a reduced calibration image.
In this implementation manner, after the calibration image (e.g., raw image) is obtained in step S4012, in order to accelerate the calculation efficiency, a preset pixel scaling algorithm may be further utilized to perform a pixel scaling process on the calibration image (e.g., raw image) to obtain a calibration image with reduced pixels, so as to execute the subsequent step S4014.
The specific content of the preset pixel scaling algorithm is not limited and can be selected according to actual conditions and empirical values. For example, it can be set to a bilinear interpolation algorithm, so that the bilinear interpolation algorithm can be used to reduce the pixels of the calibration image (such as the Raw image), reducing the amount of calibration data and accelerating the calculation. The specific pixel sizes of the calibration image and the reduced calibration image are not limited either. Assuming the calibration image is L pixels × H pixels (the values of L and H are not limited; for example, they may be 512 and 224 respectively), a 64 pixel × 28 pixel image can be obtained after the pixel reduction processing with the bilinear interpolation algorithm.
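A minimal sketch of this reduction step under the example sizes above (512x224 down to 64x28) follows. Note that cv2.resize takes the target size as (width, height), and that scaling the raw mosaic directly, as done here for brevity, mixes CFA positions; a real pipeline would need to respect the CFA layout, for example by scaling each sub-channel separately.

import cv2
import numpy as np

raw = np.random.randint(0, 1024, size=(224, 512), dtype=np.uint16)  # H x W
small = cv2.resize(raw, (64, 28), interpolation=cv2.INTER_LINEAR)   # bilinear
assert small.shape == (28, 64)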
S4014, merging the sub-channels under each color channel included in the reduced calibration image, and calculating the difference value of each merged channel and the pixel mean value of the color channel.
In this implementation, after the reduced calibration image (for example, 64 pixels × 28 pixels) is obtained in step S4013, the sub-channels under each color channel contained in it can first be combined. Taking the four sub-channels R0, R1, R2 and R3 under the R color channel of the 2x2 OCL sensor as an example, the size of each sub-channel is then 16 pixels × 7 pixels, and the integrated QSC table over the 16 sub-channels has size 16×7×16, where 16 pixels = 64 pixels/(2×2) and 7 pixels = 28 pixels/(2×2).
Then, the pixel mean value of each color channel contained in the reduced calibration image is calculated. For example, taking the four sub-channels R0, R1, R2 and R3 under the R color channel of the 2x2 OCL sensor as an example, the color channel covers 2x2 pixels, and with the pixel values of the sub-channels denoted R0, R1, R2 and R3, the pixel mean value of the color channel is calculated as follows:

<R> = (R0 + R1 + R2 + R3)/4
For a specific combined-channel pixel, the ratio of that pixel to the pixel mean value of the corresponding color channel can then be calculated. For example, still taking R0 under the R color channel of the 2x2 OCL sensor as an example, the ratio of R0 to the R channel pixel mean is R0/<R>. The difference between this ratio and the number 1 is used as the difference value of the corresponding channel, denoted D0, i.e., D0 = R0/<R> − 1. Thus, when the R channel pixel mean <R> is 10 and the pixel value R0 is 9, D0 = 9/10 − 1 = −10%.
S4015, calculating a compensation value for each combined channel relative to the pixel mean value of its color channel by using the corresponding difference value, and compensating the corresponding channel pixels with the compensation values, so as to calibrate the difference value of the four-pixel sensitivity at the focal plane position and reduce the difference of the four-pixel sensitivity at the focal plane position.
In this implementation, after the difference value of each combined channel relative to the pixel mean value of its color channel (such as the value of D0) is obtained in step S4014, the sum of that difference value and the number 1 can be calculated, and the ratio of the number 1 to this sum is then taken as the compensation value of the corresponding channel relative to the color channel pixel mean.
For example, still taking R0 under the R color channel of the 2x2 OCL sensor as an example, the ratio of R0 to the R channel pixel mean is R0/<R>, and the difference between this ratio and the number 1 is the difference value D0 = R0/<R> − 1. After calculating the sum of D0 and 1 (i.e., 1+D0), the ratio of 1 to this sum is taken as the compensation value of R0 relative to the R channel pixel mean, denoted QR0, i.e., QR0 = 1/(1+D0). Thus, when the value of D0 is −10%, QR0 = 1/(1+D0) = 1/0.9 ≈ 1.111.
Further, the compensation value (e.g., QR0) can be used to compensate the corresponding channel pixels (e.g., R0): for the pixel R0, multiplying by the compensation value QR0 brings the compensated pixel value close to the R channel pixel mean, and so on for the other channels. In this way the difference value of the four-pixel sensitivity at the focal plane position is calibrated, and the difference of the four-pixel sensitivity at the focal plane position is reduced (i.e., all sub-channels are brought close to the pixel mean of their color channel).
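As an illustration of steps S4014-S4015, the following is a minimal Python/NumPy sketch of the per-cell QSC gain computation; it assumes the four sub-channel planes of one color channel have already been separated out of the reduced Raw image, and all names, shapes and values are hypothetical.

```python
import numpy as np

def qsc_table(sub_planes: np.ndarray) -> np.ndarray:
    """Per-cell QSC compensation gains for one color channel.

    sub_planes has shape (4, h, w): the four sub-channels (e.g. R0..R3)
    of the reduced calibration image, e.g. h, w = 16, 7.
    """
    mean_map = sub_planes.mean(axis=0)     # <R> at every cell
    diff = sub_planes / mean_map - 1.0     # difference values, e.g. D0
    return 1.0 / (1.0 + diff)              # compensation values, e.g. QR0

# Toy example matching the text: <R> = 10 and R0 = 9 give D0 = -10%, QR0 = 1.111.
r = np.stack([np.full((16, 7), v) for v in (9.0, 10.0, 10.0, 11.0)])
gains = qsc_table(r)
calibrated = r * gains                     # every sub-channel pulled to <R>
print(gains[0, 0, 0])                      # prints ~1.111
```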
In this way, after QSC calibration, the pixel differences among the sub-channels in the Raw image finally output by the 2x2 OCL sensor are minimized, yielding the calibrated 2x2 OCL sensor used to execute the subsequent step S402.
It should be noted that, when the value of N is another positive integer greater than 2, the calibration process for the NxN OCL sensor can be implemented by referring to the method for calibrating the 2x2 OCL sensor set forth in steps S4011-S4015, and the detailed calibration process is not repeated here.
S402, acquiring K calibration images at preset K focusing positions by using the calibrated NxN pixel phase focusing sensor, wherein K is a positive integer greater than 0.
In this embodiment, after the NxN pixel phase focusing sensor configured on the photographing device is calibrated in step S401, an image (defined as a calibration image; its specific size is not limited) may be acquired at each of the preset K focusing positions by using the calibrated NxN pixel phase focusing sensor, so as to obtain K calibration images for the subsequent step S403. Here K is a positive integer greater than 0.
The application does not limit the value of K or the specific values of the preset K focusing positions, which can be set according to actual conditions and empirical values. In some embodiments, the value of K may be set to 5, and the preset K focusing positions may be set to 5 step positions obtained by dividing the lens stroke into 5 equal parts. For example, assuming the stroke of the lens (which may be in mm) is divided into 800 steps, the AF positions after 5-fold equal division may be set to the 1st, 200th, 400th, 600th and 800th steps, respectively. One image is acquired at each AF position, yielding 5 calibration images used to calibrate the difference value (Channel Diff) of the NxN pixel sensitivities of the NxN OCL sensor at each AF position.
It should further be noted that, taking the color coding scheme of the NxN Raw image as an example, each calibration image has the four color channels R, Gr, Gb and B, and under the color channel corresponding to one microlens there are NxN color sub-channels.
S403, calibrating different difference values of NxN pixel sensitivity under preset K focusing positions by using K calibration images and a focusing response function, and fitting focusing parameters of the focusing response function in the calibration process.
Through step S402, after K calibration images are obtained at the preset K focusing positions by using the calibrated NxN OCL sensor, the average pixel value of each color channel contained in the K calibration images can first be calculated, and the interpolation of each color channel can be calculated by using the obtained average pixel values. Then, the different difference values of the NxN pixel sensitivities at the preset K focusing positions are calibrated by using the interpolation and the focusing response function, and the focusing parameters of the focusing response function are fitted during the calibration process, for use in the subsequent step S404.
The specific form and source of the focusing response function are not limited and can be set according to actual conditions and empirical values. For example, the focusing response function can be constructed in advance at the factory end of the NxN OCL sensor module and represented by F(x, p0); consistent with the calibration data below, it takes the form:

F(x, p0) = √(a + b·(x − p0)²)

where p0 represents the lens position when in focus (the quasi-focus), x represents any focusing position during shooting, and a and b both represent focusing parameters of the focusing response function.
For example, assuming the value of N is 2, the NxN OCL sensor is a 2x2 OCL sensor that includes a microlens with four pixels under it. Assuming the value of K is 5, one image is acquired at each of the 5 step positions obtained by dividing the lens stroke into 5 equal parts, yielding 5 calibration images. Each calibration image contains the four color channels R, Gr, Gb and B, i.e., the 16 sub-channels R0, R1, R2, R3, Gr0, Gr1, Gr2, Gr3, Gb0, Gb1, Gb2, Gb3, B0, B1, B2 and B3.
On this basis, the average pixel values of the four color channels contained in the 5 calibration images can be calculated using algorithms such as bilinear interpolation, and the interpolation of the four color channels can be calculated from the obtained average pixel values. Then, the different difference values of the four-pixel sensitivities at the 5 focusing positions can be calibrated by using the interpolation and the focusing response function, and the values of the focusing parameters a and b of the focusing response function are fitted during the calibration process.
The bilinear interpolation algorithm estimates a value from the values at four neighboring points. Suppose there are four points (x1, y1), (x2, y1), (x1, y2) and (x2, y2) with corresponding values f(x1, y1), f(x2, y1), f(x1, y2) and f(x2, y2), and a value is to be estimated at a point (x, y) such that x1 ≤ x ≤ x2 and y1 ≤ y ≤ y2. The corresponding bilinear interpolation formula, normalized by the cell area (x2 − x1)(y2 − y1), is:

f(x,y) = [f(x1,y1)·(x2−x)(y2−y) + f(x2,y1)·(x−x1)(y2−y) + f(x1,y2)·(x2−x)(y−y1) + f(x2,y2)·(x−x1)(y−y1)] / [(x2−x1)(y2−y1)]

where f(x, y) denotes the estimate at point (x, y).
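As a small worked instance of this formula (the function name and test values are illustrative only), a minimal sketch:

```python
def bilinear(x, y, x1, y1, x2, y2, f11, f21, f12, f22):
    """Estimate f(x, y) from the corner samples f11 = f(x1, y1),
    f21 = f(x2, y1), f12 = f(x1, y2), f22 = f(x2, y2)."""
    area = (x2 - x1) * (y2 - y1)
    return (f11 * (x2 - x) * (y2 - y) + f21 * (x - x1) * (y2 - y)
            + f12 * (x2 - x) * (y - y1) + f22 * (x - x1) * (y - y1)) / area

# Midpoint of a unit cell with corner values 0, 1, 1, 2 -> prints 1.0
print(bilinear(0.5, 0.5, 0, 0, 1, 1, 0.0, 1.0, 1.0, 2.0))
```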
In addition, for the 16 sub-channels R0, R1, R2, R3, Gr0, Gr1, Gr2, Gr3, Gb0, Gb1, Gb2, Gb3, B0, B1, B2 and B3 in any one of the calibration images, with the pixel values of the corresponding channels denoted by the same names, the average of R0, R1, R2 and R3 is denoted <R>, the average of Gr0, Gr1, Gr2 and Gr3 is denoted <Gr>, the average of Gb0, Gb1, Gb2 and Gb3 is denoted <Gb>, and the average of B0, B1, B2 and B3 is denoted <B>.
Furthermore, the interpolation (difference) values of the four color channels can be calculated respectively, with the specific formulas as follows:

RDiff_max = max{Ri − <R>}/<R>
GrDiff_max = max{Gri − <Gr>}/<Gr>
GbDiff_max = max{Gbi − <Gb>}/<Gb>
BDiff_max = max{Bi − <B>}/<B>

where i takes the values 0, 1, 2 and 3.
In this way, the global four-pixel sensitivity difference value in the calibration image acquired at the corresponding focusing position can be calculated, with the specific formula:

Channel Diff = max{RDiff_max, GrDiff_max, GbDiff_max, BDiff_max}

Similarly, the global four-pixel sensitivity difference values (Channel Diff) in the 4 calibration images acquired at the other 4 focusing positions can be obtained.
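A minimal sketch of this Channel Diff computation, assuming the sub-channel planes have already been separated per color channel (the names and the stand-in data are hypothetical):

```python
import numpy as np

def channel_diff(planes: dict) -> float:
    """Global four-pixel sensitivity difference of one image.

    planes maps 'R', 'Gr', 'Gb', 'B' to arrays of shape (4, h, w)
    holding the four sub-channels of that color channel.
    """
    diffs = []
    for stack in planes.values():
        means = stack.reshape(4, -1).mean(axis=1)  # R0..R3 averages
        m = means.mean()                           # <R>
        diffs.append((means - m).max() / m)        # RDiff_max etc.
    return max(diffs)                              # Channel Diff

rng = np.random.default_rng(1)
planes = {c: 100.0 + rng.normal(0.0, 3.0, size=(4, 16, 7))
          for c in ('R', 'Gr', 'Gb', 'B')}
print(f"Channel Diff = {channel_diff(planes):.2%}")
```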
Define the 5 points at which the calibration images are acquired, when the lens focusing position is at the 1st, 200th, 400th, 600th and 800th steps respectively, as P1, P2, P3, P4 and P5; define the corresponding lens step positions as x1, x2, x3, x4 and x5; and denote the global four-pixel sensitivity difference values (Channel Diff) in the 5 calibration images obtained through the above calculation steps as F1, F2, F3, F4 and F5. The quasi-focus position p0 is set to the 400th step (the specific value is not limited; it is usually set to the 400th step to ensure display clarity), and the corresponding light box is placed at the object distance matching the image distance of the 400th step. The lens advance can then be determined by converting the 400 steps into lens stroke: for example, if the total stroke of the lens is 2 mm, the lens advance after conversion is 1 mm.
On this basis, the obtained values of x1, x2, x3, x4, x5, F1, F2, F3, F4, F5 and p0 are substituted into the focusing response function F(x, p0) = √(a + b·(x − p0)²), and least squares fitting is performed to obtain the specific values of the focusing parameters a and b.
For example, when the values of x1, x2, x3, x4, x5 are 1, 200, 400, 600, 800, respectively, the values of F1, F2, F3, F4, F5 are 7%, 4%, 2.2%, 4.2%, 7.2%, and p0=400, the curve shown in fig. 7 can be fitted, and a=4.9992 and b=0.0003 can be determined.
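Under the response-function form given above (which is reconstructed from the fitted values and should be read as an assumption), the fitting of a and b can be sketched with SciPy as follows; all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

QUASI_FOCUS = 400.0                    # p0 fixed during calibration

def focus_response(x, a, b):
    # Assumed model form: F(x, p0) = sqrt(a + b * (x - p0)^2)
    return np.sqrt(a + b * (x - QUASI_FOCUS) ** 2)

x = np.array([1.0, 200.0, 400.0, 600.0, 800.0])   # lens step positions
F = np.array([7.0, 4.0, 2.2, 4.2, 7.2])           # Channel Diff in percent

(a, b), _ = curve_fit(focus_response, x, F, p0=(5.0, 3e-4))
print(a, b)   # lands near the a = 4.9992, b = 0.0003 quoted in the text
```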
S404, acquiring P target images at P focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference value of the sensitivity of the NxN pixels corresponding to the P target images, wherein P is a positive integer greater than 0.
In this embodiment, after the NxN pixel phase focusing sensor configured in the photographing device is calibrated in step S401, one image (defined as a target image; its specific size is not limited) may be acquired at each of P focusing positions by using the calibrated NxN pixel phase focusing sensor, so as to obtain P target images, and the difference value of the NxN pixel sensitivities corresponding to the P target images (for example, the four-pixel sensitivity difference value Channel Diff) is calculated for the subsequent step S405. Here P is a positive integer greater than 0.
The application does not limit the value of P or the specific values of the P focusing positions, which can be set according to actual conditions and empirical values. In some embodiments, the value of P may be set to 2, and the 2 focusing positions may be any two lens positions. One image is acquired at each of the two positions, yielding 2 target images, and the difference value (Channel Diff) of the NxN pixel sensitivities corresponding to the 2 target images is calculated through an implementation similar to step S403.
S405, fitting a quasi-focus position by using the focusing parameters of the focusing response function and the difference values of the NxN pixel sensitivities corresponding to the P target images, and focusing according to the lens position of the photographing device determined by the quasi-focus position.
In this embodiment, after the specific values of the focusing parameters a and b of the focusing response function are fitted in step S403, and the difference values (Channel Diff) of the NxN pixel sensitivities corresponding to the P (e.g., 2) target images are calculated in step S404, the quasi-focus position is fitted by least squares using these focusing parameters and difference values, and precise focusing is achieved at the lens position of the photographing device determined by the quasi-focus position. In this way, without adding any extra focusing device, precise focusing can be achieved using only the configured image sensor (specifically, the NxN pixel phase focusing sensor), which both solves the problem of poor focusing performance in solid-color scenes and reduces focusing cost.
For example, to facilitate understanding of the focusing method provided by the application, fig. 8 illustrates an example scene of capturing a solid-color subject. As shown in fig. 8 (a), when a user captures a solid-color scene (such as sky or a white wall) with a photographing device such as a mobile phone or camera, the lens must be repeatedly pushed and pulled whether contrast focusing or phase focusing is used; however the lens moves, the corresponding phase difference (PD value) or contrast value remains a flat line as shown in fig. 1, so the quasi-focus and the correct lens position cannot be found and the focusing effect is poor. After the focusing optimization of the present application is applied, as shown in fig. 8 (b), the quasi-focus position can be fitted with only two lens moves. As a specific example in fig. 8 (c): assume the lens is first moved to the x1=200 step position, yielding a Channel Diff value F1=6.21%, and then moved to the x2=300 step position, yielding a Channel Diff value F2=4.68%; the quasi-focus x0=550 steps can then be fitted by least squares, i.e., the position indicated by x0 in fig. 8 (c). This realizes depth perception in a solid-color scene, and precise focusing can be achieved at the quasi-focus x0, solving the problem of poor focusing in solid-color scenes while also reducing focusing cost.
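A minimal sketch of this two-measurement quasi-focus fit, reusing the assumed response-function form and the factory-fitted parameters a and b; the exact fitted step depends on the calibrated a and b values, so the printed result is only on the order of the 550 steps in the example.

```python
import numpy as np
from scipy.optimize import least_squares

a, b = 4.9992, 0.0003                  # factory-fitted focusing parameters

def residuals(p, xs, Fs):
    # Residuals of the assumed model with the quasi-focus p[0] as unknown.
    return np.sqrt(a + b * (xs - p[0]) ** 2) - Fs

xs = np.array([200.0, 300.0])          # the two lens positions moved to
Fs = np.array([6.21, 4.68])            # measured Channel Diff values in percent

fit = least_squares(residuals, x0=[400.0], args=(xs, Fs))
print(fit.x[0])                        # quasi-focus step estimate
```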
In addition, the embodiment of the application also provides an electronic device (i.e. photographing device), and the hardware structure and the software framework of the electronic device can be referred to in the corresponding descriptions of fig. 2 and 3. The electronic device comprises a memory storing a computer program and a processor for invoking and executing the computer program to implement the focusing method provided in the above description.
Still another embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program for implementing the focusing method provided in the above description when executed by a processor of a terminal device.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.

Claims (13)

1. A focusing method, characterized by being applied to a photographing apparatus configured with an NxN pixel phase focus sensor, comprising:
Calibrating the difference of the sensitivity of the NxN pixels at the focal plane position corresponding to the NxN pixel phase focusing sensor to calibrate the difference of the sensitivity of the NxN pixels at the focal plane position and reduce the difference of the sensitivity of the NxN pixels at the focal plane position, wherein N is a positive integer greater than 1;
Obtaining K calibration images at preset K focusing positions by using the calibrated NxN pixel phase focusing sensor, wherein K is a positive integer greater than 0;
Calibrating different difference values of the sensitivity of NxN pixels at the preset K focusing positions by using the K calibration images and a focusing response function, and fitting focusing parameters of the focusing response function in the calibration process;
Obtaining P target images at P focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference value of the NxN pixel sensitivity corresponding to the P target images, wherein P is a positive integer greater than 0;
Fitting out a quasi-focus position by utilizing focusing parameters of the focusing response function and difference values of NxN pixel sensitivities corresponding to the P target images, and focusing according to the lens position of the photographing device determined by the quasi-focus position.
2. The method of claim 1, wherein the N is 2, the NxN pixel phase focus sensor is a 2x2 pixel phase focus sensor, the 2x2 pixel phase focus sensor comprises a micro lens, the micro lens corresponds to four pixels, the calibrating the difference of the sensitivity of the NxN pixel at the focal plane position corresponding to the NxN pixel phase focus sensor to calibrate the difference of the sensitivity of the NxN pixel at the focal plane position, and the reducing the difference of the sensitivity of the NxN pixel at the focal plane position comprises:
and calibrating the difference of the four-pixel sensitivity of the focal plane position corresponding to the 2x2 pixel phase focusing sensor to calibrate the value of the difference of the four-pixel sensitivity of the focal plane position and reduce the difference of the four-pixel sensitivity of the focal plane position.
3. The method of claim 2, wherein calibrating the difference in the four-pixel sensitivity of the focal plane position corresponding to the 2x2 pixel phase focusing sensor to calibrate the difference in the four-pixel sensitivity of the focal plane position and reduce the difference in the four-pixel sensitivity of the focal plane position comprises:
the photographing equipment is used for photographing a light plate with uniform illumination, and half of the sum value of the focusing position of the lens at infinity and the focusing position of the lens at the closest focusing distance is set as a calibrated focusing position;
acquiring a calibration image at the calibration focusing position by using the 2x2 pixel phase focusing sensor, wherein the calibration image comprises four color channels, and each sub-channel under each color channel shares a color filter of the corresponding color channel;
performing pixel reduction processing on the calibration image by using a preset pixel scaling algorithm to obtain a reduced calibration image;
combining the sub-channels under each color channel contained in the reduced calibration image, and calculating the difference value of each combined channel and the pixel mean value of the color channel;
Calculating the compensation value of each combined channel and the pixel mean value of the color channel by using the difference value of each combined channel and the pixel mean value of the color channel, and compensating the corresponding channel pixel by using the compensation value to calibrate the difference value of the four-pixel sensitivity of the focal plane position and reduce the difference of the four-pixel sensitivity of the focal plane position.
4. A method according to claim 3, wherein the preset pixel scaling algorithm is a bilinear interpolation algorithm.
5. A method according to claim 3, wherein the merging the sub-channels under each color channel included in the scaled-down calibration image and calculating the difference between each merged channel and its pixel mean value for the color channel includes:
Calculating the average value of each color channel pixel contained in the reduced calibration image;
And calculating the ratio of the average value of each combined channel pixel and the corresponding color channel pixel, and taking the difference value of the ratio and the number 1 as the difference value of the corresponding channel.
6. The method of claim 5, wherein calculating the compensation value for each combined channel and the pixel mean value for the color channel using the difference value for each combined channel and the pixel mean value for the color channel comprises:
And respectively calculating the sum value of the difference value of each combined channel and the pixel mean value of the color channel and the number 1, and calculating the ratio of the number 1 to the sum value as the compensation value of the corresponding channel and the pixel mean value of the color channel.
7. The method of claim 1, wherein the K has a value of 5, the preset K focusing positions are 5 step positions obtained by dividing the lens stroke into 5 equal parts, and the acquiring the K calibration images at the preset K focusing positions by using the calibrated NxN pixel phase focusing sensor comprises:
and respectively acquiring, by using the calibrated NxN pixel phase focusing sensor, one calibration image at each of the 5 step positions obtained by dividing the lens stroke into 5 equal parts, so as to obtain 5 calibration images.
8. The method according to claim 1, wherein the calibrating different difference values of the sensitivity of the NxN pixels at the preset K focusing positions by using the K calibration images and the focusing response function, and fitting the focusing parameters of the focusing response function during the calibration process, comprises:
Calculating the average pixel value of each color channel contained in the K calibration images, and calculating the interpolation of each color channel by using the average pixel values;
And calibrating different difference values of the sensitivity of the NxN pixels at the preset K focusing positions by using the interpolation and focusing response functions, and fitting focusing parameters of the focusing response functions in the calibration process.
9. The method of claim 7, wherein the N is 2, the NxN pixel phase focus sensor is a 2x2 pixel phase focus sensor, the 2x2 pixel phase focus sensor comprises a micro lens, the micro lens corresponds to four pixels, and the calibrating the different difference values of the sensitivity of the NxN pixels at the preset K focusing positions by using the K calibration images and the focusing response function, and fitting focusing parameters of the focusing response function in the calibration process, comprises:
calculating the average pixel value of four color channels contained in the 5 calibration images, and calculating interpolation of the four color channels by utilizing the average pixel value;
And calibrating different difference values of the sensitivities of the four pixels at the preset 5 focusing positions by using the interpolation and focusing response functions, and fitting focusing parameters of the focusing response functions in the calibration process.
10. The method of claim 1, wherein the P is 2, the obtaining P target images at P focus positions using the calibrated NxN pixel phase focus sensor, and calculating a difference in sensitivity of NxN pixels corresponding to the P target images comprises:
and acquiring 2 target images at 2 focusing positions by using the calibrated NxN pixel phase focusing sensor, and calculating the difference value of the NxN pixel sensitivity corresponding to the 2 target images.
11. The method of claim 10, wherein the fitting the quasi-focus position by using the focus parameter of the focus response function and the difference value of the sensitivity of the NxN pixels corresponding to the P target images, and focusing according to the lens position of the photographing device determined by the quasi-focus position, comprises:
and fitting a quasi-focus position by using a least square method according to the focusing parameter of the focusing response function and the difference value of NxN pixel sensitivity corresponding to the 2 target images, and focusing according to the lens position of the photographing equipment determined by the quasi-focus position.
12. An electronic device comprising a memory storing a computer program and a processor for invoking and executing the computer program to implement the method of any of claims 1-11.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by an electronic device, implements the method of any of claims 1-11.
CN202410301998.2A 2024-03-13 2024-03-13 Focusing method, electronic device and storage medium Active CN119255101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410301998.2A CN119255101B (en) 2024-03-13 2024-03-13 Focusing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN119255101A CN119255101A (en) 2025-01-03
CN119255101B true CN119255101B (en) 2025-09-02

Family

ID=94015403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410301998.2A Active CN119255101B (en) 2024-03-13 2024-03-13 Focusing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN119255101B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747070A (en) * 2021-09-10 2021-12-03 昆山丘钛微电子科技股份有限公司 Focusing method and device of camera module, terminal equipment and medium
CN114359406A (en) * 2021-12-30 2022-04-15 像工场(深圳)科技有限公司 Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711261B2 (en) * 2006-04-11 2010-05-04 Nikon Corporation Imaging device, camera and image processing method
JP4935161B2 (en) * 2006-04-11 2012-05-23 株式会社ニコン Imaging apparatus, camera, and image processing method
DE502006006893D1 (en) * 2006-05-24 2010-06-17 Stueckler Gerd Method for spatial filtering and image recording device
JP2020003686A (en) * 2018-06-29 2020-01-09 キヤノン株式会社 Focus detection device, imaging device, and interchangeable lens device
CN117156273A (en) * 2023-09-01 2023-12-01 维沃移动通信(杭州)有限公司 Focusing method, terminal, electronic device and storage medium

Also Published As

Publication number Publication date
CN119255101A (en) 2025-01-03

Similar Documents

Publication Publication Date Title
WO2022262260A1 (en) Photographing method and electronic device
WO2023016025A1 (en) Image capture method and device
US12170844B2 (en) Photographing method and electronic device
US20240119566A1 (en) Image processing method and apparatus, and electronic device
CN113744257B (en) Image fusion method, device, terminal equipment and storage medium
CN113660408B (en) Anti-shake method and device for video shooting
CN113630558B (en) A camera exposure method and electronic device
CN115514883B (en) Cross-equipment collaborative shooting method, related device and system
CN116996777A (en) Photography method, electronic device and storage medium
CN115359105B (en) Depth-of-field extended image generation method, device and storage medium
CN119255101B (en) Focusing method, electronic device and storage medium
CN117711300B (en) Image display method, electronic device, readable storage medium and chip
CN116723264B (en) Method, apparatus and storage medium for determining target location information
CN115460343B (en) Image processing method, device and storage medium
CN114390195B (en) Automatic focusing method, device, equipment and storage medium
US20070024718A1 (en) Image capturing device and image adjusting method thereof
CN116051368A (en) Image processing method and related equipment
CN115118963A (en) Image quality adjustment method, electronic device, and storage medium
CN115209062A (en) Image processing method and device
CN115696067B (en) Terminal image processing method, terminal equipment and computer-readable storage medium
CN117714890B (en) Exposure compensation method, electronic equipment and storage medium
CN115442536B (en) Method and device for determining exposure parameters, image system and electronic equipment
CN117119314B (en) Image processing method and related electronic equipment
CN117714858B (en) Image processing method, electronic device and readable storage medium
CN120343416A (en) Image processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Terminal Co.,Ltd.

Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong

Applicant before: Honor Device Co.,Ltd.

Country or region before: China

GR01 Patent grant