
CN119255086A - Shooting methods, devices, equipment and media - Google Patents


Info

Publication number
CN119255086A
CN119255086A (application CN202411595713.7A)
Authority
CN
China
Prior art keywords
camera
shooting
target object
exposure
captured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411595713.7A
Other languages
Chinese (zh)
Inventor
颜洪胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202411595713.7A priority Critical patent/CN119255086A/en
Publication of CN119255086A publication Critical patent/CN119255086A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, shooting equipment, and shooting media, belonging to the technical field of shooting. The method comprises: displaying a shooting preview interface, wherein the shooting preview interface comprises a first shooting preview picture captured by a first camera; and shooting with the first camera according to a first shooting parameter to obtain a target object image. The first shooting parameter is generated based on the shooting parameters of a second camera when the first camera has not yet captured the target object but the second camera has.

Description

Shooting method, device, equipment and medium
Technical Field
The application belongs to the technical field of shooting, and particularly relates to a shooting method, a shooting device, shooting equipment and shooting media.
Background
In daily life, a user may want to capture the fleeting moment of a moving subject with a camera, such as the instant of crossing the line in a race sprint, but motion smear often appears in the captured image, reducing image quality.
In the prior art, when a user wants to take a snapshot, the user usually watches for a person to appear in the camera picture and, once the person is found in the picture, immediately triggers the camera to capture.
However, this shooting mode easily produces images captured with unsuitable parameters, so the quality of the captured images is poor.
Disclosure of Invention
The embodiment of the application aims to provide a shooting method, a shooting device, shooting equipment, and shooting media, which can solve the technical problem of poor snapshot image quality in the related art.
In a first aspect, an embodiment of the present application provides a photographing method, which is applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes:
displaying a shooting preview interface, wherein the shooting preview interface comprises a first shooting preview picture captured by the first camera;
shooting by the first camera according to a first shooting parameter to obtain a target object image;
the first shooting parameter is generated based on the shooting parameter of the second camera when the first camera does not capture the target object and the second camera has captured the target object.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including:
a display module, used for displaying a shooting preview interface, wherein the shooting preview interface comprises a first shooting preview picture captured by a first camera;
an image shooting module, used for shooting through the first camera according to a first shooting parameter to obtain a target object image;
wherein the first shooting parameter is generated based on the shooting parameters of the second camera when the first camera does not capture the target object and the second camera captures the target object.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement the steps of the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executed by at least one processor to carry out the steps of the method according to the first aspect.
In the embodiment of the application, when the second camera captures the target object, it can generate the first shooting parameter for the target object in advance. When the target object then appears in the first camera's picture, the first camera can directly snapshot the target object using the first shooting parameter. The shooting parameters of the first camera are thus adjusted quickly at the moment the target object enters the first shooting preview picture, improving the shooting quality of the image.
Drawings
Fig. 1 is a schematic diagram of a shooting scene provided by an embodiment of the present application;
fig. 2 is a flowchart of a shooting method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a shooting preview interface provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of fields of view provided by an embodiment of the present application;
FIG. 5 is a flowchart of a process in moving a target object according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a background workflow of a camera according to an embodiment of the present application;
FIG. 7 is a flowchart of displaying an interface in an image capturing process according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a frame of an electronic device according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein; the objects identified by "first," "second," etc. are generally of one type, and the number of objects is not limited, e.g., the first object may be one or more. In addition, "and/or" in the specification denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting method, device, equipment and medium provided by the embodiment of the application are described in detail below with reference to specific embodiments and application scenes thereof.
The shooting method provided by the embodiment of the application can be applied to snapshot scenes, capturing the moment a subject enters the picture, for example, snapping the moment an athlete crosses the line in a race. Currently, when a user wants a snapshot with zero delay and high image quality, the user is generally required to preset the shooting parameters on the electronic device. The shooting parameters can be divided into exposure amount and exposure time: a reasonable exposure amount ensures that the face brightness and the overall picture brightness are appropriate, and a reasonable exposure time ensures that the person shows no motion smear.
However, in practical applications, presetting shooting parameters is unfriendly to ordinary users (who lack professional photography knowledge and find it difficult to choose reasonable parameters), and the preset parameters are often unsuitable for the picture before the shot (i.e., before the person enters the picture), which may degrade the preview experience presented by the electronic device before the person has entered the picture.
To address these problems, the shooting method provided by the embodiment of the application uses a second camera with a larger field of view to capture the person in advance and adjust the shooting parameters, which are then synchronized to the first camera with a smaller field of view. When the person enters the field of view of the first camera, the picture of the first camera can be adjusted quickly (i.e., it directly follows the parameters adjusted by the second camera), ensuring snapshot quality without requiring the user to set shooting parameters in advance.
In the embodiment of the present application, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a wearable device, a vehicle-mounted terminal, etc. Taking a mobile phone as an example, fig. 1 is a schematic diagram of a shooting scene provided in an embodiment of the present application. As shown in fig. 1, the mobile phone is equipped with a first camera and a second camera (not shown), where the picture captured by the first camera is displayed on a shooting preview interface 11, and the second camera can capture a target object over a larger range (for ease of comparison, the region occupied by the shooting preview interface 11 in fig. 1 is used as the coverage area of the first camera, and the filled region 12 as the coverage area of the second camera).
Referring to fig. 1, a person first moves toward the range covered by the second camera. When the person has entered the range covered by the second camera but not yet the range covered by the first camera (refer to the middle drawing of fig. 1), the second camera can capture a picture of the person and quickly adjust its shooting parameters according to the captured picture. The target object then continues to move and enters the coverage area of the first camera (refer to the right drawing of fig. 1); at this moment, the first camera takes a snapshot directly using the parameters adjusted by the second camera, so a high-quality snapshot picture can be obtained.
Fig. 2 is a flowchart of a photographing method according to an embodiment of the present application. The method may be applied to an electronic device equipped with two different cameras; the difference between them will be described in detail later. For ease of distinction, one camera is called the first camera and the other the second camera. By way of example, the electronic device may be the cell phone shown in fig. 1. As shown in fig. 2, the method may include the following steps:
step S210, displaying a shooting preview interface.
The shooting preview interface comprises a first shooting preview picture captured by a first camera.
In this embodiment, the electronic device displays a shooting preview interface, where the shooting preview interface includes a real-time picture, namely the picture captured by the first camera. In some embodiments, the shooting preview interface includes a first control, and the user clicking the first control triggers the first camera to take a picture.
For example, fig. 3 is a schematic view of a shooting preview interface provided in an embodiment of the present application. As shown in fig. 3, the shooting preview interface is divided into a first area 31 and a second area 32; the second area 32 displays the real-time picture, and the first area 31 displays a first control 310.
In other embodiments, the electronic device receives user input to the camera application and displays the shooting preview interface in response to that input. That is, before the shooting preview interface is displayed, the camera application in the electronic device needs to be started by the user. Illustratively, the shooting preview interface is displayed by default after the camera application is launched.
Step S220, shooting by a first camera according to the first shooting parameters to obtain a target object image.
The first shooting parameters are generated based on the shooting parameters of the second camera under the condition that the first camera does not capture the target object and the second camera captures the target object.
In the present embodiment, the first photographing parameters may include one or more of exposure time, exposure amount, and exposure gain, which are collectively referred to as exposure parameters for convenience of description.
Wherein the exposure amount may be defined as the exposure time multiplied by the gain. Illustratively, increasing the exposure amount brightens the picture.
The first shooting parameters are shooting parameters which are generated by the second camera and matched with the target object before the target object appears in the first shooting preview picture.
In this embodiment, the target object may be continuously moving (a snapshot is typically applied to such continuous motion). During the motion of the target object, the second camera captures the target object first, and after capturing it, automatically adjusts its own exposure parameters according to the captured image, seeking a high-quality image matching the target object (for example, adjusting the exposure amount so that the captured target object appears brighter).
Adjusting the shooting parameters of the second camera may specifically mean adjusting the exposure parameters according to the face brightness of the target object in the second camera's picture, so that the face brightness approaches the expected value, while downweighting the influence of the ambient brightness.
In this embodiment, after the second camera has adjusted the shooting parameters, the target object continues to move, enters the range of the first camera, and is captured by it. When the first camera captures the target object, with continued reference to fig. 3, the target object is displayed in the area 32 for the user to preview, that is, the target object appears in the first shooting preview picture. At this point, the user may trigger a first input through the first control 310; the electronic device receives the first input to the first control in the shooting preview interface and, in response, the first camera shoots according to the first shooting parameter to obtain the target object image. The user can thus quickly execute the snapshot operation and capture the person.
Because the first camera directly uses the shooting parameters adjusted by the second camera when taking the snapshot, the quality of the snapshot picture is guaranteed.
In the embodiment of the application, the target object is captured first by the second camera, which adjusts the shooting parameters in advance. When the target object is then captured by the first camera and appears in the preview interface, the user controls the first camera to shoot, and the first camera can shoot directly with the shooting parameters adjusted by the second camera. This yields a snapshot with suitable picture brightness and little motion smear; it avoids the problem that a user shooting directly with the camera cannot obtain a high-quality snapshot, and it removes the need to set shooting parameters before the snapshot, thereby solving the problems of poor picture-preview experience and high demands on the user's professional photography knowledge that presetting snapshot parameters causes.
It has been mentioned above that the two cameras, i.e. the first camera and the second camera, differ. Further, in some embodiments, the difference may be described in terms of the field-of-view range: the field-of-view range of the first camera is smaller than that of the second camera.
In this embodiment, the field of view range may refer to an angular region of the width and height of a scene that can be captured by the camera.
In some embodiments, the field of view range of the first camera being smaller than the field of view range of the second camera may specifically mean that the angle area of the width of the scene that the first camera can capture is smaller than the angle area of the width of the scene that the second camera can capture.
In addition, in other embodiments, the field of view of the first camera being smaller than the field of view of the second camera may specifically refer to the angular area of the scene height that the first camera is capable of capturing being smaller than the angular area of the scene height that the second camera is capable of capturing.
In addition, in other embodiments, the field of view range of the first camera being smaller than the field of view range of the second camera may specifically mean that the angle areas of the width and the height of the scene that the first camera can capture are smaller than the angle areas of the width and the height of the scene that the second camera can capture.
The field of view range can directly influence the performance and shooting effect of the camera in different application scenes.
By way of example, FIG. 4 is a schematic view of the field of view provided by an embodiment of the present application, as shown in FIG. 4, with two different fill areas (i.e., fill area 41 and fill area 42) being used to characterize the field of view. The filling area 41 is a field of view of the second camera, and the filling area 42 is a field of view of the first camera. At this time, the angle area of the width of the scene which can be captured by the first camera is smaller than the angle area of the width of the scene which can be captured by the second camera. The angular area of the scene height that can be captured by the first camera is smaller than the angular area of the scene height that can be captured by the second camera. That is, the angular area of the width and the height of the scene that can be captured by the first camera is smaller than the angular area of the width and the height of the scene that can be captured by the second camera.
When the field of view of the first camera is smaller than that of the second camera, fig. 5 is a flowchart of a process in moving a target object according to an embodiment of the present application, and as shown in fig. 5, taking the target object as a person as an example, when the person moves in a real environment, the process may specifically be divided into three steps as follows:
step S510, the person first appears outside the field of view of the first camera and the second camera.
And step S520, when the person continues to move, the person enters the field of view of the second camera.
Step S530, when the person continues to move again, the person enters the field of view of the first camera.
It will be appreciated that in a snapshot scene the user usually stands at a fixed position in advance, and the subject moves toward that position; when the subject reaches it, the user takes the snapshot. That is, the movement track of the subject substantially follows steps S510-S530 above.
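The handoff in steps S510-S530 can be sketched in code. This is a hypothetical illustration only: the Camera class, the one-dimensional field-of-view model, and the fixed metering value are all assumptions of this sketch, not part of the patent.

```python
class Camera:
    """Toy camera with a 1-D field of view centered at the origin."""

    def __init__(self, fov_half_width):
        self.fov_half_width = fov_half_width  # half-width of the field of view

    def in_fov(self, x):
        # 1-D stand-in for "the subject is inside this camera's field of view"
        return abs(x) <= self.fov_half_width

    def tune_exposure(self, x):
        # Placeholder metering; a real camera would meter the subject's face
        return {"exposure": 100}


def snapshot_params(path, first_cam, second_cam):
    """Walk the subject along `path`; return the parameters pre-tuned by the
    second (wide) camera at the moment the subject enters the first (narrow)
    camera's field of view, or None if it never does."""
    params = None
    for x in path:
        if second_cam.in_fov(x):
            params = second_cam.tune_exposure(x)  # step S520: wide camera tunes first
        if first_cam.in_fov(x):
            return params                         # step S530: narrow camera reuses them
    return None                                   # step S510: subject never arrived
```

Because the second camera's field of view is the larger one, the subject always crosses it first, so `params` is already populated when the narrow camera finally sees the subject.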
In the embodiment of the application, the requirement of a user in different shooting scenes is met by configuring two cameras with different view fields for the electronic equipment, so that more flexible shooting selection and richer shooting experience are provided. When the second camera captures the target object in advance, shooting parameters can be adjusted in advance, so that the first camera can be assisted to obtain better shooting effects by adjusting the parameters of the second camera in advance, and the adaptability of the electronic equipment in different scenes is improved.
How to adjust the shooting parameters of the second camera after the target object enters the field of view of the second camera is described in detail below by some embodiments.
In this embodiment, when the target object enters the field of view of the second camera, multiple frames containing the target object may be continuously captured by the second camera. Specifically: detect the position of the target object in each frame captured by the second camera, determine motion data of the target object from the positions across frames, and finally determine the first shooting parameter based on that motion data.
In this embodiment, the first camera and the second camera may remain on throughout the movement of the target object. The second camera calculates the motion level of the target object through a motion detection algorithm: the electronic device detects the change in the target object's position between frames and defines the motion gear according to the degree of that change.
By way of example, the motion level may be divided into three gears: high, medium, and low. When the target object is stationary, it may be classified as low gear. As another example, the electronic device detects that the target object is at position D1 in the first frame and at position D2 in the second frame; if D2-D1 exceeds the set gear threshold, the motion is classified as high gear, and if D2-D1 is within the threshold, as medium gear.
In addition, in some embodiments, the current gear may also be output in real time on the preview interface.
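The gear logic above can be written as a short function. This is a simplified one-dimensional sketch under assumptions: the text only distinguishes "stationary", "within the threshold", and "above the threshold", so the threshold value and the treatment of small non-zero motion here are illustrative tuning choices.

```python
def motion_gear(d1, d2, gear_threshold):
    """Classify the inter-frame displacement of the target object into one of
    three motion gears, mirroring the low/medium/high scheme described above."""
    delta = abs(d2 - d1)
    if delta == 0:               # stationary target -> low gear
        return "low"
    if delta > gear_threshold:   # displacement exceeds the set gear threshold
        return "high"
    return "medium"              # moving, but within the threshold
```

A real implementation would compute `d1` and `d2` from object-detection results on consecutive frames rather than receive them directly.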
In this embodiment, after determining the motion level of the target object, the shooting parameters of the second camera may be specifically adjusted by:
assume that the following calculation formula is configured:
Exposure = exposure time × gain
After a reasonable required exposure amount is determined from the appearance of the target object in the picture captured by the second camera, the above formula can be used to distribute that required exposure amount between exposure time and gain according to the currently detected motion gear.
For example, the allocation may follow the best signal-to-noise-ratio principle: allocate preferentially to the exposure time until it reaches its upper limit (which depends on the frame rate and module characteristics), then continue allocating to the gain until the required exposure amount is reached.
When the motion data indicates a low gear, the best-SNR principle can be reused directly. At a medium gear, a motion exposure ratio of 2 is applied: the exposure time is reduced to 1/2 of its original value and the gain is multiplied by 2. At a high gear, a ratio of 4 is applied: the exposure time is reduced to 1/4 of its original value and the gain is multiplied by 4. The medium- and high-gear ratios can be tuned according to actual measurements, with the goal of reducing motion smear while preserving an acceptable signal-to-noise ratio.
For example, suppose the reasonable required exposure amount determined from the target object's appearance in the second camera's picture is 100. If the motion gear is low, the exposure time is 50 (assuming 50 is the upper limit) and the gain is 2. If the motion gear is medium, the exposure time is reduced to 1/2, i.e. adjusted to 25, and the gain is multiplied by 2, i.e. adjusted to 4. Similarly, if the motion gear is high, the exposure time is adjusted to 12.5 and the gain to 8.
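The allocation scheme above, including the worked numbers, can be condensed into one function. This is a sketch of the described scheme, not the patent's implementation; the function name and the gear-to-ratio table are assumptions taken from the example figures in the text.

```python
def allocate_exposure(required_exposure, exposure_time_limit, gear):
    """Split a required exposure amount into (exposure_time, gain) using
    exposure = exposure_time * gain. The low gear follows the best-SNR
    principle (fill exposure time up to its limit, then gain); medium and
    high gears apply the 2x and 4x motion exposure ratios from the text."""
    # Best-SNR baseline: prefer exposure time up to its upper limit.
    time = min(required_exposure, exposure_time_limit)
    gain = required_exposure / time
    # Motion exposure ratio: shorten the time, raise the gain, keep the product.
    ratio = {"low": 1, "medium": 2, "high": 4}[gear]
    return time / ratio, gain * ratio
```

Note that the product `time * gain` is identical in every gear, so the total exposure amount (and hence picture brightness) is preserved while smear is traded against noise.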
In the embodiment of the application, the second camera remains on, detects the motion level of the target object through a motion detection algorithm, and then adjusts its exposure time and exposure amount. When the target object is at a high motion gear, the exposure time is reduced to avoid smear. When the target object is at a low or medium gear, the exposure time is increased to raise the signal-to-noise ratio and make the image clearer. Adjusting the second camera's shooting parameters to the motion state in this way achieves a better shooting effect, provides better shooting parameters for the first camera, and further improves snapshot quality.
In addition, the first camera must both capture and display the preview picture (that is, capture the environment and display the first shooting preview picture in the shooting preview interface) and capture the target object to form the snapshot image. The background workflow of the first camera and the second camera is described in detail below through some embodiments.
In some embodiments, when the target object does not enter the field of view of the first camera, the target object will not appear in the first shooting preview picture at this time, and the first camera captures a picture in the field of view according to the second shooting parameters to form a first shooting preview picture.
The second shooting parameters can be obtained by the first camera adjusting the shooting parameters according to the captured picture.
In other embodiments, the first camera may be a dual-stream image sensor, and the dual-stream image sensor may enter a predetermined operation mode to output two frames of long-exposure and short-exposure images simultaneously. When the target object enters the field of view of the first camera, the first camera can enter the working mode after the target object is captured by the first camera. In addition, the first camera can also directly enter the working mode when being started.
For the working mode, after the target object enters the field of view of the first camera, a first shooting preview image is displayed in the shooting preview interface (the user can see the target object in the shooting preview interface at this time). If the user clicks the snapshot at this time, in the related art, the background of the electronic device may grab the frame forward to ensure the zero delay effect, and because the short exposure frame does not directly reference the shooting parameter of the second camera, exposure adjustment is automatically performed, so that insufficient frame number (the direct snapshot when the face just enters) adjusted according to the face information easily occurs, which results in unsuitable brightness of the finally-snapped image, for example, the face is dark in a backlight environment.
In this embodiment, the long-exposure frame is used as the first shooting preview picture and displayed in the shooting preview interface for the user to preview, while the short-exposure frame is used as the snapshot image (i.e., when the user triggers the first control in the shooting preview interface to control the first camera to take a picture, the generated snapshot is the short-exposure frame).
In order to ensure the preview effect, the short-exposure frames need not be displayed in the shooting preview interface. For example, after the user controls the electronic device to take a snapshot and the electronic device generates a snapshot image from a short-exposure frame, the snapshot image may be stored directly in the album of the electronic device for subsequent retrieval by the user.
In some embodiments, when the first camera outputs the long-exposure frame and the short-exposure frame at the same time, the two frames use different shooting parameters. Specifically, when the first camera captures the target object, the third shooting parameters of the first camera are determined according to the display state of the target object in the first shooting preview picture, and the first shooting preview picture is updated and displayed according to the picture captured by the first camera with the third shooting parameters, so that the user can view the movement of the target object in the shooting preview interface in real time. In addition, if the user sees the target object in the shooting preview interface and triggers a snapshot, the first camera uses the first shooting parameters previously adjusted by the second camera to obtain the short-exposure frame.
For example, with continued reference to fig. 1, taking the target object as a person: as shown in the right-most drawing in fig. 1, the person enters the field of view of the first camera, is captured by the first camera, and a long-exposure frame is formed and displayed on the shooting preview interface. Assuming the user performs a snapshot at this moment, the shooting parameters of the short-exposure frame of the first camera will follow the first shooting parameters of the second camera. For example, the exposure of the short-exposure frame follows the exposure of the second camera: assuming the sensitivity of the first camera module is c1, and the sensitivity of the second camera module is c2 with exposure e2, then the exposure of the short-exposure frame of the first camera is e1 = c2 × e2 / c1. This ensures that the face brightness of the short-exposure frame is close to the face brightness in the image frames collected by the second camera, which is equivalent to the first camera reaching, within a single frame of exposure, the exposure effect the second camera has already adjusted to, thereby improving the imaging quality of the snapshot.
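As an illustrative sketch (not the embodiment's implementation; the function name and the concrete sensitivity and exposure values below are assumptions), the follow-exposure relation e1 = c2 × e2 / c1 can be written as:

```python
def follow_exposure(e2: float, c1: float, c2: float) -> float:
    """Exposure for the first camera's short-exposure frame, chosen so that
    sensitivity x exposure matches the second camera (c1 * e1 = c2 * e2),
    which keeps the face brightness of the two cameras close."""
    return c2 * e2 / c1

# If the second camera settled on exposure e2 = 0.02 at sensitivity c2 = 200,
# and the first camera's sensitivity is c1 = 400, the short-exposure frame
# uses e1 = 0.01: half the exposure at twice the sensitivity.
e1 = follow_exposure(e2=0.02, c1=400.0, c2=200.0)
```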
In addition, since the long-exposure frame captured by the first camera at the same time is used for the user to preview, the shooting parameters of the long-exposure frame can be adjusted according to the normal shooting-parameter adjustment flow. This flow differs from that of the short-exposure frame: adjusting the shooting parameters of the long-exposure frame may require multiple long-exposure frames to converge on suitable shooting parameters, namely the third shooting parameters mentioned above.
Further, in some embodiments, the exposure time in the first shooting parameters used for the short-exposure frame is shorter than the exposure time in the third shooting parameters used for the long-exposure frame.
In addition, in some embodiments, when the shooting parameters of the long-exposure frame have been adjusted to be close to those of the short-exposure frame (i.e., when the third shooting parameters are close to the first shooting parameters), the adjustment of the preview face brightness is complete. At this time, the strategy of quickly adjusting the face exposure can be exited, so that the short-exposure frame output by the first camera no longer follows the shooting parameters of the second camera (i.e., the short-exposure frame of the first camera no longer uses the first shooting parameters adjusted in advance by the second camera).
In addition, while the short-exposure frame follows the first shooting parameters, the exposure time of the short-exposure frame can be allocated according to the motion detection result for the target object from the second camera, rather than from the first camera. This is mainly because the user usually captures the instant a person enters the picture, and motion detection is a multi-frame algorithm; in that scenario, the motion detection result of the first camera may be inaccurate because the person has only just entered its field of view, so the motion detection result of the second camera is better.
Fig. 6 is a schematic diagram of a background workflow of a camera according to an embodiment of the present application, as shown in fig. 6, including the following steps:
step S610, the first camera and the second camera detect whether a person exists in the field of view in real time.
Step S620, entering a fast-tuning exposure strategy when the second camera detects the person.
In step S630, the second camera adjusts the exposure.
Step S640, the first camera detects the person, the short exposure frame follows the exposure of the second camera, and the long exposure frame continues to use the exposure parameters which are adjusted normally.
Step 650, the exposure of the long exposure frame is adjusted to be close to the short exposure frame, and the fast exposure strategy is exited.
In this embodiment, in the non-snapshot mode, the second camera is generally used to assist the first camera in imaging (e.g., the first camera determines the depth information of the picture in combination with the picture captured by the second camera). When the snapshot mode is entered, because the field of view of the second camera is larger than that of the first camera, the second camera detects the person first; at this time, the second camera enters the fast-tuning exposure strategy to adjust, as quickly as possible, first shooting parameters suitable for the person (for example, adjusting the exposure so that the face brightness is neither too dark nor too bright), so that the first camera can use them to obtain the short-exposure frame.
When the person appears in the field of view of the first camera, the first camera adjusts the third shooting parameters to form the long-exposure frame as the preview picture. As the third shooting parameters are continuously adjusted, they eventually approach the first shooting parameters, and the fast-tuning exposure strategy is exited.
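The workflow of steps S610 to S650 can be sketched as a small state machine. This is a hypothetical illustration only: the class name, the tolerance `CLOSE_ENOUGH` and the numeric exposure values are assumptions, not part of the embodiment.

```python
class FastExposureController:
    CLOSE_ENOUGH = 0.05  # assumed relative tolerance for "close"

    def __init__(self):
        self.fast_mode = False       # True while the fast-tuning strategy is active
        self.short_exposure = None   # follows the second camera (S640)
        self.long_exposure = 0.03    # adjusted by the normal AE loop

    def on_second_camera_detects_person(self, second_cam_exposure: float) -> None:
        # S620/S630: enter the fast-tuning strategy; the short-exposure
        # frame takes the exposure the second camera converged on.
        self.fast_mode = True
        self.short_exposure = second_cam_exposure

    def on_long_frame_ae_step(self, new_long_exposure: float) -> None:
        # S640/S650: the long-exposure frame keeps its normal AE; once it
        # is close to the short-exposure frame, exit the fast strategy.
        self.long_exposure = new_long_exposure
        if self.fast_mode and self.short_exposure:
            rel = abs(self.long_exposure - self.short_exposure) / self.short_exposure
            if rel <= self.CLOSE_ENOUGH:
                self.fast_mode = False
```

The controller stays in fast mode while the long-exposure frame's AE converges, mirroring the text: the short-exposure frame is fixed to the second camera's result, and the strategy exits once the third (long-frame) parameters approach the first.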
In the embodiment of the application, when the target object has not entered the field of view of the first camera, the first camera captures the environmental information in its field of view, adjusts the shooting parameters normally (i.e., the second shooting parameters), and displays the first shooting preview picture in the shooting preview interface for the user to preview. When the target object enters the field of view of the first camera, in order to improve the preview effect, the first camera forms a long-exposure frame using the third shooting parameters and sends it to the shooting preview interface for display. The user can trigger a snapshot based on the long-exposure frame displayed in the shooting preview interface; at this moment, the first camera forms a short-exposure frame using the first shooting parameters adjusted in advance by the second camera, and this short-exposure frame serves as the snapshot image. Thus a high-quality snapshot image is obtained while the user gets a better experience: throughout the snapshot process, the user only needs to open the camera application and trigger the first control when the target object appears on the interface.
In addition, since the user performs the snapshot action only after seeing the target object appear on the shooting preview interface, there is a reaction time (i.e., the time interval from a first moment, when the target object appears on the shooting preview interface, to a second moment, when the user triggers the first control to form the first input and the electronic device performs the snapshot). It can be appreciated that this reaction time causes imaging delay. To this end, in some embodiments, the first camera may be controlled to buffer the raw data of N consecutive frames in real time, the N consecutive frames being mainly short-exposure frames. When the user clicks to shoot, the raw data of the frame at the moment the user actually wanted to capture is selected from the N frames, and a snapshot image is generated from it, so that the imaging delay is avoided and a target object image with the zero-shutter-lag effect is finally obtained. Wherein N is a positive integer.
Take as an example snapshotting the instant a basketball leaves the shooter's hand. The user clicks to shoot immediately upon seeing the basketball leave the hand, but at the instant of the click the basketball has actually already flown for a short time (for example, it is 30 cm away from the hand). The N consecutive frames then comprise the frame in which the ball is 30 cm from the hand and the N-1 frames before it (for example, 20 cm from the hand, 10 cm from the hand, just leaving the hand, and so on). By counting the time difference (for example, 100 ms) between the moment the user wanted, when the basketball leaves the hand, and the moment the user actually clicked, it can be determined which of the N consecutive frames is the image frame of the snapshot moment the user really wanted.
In this embodiment, when the first camera shoots with the first shooting parameters, N short-exposure frames are formed. The first camera can buffer the latest N short-exposure frames in real time. For example, if the user triggers the first control to form the first input at moment T1, and the electronic device performs the snapshot to obtain one short-exposure frame at moment T1, then the first camera actually buffers the N short-exposure frames from moment T0 to moment T1.
The moment T0 may be the moment the target object just enters the field of view of the first camera, and the short-exposure frame at moment T0 is captured by the first camera according to the first shooting parameters. Alternatively, the moment T0 may be determined from the number of short-exposure frames the first camera can buffer in real time; for example, if the first camera can buffer at most X short-exposure frames and the interval between adjacent frames is Y milliseconds, then T0 = T1 - X × Y.
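A minimal sketch of the buffering and frame picking described above (the class and method names are assumptions; real raw sensor data would replace the placeholder string payloads):

```python
from collections import deque

class ZslBuffer:
    """Ring buffer of the latest N short-exposure frames for a
    zero-shutter-lag effect (illustrative sketch)."""

    def __init__(self, n_frames: int):
        self.frames = deque(maxlen=n_frames)  # oldest frames evicted automatically

    def push(self, timestamp_ms: int, raw) -> None:
        self.frames.append((timestamp_ms, raw))

    def pick(self, shutter_ms: int, reaction_ms: int):
        """Pick the buffered frame closest to the instant the user actually
        wanted: the shutter press minus an assumed reaction delay."""
        target = shutter_ms - reaction_ms
        return min(self.frames, key=lambda f: abs(f[0] - target))

buf = ZslBuffer(n_frames=5)
for t in (0, 33, 66, 99, 132):            # ~30 fps short-exposure stream
    buf.push(t, raw=f"frame@{t}")
# Shutter pressed at T1 = 132 ms; with a 100 ms reaction time, the user
# actually wanted the frame near t = 32 ms.
ts, raw = buf.pick(shutter_ms=132, reaction_ms=100)
```

Here `deque(maxlen=N)` naturally implements "buffer the latest N frames": pushing frame N+1 evicts the frame at T0.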
A frame meeting a preset condition is selected from the N short-exposure frames according to a frame selection strategy to serve as the target image. For example, the user sets the preset condition according to his own needs (assuming the target object is a person, the preset condition may be that the face brightness is moderate, there is no smear, and the person is centered); photo generation processing is then performed on the target image, and the target image is finally output.
In the embodiment of the application, the latest N short-exposure frames are buffered in real time by the first camera, which solves the delay problem and achieves the zero-shutter-lag effect. In addition, the user can select from the N short-exposure frames to determine the final suitable target object image, which increases the user's range of choice.
The working process of the electronic device is described in detail below through some embodiments. Fig. 7 is a flowchart of interface display in the image capturing process according to the embodiment of the present application. Taking the electronic device as a mobile phone and the target object as a person as an example, as shown in fig. 7, the process includes the following steps:
Step S710, in response to the camera application being started, entering a portrait photographing mode.
Step S720, presenting a portrait mode preview interface.
Step S730, when the person enters the shooting preview screen, receiving a first input.
Step S740, performing a snapshot according to the first input to generate a result image.
In this embodiment, the user opens a camera application in the mobile phone, and the mobile phone displays the corresponding camera interface. The camera interface may refer to fig. 3: there is an icon in the lower left corner of fig. 3, and the user clicks this icon to enter "portrait mode".
In the "portrait mode", as the person enters the preview screen in the shooting preview interface, referring to fig. 1, the camera background performs the following operations:
(1) The first camera and the second camera are both started and detect in real time whether a person subject is present in the picture; the second camera has a larger field of view (for example, when the first camera is a wide-angle module, the auxiliary camera is an ultra-wide-angle module). The exposure adjustment strategy is configured so that only the long-exposure frame of the first camera performs automatic face exposure and calculates the exposure, while the short-exposure frame of the first camera follows the exposure of the second camera.
(2) The second camera detects the person subject, as shown in the middle drawing of fig. 1; at this time, the second camera enters the strategy of quickly adjusting the face exposure, performs automatic exposure on the face, and starts to adjust its exposure.
Meanwhile, the second camera always runs a motion detection algorithm and can calculate the motion level of the person in real time. The exposure time and the gain are allocated reasonably according to the motion level, which better balances the benefits of reducing motion smear and improving the signal-to-noise ratio. The motion detection algorithm detects the position change of the person subject between frames and defines high, medium and low motion levels according to the degree of that change; the case where the person is stationary can be classified as the low level. In addition, the current level is output in real time during preview.
Adjusting the exposure of the second camera means reasonably allocating the currently required exposure into exposure time and gain according to the currently detected motion level. In the initial mode, following the optimal signal-to-noise-ratio principle, the exposure time is increased preferentially; when the exposure time reaches its upper limit, the gain is increased, until the currently required exposure is reached. By way of example, assume that the currently required exposure is:
B = T × a
In the above formula, B is the currently required exposure, T is the exposure time, and a is the gain. When the value of B is fixed, the initial mode adjusts T preferentially until T reaches its upper limit; if T × a is still smaller than B at that point, the gain is increased until the currently required exposure is finally reached.
In this embodiment, the low motion level reuses the initial mode. At the medium motion level, a motion exposure ratio of 2 is used: the exposure time is reduced to 1/2 of its original value and the gain is multiplied by 2. At the high motion level, a motion exposure ratio of 4 is used: the exposure time is reduced to 1/4 of its original value and the gain is multiplied by 4. The motion exposure ratios of the medium and high levels can be tuned according to actual measurement, with the aim of balancing the combined benefit of reducing motion smear and improving the signal-to-noise ratio.
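The allocation rule above can be sketched as follows. The ratios 2 and 4 follow the text, while the function name and the numeric values in the usage are assumptions for illustration:

```python
def allocate_exposure(required_b: float, t_max: float, motion_level: str):
    """Split the required exposure B into (exposure_time, gain) with B = T * a.
    Baseline mode: raise T first (best signal-to-noise ratio), then raise gain
    once T hits its cap. Medium/high motion shorten T by the motion exposure
    ratio (2x / 4x) and raise the gain correspondingly, preserving B."""
    t = min(required_b, t_max)   # baseline: gain starts at 1.0, T grows first
    a = required_b / t
    ratio = {"low": 1.0, "medium": 2.0, "high": 4.0}[motion_level]
    return t / ratio, a * ratio

# B = 0.08 with an exposure-time cap of 0.04: baseline gives T = 0.04, a = 2.0;
# high motion shortens T to 0.01 and raises the gain to 8.0 (B is preserved).
t, a = allocate_exposure(0.08, t_max=0.04, motion_level="high")
```

Shortening T at higher motion levels trades signal-to-noise ratio for less motion smear, which is exactly the balance the text describes.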
(3) The first camera detects the person subject, as shown in the right drawing of fig. 1. The exposure of the short-exposure frame of the first camera follows the exposure of the second camera, which ensures that the face brightness of the short-exposure frame is close to the face brightness in the image frames of the second camera; this is equivalent to reaching, within a single frame of exposure, the result the second camera obtained by adjusting exposure over multiple frames. The long-exposure frame is used for preview, and its exposure is adjusted according to the normal automatic-exposure strategy.
For step S730, after seeing the person entering the screen, the user may click the photographing button (i.e., the first control).
For step S740, since there is a time interval between the moment the human eye sees the frame to be captured and the click to photograph, in order to guarantee the zero-shutter-lag effect, the latest N short-exposure frames are buffered in real time, and the frame best matching the user's need (the reference frame) is selected from them according to the frame selection strategy for photo generation processing.
According to the embodiment of the application, when snapshotting a person entering the field of view of the first camera, a photo with zero shutter lag, suitable picture brightness and suitable motion smear can be obtained. Using the larger field of view of the second camera, the shooting parameters are adjusted in advance according to the face information captured by the second camera; when the person subject enters the field of view of the first camera, the shooting parameters of the short-exposure frame of the first camera are quickly adjusted in place using the adjusted first shooting parameters, and meanwhile the motion detection result of the second camera guides the allocation of the exposure time of the short-exposure frame of the first camera, so that the motion smear of the person is suitable.
It should be noted that, in the shooting method provided by the embodiment of the present application, the execution subject may be a shooting device, or a processing module in the shooting device for executing the shooting method. In the embodiment of the present application, the shooting device provided by the embodiment of the present application is described by taking as an example the case where the shooting device executes the shooting method.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 8 is a schematic structural diagram of a photographing device according to an embodiment of the present application. As shown in fig. 8, the photographing device 800 includes a display module 810 and an image capturing module 820.
The display module 810 is used for displaying a shooting preview interface. The shooting preview interface comprises a first shooting preview picture captured by a first camera.
The image capturing module 820 is configured to capture, by using a first camera, a target object image according to a first capturing parameter.
The first shooting parameters are generated based on the shooting parameters of the second camera under the condition that the first camera does not capture the target object and the second camera captures the target object.
In the embodiment of the application, the second camera is used to capture the target object first and adjust the shooting parameters in advance, so that when the target object is captured by the first camera and appears in the preview interface and the user controls the first camera to shoot, the first camera can shoot with reference to the shooting parameters adjusted by the second camera, thereby obtaining a snapshot picture with suitable brightness and suitable motion smear.
In some possible implementations of the embodiments of the present application, the field of view range of the first camera is smaller than the field of view range of the second camera, and the first photographing parameter includes at least one of exposure time and exposure amount.
In some possible implementations of the embodiments of the present application, the photographing apparatus further includes a first photographing parameter adjustment module configured to detect a position of the target object in each frame of image captured by the second camera when the target object enters a field of view of the second camera, determine motion data of the target object according to the position of the target object in each frame of image, and determine the first photographing parameter based on the motion data.
In some possible implementations of the embodiments of the present application, the apparatus further includes a display module, configured to display, as the first shooting preview picture, the picture captured by the first camera according to the second shooting parameters.
In some possible implementations of the embodiments of the present application, the apparatus further includes a display update module, configured to determine a third shooting parameter of the first camera according to a display state of the target object in the first shooting preview screen when the first camera captures the target object, and update and display the first shooting preview screen according to a screen captured by the first camera according to the third shooting parameter.
In some possible implementations of the embodiments of the application, the first photographing parameter includes an exposure time, and the exposure time in the first photographing parameter is shorter than the exposure time in the third photographing parameter.
In some possible implementations of the embodiments of the present application, the image capturing module 820 is specifically configured to receive a first input to the first control in the shooting preview interface when the target object appears in the first shooting preview picture, and, in response to the first input, obtain the target object image by shooting with the first camera according to the first shooting parameters.
According to the embodiment of the application, when snapshotting a person entering the field of view of the first camera, a photo with zero shutter lag, suitable picture brightness and suitable motion smear can be obtained. Using the larger field of view of the second camera, the shooting parameters are adjusted in advance according to the face information captured by the second camera; when the person subject enters the field of view of the first camera, the shooting parameters of the short-exposure frame of the first camera are quickly adjusted in place using the adjusted first shooting parameters, and meanwhile the motion detection result of the second camera guides the allocation of the exposure time of the short-exposure frame of the first camera, so that the motion smear of the person is suitable.
The photographing device in the embodiment of the application may be a device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, etc., which is not particularly limited in the embodiments of the present application.
The electronic device in the embodiment of the application can be a terminal with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The device provided by the embodiment of the application can be used for executing the method in the embodiment, and the implementation principle and the technical effect are similar, and are not repeated here.
Fig. 9 is a schematic diagram of a frame of an electronic device according to an embodiment of the present application, as shown in fig. 9, the electronic device 900 includes a processor 901 and a memory 902, where the memory 902 stores a program or an instruction that can be executed by the processor 901, and the program or the instruction implements each step of the foregoing shooting method embodiment when executed by the processor 901, and can achieve the same technical effect, so that repetition is avoided and redundant description is omitted herein.
The electronic device in the embodiment of the application includes the mobile terminal and the non-mobile terminal.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device 1000 includes, but is not limited to, a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components; the power source may be logically connected to the processor 1010 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The electronic device 1000 structure shown in fig. 10 does not constitute a limitation of the electronic device 1000, and the electronic device 1000 may include more or fewer components than shown, combine some components, or have a different arrangement of components, which will not be described in detail herein.
The display unit 1006 is configured to display a shooting preview interface. The shooting preview interface comprises a first shooting preview picture captured by a first camera;
And a processor 1010, configured to obtain, by using the first camera, a target object image according to the first shooting parameter. The first shooting parameters are generated based on the shooting parameters of the second camera under the condition that the first camera does not capture the target object and the second camera captures the target object.
In the embodiment of the application, the second camera is used to capture the target object first and adjust the shooting parameters in advance, so that when the target object is captured by the first camera and appears in the preview interface and the user controls the first camera to shoot, the first camera can shoot with reference to the shooting parameters adjusted by the second camera, thereby obtaining a snapshot picture with suitable brightness and suitable motion smear.
In some possible implementations of the embodiments of the present application, the field of view range of the first camera is smaller than the field of view range of the second camera, and the first photographing parameter includes at least one of exposure time and exposure amount.
In some possible implementations of embodiments of the application, the processor 1010 is specifically configured to:
when the target object enters the field of view of the second camera, detecting the position of the target object in each frame of image shot by the second camera;
determining motion data of the target object according to the position of the target object in each frame of image;
based on the motion data, a first shooting parameter is determined.
In some possible implementations of the embodiments of the present application, the display unit 1006 is specifically configured to capture, by the first camera, a picture obtained according to the second capturing parameter, as a first capturing preview picture.
In some possible implementations of the embodiments of the present application, the processor 1010 is specifically configured to determine, when the first camera captures the target object, a third capturing parameter of the first camera according to a display state of the target object in the first capturing preview screen;
the display unit 1006 is specifically configured to update and display a first capturing preview image by using a first camera according to the image captured by the third capturing parameter.
In some possible implementations of the embodiments of the application, the first photographing parameter includes an exposure time, and the exposure time in the first photographing parameter is shorter than the exposure time in the third photographing parameter.
In some possible implementations of the embodiments of the present application, a user input unit 1007 is further included for receiving a first input to a first control in the shooting preview interface when a target object appears in the first shooting preview screen. Correspondingly, the processor 1010 is specifically configured to obtain, by means of the first camera, a target object image according to the first shooting parameter in response to the first input.
According to the embodiment of the application, when the snapshot person enters the field of view of the first camera, a photo with zero shutter time delay, proper picture brightness and proper motion smear can be obtained. The shooting parameters are adjusted in advance according to the face training degree captured by the second camera through the larger view field range of the second camera, when the person main body enters the view field range of the first camera, the shooting parameters of the short exposure frames of the first camera are quickly adjusted in place by utilizing the adjusted first shooting parameters, and meanwhile, the exposure time of the short exposure frames of the first camera is guided and distributed by utilizing the motion detection result of the second camera, so that the motion trails of the person are suitable.
It should be appreciated that in embodiments of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, where the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1009 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units, and optionally the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application program, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the application also provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the processes of the above shooting method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device of the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the processes of the above shooting method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
An embodiment of the present application provides a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the above-mentioned shooting method embodiment, and achieve the same technical effects, and is not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present application, or the part thereof contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may derive many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (11)

1. A shooting method, applied to an electronic device comprising a first camera and a second camera, the method comprising:
displaying a shooting preview interface, the shooting preview interface including a first shooting preview picture captured by the first camera;
capturing a target object image by the first camera according to a first shooting parameter;
wherein the first shooting parameter is generated based on a shooting parameter of the second camera in a case where the first camera has not captured the target object and the second camera has captured the target object.

2. The method according to claim 1, wherein the field of view of the first camera is smaller than the field of view of the second camera, and the first shooting parameter comprises at least one of an exposure time and an exposure amount.

3. The method according to claim 1, further comprising:
when the target object enters the field of view of the second camera, detecting the position of the target object in each frame of image captured by the second camera;
determining motion data of the target object according to the position of the target object in each frame of image;
determining the first shooting parameter based on the motion data.

4. The method according to claim 1, further comprising:
using the picture captured by the first camera according to a second shooting parameter as the first shooting preview picture.

5. The method according to claim 1, further comprising:
in a case where the first camera captures the target object, determining a third shooting parameter of the first camera according to a display state of the target object in the first shooting preview picture;
updating the displayed first shooting preview picture with the picture captured by the first camera according to the third shooting parameter.

6. The method according to claim 5, wherein the first shooting parameter comprises an exposure time, and the exposure time in the first shooting parameter is shorter than the exposure time in the third shooting parameter.

7. The method according to any one of claims 1 to 6, wherein capturing the target object image by the first camera according to the first shooting parameter comprises:
when the target object appears in the first shooting preview picture, receiving a first input to a first control in the shooting preview interface;
in response to the first input, capturing the target object image by the first camera according to the first shooting parameter.

8. A shooting apparatus, comprising:
a display module, configured to display a shooting preview interface, the shooting preview interface including a first shooting preview picture captured by a first camera;
an image shooting module, configured to capture a target object image by the first camera according to a first shooting parameter;
wherein the first shooting parameter is generated based on a shooting parameter of a second camera in a case where the first camera has not captured the target object and the second camera has captured the target object.

9. The apparatus according to claim 8, further comprising a display update module configured to:
in a case where the first camera captures the target object, determine a third shooting parameter of the first camera according to a display state of the target object in the first shooting preview picture;
update the displayed first shooting preview picture with the picture captured by the first camera according to the third shooting parameter.

10. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 7.

11. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
CN202411595713.7A 2024-11-08 2024-11-08 Shooting methods, devices, equipment and media Pending CN119255086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411595713.7A CN119255086A (en) 2024-11-08 2024-11-08 Shooting methods, devices, equipment and media


Publications (1)

Publication Number Publication Date
CN119255086A true CN119255086A (en) 2025-01-03

Family

ID=94016660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411595713.7A Pending CN119255086A (en) 2024-11-08 2024-11-08 Shooting methods, devices, equipment and media

Country Status (1)

Country Link
CN (1) CN119255086A (en)

Similar Documents

Publication Publication Date Title
JP7169383B2 (en) Capture and user interface using night mode processing
CN112312016B (en) Shooting processing method and device, electronic equipment and readable storage medium
CN111586282B (en) Shooting method, shooting device, terminal and readable storage medium
JP2012511171A (en) Camera system and method with touch focus
CN110933303B (en) Photographing method and electronic device
WO2023160496A1 (en) Photographing method, photographing apparatus, electronic device and readable storage medium
JP7681690B2 (en) Photographing method, photographing device, electronic device, and readable storage medium
CN112822412B (en) Exposure method, exposure device, electronic equipment and storage medium
CN112738397A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112866576A (en) Image preview method, storage medium and display device
WO2022206499A1 (en) Image capture method and apparatus, electronic device and readable storage medium
CN114785969B (en) Shooting method and device
CN111586280B (en) Shooting method, device, terminal and readable storage medium
CN113315903A (en) Image acquisition method and device, electronic equipment and storage medium
US20240397196A1 (en) Shooting method and electronic device
CN115134536B (en) Shooting method and device
CN119255086A (en) Shooting methods, devices, equipment and media
EP3945717A1 (en) Take-off capture method and electronic device, and storage medium
CN113766136A (en) Shooting method and electronic equipment
CN112399092A (en) Shooting method and device and electronic equipment
CN114339357A (en) Image acquisition method, image acquisition device and storage medium
RU2785789C1 (en) Photography using night mode processing and user interface
CN117294933A (en) Shooting method and shooting device
CN119603565A (en) Image processing method, device, equipment and readable storage medium
CN119603547A (en) Video shooting method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination