WO2018133305A1 - Method and device for image processing - Google Patents
Method and device for image processing
- Publication number
- WO2018133305A1 (PCT/CN2017/088085; CN2017088085W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- processed
- subject object
- reference composition
- target
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the present application relates to the field of communications technologies, and in particular, to a method and an apparatus for image processing.
- Current terminals are equipped with dual cameras. When the photographer needs to take a picture, he can touch the display screen to select the subject object and a reference object of the photo to be taken; the terminal then determines the depth of field according to the subject object and the reference object. After focusing is finished, pressing the shutter captures the subject object sharply while the reference object is blurred.
- Blurring the background of a photo in this way highlights the subject, giving the photo greater visual impact and aesthetic appeal.
- The embodiments of the present application provide a method and an apparatus for image processing, which can solve the problem that photos taken by a terminal have a poor picture.
- The present application provides an image processing method, the method comprising: determining, by a terminal, a subject object and a background in an image to be processed, and determining, according to the subject object and the background, a target reference composition corresponding to the image to be processed;
- calibrating the image to be processed with the target reference composition according to the size of the subject object and the position of the subject object in the image to be processed, to obtain a target image. It can be seen that even if the photographer has no composition experience, the terminal can automatically calibrate the image to be processed against the target reference composition to obtain a target image that conforms to it, and this calibration improves the aesthetics of the target image.
- The subject object and the background in the image to be processed may be determined as follows: detect a straight line in the image to be processed, divide the image into at least two regions along the detected line, determine any one of the divided regions as the subject object, and determine the area other than the subject object as the background.
- Determining the target reference composition corresponding to the image to be processed according to the subject object and the background may be implemented as: matching the image to be processed against each pre-stored reference composition and determining the target reference composition that matches it. A plurality of reference compositions are pre-stored in the terminal.
- By matching the image to be processed against each pre-stored reference composition, the terminal can determine the matching target reference composition and then perform the calibration operation to obtain a target image that conforms to it. This improves the look of the photos taken by the terminal, as well as the intelligence of the terminal's shooting and image processing functions.
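The matching step described above can be sketched as a nearest-feature lookup. This is an illustrative Python sketch, not the patented implementation: the stored compositions, their feature tuples, and the function name are all hypothetical.

```python
# Illustrative sketch: each pre-stored reference composition is described
# by a target subject-area ratio and a target subject position (both in
# normalized [0, 1] image coordinates); the best match is the stored
# composition with the smallest feature distance to the measured image.

REFERENCE_COMPOSITIONS = {
    # name: (subject_area_ratio, (cx, cy))
    "rule_of_thirds": (0.20, (1 / 3, 1 / 3)),
    "center":         (0.25, (0.5, 0.5)),
    "horizon_low":    (0.40, (0.5, 2 / 3)),
}

def match_reference_composition(subject_ratio, subject_center):
    """Return the name of the stored composition whose features are
    closest to the measured subject features of the image."""
    def distance(spec):
        ratio, (cx, cy) = spec
        return (abs(ratio - subject_ratio)
                + abs(cx - subject_center[0])
                + abs(cy - subject_center[1]))
    return min(REFERENCE_COMPOSITIONS,
               key=lambda name: distance(REFERENCE_COMPOSITIONS[name]))
```

For example, a subject occupying about 20% of the frame near the upper-left third point would match the "rule_of_thirds" entry, while a centered subject would match "center".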
- Calibrating the image to be processed with the target reference composition to obtain the target image may be implemented as follows:
- determine calibration parameters, which include at least the standard proportion of the subject object and the position of the subject object in the overall picture, and then perform calibration operations on the image to be processed according to the calibration parameters to obtain a target image that satisfies them. Through the calibration operation, a target image conforming to the calibration parameters, and therefore to the target reference composition, is obtained; an image conforming to the target reference composition has stronger impact and is more beautiful.
- the present application provides an apparatus for image processing, which can implement the functions performed by the terminal in the above first aspect, and the functions can be implemented by hardware or by executing corresponding software by hardware.
- the hardware or software includes one or more modules corresponding to the above functions.
- the apparatus includes a processor and a communication interface configured to support the apparatus to perform the corresponding functions of the above methods.
- the communication interface is used to support communication between the device and other network elements.
- the apparatus can also include a memory, coupled to the processor, that stores the program instructions and data necessary for the apparatus.
- the present application provides a computer storage medium for storing computer software instructions for use in the above terminal, comprising a program designed to perform the above aspects.
- Compared with a user who lacks composition experience, the terminal of the present application can determine the target reference composition corresponding to the image to be processed according to the subject object and the background in the image.
- Without requiring any composition experience from the photographer, the terminal can automatically calibrate the image to be processed against the target reference composition to obtain a conforming target image; calibrating with the target reference composition improves the aesthetics of the target image, so that the pictures captured by the terminal look better.
- FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application.
- FIG. 2 is a flowchart of a method for image processing according to an embodiment of the present application.
- FIG. 3a is an exemplary schematic diagram of an image to be processed according to an embodiment of the present application.
- FIG. 3b is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
- FIG. 4a is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
- FIG. 4b is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
- FIG. 5 is an exemplary schematic diagram of a method for image processing according to an embodiment of the present application.
- FIG. 6 is a flowchart of another method for image processing according to an embodiment of the present application.
- FIG. 7 is an exemplary schematic diagram of a target image provided by an embodiment of the present application.
- FIG. 8 is an exemplary schematic diagram of another method for image processing according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an apparatus for image processing according to an embodiment of the present application.
- a terminal also known as a User Equipment (UE) is a device that provides voice and/or data connectivity to users, for example, a handheld device, an in-vehicle device, etc. having a wireless connection and image display and processing functions.
- Common terminals include, for example, mobile phones, cameras, tablets, notebook computers, PDAs, mobile internet devices (MIDs), wearable devices such as smart watches, smart bracelets, pedometers, and the like.
- the mobile phone may include: a radio frequency (RF) circuit 110 , a memory 120 , a communication interface 130 , a display screen 140 , a sensor 150 , an audio circuit 160 , an I/O subsystem 170 , a processor 180 , and Camera 190 and other components.
- The structure of the mobile phone shown in FIG. 1 does not constitute a limitation on the mobile phone; the phone may include more or fewer components than those illustrated, combine some components, split some components, or arrange the components differently.
- the display screen 140 belongs to a user interface (UI), and the display screen 140 can include a display panel 141 and a touch panel 142.
- the handset can include more or fewer components than shown.
- the mobile phone may also include functional modules or devices such as a power supply and a Bluetooth module, and details are not described herein.
- the processor 180 is connected to the RF circuit 110, the memory 120, the audio circuit 160, the I/O subsystem 170, and the camera 190, respectively.
- the I/O subsystem 170 is connected to the communication interface 130, the display screen 140, and the sensor 150, respectively.
- The RF circuit 110 can be used to receive and transmit signals in the course of transceiving information; in particular, it receives downlink information from the base station and delivers it to the processor 180 for processing.
- the memory 120 can be used to store software programs as well as modules.
- the processor 180 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 120.
- Communication interface 130 can be used to receive input numeric or character information, as well as to generate key signal inputs related to user settings and function controls of the handset.
- the display screen 140 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone, and can also accept user input.
- the specific display screen 140 may include a display panel 141 and a touch panel 142.
- the display panel 141 can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
- The touch panel 142, also referred to as a touch screen or touch-sensitive screen, can collect contact or contactless operations performed by the user on or near it (for example, operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 142; contactless operations may include somatosensory operations).
- The operations include single-point control operations, multi-point control operations, and the like, and the touch panel drives the corresponding connected device according to a preset program.
- Sensor 150 can be a light sensor, a motion sensor, or other sensor.
- the audio circuit 160 can provide an audio interface between the user and the handset.
- the I/O subsystem 170 is used to control external devices for input and output, and the external devices may include other device input controllers, sensor controllers, and display controllers.
- The processor 180 is the control center of the mobile phone 200: it connects the various parts of the entire phone through various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, performs the phone's functions and processes data, thereby monitoring the mobile phone as a whole.
- the camera 190 can also be used as an input device, specifically for converting the collected analog video or image signal into a digital signal, and then storing it in the memory 120.
- The camera 190 may be a front camera, a rear camera, a built-in camera, or an external camera; the embodiments of the present application place no limitation on this. In the embodiments of the present application, a dual camera is taken as an example for description.
- the image to be processed may be an image acquired by the dual camera of the terminal in real time, or may be a static image.
- The terminal may divide the image to be processed into a preset number of regions, detect the color of each region, and count the number of regions corresponding to each color. The regions corresponding to the most frequent color are determined as the background, and the regions corresponding to the second most frequent color are determined as the subject object; alternatively, the area other than the background may be determined as the subject object.
- For example, the terminal can detect that most of the regions in the image to be processed are the green of grassland, so the green area is determined as the background and the area other than the green area is the subject object.
- Similarly, the terminal can detect by means of a green detector that the black area shown in FIG. 3b is green and occupies a large proportion of the entire picture, so the black area shown in FIG. 3b is determined as the background and the part other than the black area as the subject object.
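The dominant-color rule above can be sketched as follows, assuming each grid region has already been reduced to a single color label; the function name and labels are hypothetical, for illustration only.

```python
from collections import Counter

# Sketch of the dominant-color rule: the image is divided into a grid of
# regions, each region is reduced to one color label, the most frequent
# color marks background regions, and the rest form the subject object.

def split_background_subject(region_colors):
    """region_colors: list of color labels, one per grid region.
    Returns (background_regions, subject_regions) as index lists."""
    counts = Counter(region_colors)
    background_color, _ = counts.most_common(1)[0]
    background = [i for i, c in enumerate(region_colors) if c == background_color]
    subject = [i for i, c in enumerate(region_colors) if c != background_color]
    return background, subject
```

For the grassland example, regions labelled "green" would be returned as background and the remaining region indices as the subject object.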
- The terminal may also determine the subject object and the background of the image to be processed by detecting straight lines in the image. Specifically, the terminal may divide the image into at least two regions along the detected lines, determine any one of the divided regions as the subject object, and determine the area other than the subject object as the background. It should be noted that, when this method of determining the subject object and the background is adopted, the terminal may perform the subsequent steps taking each region in turn as the subject object, determining a target reference composition for each case.
- For example, suppose the image to be processed includes a beach, the sea, and the sky.
- The boundary line between the beach and the sea is a straight line, and the boundary line between the sea and the sky is also a straight line, as shown in FIG. 4b.
- These two lines divide the image to be processed into three regions. The terminal can determine any one region as the subject object, with the other two regions as the background; alternatively, the terminal can take region 1, region 2, and region 3 each in turn as the subject object and, by performing the subsequent steps, determine the target reference composition for the case where each region is the subject object.
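The line-based division above can be sketched in a few lines; assume the detected boundaries are horizontal lines given by their y coordinates (the function name is hypothetical).

```python
def regions_from_lines(height, line_ys):
    """Divide an image of the given pixel height into horizontal bands
    using the detected straight boundary lines (y coordinates).
    Returns a list of (y_top, y_bottom) bands; each band is a candidate
    subject object, with the remaining bands forming the background."""
    ys = [0] + sorted(line_ys) + [height]
    return [(ys[i], ys[i + 1]) for i in range(len(ys) - 1)]
```

For the beach/sea/sky example, two detected boundary lines yield three bands, matching region 1, region 2, and region 3 in the description.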
- A plurality of composition detectors can be configured in the terminal, and the terminal can apply each composition detector to the image to be processed in turn so as to determine the subject object and the background more accurately; for example, the terminal first detects the image to be processed with a first composition detector.
- If the background is not a single color, or the area of the image other than the background is not concentrated (that is, there may be multiple subject objects in the image), the terminal may determine that the image to be processed does not match the first composition detector.
- A second composition detector can then be used to detect the image to be processed. Suppose the second composition detector is a line detector that detects straight lines in the image; its detection result can be used to determine the subject object and the background. If the line detector also detects no line, detection can continue with other composition detectors.
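The detector cascade described above can be sketched as a simple fall-through loop. This is a hedged illustration, not the claimed implementation; the stub detectors and all names are hypothetical.

```python
# Sketch of trying composition detectors one after another, as the
# description does with a color detector followed by a line detector.

def run_detector_cascade(image, detectors):
    """Apply each detector in turn. A detector returns a
    (subject_object, background) pair, or None when the image does not
    match it. The first successful detector wins."""
    for detect in detectors:
        result = detect(image)
        if result is not None:
            return result
    return None  # no detector matched; a learned model could be a fallback

# Stub detectors standing in for the first (single-color background)
# and second (straight-line) composition detectors.
def color_detector(image):
    return None  # pretend the background is not a single color

def line_detector(image):
    return ("region above the line", "region below the line")
```

Running the cascade on an image where the color detector fails falls through to the line detector, mirroring the order described in the text.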
- the terminal can also identify objects contained in the image to be processed through deep learning techniques, thereby more intelligently determining the subject object and the background.
- The terminal further needs to determine depth of field data according to the subject object, and set the aperture value and the focal length according to the depth of field data. For example, if the depth of field is within 5 meters, the aperture value can be set to 2.8; if the depth of field is beyond 5 meters, the aperture value can be set to 4.
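The aperture rule quoted above amounts to a one-line threshold; the following sketch uses the example values from the description, which are illustrative rather than a normative table.

```python
def aperture_for_depth(depth_of_field_m):
    """Example mapping from the description: within 5 m use f/2.8,
    beyond 5 m use f/4 (illustrative example values only)."""
    return 2.8 if depth_of_field_m <= 5 else 4.0
```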
- Various reference compositions are pre-stored, and the terminal can match the features of the subject object and the background in the image to be processed against the pre-stored reference compositions, and then select the target reference composition from among them.
- this step may determine multiple target reference compositions.
- After step 202, the image to be processed needs to be calibrated with each target reference composition, respectively, to obtain target images conforming to each target reference composition.
- One target image may be selected by the user as the final target image, or the terminal may randomly select a target image as the final one; if the target image randomly selected by the terminal does not meet the user's requirements, the user can manually select another target image.
- Specifically, the terminal may receive a selection instruction input by the user and, according to the selection instruction, take the target image selected by the user as the final target image. For example, in FIG. 5, if the user performs a click operation on target image 2, the terminal takes target image 2 as the final target image.
- Compared with the prior art, in which the photographed picture is poor because the user lacks composition experience, the image processing method provided here determines the target reference composition corresponding to the image to be processed from the subject object and the background, and the terminal automatically calibrates the image against the target reference composition to obtain a conforming target image, without the photographer needing any composition experience. Calibrating with the target reference composition improves the aesthetics of the target image, so the pictures taken by the terminal look better.
- The foregoing step 203, calibrating the image to be processed with the target reference composition according to the size of the subject object and the position of the subject object in the image to obtain the target image, can be implemented as follows:
- determine calibration parameters, which include at least the standard proportion of the subject object and the position of the subject object in the overall picture;
- the calibration operation performed on the image to be processed may be cropping, rotation, or the like.
- Suppose a target reference composition is the rule-of-thirds composition.
- The rule of thirds divides the scene into three equal parts with two horizontal lines and into three equal parts with two vertical lines; this is equivalent to dividing the scene with two horizontal and two vertical lines in the shape of the Chinese character "井" ("well"), producing four intersections, and the subject object is placed at one of these intersections.
- The terminal can calculate the calibration parameters and crop the image to be processed according to them to obtain a target image conforming to the rule-of-thirds composition; as shown in FIG. 7, cropping the image to be processed along the thick line yields a target image that conforms to the rule of thirds.
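The rule-of-thirds calibration step above can be sketched as computing the four "井"-shaped intersections and the translation that moves the subject centre onto the nearest one. This is an illustrative sketch with hypothetical function names, not the patented calibration procedure.

```python
def thirds_intersections(width, height):
    """The four intersections of the two vertical and two horizontal
    third-lines, in pixel coordinates."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

def crop_offset_to_thirds(width, height, subject_center):
    """Translation (dx, dy) that maps subject_center onto the nearest
    thirds intersection; cropping/shifting by this offset yields an
    image whose subject sits on a rule-of-thirds point."""
    sx, sy = subject_center
    tx, ty = min(thirds_intersections(width, height),
                 key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
    return tx - sx, ty - sy
```

A subject centred at (90, 110) in a 300x300 frame is closest to the (100, 100) intersection, so the computed offset nudges the crop window by (10, -10).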
- When the image to be processed is a live image that the camera is capturing,
- the terminal can display the rule-of-thirds grid on the shooting interface. As shown in FIG. 8, the four intersections in FIG. 8 are the candidate positions of the subject object.
- The user can then be prompted to adjust the image currently captured by the camera so that the actual subject object coincides with the subject position indicated in the rule-of-thirds grid, thereby capturing a target image that conforms to the composition.
- Assuming that another target reference composition determined from the image to be processed shown in FIG. 3 is the center composition, the center composition can be offered as an alternative option for the target reference composition. If the user selects the rule-of-thirds composition, the terminal's shooting interface displays the rule-of-thirds grid; if the user selects the center composition, the shooting interface displays the center composition.
- In other words, the terminal may process the image captured in real time, determine the target reference composition by detecting the image currently captured by the camera, and display the target reference composition on the shooting interface, so that the user can adjust the captured image according to the displayed composition; the captured image then conforms to the target reference composition, and a more beautiful photo is taken.
- the terminal includes hardware structures and/or software modules corresponding to each function.
- The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the units and algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
- the embodiment of the present application may divide the function module into the terminal according to the foregoing method example.
- each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and may be further divided in actual implementation.
- FIG. 9 shows a possible structural diagram of the terminal involved in the above embodiment.
- the terminal includes a determining module 901 and a calibration module 902.
- The determining module 901 is configured to support the terminal in performing step 201 and step 202 in FIG. 2, and
- the calibration module 902 is configured to support the terminal in performing step 203 in FIG. 2 and steps 2031 to 2032 in FIG. 6.
- The determining module 901 and the calibration module 902 shown in FIG. 9 can be integrated into the processor 180 shown in FIG. 1, so that the processor 180 performs the specific functions of the determining module 901 and the calibration module 902.
- Embodiments of the present application also provide a computer storage medium for storing computer software instructions for the above terminal, including a program designed to perform the steps performed by the terminal in the above embodiments.
- The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
- The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
- the processor and the storage medium may also exist as discrete components in the core network interface device.
- the disclosed system, apparatus, and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- The division of units is only a logical function division;
- in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each functional unit may exist independently, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
Abstract
The invention relates to an image processing method in the field of communications technologies, capable of solving the problem that a photograph taken by a terminal gives a poor picture impression. In the present invention, a target reference composition corresponding to an image to be processed is determined by means of a subject object and a background in the image to be processed; then, according to the size of the subject object and the position of the subject object in the image to be processed, the image to be processed and the target reference composition are calibrated to obtain the target image. The solution provided by the present invention is suitable for use during image processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780007318.4A CN109479087B (zh) | 2017-01-19 | 2017-06-13 | 一种图像处理的方法及装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710044420 | 2017-01-19 | ||
CN201710044420.3 | 2017-01-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018133305A1 (fr) | 2018-07-26 |
Family
ID=62907547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/088085 WO2018133305A1 (fr) | 2017-01-19 | 2017-06-13 | Procédé et dispositif de traitement d'image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109479087B (fr) |
WO (1) | WO2018133305A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113206956B (zh) * | 2021-04-29 | 2023-04-07 | 维沃移动通信(杭州)有限公司 | 图像处理方法、装置、设备及存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5873007A (en) * | 1997-10-28 | 1999-02-16 | Sony Corporation | Picture composition guidance system |
CN101000451A (zh) * | 2006-01-10 | 2007-07-18 | 英保达股份有限公司 | 自动构图支持装置及方法 |
CN103384304A (zh) * | 2012-05-02 | 2013-11-06 | 索尼公司 | 显示控制设备、显示控制方法、程序和记录介质 |
CN104243787A (zh) * | 2013-06-06 | 2014-12-24 | 华为技术有限公司 | 拍照方法、照片管理方法及设备 |
CN106131418A (zh) * | 2016-07-19 | 2016-11-16 | 腾讯科技(深圳)有限公司 | 一种构图控制方法、装置及拍照设备 |
CN106131411A (zh) * | 2016-07-14 | 2016-11-16 | 纳恩博(北京)科技有限公司 | 一种拍摄图像的方法和装置 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104917951A (zh) * | 2014-03-14 | 2015-09-16 | 宏碁股份有限公司 | 摄像装置及其辅助拍摄人像方法 |
2017
- 2017-06-13 WO PCT/CN2017/088085 patent/WO2018133305A1/fr active Application Filing
- 2017-06-13 CN CN201780007318.4A patent/CN109479087B/zh active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111432122A (zh) * | 2020-03-30 | 2020-07-17 | 维沃移动通信有限公司 | 一种图像处理方法及电子设备 |
CN112037160A (zh) * | 2020-08-31 | 2020-12-04 | 维沃移动通信有限公司 | 图像处理方法、装置及设备 |
CN112037160B (zh) * | 2020-08-31 | 2024-03-01 | 维沃移动通信有限公司 | 图像处理方法、装置及设备 |
CN112734661A (zh) * | 2020-12-30 | 2021-04-30 | 维沃移动通信有限公司 | 图像处理方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN109479087B (zh) | 2020-11-17 |
CN109479087A (zh) | 2019-03-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17892834 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17892834 Country of ref document: EP Kind code of ref document: A1 |