
CN108961157B - Image processing method, image processing device and terminal device - Google Patents


Info

Publication number
CN108961157B
CN108961157B (application CN201810631027.9A)
Authority
CN
China
Prior art keywords
picture
processed
background
detection result
generation countermeasure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810631027.9A
Other languages
Chinese (zh)
Other versions
CN108961157A (en)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810631027.9A
Publication of CN108961157A
Application granted
Publication of CN108961157B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G06T 3/20 — Geometric image transformations in the plane of the image; linear translation of whole images or parts thereof, e.g. panning
    • G06F 18/24 — Pattern recognition; classification techniques
    • G06V 20/10 — Scenes; scene-specific elements; terrestrial scenes
    • G06V 20/35 — Scenes; scene-specific elements; categorising the entire scene, e.g. birthday party or wedding scene
    • G06T 2207/20081 — Image analysis or enhancement; training; learning
    • G06T 2207/20084 — Image analysis or enhancement; artificial neural networks [ANN]
    • G06V 2201/07 — Image or video recognition or understanding; target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application is applicable to the technical field of picture processing, and provides a picture processing method. The method includes: detecting foreground targets in a picture to be processed to obtain a first detection result, where the first detection result is used to indicate whether foreground targets exist in the picture to be processed and, when they exist, to indicate the category of each foreground target and the position of each foreground target in the picture to be processed; performing scene classification on the picture to be processed to obtain a classification result, where the classification result is used to indicate whether the background of the picture to be processed is identified and, when it is identified, to indicate the background category of the picture to be processed; and processing the picture to be processed according to the first detection result, the classification result, and a preset generation countermeasure network. The present application can effectively improve the flexibility of picture style conversion.

Description

Picture processing method, picture processing apparatus and terminal device
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background
At present, many users like to share pictures they have taken on public social platforms, and generally process those pictures first to make them more attractive.
The conventional picture processing method generally includes: first acquiring a picture, then selecting a picture processing mode, and processing the entire acquired picture according to the selected mode. For example, if the selected processing mode is a style conversion mode, the style of the entire acquired picture is converted into the selected picture style.
Because the existing picture processing method can only process the picture as a whole, its flexibility is poor, and it is difficult to meet users' demand for applying diversified processing to the same picture.
Disclosure of Invention
In view of this, embodiments of the present application provide a picture processing method, a picture processing apparatus, a terminal device, and a computer-readable storage medium, so as to solve the problem that in the prior art, only a whole picture can be processed, and the flexibility is poor.
A first aspect of an embodiment of the present application provides an image processing method, including:
detecting foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used for indicating whether the foreground targets exist in the picture to be processed and indicating the types of the foreground targets and the positions of the foreground targets in the picture to be processed when the foreground targets exist;
carrying out scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used for indicating whether the background of the picture to be processed is identified or not and indicating the background category of the picture to be processed when the background of the picture to be processed is identified;
and processing the picture to be processed according to the first detection result, the classification result and a preset generation countermeasure network.
A second aspect of the present application provides a picture processing apparatus, including:
the first detection module is used for detecting foreground targets in the picture to be processed to obtain a first detection result, wherein the first detection result is used for indicating whether the foreground targets exist in the picture to be processed and indicating the types of the foreground targets and the positions of the foreground targets in the picture to be processed when the foreground targets exist;
the classification module is used for carrying out scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used for indicating whether the background of the picture to be processed is identified or not and indicating the background category of the picture to be processed when the background of the picture to be processed is identified;
and the processing module is used for processing the picture to be processed according to the first detection result, the classification result and a preset generation countermeasure network.
A third aspect of the present application provides a terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the picture processing method described above when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the steps of the picture processing method described above.
In the present application, foreground targets in a picture to be processed, such as human faces and animals, are first detected to obtain a foreground target detection result. Second, scene classification is performed on the picture to be processed, that is, the scene to which the current background of the picture belongs is identified, such as a beach scene, a forest scene, a snow scene, a grassland scene, a desert scene, or a blue-sky scene, to obtain a scene classification result. Finally, the picture to be processed is processed according to the foreground target detection result, the scene classification result, and a preset generation countermeasure network. In this way, the foreground targets and the background image in the picture to be processed can be processed comprehensively; for example, a foreground target such as a human face can be converted into one style while a background image such as a blue sky is converted into another. This effectively improves the flexibility of picture style conversion, enhances the user experience, and has strong usability and practicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a picture processing method according to the first embodiment of the present application;
Fig. 2(a) is a schematic diagram of an image whose scene is a landscape according to an embodiment of the present application;
Fig. 2(b) is a schematic diagram of an image whose scene is a beach according to an embodiment of the present application;
Fig. 2(c) is a schematic diagram of an image whose scene is a blue sky according to an embodiment of the present application;
Fig. 3 is a schematic flow chart illustrating an implementation of a picture processing method according to the second embodiment of the present application;
Fig. 4 is a schematic flow chart illustrating an implementation of a picture processing method according to the third embodiment of the present application;
Fig. 5 is a schematic diagram of a picture processing apparatus according to the fourth embodiment of the present application;
Fig. 6 is a schematic diagram of a terminal device provided in the fifth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
In particular implementations, the terminal devices described in embodiments of the present application include, but are not limited to, other portable devices such as mobile phones, laptop computers, or tablet computers having touch sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but is a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Embodiment one:
Referring to fig. 1, which is a schematic diagram of an implementation flow of a picture processing method provided in an embodiment of the present application, the method may include:
step S101, detecting foreground targets in a picture to be processed, and obtaining a first detection result, wherein the first detection result is used for indicating whether the foreground targets exist in the picture to be processed, and indicating the types of the foreground targets and the positions of the foreground targets in the picture to be processed when the foreground targets exist.
In this embodiment, the picture to be processed may be a currently taken picture, a pre-stored picture, a picture acquired from a network, a picture extracted from a video, or the like. For example: a picture taken by a camera of the terminal device; a pre-stored picture sent by a WeChat friend; a picture downloaded from a designated website; or a frame extracted from a currently played video. Preferably, the picture to be processed may also be a preview frame captured after the terminal device starts its camera.
In this embodiment, the first detection result includes, but is not limited to: indication information of whether a foreground target exists in the picture to be processed and, when foreground targets are present, information indicating the category and position of each foreground target contained in the picture. A foreground target may be a target with dynamic characteristics in the picture to be processed, such as a human or an animal; a foreground target may also be a static object that is close to the viewer, such as a flower or a dish of food. Further, the position of each foreground target is identified precisely enough that the identified foreground targets can be distinguished from one another. In this embodiment, after foreground targets are detected, different selection boxes can be used to frame them; for example, a rectangular box may frame an animal and a round box may frame a human face.
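For illustration only, the first detection result described above can be modeled as a small data structure. The following Python sketch is not taken from the patent; the class and field names are assumptions:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ForegroundTarget:
        category: str                    # e.g. "face" or "animal"
        box: Tuple[int, int, int, int]   # position in the picture as (x1, y1, x2, y2)

    @dataclass
    class FirstDetectionResult:
        has_foreground: bool             # whether any foreground target exists
        targets: List[ForegroundTarget] = field(default_factory=list)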
Preferably, a trained scene detection model can be used to detect the foreground targets in the picture to be processed. For example, the scene detection model may be a model with a foreground target detection function, such as MobileNet or the Single Shot MultiBox Detector (SSD), where MobileNet is a lightweight deep neural network designed for mobile and embedded devices such as mobile phones. Of course, other scene detection approaches may also be adopted; for example, whether a predetermined target (e.g., a human face) exists in the picture to be processed may be detected by a target (e.g., face) recognition algorithm, and after the predetermined target is detected, its position in the picture may be determined by a target positioning or target tracking algorithm.
It should be noted that, within the technical scope disclosed by the present invention, other schemes for detecting foreground objects that can be easily conceived by those skilled in the art should also be within the protection scope of the present invention, and are not described herein.
Taking detection of foreground targets in the picture to be processed with the trained scene detection model as an example, the specific training process of the scene detection model is as follows:
pre-obtaining a sample picture and a detection result corresponding to the sample picture, wherein the detection result corresponding to the sample picture comprises the category and the position of each foreground target in the sample picture;
detecting a foreground target in the sample picture by using an initial scene detection model, and calculating the detection accuracy of the initial scene detection model according to a detection result corresponding to the sample picture acquired in advance;
if the detection accuracy is smaller than a preset first detection threshold, adjusting parameters of the initial scene detection model, and detecting the sample picture with the parameter-adjusted scene detection model until the detection accuracy of the adjusted model is greater than or equal to the first detection threshold, at which point the model is taken as the trained scene detection model. Methods for adjusting the parameters include, but are not limited to, the stochastic gradient descent algorithm, the momentum update algorithm, and the like.
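The accuracy-threshold loop just described (and reused below for the scene classification model and, in the third embodiment, for the shallow convolutional neural network) can be sketched as follows. This is a minimal sketch under the assumption that evaluation and one round of parameter adjustment are supplied as callables; none of the names come from the patent:

    from typing import Callable

    def train_until_threshold(adjust_parameters: Callable[[], None],
                              evaluate_accuracy: Callable[[], float],
                              threshold: float,
                              max_rounds: int = 10000) -> float:
        """Adjust model parameters (e.g. by SGD or momentum updates) until the
        accuracy on the sample pictures reaches the preset threshold."""
        accuracy = evaluate_accuracy()
        rounds = 0
        while accuracy < threshold and rounds < max_rounds:  # max_rounds is a safety cap
            adjust_parameters()             # one round of parameter adjustment
            accuracy = evaluate_accuracy()  # re-evaluate on the sample pictures
            rounds += 1
        return accuracy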
Step S102, carrying out scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used for indicating whether the background of the picture to be processed is identified or not, and indicating the background category of the picture to be processed when the background of the picture to be processed is identified.
In this embodiment, the to-be-processed picture is subjected to scene classification, that is, a scene to which the current background in the to-be-processed picture belongs is identified, for example, a landscape scene, a beach scene, a blue sky scene, a forest scene, a snow scene, a grassland scene, a desert scene, and the like. Fig. 2(a), fig. 2(b), and fig. 2(c) respectively show schematic images of scenes of landscape, beach, and blue sky provided by the embodiment of the present application.
Preferably, the trained scene classification model can be used for carrying out scene classification on the background of the picture to be processed. For example, the scene classification model may be a model with a background detection function, such as MobileNet. Of course, other scene classification manners may also be adopted, for example, after a foreground object in the to-be-processed picture is detected by a foreground detection model, the remaining portion in the to-be-processed picture is taken as a background, and the category of the remaining portion is identified by an image identification algorithm.
It should be noted that, within the technical scope of the present disclosure, other schemes for detecting the background that can be easily conceived by those skilled in the art should also be within the protection scope of the present disclosure, and are not described in detail herein.
Taking the detection of the background in the picture to be processed by adopting the trained scene classification model as an example to explain the specific training process of the scene classification model:
obtaining each sample picture and the classification result corresponding to each sample picture in advance; for example, sample picture 1 is a grassland scene, sample picture 2 is a snowfield scene, sample picture 3 is a beach scene, and sample picture 4 is a desert scene;
performing scene classification on each sample picture with an initial scene classification model, and calculating the classification accuracy of the initial model against the classification results obtained in advance, that is, checking whether sample picture 1 is identified as a grassland scene, sample picture 2 as a snowfield scene, sample picture 3 as a beach scene, and sample picture 4 as a desert scene;
if the classification accuracy is smaller than a preset classification threshold (for example, 75%, i.e., fewer than 3 of the 4 sample pictures are identified correctly), adjusting parameters of the initial scene classification model and classifying the sample pictures with the parameter-adjusted model until the classification accuracy of the adjusted model is greater than or equal to the classification threshold, at which point the model is taken as the trained scene classification model. Methods for adjusting the parameters include, but are not limited to, the stochastic gradient descent algorithm, the momentum update algorithm, and the like.
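As a toy check of the 75% criterion in the example above (the sample names and predictions are hypothetical, not from the patent):

    samples = {"sample1": "grassland", "sample2": "snowfield",
               "sample3": "beach", "sample4": "desert"}           # ground truth
    predictions = {"sample1": "grassland", "sample2": "snowfield",
                   "sample3": "desert", "sample4": "desert"}      # 3 of 4 correct

    correct = sum(predictions[name] == scene for name, scene in samples.items())
    accuracy = correct / len(samples)   # 0.75, which meets the 75% threshold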
And step S103, processing the picture to be processed according to the first detection result, the classification result and a preset generation countermeasure network.
The preset generation countermeasure network (generative adversarial network, GAN) comprises a generation network and a discrimination network. The generation network G is a network for generating pictures: it receives a random noise z and generates a picture from this noise, denoted G(z). The discrimination network D is used to judge whether a picture is "real": its input x represents a picture, and its output D(x) represents the probability that x is a real picture, where 1 means x is certainly real and 0 means x cannot be real. During training, the goal of the generation network G is to generate pictures realistic enough to deceive the discrimination network D, while the goal of D is to separate the pictures generated by G from real pictures as reliably as possible. Thus G and D constitute a dynamic "gaming process". In the ideal state, G can generate pictures G(z) good enough to pass for real, and D can no longer tell whether a picture generated by G is real, so D(G(z)) = 0.5.
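This gaming process is commonly written as the following minimax objective (the standard GAN formulation from the literature, not a formula quoted from the patent):

    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]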
In this embodiment, one or more generation countermeasure networks, each with a specified picture processing mode, are trained in advance, so that corresponding processed pictures can be obtained from the outputs of these networks. For example, assuming a pre-trained generation countermeasure network has a picture processing mode that converts a picture's style into an oil-painting style, after the picture to be processed is input into this network, the network outputs a processed picture converted into the oil-painting style.
Specifically, if the first detection result indicates that no foreground object exists in the to-be-processed picture, and the classification result indicates that the background of the to-be-processed picture is identified, then:
selecting a first generation countermeasure network from preset generation countermeasure networks according to the background category of the picture to be processed, and processing the picture to be processed according to the first generation countermeasure network to obtain a processed picture;
or, if the first detection result indicates that a foreground target exists in the to-be-processed picture, and the classification result indicates that the background of the to-be-processed picture is not identified, then:
selecting a second generation countermeasure network from the preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and processing the picture to be processed according to the second generation countermeasure network and the position of each foreground target, to obtain a processed picture; optionally, when there are multiple foreground targets, different second generation countermeasure networks may be selected for different foreground targets, or the same second generation countermeasure network may be selected for all of them.
Or, if the first detection result indicates that a foreground target exists in the to-be-processed picture and the classification result indicates that the background of the to-be-processed picture is identified, then:
and selecting a second generation countermeasure network from preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and processing the picture to be processed according to the second generation countermeasure network, the position of each foreground target and the background category of the picture to be processed indicated by the classification result to obtain a processed picture.
The processing of the picture to be processed includes, but is not limited to, performing style conversion, saturation, brightness, and/or contrast adjustment on the foreground object and/or the background.
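The three cases above amount to a dispatch on the detection and classification results. The following is a minimal Python sketch of that dispatch, assuming the data structures from the earlier sketch and hypothetical select_for_background / select_for_foreground / apply helpers (none of which are named in the patent):

    def process_picture(picture, first_result, classification, gans):
        """Dispatch to a preset generation countermeasure network based on the
        first detection result and the scene classification result."""
        if not first_result.has_foreground and classification.background_identified:
            # Case 1: background only -- select a first GAN by background category.
            gan = gans.select_for_background(classification.background_category)
            return gan.apply(picture)
        if first_result.has_foreground:
            # Cases 2 and 3: process each foreground target at its detected position,
            # possibly with a different second GAN per target category.
            for target in first_result.targets:
                gan = gans.select_for_foreground(target.category)
                picture = gan.apply_region(picture, target.box)
            if classification.background_identified:
                # Case 3 additionally processes the background.
                gan = gans.select_for_background(classification.background_category)
                picture = gan.apply(picture)
        return picture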
Through this embodiment of the application, each foreground target and/or the background image in the picture to be processed can be processed flexibly and in a targeted manner, which enriches the available processing modes and, in turn, the overall processing effect of the picture.
Embodiment two:
referring to fig. 3, it is a schematic diagram of an implementation flow of an image processing method provided in the second embodiment of the present application, where the method may include:
step S301, detecting foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used for indicating whether the picture to be processed has the foreground targets and indicating the types of the foreground targets and the positions of the foreground targets in the picture to be processed when the foreground targets exist;
step S302, performing scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used for indicating whether the background of the picture to be processed is identified or not, and indicating the background category of the picture to be processed when the background of the picture to be processed is identified.
For the specific implementation process of steps S301 and S302, reference may be made to steps S101 and S102, which are not described herein again.
Step S303, receiving a picture processing instruction input by a user, and selecting a corresponding generation countermeasure network from preset generation countermeasure networks according to the picture processing instruction.
Optionally, picture processing mode options are displayed on the display interface of the picture to be processed, and a picture processing instruction is sent when the user clicks the corresponding option. For example, if one option is "oil painting style" and the user clicks it, the terminal device receives the picture processing instruction input by the user and, according to this instruction, selects from the preset generation countermeasure networks a network capable of converting the picture style into the oil-painting style.
Step S304, the picture to be processed is processed according to the first detection result, the classification result and the selected generation countermeasure network, and a processed picture is obtained.
Specifically, if the first detection result indicates that a foreground target exists in the to-be-processed picture and the classification result indicates that the background of the to-be-processed picture is identified, then:
determining the picture area of each foreground target in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture area where each foreground target is located according to the selected generation countermeasure network, to obtain processed picture areas;
and replacing the picture area where each foreground object in the picture to be processed is located with the corresponding processed picture area to obtain a processed picture.
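The three steps above (determine each target's picture area, process it with the selected network, replace it with the processed area) might look like the following numpy-based sketch; apply_gan stands in for the selected generation countermeasure network and is an assumption, not the patent's API:

    import numpy as np

    def process_regions(picture: np.ndarray, boxes, apply_gan):
        """Crop each foreground target's picture area, process it, and paste
        the processed area back into the picture."""
        out = picture.copy()
        for (x1, y1, x2, y2) in boxes:
            region = out[y1:y2, x1:x2]              # picture area where the target is located
            out[y1:y2, x1:x2] = apply_gan(region)   # assumes the output keeps the region's shape
        return out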
Specifically, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that a background target exists in the background, then:
and processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed, and the selected generation countermeasure network. Preferably:
selecting a first generation countermeasure network from the preset generation countermeasure networks according to the background category of the picture to be processed, and determining the picture area where each background target is located according to the position of each background target indicated by the second detection result;
processing the picture area where each background target is located according to the first generation countermeasure network, to obtain processed first picture areas;
selecting a second generation countermeasure network from the preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and determining the picture area where each foreground target is located according to the position of each foreground target indicated by the first detection result;
processing the picture area where each foreground target is located according to the second generation countermeasure network, to obtain processed second picture areas;
and replacing the picture area where each foreground target is located with the corresponding processed second picture area, and replacing the picture area where each background target is located with the corresponding processed first picture area, to obtain the processed picture.
The first generation countermeasure network has one or more specified picture processing modes, each used for performing style conversion on the background and/or the background targets and for adjusting picture parameters such as saturation, brightness, and/or contrast.
The second generation countermeasure network has one or more specified picture processing modes, each used for performing style conversion on the foreground targets and for adjusting picture parameters such as saturation, brightness, and/or contrast.
Embodiment three:
referring to fig. 4, it is a schematic diagram of an implementation flow of an image processing method provided in the third embodiment of the present application, where the method may include:
step S401, detecting foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used for indicating whether the picture to be processed has the foreground targets or not, and indicating the types of the foreground targets and the positions of the foreground targets in the picture to be processed when the foreground targets exist;
step S402, carrying out scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used for indicating whether the background of the picture to be processed is identified or not, and indicating the background category of the picture to be processed when the background of the picture to be processed is identified.
For the specific implementation process of steps S401 and S402, reference may be made to steps S101 and S102, which are not described herein again.
Step S403, if the classification result indicates that the background of the to-be-processed picture is identified, detecting a background target in the background to obtain a second detection result, where the second detection result is used to indicate whether a background target exists in the background and, when a background target exists, to indicate a position of each background target in the to-be-processed picture.
As a preferred embodiment of the present application, in order to further improve the processing effect, after the background of the picture to be processed is identified, the background targets in the background are detected so that they can be processed subsequently. The second detection result includes, but is not limited to: indication information of whether background targets exist in the background and, when they exist, information indicating the category and position of each background target. A background target is one of the various objects composing the background, such as the blue sky and white clouds in a sky scene, or the grass and flowers in a grassland scene. Further, the position of each background target is identified precisely enough that identified background targets can be distinguished from one another. In this embodiment, after background targets are detected, different selection boxes can be used to frame them; for example, a dashed box may frame a white cloud, and an elliptical box may frame a flower.
Preferably, in order to improve the detection efficiency of background targets, this embodiment may use a trained shallow convolutional neural network model to detect the background targets, where a shallow convolutional neural network model is one whose number of convolutional layers is smaller than a predetermined number (for example, 8). For example, the shallow convolutional neural network model may be a model with a background target detection function, such as AlexNet. Of course, a VGGNet model, a GoogLeNet model, a ResNet model, or the like may also be employed.
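As an illustration of "shallow" in this sense, the following PyTorch sketch defines a classifier with only three convolutional layers (fewer than the predetermined number of 8); the layer sizes, class count, and the torch dependency are assumptions for illustration, not the patent's network:

    import torch
    import torch.nn as nn

    # A shallow CNN: 3 convolutional layers, each followed by ReLU and pooling.
    shallow_cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(64 * 28 * 28, 10),  # assumes 224x224 inputs and 10 background-target classes
    )

    logits = shallow_cnn(torch.randn(1, 3, 224, 224))  # -> tensor of shape (1, 10)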
It should be noted that, within the technical scope disclosed by the present invention, other schemes for detecting background targets that can be easily conceived by those skilled in the art should also be within the protection scope of the present invention, and are not described herein.
Taking detection of the background targets with the trained shallow convolutional neural network model as an example, the specific training process of the model is as follows:
the method comprises the steps of obtaining a sample picture and a detection result of a background in the sample picture in advance, wherein the detection result comprises the category and the position of each background target;
detecting a background target in the background by using an initial shallow convolutional neural network model, and calculating the detection accuracy of the initial shallow convolutional neural network model according to a detection result obtained in advance;
and if the detection accuracy is smaller than a preset second detection threshold, adjusting parameters of the initial shallow convolutional neural network model, detecting the background of the sample picture through the shallow convolutional neural network model after parameter adjustment until the detection accuracy of the shallow convolutional neural network model after adjustment is larger than or equal to the second detection threshold, and taking the shallow convolutional neural network model as the trained shallow convolutional neural network model.
And step S404, processing the picture to be processed according to the first detection result, the classification result, the second detection result and a preset generation countermeasure network.
Specifically, if the first detection result indicates that no foreground object exists in the to-be-processed picture, the classification result indicates that the background of the to-be-processed picture is identified, and the second detection result indicates that no background object exists in the background, then:
selecting a first generation countermeasure network from preset generation countermeasure networks according to the background category of the picture to be processed, and processing the picture to be processed according to the first generation countermeasure network to obtain a processed picture;
or, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that a background target does not exist in the background, then:
selecting a second generation countermeasure network from preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and determining the picture area of each foreground target in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture area where each foreground target is located according to the second generation countermeasure network, to obtain processed picture areas;
and replacing the picture area where each foreground object in the picture to be processed is located with the corresponding processed picture area to obtain a processed picture.
Or, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that a background target exists in the background, then:
and processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed and a preset generation countermeasure network.
Optionally, the processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed, and a preset generation countermeasure network includes:
selecting a first generation countermeasure network from preset generation countermeasure networks according to the background category of the picture to be processed, and determining the picture area of each background target in the picture to be processed according to the position of each background target indicated by the second detection result;
processing the picture area where each background target is located according to the first generation countermeasure network, to obtain processed first picture areas;
selecting a second generation countermeasure network from preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and determining the picture area of each foreground target in the picture to be processed according to the position of each foreground target in the picture to be processed indicated by the first detection result;
processing the picture area where each foreground target is located according to the second generation countermeasure network, to obtain processed second picture areas;
and replacing the picture area where each foreground object in the picture to be processed is located with a corresponding processed second picture area, and replacing the picture area where each background object in the picture to be processed is located with a corresponding processed first picture area, so as to obtain a processed picture.
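Putting the five steps above together, a hedged end-to-end sketch in the same numpy convention as the earlier region sketch (first_gan and second_gan are assumed callables, not APIs named in the patent):

    import numpy as np

    def process_with_two_gans(picture, fg_boxes, bg_boxes, second_gan, first_gan):
        """Process background-target areas with the first generation countermeasure
        network and foreground-target areas with the second, then composite both."""
        out = picture.copy()
        for (x1, y1, x2, y2) in bg_boxes:   # background targets -> first GAN
            out[y1:y2, x1:x2] = first_gan(picture[y1:y2, x1:x2])
        for (x1, y1, x2, y2) in fg_boxes:   # foreground targets -> second GAN
            out[y1:y2, x1:x2] = second_gan(picture[y1:y2, x1:x2])
        return out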
The first generation countermeasure network has one or more specified picture processing modes, each used for performing style conversion on the background and/or the background targets and for adjusting picture parameters such as saturation, brightness, and/or contrast.
The second generation countermeasure network has one or more specified picture processing modes, each used for performing style conversion on the foreground targets and for adjusting picture parameters such as saturation, brightness, and/or contrast.
Through this embodiment of the application, not only the foreground targets and the background image in the picture to be processed, but also the background targets within the background image, can be processed, so that pictures are processed at a finer granularity, which effectively improves the overall processing effect and enhances the user experience.
Embodiment four:
Fig. 5 is a schematic diagram of a picture processing apparatus according to the fourth embodiment of the present application; for convenience of description, only the parts related to this embodiment are shown.
The picture processing apparatus 5 may be a software unit, a hardware unit, or a combined software/hardware unit built into a terminal device such as a mobile phone, tablet computer, or notebook computer, or may be integrated into such a terminal device as an independent component.
The picture processing apparatus 5 includes:
a first detection module 51, configured to detect a foreground target in a to-be-processed picture, and obtain a first detection result, where the first detection result is used to indicate whether a foreground target exists in the to-be-processed picture, and when a foreground target exists, to indicate a category of each foreground target and a position of each foreground target in the to-be-processed picture;
the classification module 52 is configured to perform scene classification on the picture to be processed to obtain a classification result, where the classification result is used to indicate whether to identify a background of the picture to be processed, and is used to indicate a background category of the picture to be processed when the background of the picture to be processed is identified;
and the processing module 53 is configured to process the to-be-processed picture according to the first detection result, the classification result, and a preset generation countermeasure network.
Optionally, the image processing apparatus 5 further includes:
and the picture processing instruction receiving module is used for receiving a picture processing instruction input by a user and selecting a corresponding generation countermeasure network from preset generation countermeasure networks according to the picture processing instruction.
Correspondingly, the processing module 53 is specifically configured to process the to-be-processed picture according to the first detection result, the classification result, and the selected generation countermeasure network, so as to obtain a processed picture.
Optionally, the image processing apparatus 5 further includes:
a second detection module, configured to detect a background target in the background when the classification result indicates that the background of the to-be-processed picture is identified, to obtain a second detection result, where the second detection result is used to indicate whether a background target exists in the background, and when a background target exists, to indicate a category of each background target and a position of each background target in the to-be-processed picture;
correspondingly, the processing module 53 is specifically configured to process the to-be-processed picture according to the first detection result, the classification result, the second detection result, and a preset generation countermeasure network.
Optionally, the processing module 53 is specifically configured to, if the first detection result indicates that no foreground object exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that no background object exists in the background, then:
and selecting a first generation countermeasure network from preset generation countermeasure networks according to the background category of the picture to be processed, and processing the picture to be processed according to the first generation countermeasure network.
Optionally, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that a background target does not exist in the background, the processing module 53 includes:
a second generation countermeasure network selection unit, configured to select a second generation countermeasure network from preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and determine, according to the position of each foreground target indicated by the first detection result, a picture area of the to-be-processed picture in which each foreground target is located;
the processed picture area acquisition unit is used for processing the picture area of each foreground target in the picture to be processed according to the second generation countermeasure network to obtain a processed picture area;
and the first processed picture acquisition unit is used for replacing the picture area where each foreground target in the picture to be processed is located with the corresponding processed picture area to obtain a processed picture.
Optionally, the processing module 53 is specifically configured to, if the first detection result indicates that a foreground target exists in the to-be-processed picture, the classification result indicates that a background of the to-be-processed picture is identified, and the second detection result indicates that a background target exists in the background, then:
and processing the picture to be processed according to the category of each foreground target indicated by the first detection result, the position of each foreground target, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed and a preset generation countermeasure network.
Optionally, the processing module 53 includes:
a background target area determining unit, configured to select a first generation countermeasure network from preset generation countermeasure networks according to the background category of the to-be-processed picture, and determine a picture area of each background target in the to-be-processed picture according to the position of each background target indicated by the second detection result;
a first image region obtaining unit, configured to process, according to the first generation countermeasure network, the image region of the to-be-processed image of each background target, so as to obtain a processed first image region;
a foreground target area determining unit, configured to select a second generation countermeasure network from preset generation countermeasure networks according to the category of each foreground target indicated by the first detection result, and determine an image area of each foreground target in the to-be-processed image according to the position of each foreground target in the to-be-processed image indicated by the first detection result;
a second picture area obtaining unit, configured to process, according to the second generated countermeasure network, the picture areas of the to-be-processed picture of the foreground targets, so as to obtain processed second picture areas;
and the fusion image processing unit is used for replacing the picture area where each foreground target in the picture to be processed is located with the corresponding processed second picture area, and replacing the picture area where each background target in the picture to be processed is located with the corresponding processed first picture area, so as to obtain the processed picture.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, which may be referred to in the section of the embodiment of the method specifically, and are not described herein again.
Fig. 6 is a schematic diagram of a terminal device according to a fifth embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62, such as a picture processing program, stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the above-described respective embodiments of the picture processing method, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 51 to 53 shown in fig. 5.
The terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of it; the terminal device may include more or fewer components than those shown, combine some components, or have different components; for example, it may also include input/output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer programs and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described or illustrated in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only a logical division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
Specifically, the present application further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the memory of the foregoing embodiments, or a stand-alone computer-readable storage medium not assembled into the terminal device. The computer-readable storage medium stores one or more computer programs which, when executed by one or more processors, implement the following steps of the picture processing method:
detecting foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used to indicate whether any foreground target exists in the picture to be processed and, when foreground targets exist, to indicate the category of each foreground target and its position in the picture to be processed;
performing scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used to indicate whether the background of the picture to be processed is recognized and, when it is recognized, to indicate the background category of the picture to be processed; and
processing the picture to be processed according to the first detection result, the classification result and a preset generative adversarial network (GAN).
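By way of illustration only, the three steps above can be sketched in a few lines of Python. Everything in this sketch is an assumption made for readability: the detector, classifier and GAN objects are hypothetical callables, and neither these names nor the data structures come from the patent itself. The dictionary lookup deliberately collapses the selection logic that the claims below spell out case by case.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height); an assumed convention

@dataclass
class FirstDetectionResult:
    found: bool                                  # any foreground target present?
    categories: List[str] = field(default_factory=list)
    boxes: List[Box] = field(default_factory=list)

@dataclass
class ClassificationResult:
    recognized: bool                             # was the background recognized?
    background_category: Optional[str] = None

def process_picture(picture, detector, classifier, gan_bank: Dict):
    first = detector(picture)        # step 1: foreground detection
    scene = classifier(picture)      # step 2: scene classification
    # Step 3: pick one preset generative adversarial network according to
    # both results; fall back to returning the picture unchanged.
    key = (first.categories[0] if first.found else None,
           scene.background_category if scene.recognized else None)
    gan = gan_bank.get(key)
    return gan(picture) if gan else picture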
The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application, and all of them are intended to be included within the protection scope of the present application.

Claims (9)

1. A picture processing method, comprising:
detecting foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used to indicate whether any foreground target exists in the picture to be processed and, when foreground targets exist, to indicate the category of each foreground target and its position in the picture to be processed;
performing scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used to indicate whether the background of the picture to be processed is recognized and, when it is recognized, to indicate the background category of the picture to be processed;
processing the picture to be processed according to the first detection result, the classification result and a preset generative adversarial network; and
if the classification result indicates that the background of the picture to be processed is recognized, detecting background targets in the background to obtain a second detection result, wherein the second detection result is used to indicate whether any background target exists in the background and, when background targets exist, to indicate the position of each background target in the picture to be processed;
correspondingly, the processing the picture to be processed according to the first detection result, the classification result and the preset generative adversarial network comprises:
processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network, wherein processing the picture to be processed comprises adjusting picture parameters of a foreground target through a second generative adversarial network and/or of a background target through a first generative adversarial network, the first generative adversarial network and the second generative adversarial network corresponding to different picture processing modes.
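Read as control flow, claim 1 is a conditional second detection followed by two independent parameter adjustments. Below is a minimal sketch in the same hypothetical Python vocabulary as above; the detector and the two GAN callables are assumptions, not part of the claim.

def process_claim1(picture, first, scene, background_detector,
                   first_gan, second_gan):
    # The second detection runs only once the background has been recognized.
    second = background_detector(picture) if scene.recognized else None
    out = picture
    if first.found:
        out = second_gan(out, first.boxes)   # foreground parameter adjustment
    if second is not None and second.found:
        out = first_gan(out, second.boxes)   # background parameter adjustment
    return out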
2. The picture processing method according to claim 1, wherein, before the processing the picture to be processed according to the first detection result, the classification result and the preset generative adversarial network, the method further comprises:
receiving a picture processing instruction input by a user, and selecting a corresponding generative adversarial network from preset generative adversarial networks according to the picture processing instruction;
correspondingly, the processing the picture to be processed according to the first detection result, the classification result and the preset generative adversarial network specifically comprises:
processing the picture to be processed according to the first detection result, the classification result and the selected generative adversarial network to obtain a processed picture.
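In this hypothetical vocabulary, claim 2 reduces to a dictionary lookup keyed by the user's instruction; the example instruction strings are invented for illustration only.

def select_gan(instruction: str, gan_bank: dict):
    # Map a user picture processing instruction (e.g. "beautify" or
    # "stylize" -- invented examples) onto one preset GAN in the bank.
    try:
        return gan_bank[instruction]
    except KeyError as exc:
        raise ValueError(f"no preset GAN for instruction {instruction!r}") from exc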
3. The picture processing method according to claim 1, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that no foreground target exists in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background:
selecting a first generative adversarial network from the preset generative adversarial networks according to the background category of the picture to be processed, and processing the picture to be processed according to the first generative adversarial network.
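Claim 3's branch, in the same assumed vocabulary, is a single lookup keyed by the recognized background category:

def process_background_only(picture, scene, background_gans):
    # No foreground and no background targets: the first GAN, selected by
    # the recognized background category, processes the whole picture.
    first_gan = background_gans[scene.background_category]
    return first_gan(picture)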
4. The picture processing method according to claim 1, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that foreground targets exist in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that no background target exists in the background:
selecting a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determining the picture area in which each foreground target is located in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture area of each foreground target in the picture to be processed according to the second generative adversarial network to obtain processed picture areas; and
replacing the picture area in which each foreground target is located in the picture to be processed with the corresponding processed picture area, to obtain a processed picture.
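The crop-process-paste loop of claim 4 might look as follows. Treating the picture as an H x W x C numpy array and the target positions as (x, y, w, h) boxes are assumptions made purely for this sketch, as is the requirement that each GAN return a region of the same shape it received.

import numpy as np

def replace_foreground_regions(picture: np.ndarray, boxes, categories, gan_bank):
    out = picture.copy()
    for (x, y, w, h), category in zip(boxes, categories):
        region = out[y:y + h, x:x + w]
        processed = gan_bank[category](region)  # second GAN for this category
        out[y:y + h, x:x + w] = processed       # replacement step
    return out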
5. The picture processing method according to claim 1, wherein the processing the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network comprises:
if the first detection result indicates that foreground targets exist in the picture to be processed, the classification result indicates that the background of the picture to be processed is recognized, and the second detection result indicates that background targets exist in the background:
processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed indicated by the classification result, the position of each background target in the picture to be processed, and the preset generative adversarial network.
6. The picture processing method according to claim 5, wherein the processing the picture to be processed according to the category and position of each foreground target indicated by the first detection result, the background category of the picture to be processed, the position of each background target in the picture to be processed, and the preset generative adversarial network comprises:
selecting a first generative adversarial network from the preset generative adversarial networks according to the background category of the picture to be processed, and determining the picture area in which each background target is located in the picture to be processed according to the position of each background target indicated by the second detection result;
processing the picture area of each background target in the picture to be processed according to the first generative adversarial network to obtain processed first picture areas;
selecting a second generative adversarial network from the preset generative adversarial networks according to the category of each foreground target indicated by the first detection result, and determining the picture area in which each foreground target is located in the picture to be processed according to the position of each foreground target indicated by the first detection result;
processing the picture area of each foreground target in the picture to be processed according to the second generative adversarial network to obtain processed second picture areas; and
replacing the picture area in which each foreground target is located in the picture to be processed with the corresponding processed second picture area, and replacing the picture area in which each background target is located with the corresponding processed first picture area, to obtain a processed picture.
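Claims 5 and 6 combine both adjustments. One possible ordering (the claims themselves do not prescribe one) can be sketched with the same assumed numpy layout and per-category GAN banks as above:

def process_full_scene(picture, first, scene, second,
                       background_gans, foreground_gans):
    # Background targets: the first GAN, chosen by the background category.
    first_gan = background_gans[scene.background_category]
    out = picture.copy()
    for (x, y, w, h) in second.boxes:
        out[y:y + h, x:x + w] = first_gan(out[y:y + h, x:x + w])
    # Foreground targets: the second GAN, chosen per foreground category.
    for (x, y, w, h), cat in zip(first.boxes, first.categories):
        out[y:y + h, x:x + w] = foreground_gans[cat](out[y:y + h, x:x + w])
    return out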
7. A picture processing apparatus, comprising:
a first detection module, configured to detect foreground targets in a picture to be processed to obtain a first detection result, wherein the first detection result is used to indicate whether any foreground target exists in the picture to be processed and, when foreground targets exist, to indicate the category of each foreground target and its position in the picture to be processed;
a classification module, configured to perform scene classification on the picture to be processed to obtain a classification result, wherein the classification result is used to indicate whether the background of the picture to be processed is recognized and, when it is recognized, to indicate the background category of the picture to be processed;
a processing module, configured to process the picture to be processed according to the first detection result, the classification result and a preset generative adversarial network; and
a second detection module, configured to detect background targets in the background when the classification result indicates that the background of the picture to be processed is recognized, to obtain a second detection result, wherein the second detection result is used to indicate whether any background target exists in the background and, when background targets exist, to indicate the category of each background target and its position in the picture to be processed;
correspondingly, the processing module is specifically configured to process the picture to be processed according to the first detection result, the classification result, the second detection result and the preset generative adversarial network, wherein processing the picture to be processed comprises adjusting picture parameters of a foreground target through a second generative adversarial network and/or of a background target through a first generative adversarial network, the first generative adversarial network and the second generative adversarial network corresponding to different picture processing modes.
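The module division of claim 7 can be pictured as plain composition; the class and attribute names below are invented for illustration and do not come from the claim.

class PictureProcessingApparatus:
    def __init__(self, first_detector, classifier, second_detector, processor):
        self.first_detection_module = first_detector
        self.classification_module = classifier
        self.second_detection_module = second_detector
        self.processing_module = processor

    def run(self, picture):
        first = self.first_detection_module(picture)
        scene = self.classification_module(picture)
        second = (self.second_detection_module(picture)
                  if scene.recognized else None)
        return self.processing_module(picture, first, scene, second)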
8. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the picture processing method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the picture processing method according to any one of claims 1 to 6.
CN201810631027.9A 2018-06-19 2018-06-19 Image processing method, image processing device and terminal device Expired - Fee Related CN108961157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810631027.9A CN108961157B (en) 2018-06-19 2018-06-19 Image processing method, image processing device and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810631027.9A CN108961157B (en) 2018-06-19 2018-06-19 Image processing method, image processing device and terminal device

Publications (2)

Publication Number Publication Date
CN108961157A CN108961157A (en) 2018-12-07
CN108961157B true CN108961157B (en) 2021-06-01

Family

ID=64491405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810631027.9A Expired - Fee Related CN108961157B (en) 2018-06-19 2018-06-19 Image processing method, image processing device and terminal device

Country Status (1)

Country Link
CN (1) CN108961157B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109739414B (en) * 2018-12-29 2021-12-14 努比亚技术有限公司 Picture processing method, mobile terminal and computer readable storage medium
CN109815997B (en) * 2019-01-04 2024-07-19 平安科技(深圳)有限公司 Method and related device for identifying vehicle damage based on deep learning
CN111752506B (en) * 2019-03-27 2024-02-13 京东方艺云(杭州)科技有限公司 Digital work display method, display device and computer readable medium
CN110544287B (en) * 2019-08-30 2023-11-10 维沃移动通信有限公司 Picture allocation processing method and electronic equipment
CN110766638A (en) * 2019-10-31 2020-02-07 北京影谱科技股份有限公司 Method and device for converting object background style in image
CN111062861A (en) * 2019-12-13 2020-04-24 广州市玄武无线科技股份有限公司 Method and device for generating display image samples
CN111145430A (en) * 2019-12-27 2020-05-12 北京每日优鲜电子商务有限公司 Method and device for detecting commodity placing state and computer storage medium
CN111340124A (en) * 2020-03-03 2020-06-26 Oppo广东移动通信有限公司 Method and device for identifying entity category in image
CN112954138A (en) * 2021-02-20 2021-06-11 东营市阔海水产科技有限公司 Aquatic economic animal image acquisition method, terminal equipment and movable material platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105788142A (en) * 2016-05-11 2016-07-20 中国计量大学 A fire detection system and detection method based on video image processing
CN106296629A (en) * 2015-05-18 2017-01-04 富士通株式会社 Image processing apparatus and method
CN107845072A (en) * 2017-10-13 2018-03-27 深圳市迅雷网络技术有限公司 Image generating method, device, storage medium and terminal device
CN107862658A (en) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable storage medium, and electronic device
CN107944499A (en) * 2017-12-10 2018-04-20 上海童慧科技股份有限公司 Background detection method with simultaneous foreground-background modeling

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100169784A1 (en) * 2008-12-30 2010-07-01 Apple Inc. Slide Show Effects Style
CN104156915A (en) * 2014-07-23 2014-11-19 小米科技有限责任公司 Skin color adjusting method and device
CN105138693A (en) * 2015-09-18 2015-12-09 联动优势科技有限公司 Method and device for accessing databases
US10592885B2 (en) * 2016-07-05 2020-03-17 Visa International Service Association Device for communicating preferences to a computer system
CN106101547A (en) * 2016-07-06 2016-11-09 北京奇虎科技有限公司 Image data processing method, device and mobile terminal
CN107679465B (en) * 2017-09-20 2019-11-15 上海交通大学 A Generation and Expansion Method for Person Re-ID Data Based on Generative Networks
CN107592517B (en) * 2017-09-21 2020-03-24 青岛海信电器股份有限公司 Skin color processing method and device
CN108038452B (en) * 2017-12-15 2020-11-03 厦门瑞为信息技术有限公司 A method for fast detection and recognition of home appliance gestures based on local image enhancement

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296629A (en) * 2015-05-18 2017-01-04 富士通株式会社 Image processing apparatus and method
CN105788142A (en) * 2016-05-11 2016-07-20 中国计量大学 A fire detection system and detection method based on video image processing
CN107845072A (en) * 2017-10-13 2018-03-27 深圳市迅雷网络技术有限公司 Image generating method, device, storage medium and terminal device
CN107862658A (en) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, computer-readable storage medium, and electronic device
CN107944499A (en) * 2017-12-10 2018-04-20 上海童慧科技股份有限公司 Background detection method with simultaneous foreground-background modeling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Background recovery from multiple images; Aditee Shrotre et al.; 2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE); 2013-10-24; pp. 135-140 *
Real-time image stylization algorithm based on distributed TensorFlow and foreground-background separation; Wu Liankun; China Master's Theses Full-text Database, Information Science and Technology Series; 2018-02-15 (No. 2); pp. I138-1528 *

Also Published As

Publication number Publication date
CN108961157A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108961157B (en) Image processing method, image processing device and terminal device
KR102083696B1 (en) Image identification and organisation according to a layout without user intervention
CN105659286B (en) Automated image cropping and sharing
CN109189879B (en) Electronic book display method and device
CN109284445B (en) Network resource recommendation method and device, server and storage medium
CN108898082B (en) Picture processing method, picture processing device and terminal equipment
CN110246110B (en) Image evaluation method, device and storage medium
CN109086742A (en) scene recognition method, scene recognition device and mobile terminal
CN108961267B (en) Picture processing method, picture processing device and terminal equipment
CN111026992B (en) Multimedia resource preview method, device, terminal, server and storage medium
KR102799446B1 (en) Determining User Lifetime Value
CN110163066B (en) Multimedia data recommendation method, device and storage medium
US20210335391A1 (en) Resource display method, device, apparatus, and storage medium
CN108898587A (en) Picture processing method, picture processing device and terminal equipment
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN110069649B (en) Graphic file retrieval method, graphic file retrieval device, graphic file retrieval equipment and computer readable storage medium
CN110929159B (en) Resource release method, device, equipment and medium
CN108932703B (en) Picture processing method, picture processing device and terminal equipment
CN108805095A (en) Picture processing method and device, mobile terminal and computer readable storage medium
CN108776959B (en) Image processing method and device and terminal equipment
CN109089040B (en) Image processing method, image processing device and terminal device
CN108763491B (en) Picture processing method and device and terminal equipment
CN108898169B (en) Picture processing method, picture processing device and terminal equipment
CN110675473A (en) Method, device, electronic equipment and medium for generating GIF dynamic graph
CN111897709B (en) Method, device, electronic equipment and medium for monitoring user

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210601

CF01 Termination of patent right due to non-payment of annual fee