Detailed Description
In order that the technical solutions in the embodiments of the present application may be better understood, those technical solutions will be described below clearly and completely with reference to the accompanying drawings of the embodiments. It is obvious that the described embodiments are only some, not all, embodiments of the present application. All other embodiments derived by a person skilled in the art based on the embodiments of the present application shall fall within the scope of protection of the present application.
The implementation of the embodiments of the present application will be further described below with reference to the accompanying drawings.
Fig. 1 shows an exemplary system to which an image data processing method of an embodiment of the present application is applied. As shown in fig. 1, the system 100 may include a server 102, a communication network 104, and/or one or more user devices 106, which are illustrated in fig. 1 as a plurality of user devices.
Server 102 may be any suitable server for storing information, data, programs, and/or any other suitable type of content. In some embodiments, server 102 may perform any suitable functions. For example, in some embodiments, the server 102 may perform image data processing. As an alternative example, in some embodiments, the server 102 may be used to determine a background area of an image. As another example, in some embodiments, server 102 may be used for subsequent other processing based on the determined background region, such as image background recognition or image segmentation, etc.
In some embodiments, the communication network 104 may be any suitable combination of one or more wired and/or wireless networks. For example, the communication network 104 can include any one or more of the Internet, an intranet, a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a Digital Subscriber Line (DSL) network, a frame relay network, an Asynchronous Transfer Mode (ATM) network, a Virtual Private Network (VPN), and/or any other suitable communication network. The user device 106 can be coupled to the communication network 104 via one or more communication links (e.g., communication link 112), and the communication network 104 can be linked to the server 102 via one or more communication links (e.g., communication link 114). The communication link may be any communication link suitable for transferring data between the user device 106 and the server 102, such as a network link, a dial-up link, a wireless link, a hardwired link, any other suitable communication link, or any suitable combination of such links.
The user device 106 may comprise any one or more user devices suitable for rendering images. In some embodiments, the user device 106 may send an image to be processed to the server 102, requesting the server 102 to determine a background area for the image, and receive information about the background area fed back by the server 102. However, the present application is not limited thereto: the user device 106 may implement the functions of the server 102 locally, without using a server. That is, the image data processing scheme of the embodiments of the present application may be implemented on the server 102 side or on the user device 106 side.
In some embodiments, user device 106 may comprise any suitable type of device. For example, in some embodiments, user devices 106 may include mobile devices, tablet computers, laptop computers, desktop computers, wearable computers, game consoles, media players, vehicle entertainment systems, and/or any other suitable type of user device.
Although server 102 is illustrated as one device, in some embodiments any suitable number of devices may be used to perform the functions of server 102. For example, in some embodiments, multiple devices may be used to implement the functions performed by server 102. Alternatively, the functionality of server 102 may be implemented using cloud services.
Based on the above system, embodiments of the present application provide an image data processing method, which will be described below by way of a plurality of embodiments.
Example 1
Referring to fig. 2A, a flowchart of steps of an image data processing method according to a first embodiment of the present application is shown.
The image data processing method of the present embodiment includes the steps of:
Step S202, obtaining an image to be processed in a preset color space and a background color range matched with the image to be processed.
The preset color space is a color space capable of reflecting the color tone of the image to be processed and the background color range.
Colorimetry establishes various color models in which a given color is represented by one-, two-, three-, or even four-dimensional spatial coordinates; the coordinate system defines a color range, that is, a color space. Commonly used color spaces include the RGB, HSV, HSL, and YUV color spaces, among others. In the embodiments of the present application, a color space that can reflect the hue of the image to be processed, such as the HSV or HSL color space, is selected so that the data can be processed conveniently. It should be apparent to those skilled in the art that, if the original image to be processed takes the form of another color space, the image may be converted to a color space that reflects its hue before the image data processing of the embodiments of the present application. For specific conversion methods, reference may be made to the related art; they are not described in detail herein.
A color space is typically made up of a number of components, referred to in the embodiments of the present application as color components. For example, the RGB color space includes an R component, a G component, and a B component; the HSV color space includes an H component, an S component, and a V component; and so forth. A color space that can reflect the hue of the image to be processed needs to have a corresponding color component, such as an H color component. Such a color space facilitates subsequent image data processing and improves its efficiency. Hue can be measured as an angle with a value range of 0° to 360°, computed counterclockwise starting from red: red is 0°, green is 120°, blue is 240°, and so on.
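As an illustrative sketch (not part of the embodiment itself), the hue angle can be computed from RGB with Python's standard `colorsys` module, whose `rgb_to_hsv` returns the hue normalized to [0, 1):

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue angle in degrees (0-360) for RGB components in [0, 1]."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0

# Red maps to 0 degrees; green and blue map (up to floating point)
# to 120 and 240 degrees, matching the counterclockwise convention.
red_hue = hue_degrees(1.0, 0.0, 0.0)
green_hue = hue_degrees(0.0, 1.0, 0.0)
blue_hue = hue_degrees(0.0, 0.0, 1.0)
```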
In this embodiment, the background color range is matched with the image to be processed; in practical applications it can be manually specified in advance, or obtained by performing a rough background detection on the image to be processed. In the embodiments of the present application, the background color range is a range rather than a single color: a background object such as a curtain or a wall exhibits color differences of a certain span due to wrinkles, stains, lighting, angle, and exposure, so the background color range may be understood as a minimal range containing the background colors.
Step S204, determining first color data corresponding to the image to be processed and second color data corresponding to the background color range.
Wherein the first color data and the second color data each include a plurality of color components corresponding to the color space.
Taking the HSV color space as an example, it includes an H component, an S component, and a V component, from which HSV model data is formed by fusing the components together. In this example, the first color data corresponding to the image to be processed may be first HSV model data including H component, S component, and V component, and similarly the second color data corresponding to the background color range may be second HSV model data also including H component, S component, and V component.
Step S206, obtaining the minimum distance between the color of the pixel in the image to be processed and the color of the background color range according to the minimum distance between the first color data and the second color data.
It should be noted that, in the embodiment of the present application, the "color" refers to a combination of a plurality of components in a color space, and not to a specific color (hue). For example, still taking the HSV color space as an example, the color of a pixel means the HSV model data corresponding to the pixel, and not just the H component thereof.
When the minimum distance between the first color data and the second color data is calculated, however, the calculation can be reduced to the dimension of a single color component. On one hand, this makes the calculation more accurate; on the other hand, it greatly reduces the computing resources required. After the minimum distance corresponding to each single color component is obtained, the results can be fused again to form the corresponding color data, such as HSV model data.
In order to calculate the minimum distance between the first color data and the second color data efficiently, in one possible manner a nonlinear transformation (referred to herein as a second nonlinear transformation) is first performed on the saturation component and the luminance component of the first color data, and a third nonlinear transformation is performed on the saturation component and the luminance component of the second color data. The original saturation and luminance components of the first color data are then replaced with the components after the second nonlinear transformation, and those of the second color data with the components after the third nonlinear transformation, so as to improve sensitivity in mid-to-low-saturation and bright regions. This also makes the subsequent per-component calculation of the minimum distance easier. The second and third nonlinear transformations may use the same processing method or different ones; using the same method is preferred.
Step S208, determining the background area of the image to be processed according to the minimum distance between the color of the pixel in the image to be processed and the color of the background color range.
According to the minimum distance between the color of a pixel in the image to be processed and the color of the background color range, it can be effectively judged whether that pixel belongs to the possible background color range; on this basis, the background area of the image to be processed can be determined from the pixels judged to belong to the background color range.
In one possible way, this can be realized by performing a first nonlinear transformation on the minimum distance between the color of the pixel in the image to be processed and the color of the background color range, determining the background color range of the image to be processed according to the result of the first nonlinear transformation, and determining the background area of the image to be processed according to that background color range. The first nonlinear transformation may be implemented by a person skilled in the art in any suitable manner according to actual requirements, such as by a power function; the embodiments of the present application do not limit this. The first nonlinear transformation makes the transition gentler in the range near the background color, which addresses the problem that the recognition of some background noise points, or of a small portion of the background color range, is not entirely accurate.
In addition, in order to screen out such areas effectively, in one feasible manner, when the background area of the image to be processed is determined according to the background color range, a cutoff threshold of the background color can be determined according to the background color range, and the background area of the image to be processed is determined according to that cutoff threshold. For example, a pixel exceeding the cutoff threshold is considered a foreground-color pixel, while a pixel not exceeding it may be a background-color-range pixel or a foreground-plus-background pixel (i.e., a translucent color). The reverse convention applies equally.
Through the above process, the background area in the image to be processed can be effectively determined.
Hereinafter, the above-described process is exemplarily described with one scene example, as shown in fig. 2B.
In fig. 2B, it is assumed that the acquired original image is an RGB image, and the corresponding background color range is a green color range. Illustratively, in this example, the image is first transformed into HSV space to be an HSV image as the image to be processed. And then carrying out rough estimation calculation on the background color range to obtain the green range of the HSV space. In practical applications, the manner of manually designating the background color range is also applicable.
Next, first color data corresponding to the HSV image is determined, indicated in the figure as "HSV model data corresponding to the image", and second color data corresponding to the green range of the HSV space is determined, indicated as "HSV model data corresponding to the green range". A minimum distance (referred to herein as a first minimum distance) between the two can then be determined. Alternatively, the minimum distance may be determined based on the H, S, and V components of the "HSV model data corresponding to the image" and the corresponding H, S, and V components of the "HSV model data corresponding to the green range". In a particular implementation, the first minimum distance may be determined based on the HSV model data corresponding to each pixel: since the HSV model data of every pixel of the HSV image is contained in the HSV model data corresponding to the image, and the HSV model data of the green range is contained in the HSV model data corresponding to the green range, the minimum distance between each pixel's HSV model data and the green range's HSV model data can be determined by corresponding processing. After the minimum distance for each pixel of the HSV image is obtained, it can be determined whether that pixel belongs to the background color range, and the background area of the image to be processed is then delineated according to the determination results for all pixels.
Thus, according to this embodiment, the judgment of whether a pixel in the image to be processed is a background pixel is abstracted into a shortest-distance calculation between the pixel and the background color range. The background color range therefore does not need to be a primary or pure color, and colors similar to the background color range are allowed to appear in the foreground object; only a certain degree of distinction between the foreground and the background color range is required. The minimum distance between the first color data corresponding to the image to be processed and the second color data corresponding to the background color range is calculated based on corresponding single color components among the plurality of color components of the preset color space, fully considering the characteristics of viewing angle and light and shadow, so the obtained background area is more accurate; if background segmentation is performed subsequently, a better segmentation effect can be obtained.
Example 2
Referring to fig. 3, a flowchart of steps of an image data processing method according to a second embodiment of the present application is shown.
In this embodiment, the image data processing method provided by the embodiments of the present application is described with an emphasis on how the minimum distance of color data is calculated.
The image data processing method of the present embodiment includes the steps of:
Step S302, obtaining an image to be processed in a preset color space and a background color range matched with the image to be processed.
The preset color space is a color space capable of reflecting the color tone of the image to be processed and the background color range.
Step S304, determining first color data corresponding to the image to be processed and second color data corresponding to the background color range.
Wherein the first color data and the second color data each include a plurality of color components corresponding to the color space.
The specific implementation of steps S302-S304 may refer to the description of the relevant parts in the first embodiment, and will not be repeated here.
Step S306, obtaining the minimum distance between the color of the pixel in the image to be processed and the color of the background color range according to the minimum distance between the first color data and the second color data.
In one possible way, the minimum distance between the first color data and the second color data on each single color component is calculated, and the minimum distance between the color of the pixel in the image to be processed and the color of the background color range is obtained from these per-component minimum distances. In this way, the calculation of the minimum distance is reduced from treating all the components as a whole to separate per-component calculations, which greatly lowers the time complexity of the calculation and the complexity of the algorithm logic.
Taking the HSV space as an example, in this step the calculation of the minimum distance between the two sets of HSV model data is reduced in dimension to the minimum distances between their H components, between their S components, and between their V components.
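As a sketch of this dimension reduction (the helper name is illustrative, not from the embodiment), the minimum distance from a single component value to the corresponding component range of the background color is the gap to the nearest bound, and zero inside the range:

```python
def component_min_distance(value, lo, hi):
    """Minimum distance from a scalar component value to the range [lo, hi]."""
    if value < lo:
        return lo - value
    if value > hi:
        return value - hi
    return 0.0  # the value already lies inside the background range
```

Computing this independently for the H, S, and V components replaces one three-dimensional search with three one-dimensional ones.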
In addition, conditions such as lighting and exposure affect the color span of the image and thereby the accuracy of the calculated minimum distance. To reduce this interference, in one feasible manner a scaling factor may be determined according to the color span corresponding to the background color range, and the actual distance is then scaled by this factor to obtain the minimum distance between the first color data and the second color data on the single color component. In this way, the scheme of the embodiments of the present application can adapt to the background color tolerance range and the semitransparent transition range while preserving the accuracy of the minimum distance.
By the above process, accurate calculation of the minimum distance of the first color data and the second color data is realized.
Further, the minimum distance between the color of the pixel in the image to be processed and the color of the background color range may be obtained based on these per-component minimum distances.
For example, a target pixel and its saturation are determined from the image to be processed; the background color in the background color range closest to the target pixel, and the saturation of that background color, are determined based on the minimum distance between the first color data and the second color data on the saturation component; the minimum distance between the hue of the pixel in the image to be processed and the hue of the background color is then obtained based on the saturation of the target pixel, the saturation of the background color, and the minimum distance between the first color data and the second color data on the hue component; and the minimum distance between the color of the pixel in the image to be processed and the color of the background color range is obtained based on this minimum hue distance.
The target pixel may be any pixel of the image to be processed; when all pixels are processed, each pixel of the image to be processed in turn serves as the target pixel. The saturations are also incorporated when determining the minimum distance between the hues, to fully take into account the effect of the visual and light-and-shadow characteristics of the image to be processed on hue.
Similarly to the foregoing, in order to reduce the interference whereby conditions such as lighting and exposure affect the image color span differently and impair the accuracy of the calculated minimum distance, in one possible manner this step, when implemented, first obtains the actual minimum distance between the hue of the pixel in the image to be processed and the hue of the background color according to the saturation of the target pixel, the saturation of the background color, and the minimum distance between the first color data and the second color data on the hue component, and then scales this actual minimum distance by a scaling coefficient to obtain the minimum distance between the hue of the pixel and the hue of the background color. The scaling coefficient may be determined from the color span corresponding to the background color.
After the minimum distance on each color component is obtained, the minimum distance between the color of the pixel in the image to be processed and the color of the background color range can be obtained based on this. In one possible way, the minimum distance between the color of the pixel in the image to be processed and the color of the background color range may be obtained from the minimum distance between the hues, the minimum distance between the first color data and the second color data on the saturation component and the minimum distance on the luminance component.
Step S308, determining the background area of the image to be processed according to the minimum distance between the color of the pixel in the image to be processed and the color of the background color range.
The specific implementation of this step may refer to the description of the relevant part in the first embodiment, and will not be repeated here.
After the background area of the image to be processed is obtained, optionally, the following step S310 may be performed.
Step S310, processing the background area of the image to be processed.
Such processing includes, but is not limited to, background segmentation, background transformation, background recognition, and other processing based on the background recognition result, such as AR (augmented reality) processing.
The above-described process is exemplarily described below with a specific example, and the image data processing process of this example includes:
(A) Converting the image to be processed, and the background color range matched with the image to be processed, into an HSV-space representation.
The background color range may be specified by a user or may be automatically calculated.
In this example, the color space is taken as an HSV space, but it should be apparent to those skilled in the art that other color spaces that can reflect hues, such as HSL space, are equally applicable.
(B) Performing nonlinear transformation on the S and V components of the HSV model data of the background color range and of the HSV model data of the image to be processed, so as to improve sensitivity in mid-to-low-saturation and bright regions.
For example, S = pow(S, a1) and V = pow(V, b1), where a1 and b1 both belong to (0.1, 1) and may optionally be 0.6 to 0.8; pow() denotes a power function.
Alternatively, the S and V components can be mapped to hyperbolic curves over their value range, with a larger slope in the target sensitive interval and a smaller slope in non-sensitive regions.
The transformed S and V components (of both the background color range and the image to be processed) are denoted ST and VT, respectively.
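The power-function transform of step (B) can be sketched as follows; the default exponent 0.7 is merely one choice inside the stated (0.1, 1) interval:

```python
def transform_sv(s, v, a1=0.7, b1=0.7):
    """Step (B) sketch: raise S and V to exponents in (0.1, 1).
    Exponents below 1 lift small values, increasing sensitivity in
    mid/low-saturation and bright regions."""
    return pow(s, a1), pow(v, b1)

st, vt = transform_sv(0.25, 0.81)
# For exponents < 1, values strictly between 0 and 1 are lifted
# (st > 0.25, vt > 0.81), while the endpoints 0 and 1 stay fixed.
```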
(C) Calculating, respectively, the minimum distance between the ST/VT of each target pixel of the image to be processed and the corresponding ST/VT range of the background color range.
For example, the ST distance may be calculated as abs(ST of the target pixel − the corresponding ST of the background color range), or in a similar manner.
(D) Scaling the minimum distances of ST and VT calculated in (C) according to the spans of ST and VT corresponding to the background color range.
The span can be obtained by, for example, histogram statistics on the corresponding components of the background color range, either the original components (the S and V components) or the scaled components (the ST and VT components). In this example, the spans of ST and VT are used.
In particular:
the minimum distance of ST is scaled as distance_ST = distance_ST / ST_RANGE, where ST_RANGE = MAX(max_st_background − min_st_background, a2), capped above by MAX_RANGE; a2 has a value range of (0.1, 1.0); MAX_RANGE limits the upper bound of the span; MAX() takes the maximum value; max_st_background and min_st_background denote the maximum and minimum values of the background color range on the ST component.
Similarly, the minimum distance of VT is scaled as distance_VT = distance_VT / VT_RANGE, where VT_RANGE = MAX(max_vt_background − min_vt_background, a3), capped above by MAX_RANGE; a3 has a value range of (0.1, 1.0); max_vt_background and min_vt_background denote the maximum and minimum values of the background color range on the VT component.
(E) Calculating the minimum distance between the H of each pixel of the image to be processed and the corresponding H of the background color range, denoted distance_H.
In the HSV color space, H represents its H component, i.e., hue component.
(F) Scaling the distance_H calculated in (E) according to the S (or ST) component of each target pixel of the image to be processed and the S (or ST) component of the background color range, and recording the result as the minimum hue distance distance_Color.
For example, distance_Color = f(S_target, S_background) × distance_H,
where S_target represents the saturation of the target pixel, S_background represents the saturation of the background color nearest to the target pixel, and f(S1, S2) yields the reference saturation governing the foreground/background distinction at the current pixel; it may be implemented, for example, as max(S1, S2) or (S1 + S2) / 2.
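A sketch of step (F); max(S1, S2) is one of the two implementations of f(S1, S2) named in the text, and the function names are illustrative:

```python
def reference_saturation(s1, s2):
    """f(S1, S2): reference saturation weighting the hue distance.
    max(S1, S2) is one implementation named in the text,
    (S1 + S2) / 2 being the other."""
    return max(s1, s2)

def hue_color_distance(s_target, s_background, distance_h):
    """Step (F) sketch: distance_Color = f(S_target, S_background) * distance_H."""
    return reference_saturation(s_target, s_background) * distance_h
```

When both saturations are low, the hue term is suppressed, reflecting that hue is visually unreliable for washed-out colors.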
(G) Scaling the minimum hue distance distance_Color calculated in (F) according to the span of the H component of the background color range.
For example, distance_Color = distance_Color / H_RANGE,
where H_RANGE = MAX(max_h_background − min_h_background, a4), capped above by MAX_RANGE; a4 has a value range of (0.1, 1.0); MAX_RANGE limits the upper bound of the span; MAX() takes the maximum value; max_h_background and min_h_background denote the maximum and minimum values of the background color range on the H component. Since the H component is cyclic, max_h_background and min_h_background are values that take the direction of rotation around the hue circle into account.
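Because the H component is cyclic, the distance between two hues must account for wrap-around; a minimal sketch:

```python
def cyclic_hue_distance(h1, h2):
    """Minimum angular distance between two hues on the 0-360 degree
    circle, taking the shorter rotation direction."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

# Hues at 350 and 10 degrees are 20 degrees apart, not 340.
```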
(H) Merging the minimum distances of the H, S, and V components corresponding to each pixel of the image to be processed, or merging the minimum distances of the scaled components corresponding to each pixel, namely distance_Color, distance_ST, and distance_VT, to calculate the minimum distance of each target pixel from the background color range.
For example, distance = a5 × distance_Color + distance_ST + distance_VT, where a5 adjusts the weight of the hue component relative to the other two components (ST and VT); typically a5 is much greater than 1.
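Step (H) as a sketch; the default value of a5 is an assumption (the text only states that it is much greater than 1):

```python
def merge_distances(distance_color, distance_st, distance_vt, a5=4.0):
    """Step (H) sketch: distance = a5 * distance_Color + distance_ST + distance_VT.
    a5 weights the hue term against the ST and VT terms; the default
    here is an assumed example value."""
    return a5 * distance_color + distance_st + distance_vt
```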
(I) Performing a nonlinear transformation on distance to make the transition gentler in the range near the background color, so as to address the problem that the recognition of some background noise points, or of a small portion of the background color range, is not entirely accurate.
For example, distance = pow(distance, a6), where a6 > 1, is used for the nonlinear transformation.
(J) For the distance calculated in (I), determining the semitransparent region range using a normalization coefficient and performing truncation to convert the value into an alpha value: alpha = min(distance / b2, 1.0), where b2 is the normalization-coefficient threshold anchoring 100% alpha (i.e., determining the semitransparent region range), and min() takes the minimum value.
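Steps (I) and (J) combined as a sketch; the division by b2 (so that distance ≥ b2 anchors alpha = 1) is a reconstruction, and both default parameter values are assumptions:

```python
def distance_to_alpha(distance, a6=1.5, b2=0.6):
    """Step (I) sketch: nonlinear transform with a6 > 1, flattening the
    transition near the background color. Step (J) sketch: truncate to
    an alpha value; b2 is the threshold anchoring 100% alpha
    (reconstruction: alpha = min(distance / b2, 1.0))."""
    d = pow(distance, a6)
    return min(d / b2, 1.0)

# distance 0 (inside the background range) gives alpha 0 (pure background);
# large distances saturate at alpha 1 (pure foreground); values in between
# form the semitransparent transition region.
```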
Through the above example: (1) the determination of alpha is abstracted as a shortest-distance calculation between the target pixel and the background color range, so the background color need not be a primary or pure color, and colors similar to the background color range are allowed to appear in the foreground (only a certain degree of distinction between foreground and background color range is needed); (2) the minimum-distance calculation is carried out in the HSV space (or another color space achieving a similar effect), the S and V components are processed nonlinearly according to the characteristics of vision and light and shadow to ensure reasonable sensitivity, and the minimum distance on the H component is calculated taking into account the visual distinguishability under different saturations; (3) the span of the background color range is also considered in the minimum-distance calculation, so the scheme adapts the background tolerance range and the semitransparent transition range to the span (i.e., the purity) of the background color, and the minimum distance combines the three aspects of hue, saturation, and brightness, whose contributions can be adjusted by weights; and (4) a nonlinear transformation is applied to the minimum distance so that the transition near the background color is gentler, and the truncated result can be used directly as an alpha value for subsequent foreground/background processing, giving the foreground a certain degree of transition.
Example 3
Referring to fig. 4, a flowchart of steps of an image data processing method according to a third embodiment of the present application is shown.
This embodiment describes the image data processing method provided by the embodiments of the present application from the perspective of a specific application of image data processing.
The image data processing method of the present embodiment includes the steps of:
Step S402, acquiring an image to be processed and receiving a background color range input for the image to be processed.
In this embodiment, the image to be processed is an image to undergo foreground segmentation. It may be a single still image or a video frame of a video, including but not limited to video frames of a video conference, broadcast-directing images of a studio, live images of a video live stream, images to be subjected to AR (augmented reality) processing, images for video production, and the like.
The background color range input for the image to be processed can be a background color range set manually or a background color range obtained after rough background color detection of the image to be processed is performed by an algorithm.
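The embodiment leaves the rough background color detection unspecified; one simple stand-in (purely illustrative, not the algorithm of this embodiment) is to bound each HSV component of samples taken near the image border, where the background usually dominates:

```python
def rough_background_range(border_samples):
    """Given HSV samples taken near the image border, return a rough
    background color range as per-component (min, max) pairs.

    Illustrative only: hue is circular, so a range straddling hue 0
    (e.g. reds) would need extra handling not shown here.
    """
    mins = [min(p[i] for p in border_samples) for i in range(3)]
    maxs = [max(p[i] for p in border_samples) for i in range(3)]
    return list(zip(mins, maxs))
```

A manually set range would simply replace the output of such a detector.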
Step S404, converting the image to be processed and the background color range into a preset color space.
Wherein the preset color space is a color space capable of reflecting the hue of the image to be processed and of the background color range, including but not limited to the HSV color space, the HSL color space, etc.
Typically, the image to be processed is an RGB image, and therefore, it needs to be converted into a color space that can reflect the hue, such as an HSV color space or an HSL color space. Correspondingly, the background color range needs to be consistent with the color space adopted by the image to be processed, namely, the color space which can reflect the color tone is also needed.
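For instance, the conversion of step S404 can be sketched with Python's standard colorsys module (a per-pixel illustration; a real implementation would typically be vectorized, and the sample colors here are hypothetical):

```python
import colorsys

# A tiny hypothetical "image" of two RGB pixels and a rough RGB background
# color range (e.g. a green-screen band), both to be converted to HSV.
image_rgb = [(0, 200, 30), (250, 250, 250)]     # greenish, near-white
bg_range_rgb = [(0, 180, 20), (20, 220, 50)]    # rough green range endpoints

def to_hsv(rgb):
    """Convert one 8-bit RGB triple to HSV with each component in [0, 1]."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)

image_hsv = [to_hsv(p) for p in image_rgb]
bg_range_hsv = [to_hsv(p) for p in bg_range_rgb]
```

After this step, both the pixels and the background color range live in the same hue-reflecting space, as step S406 requires.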
Step S406, determining the minimum distance between the color of a pixel in the image to be processed and the color of the background color range, based on the color data in the color space respectively corresponding to the image to be processed and the background color range.
For example, first color data corresponding to an image to be processed and second color data corresponding to a background color range may be first determined, wherein the first color data and the second color data each include a plurality of color components corresponding to the color space, and the minimum distance between the color of a pixel in the image to be processed and the color of the background color range is obtained according to the minimum distance between the first color data and the second color data.
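One way to sketch this per-component minimum distance is shown below; the hue component is treated as circular, and the hue/saturation/value weights are illustrative assumptions, not values from the embodiments:

```python
def hue_gap(a, b):
    """Circular distance between two hues in [0, 1)."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def component_distance(value, lo, hi, circular=False):
    """Distance from a scalar component to the interval [lo, hi]; zero inside."""
    if lo <= value <= hi:
        return 0.0
    if circular:
        return min(hue_gap(value, lo), hue_gap(value, hi))
    return min(abs(value - lo), abs(value - hi))

def min_distance_hsv(pixel, bg_lo, bg_hi, weights=(1.0, 0.5, 0.5)):
    """Weighted minimum distance between a pixel's HSV color and a background
    color range given by per-component (lo, hi) bounds.

    The weights (hue emphasized over saturation and value) are an
    illustrative choice standing in for the nonlinear component processing
    described in the embodiments.
    """
    dh = component_distance(pixel[0], bg_lo[0], bg_hi[0], circular=True)
    ds = component_distance(pixel[1], bg_lo[1], bg_hi[1])
    dv = component_distance(pixel[2], bg_lo[2], bg_hi[2])
    wh, ws, wv = weights
    return (wh * dh ** 2 + ws * ds ** 2 + wv * dv ** 2) ** 0.5
```

A pixel whose components all fall inside the background color range has distance zero; the farther a pixel lies outside the range, chiefly in hue, the larger the distance and hence the larger the resulting alpha.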
Step S408, determining a background area of the image to be processed according to the minimum distance, and performing front background segmentation on the image to be processed according to the determined background area.
After the background area of the image to be processed is determined, the front background segmentation can be performed, and further subsequent processing is performed based on the front background segmentation result.
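With alpha values derived from the minimum distance, background replacement reduces to per-pixel blending, as in the following standard alpha-compositing sketch (not specific to the embodiments):

```python
def composite(fg_pixel, new_bg_pixel, alpha):
    """Blend a source pixel over a replacement background pixel.

    alpha = 1 keeps the foreground, alpha = 0 shows the new background, and
    intermediate values produce the semi-transparent transition (e.g. at
    hair or motion-blurred edges).
    """
    return tuple(alpha * f + (1.0 - alpha) * b
                 for f, b in zip(fg_pixel, new_bg_pixel))
```

Applying this to every pixel realizes the background replacement used in the scenes described below.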
It should be noted that the above process is described more simply, and specific implementation of each step may refer to descriptions of relevant parts in the foregoing embodiments.
Hereinafter, taking different scenes as examples, a front background segmentation of an image to be processed according to a determined background area is exemplarily described.
Scene one: video conference scene
In many video conferences there is often a need for content presentation, product presentation, or other background replacement. Based on this, the background area of each video frame image in the real-time video stream of the video conference can be determined using the foregoing image data processing method, and front background segmentation is then performed based on the background area. The foreground part, such as the image part of the conference speaker, is retained, and the segmented background part is replaced with an image part containing content to be displayed (such as PPT content), an image part containing a product to be displayed (such as a product picture), or simply a background image conforming to the conference theme. Thereby, an effective combination of the conference and the conference content is achieved.
Scene two: live broadcast or studio director scene
Similar to video conferencing, such scenarios also have content, product, or scene presentation requirements. Taking tourism live broadcast as an example, besides introducing features, scenic spots, and the like, the anchor may use scenery pictures of the scenic spot to attract the audience. Based on this, during the live broadcast, for each video frame image in the live video stream, the background area is determined using the foregoing image data processing method, and front background segmentation is then performed based on the background area. The foreground part, such as the image part of the anchor, is retained, and the segmented background part is replaced with an image part containing the scenic-spot feature content to be displayed, an image part of a well-known scene of the scenic spot, an image part of a scenery picture of the scenic spot, or the like. Thereby, effective display of the live broadcast content is achieved. Studio director scenes are similar and will not be described again.
Scene three: online education scene
Similar to video conferences, content presentation is also required in online education. Therefore, a video can be recorded of the teacher's explanation process; the background area of each video frame image in the video is then determined using the foregoing image data processing method, and front background segmentation is performed based on the background area. The foreground part, such as the image part of the teacher, is retained, and the segmented background part is replaced with an image part containing the courseware content to be displayed, or an image part of a graphic picture (dynamic or static) related to the explanation content. Thereby, vivid classroom content display is achieved.
Scene four: AR scene
Whether for video frame images in a video stream or for single still images, AR effects may be used. For example, AR objects (e.g., red packets or pets) may be added to video frame images to interact with the video viewer, or AR objects (e.g., text annotations, interesting notes, or portrait decorations) may be added to still images. In these cases, the background area in the video frame image or still image needs to be determined using the foregoing image data processing method, and the corresponding AR effect is added based on the background area. However, this is not limiting; the entire determined background area may also be replaced with an AR effect, so as to satisfy different demands, such as interaction demands or entertainment demands, and improve user experience.
Scene five: video production scene
In the process of making a video, it is often found that the background of some images to be used is unsatisfactory and needs modification. In this case, the foregoing image data processing method may be used to determine the background area in the image to be used, and the background area may be modified or replaced, so as to meet the overall requirements of the video to be produced and achieve an overall improvement of the video effect.
Therefore, through this embodiment, on the basis of accurately determining the background area of an image and performing front background segmentation, services can be provided for a variety of different usage scenarios, thereby better meeting the requirements of those scenarios and improving user experience.
Example IV
Referring to fig. 5, a schematic structural diagram of an electronic device according to a fourth embodiment of the present application is shown. The specific embodiments of the present application do not limit the specific implementation of the electronic device.
As shown in FIG. 5, the electronic device may include a processor 502, a communication interface (Communications Interface) 504, a memory 506, and a communication bus 508.
Wherein:
processor 502, communication interface 504, and memory 506 communicate with each other via communication bus 508.
A communication interface 504 for communicating with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in any of the above-described image data processing method embodiments.
In particular, program 510 may include program code including computer-operating instructions.
The processor 502 may be a CPU, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The one or more processors included in the smart device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
The program 510 may be specifically configured to cause the processor 502 to perform operations corresponding to the image data processing method described in the foregoing embodiment one, two, or three.
The specific implementation of each step in the program 510 may refer to the corresponding steps and corresponding descriptions in the units in the above embodiment of the image data processing method, and have corresponding beneficial effects, which are not described herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
The embodiments of the present application also provide a computer program product, which includes computer instructions that instruct a computing device to perform operations corresponding to any one of the image data processing methods in the above-described method embodiments.
It should be noted that, in the embodiments of the present application, the HSV space is taken as an example of the color space, but those skilled in the art should understand that determining the background color range in other color spaces that can reflect hue, such as the HSL space or the YUV space, may also be implemented with reference to the embodiments of the present application.
In addition, the embodiments of the present application can be effectively applied to images acquired against a preset background, such as images acquired against a green (or other colored) curtain, wall, or display screen. However, in such preset-background scenes, due to curtain wrinkling, inconsistent lighting, inconsistent exposure, interference, and the like, the background color is actually a dynamically changing interval range, and in many scenes this variation and span are large, making it difficult to ensure the accuracy of determining the image background. The scheme of the embodiments of the present application can effectively solve this problem. Moreover, beyond such preset-background scenes, the scheme of the embodiments of the present application is equally applicable to other images with relatively simple background colors.
It should be noted that, according to implementation requirements, each component/step described in the embodiments of the present application may be split into more components/steps, or two or more components/steps or part of operations of the components/steps may be combined into new components/steps, so as to achieve the objects of the embodiments of the present application.
The above-described methods according to the embodiments of the present application may be implemented in hardware or firmware, or as software or computer code storable in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded through a network, and stored in a local recording medium, so that the methods described herein may be processed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware such as an ASIC or FPGA. It is understood that a computer, processor, microprocessor controller, or programmable hardware includes a memory component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the image data processing methods described herein. Further, when a general-purpose computer accesses code for implementing the image data processing methods shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for executing the image data processing methods shown herein.
Those of ordinary skill in the art will appreciate that the elements and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only for illustrating the embodiments of the present application, but not for limiting the embodiments of the present application, and various changes and modifications may be made by one skilled in the relevant art without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also fall within the scope of the embodiments of the present application, and the scope of the embodiments of the present application should be defined by the claims.