CN103810674B - Image enhancement method based on position reconstruction of dependent perception objects - Google Patents

Info

Publication number
CN103810674B
CN103810674B CN201210455248.8A CN201210455248A CN 103810674 B
Authority
CN
China
Prior art keywords
image
objects
dependent
perception
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210455248.8A
Other languages
Chinese (zh)
Other versions
CN103810674A (en)
Inventor
Shi-Min Hu (胡事民)
Fang-Lue Zhang (张方略)
Miao Wang (汪淼)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201210455248.8A priority Critical patent/CN103810674B/en
Publication of CN103810674A publication Critical patent/CN103810674A/en
Application granted granted Critical
Publication of CN103810674B publication Critical patent/CN103810674B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an image enhancement method based on position reconstruction of dependent perception objects. The method includes: preliminarily extracting the foreground objects of the image; dividing the image into several regions and performing foreground-object dependency analysis on each region to obtain the dependent perception objects; formulating an optimization objective for the positions of the dependent perception objects according to photographic aesthetic criteria and the positional relationships among them, and optimizing the composition; and completing the background of the missing parts of the optimized image and performing edge refinement to obtain the final image enhancement result. The invention can automatically analyze and extract dependent perception objects in an image and use the analysis results for composition optimization.

Description

Image Enhancement Method Based on Position Reconstruction of Dependent Perception Objects

Technical Field

The present invention relates to the technical field of image processing, and in particular to an image enhancement method based on dependent-perception-object analysis and position reconstruction.

Background Art

With the development of digital media technology, more and more image enhancement techniques aim to improve the visual quality of images. These methods convert human perception of images into computable models, such as hue and sharpness.

Image enhancement and editing techniques are important tools in computer graphics. For example, the Poisson blending method proposed by Pérez et al. is used for image fusion; alpha matting is used for edge refinement after image compositing; and shape-aware image editing methods can be used to edit image content. However, these methods only provide editing tools; they do not actively process images according to human perception and photographic principles to make images more aesthetically pleasing.

Much prior work has studied how to evaluate image aesthetics with computational models. These works extract global or local image features and give quantifiable evaluation criteria. To improve image aesthetics, many researchers have distilled aesthetic criteria into computational models and edited image content accordingly. For example, Santella et al. proposed "Gaze-based interaction for semi-automatic photo cropping" and Nishiyama et al. proposed "Sensation-based photo cropping", both of which optimize composition by cropping; Liu et al. proposed "Optimizing photo composition", which uses cropping and retargeting operations to optimize a photo's composition. However, these methods do not operate on the image at the object level, so the extent to which the composition can be changed is limited.

Recently, Bhattacharya et al. proposed "A holistic approach to aesthetic enhancement of photographs", which moves foreground objects according to photographic composition rules. However, that method does not consider the dependencies among foreground objects and cannot guarantee the correctness of the image's semantic information.

Summary of the Invention

(1) Technical Problem to Be Solved

The present invention provides an image enhancement method based on position reconstruction of dependent perception objects. It operates on the image at the object level, takes the dependencies among dependent perception objects into account, and preserves the correctness of the image's semantic information.

(2) Technical Solution

The present invention provides an image enhancement method based on position reconstruction of dependent perception objects, comprising:

S1. Preliminarily extract the foreground objects of the image.

S2. Divide the image into several regions and perform foreground-object dependency analysis on each region to obtain the dependent perception objects.

S3. Formulate an optimization objective for the positions of the dependent perception objects according to photographic aesthetic criteria and the positional relationships among the dependent perception objects, and optimize the composition.

S4. Complete the background of the missing parts of the optimized image and perform edge refinement to obtain the final image enhancement result.

Preferably, in step S1 a saliency-based segmentation method is used to preliminarily extract the foreground objects of the image.

Preferably, step S2 specifically includes:

S21. Segment the image into several regions using an image segmentation method. If at least 1/2 of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perception object; if the region's saliency value is below a threshold, it is set as pure background.

S22. Extract sharpness, clarity, and color-harmony features from the remaining regions.

S23. Perform region dependency analysis with a multi-label graph cut to obtain the final dependent perception objects.

Preferably, the optimization objective in step S3 includes the third-point distance of the dependent perception objects, the diagonal distance of the dependent perception objects, a visual balance term, a relation term between dependent perception objects, and a constraint penalty term.

Preferably, the optimization objective is optimized with a heuristic method.

Preferably, in step S4 a content-aware method is used to complete the background of the missing parts of the optimized image.

(3) Beneficial Effects

The proposed image enhancement method performs dependency analysis on the foreground objects and the remaining regions of the image to obtain dependent perception objects, and reconstructs their positions, so the extent to which the composition can be changed is no longer limited. The optimization objective considers the relationships among foreground objects on top of photographic composition rules, making the semantic information more accurate and the image more aesthetically pleasing.

Brief Description of the Drawings

Fig. 1 is a flowchart of the steps of the method provided by the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawing and specific embodiments.

The present invention discloses an image enhancement method based on position reconstruction of dependent perception objects. As shown in Fig. 1, the method includes:

S1. Preliminarily extract the foreground objects of the image.

S2. Divide the image into several regions and perform foreground-object dependency analysis on each region to obtain the dependent perception objects.

S3. Formulate an optimization objective for the positions of the dependent perception objects according to photographic aesthetic criteria and the positional relationships among the dependent perception objects, and optimize the composition.

S4. Complete the background of the missing parts of the optimized image and perform edge refinement to obtain the final image enhancement result.

The method obtains dependent perception objects by performing dependency analysis on the foreground and the remaining regions of the image. Composition optimization is then performed on the dependent perception objects; the optimization objective considers the relationships among foreground objects on top of photographic composition rules, making the optimized semantic information more accurate.

In step S1, a saliency-based segmentation method is used to preliminarily extract the foreground objects of the image.

Step S2 specifically includes:

S21. Segment the image into several regions using an image segmentation method. If at least 1/2 of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perception object; if the region's saliency value is below a threshold, it is set as pure background.

S22. Extract sharpness, clarity, and color-harmony features from the remaining regions.

S23. Perform region dependency analysis with a multi-label graph cut to obtain the final dependent perception objects.

The optimization objective in step S3 includes the third-point distance of the dependent perception objects, the diagonal distance of the dependent perception objects, a visual balance term, a relation term between dependent perception objects, and a constraint penalty term.

The optimization objective is optimized with a heuristic method.

In step S4, a content-aware method is used to complete the background of the missing parts of the optimized image.

The resulting composition-optimized image is superior to the results of traditional methods.

Specifically, the method is as follows:

S1. Preliminarily extract the foreground objects of an input image with a saliency-based segmentation method, specifically including:

S11. Compute region saliency values with a region-contrast method, where saliency values lie in the interval [0, 1]. Regions with high saliency values are set as foreground candidate regions, and regions with low saliency values as background candidate regions.

S12. Extract foreground objects with an image segmentation method based on the foreground and background candidate region information. Repeat this procedure to extract the foreground objects of the image one by one; each time, the previously extracted foreground region is set as background and its saliency value is lowered. After this step, the foreground objects in the image have all been extracted, one by one.
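The extract-and-suppress loop of S11 and S12 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the region-contrast saliency computation and the graph-based segmentation are replaced here by a precomputed saliency map and simple thresholding (both are our stand-ins).

```python
import numpy as np

def extract_foregrounds(saliency, fg_thresh=0.7, suppress=0.1, max_objects=10):
    """Iteratively extract foreground masks from a [0, 1] saliency map.

    Stand-in for S11/S12: a real implementation would run region-contrast
    saliency and a segmentation-based object extraction; here we threshold.
    """
    saliency = saliency.copy()
    masks = []
    for _ in range(max_objects):
        mask = saliency >= fg_thresh      # current foreground candidate pixels
        if not mask.any():
            break                         # nothing salient left to extract
        masks.append(mask)
        # Set the extracted object to background and lower its saliency,
        # as described in S12, so the next iteration finds the next object.
        saliency[mask] = suppress
    return masks
```

Each returned mask marks one extracted foreground object; the suppression step is what makes the extraction proceed "one by one".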

S2. Divide the image into several regions and perform dependency analysis on the segmented regions to obtain the dependent perception objects:

S21. Segment the image into several regions using an image segmentation method, and analyze each segmented region: if at least 1/2 of its area is covered by a foreground object extracted in step S1, the region is set as a dependent perception object; if its saliency value is below a threshold t, it is set as pure background.

S22. Extract sharpness $E_a$, clarity $E_s$, and color harmony $\Delta CH$ features from the remaining regions:

The sharpness $E_a$ is: $E_a = G\left[\frac{1}{n}\sum_{i=1}^{n}\delta(i)\,D(i)\right]$  (1)

where $D(i)$ is the second derivative of the image, $\delta(i) = 1$ if $D(i) > 0.1$, $n$ is the number of pixels in the region, and $G$ is a Gaussian normalization function.
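A plausible NumPy reading of equation (1) follows. Two choices are assumptions, since the patent does not spell them out: the second derivative $D(i)$ is taken as a discrete Laplacian, and the Gaussian normalization $G$ is taken as $1 - \exp(-x^2/2\sigma^2)$, which squashes the raw mean into $[0, 1)$.

```python
import numpy as np

def sharpness_Ea(region, delta_thresh=0.1, sigma=1.0):
    """Sharpness per Eq. (1): Gaussian-normalized mean of the second
    derivative over pixels where it exceeds the threshold."""
    # Discrete 4-neighbour Laplacian as a stand-in for D(i).
    D = np.abs(
        -4 * region
        + np.roll(region, 1, 0) + np.roll(region, -1, 0)
        + np.roll(region, 1, 1) + np.roll(region, -1, 1)
    )
    delta = (D > delta_thresh).astype(float)    # delta(i) indicator of Eq. (1)
    raw = (delta * D).sum() / region.size       # (1/n) * sum_i delta(i) D(i)
    return 1.0 - np.exp(-raw**2 / (2 * sigma**2))  # assumed form of G
```

A flat region has zero second derivative everywhere and scores 0; a high-contrast region scores close to 1.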

The clarity $E_s$ is: $E_s = G\left[\sum_{(u,v)\in F_H} F(u,v) \Big/ \sum_{(u,v)\in F_L} F(u,v)\right]$  (2)

where $F_H = \{(u,v) \mid \beta W < |u-u_0| \le \alpha W,\; \beta H < |v-v_0| \le \alpha H\}$

$F_L = \{(u,v) \mid |u-u_0| \le \beta W,\; |v-v_0| \le \beta H\}$

$W$ and $H$ are the width and height of the image, $F_H$ is the high-frequency band, $F_L$ is the low-frequency band, $(u_0, v_0)$ is the center frequency, $\alpha = 0.4$, and $\beta = 0.2$.
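Equation (2) compares spectral energy in the high band $F_H$ against the low band $F_L$ around the center frequency. A sketch with NumPy's FFT; two simplifications are ours: $F(u,v)$ is read as the centered magnitude spectrum, and the outer Gaussian normalization $G$ is omitted.

```python
import numpy as np

def clarity_Es(region, alpha=0.4, beta=0.2):
    """Clarity per Eq. (2): ratio of high-band to low-band spectral energy."""
    H, W = region.shape
    F = np.abs(np.fft.fftshift(np.fft.fft2(region)))  # centered magnitude spectrum
    v0, u0 = H // 2, W // 2                           # center frequency (u0, v0)
    u = np.abs(np.arange(W) - u0)[None, :]            # |u - u0| per column
    v = np.abs(np.arange(H) - v0)[:, None]            # |v - v0| per row
    low = (u <= beta * W) & (v <= beta * H)                                # F_L
    high = (u > beta * W) & (u <= alpha * W) & (v > beta * H) & (v <= alpha * H)  # F_H
    return F[high].sum() / max(F[low].sum(), 1e-12)
```

A constant region has all its energy at DC (inside $F_L$) and scores 0, while a textured region has nonzero high-band energy.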

The color harmony $\Delta CH$ is:

$\Delta CH = \exp\{(CH+5)^2/2\}$  (3)

According to the color-harmony work of Ou et al., "A Colour Harmony Model for Two-Colour Combinations", $CH = H_C + H_L + H_H$, where $H_C$ is the chroma term, $H_L$ is the lightness term, and $H_H$ is the hue term. After obtaining the region features, region dependency analysis is performed with a multi-label graph cut to obtain the final dependent perception objects. The energy equation is:

$E(L) = \sum_{r_i \in \Omega} D_{r_i}(L_i) + \sum_{(r_i, r_j) \in N} T_{(r_i,r_j)}(L_i, L_j)$  (4)

where

$D_r(L) = \left\{[E_s(r) - E_s(L)]^2 + [E_a(r) - E_a(L)]^2 + \Delta CH_{r,L}\right\}^{1/2}$

$T_{(r_i,r_j)}(L_i, L_j) = \begin{cases} 0, & \text{if } L_i = L_j \\ D_{r_i}(r_i), & \text{otherwise} \end{cases}$

where $r$ denotes a segmented region and $L$ denotes a label.
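The data and smoothness terms of the graph-cut energy (4) can be written directly from the formulas above. This sketch covers only the per-term arithmetic; the feature extraction and the actual multi-label graph-cut solver (e.g. alpha-expansion) are outside its scope, and the tuple layout of the features is our choice.

```python
import math

def data_term(feat_r, feat_L, dCH):
    """D_r(L) of Eq. (4): sqrt((Es_r - Es_L)^2 + (Ea_r - Ea_L)^2 + dCH).

    feat_r, feat_L: (Es, Ea) feature pairs for region r and label L;
    dCH: the colour-harmony difference term DeltaCH_{r,L}.
    """
    (es_r, ea_r), (es_l, ea_l) = feat_r, feat_L
    return math.sqrt((es_r - es_l) ** 2 + (ea_r - ea_l) ** 2 + dCH)

def smoothness_term(label_i, label_j, d_ri):
    """T term of Eq. (4): zero when neighbouring labels agree,
    otherwise the data cost of the first region."""
    return 0.0 if label_i == label_j else d_ri
```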

S3. Formulate an optimization objective for the positions of the dependent perception objects according to photographic aesthetic criteria and the positional relationships among them, and perform position reconstruction with a heuristic method. First define the layout optimization objective as:

$E = D_P + D_L + D_V + \sum_i^n P_i + \omega \sum_i^n \sum_{j \ne i}^n R(i,j)$  (5)

where $n$ denotes the number of dependent perception objects. The third points of the image are, with the image width and height normalized to the interval [0, 1], the four points with coordinates (1/3, 1/3), (1/3, 2/3), (2/3, 1/3), and (2/3, 2/3).

The third-point distance energy term $D_P$ of the dependent perception objects is:

$D_P = \sum_{i=1}^{n} m_i \left\| c_i - P_j \right\|$  (6)

where $m_i$ denotes the pixel count of a dependent perception object, $c_i$ its centroid, and $P_j$ the third point nearest to it.
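In normalized coordinates, the third-point term (6) pulls each object's centroid toward the nearest of the four third points, weighted by the object's pixel count. A minimal sketch:

```python
import math

THIRD_POINTS = [(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)]

def thirds_energy(objects):
    """D_P of Eq. (6). objects: list of (pixel_count m_i, centroid c_i),
    with centroids in [0, 1]^2 normalized image coordinates."""
    total = 0.0
    for m, (cx, cy) in objects:
        # distance to the nearest third point P_j
        d = min(math.hypot(cx - px, cy - py) for px, py in THIRD_POINTS)
        total += m * d   # pixel count weights larger objects more heavily
    return total
```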

The diagonal distance energy term $D_L$ of the dependent perception objects is:

$D_L = \sum_{i=1}^{n} \sum_{j \ne i}^{n} \frac{1}{2}\left( f_a(l_{i,j}, L_1, L_2) + f_d(l_{i,j}, L_1, L_2) \right)$  (7)

where $L_1$ and $L_2$ are the two diagonals of the image, $l_{i,j}$ is the line through the centroids of two dependent perception objects, $M_{i,j}$ is the midpoint of the two centroids, and $\theta_k$ is the angle between $l_{i,j}$ and $L_k$.

$f_a(l_{i,j}, L_1, L_2) = |\theta_1||\theta_2|/4\pi^2$, $f_d(l_{i,j}, L_1, L_2) = 2d(M_{i,j}, L_1) \cdot 2d(M_{i,j}, L_2)$; $d(M, L)$ denotes the distance from point $M$ to line $L$.

The visual balance energy term $D_V$ is:

$D_V = \exp\left\{ -\frac{1}{2\sigma}\, d^2(C, M_m) \right\}$  (8)

where $m_i$ denotes the pixel count of a dependent perception object, $c_i$ its centroid, the visual balance point $C$ is the center of the image, $d(a, b)$ denotes the distance between two points, and $\sigma = 0.2$.
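A sketch of the balance term (8). The patent lists $m_i$ and $c_i$ for this term but does not define $M_m$ explicitly; we assume here that $M_m$ is the pixel-count-weighted mean of the object centroids, which is the usual notion of visual mass center.

```python
import math

def visual_balance(objects, center=(0.5, 0.5), sigma=0.2):
    """D_V of Eq. (8) in normalized coordinates.

    objects: list of (pixel_count m_i, centroid (x, y)).
    M_m is taken as the m_i-weighted mean of the centroids (assumption).
    """
    total_m = sum(m for m, _ in objects)
    mx = sum(m * c[0] for m, c in objects) / total_m  # weighted centroid M_m, x
    my = sum(m * c[1] for m, c in objects) / total_m  # weighted centroid M_m, y
    d2 = (mx - center[0]) ** 2 + (my - center[1]) ** 2
    return math.exp(-d2 / (2 * sigma))
```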

The relation term $R$ between dependent perception objects is:

$R(i,j) = S(i,j)\,\left\| \Delta_{i,j} - \Delta'_{i,j} \right\|$  (9)

where $S(i,j) = \lambda S_{shape}(i,j) + (1-\lambda) S_{color}(i,j)$; $S_{shape}$ is the shape-context similarity between dependent perception objects, $S_{color}$ is the chi-square distance between the color histograms of the dependent perception objects, $\Delta_{i,j}$ denotes the distance between two dependent perception objects before optimization, and $\Delta'_{i,j}$ the distance after optimization.
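The color component $S_{color}$ of $S(i,j)$ is a chi-square distance between color histograms. A minimal version follows; the bin count, the [0, 1] value range, and the normalization are our choices, and the shape-context component $S_{shape}$ is not sketched here.

```python
import numpy as np

def chi_square_hist(a, b, bins=16, eps=1e-10):
    """Chi-square distance between the histograms of two pixel-value
    arrays in [0, 1] (stand-in for S_color in Eq. (9))."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    ha = ha / max(ha.sum(), 1)   # normalize to probability mass
    hb = hb / max(hb.sum(), 1)
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + eps))
```

Identical distributions give 0; fully disjoint distributions give values near 1.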

The constraint penalty term $P$ constrains the final positions to the nearest optimal solution. $E$ is optimized with a heuristic method to obtain the optimized positions.
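The patent does not specify which heuristic is used. One simple choice consistent with the description is a greedy random local search over object positions, sketched below with the thirds term (6) standing in for the full objective (5); the step size, iteration count, and acceptance rule are all our assumptions.

```python
import math
import random

def heuristic_optimize(positions, energy, step=0.05, iters=2000, seed=0):
    """Greedy random local search: perturb one object's position at a time
    and keep the move whenever the energy decreases."""
    rng = random.Random(seed)
    best = [list(p) for p in positions]
    best_e = energy(best)
    for _ in range(iters):
        cand = [p[:] for p in best]
        k = rng.randrange(len(cand))                       # pick one object
        cand[k][0] = min(1.0, max(0.0, cand[k][0] + rng.uniform(-step, step)))
        cand[k][1] = min(1.0, max(0.0, cand[k][1] + rng.uniform(-step, step)))
        e = energy(cand)
        if e < best_e:                                     # accept only improvements
            best, best_e = cand, e
    return best, best_e

def thirds_only(ps):
    """Toy objective: sum of distances to the nearest third point."""
    pts = [(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)]
    return sum(min(math.hypot(x - a, y - b) for a, b in pts) for x, y in ps)
```

Starting from the image center, the search drifts the object toward a third point, lowering the objective.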

S4. Complete the background of the missing parts of the optimized image with a content-aware method and perform alpha matting to obtain the final image enhancement result. After the positions of the dependent perception objects have been optimized, a content-aware image completion method is used to fill the missing parts of the image; in some cases the boundaries of the dependent perception objects also need matting. The result is output as the final composition-optimized image.

The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and substitutions without departing from the technical principles of the present invention, and such improvements and substitutions shall also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. An image enhancement method based on position reconstruction of dependent perception objects, characterized in that the method comprises the following steps:

S1. Preliminarily extract the foreground objects of the image.

S2. Divide the image into several regions and perform foreground-object dependency analysis on each region to obtain the dependent perception objects.

S3. Formulate an optimization objective for the positions of the dependent perception objects according to photographic aesthetic criteria and the positional relationships among the dependent perception objects, and optimize the composition.

S4. Complete the background of the missing parts of the optimized image and perform edge refinement to obtain the final image enhancement result.

2. The method of claim 1, characterized in that in step S1 a saliency-based segmentation method is used to preliminarily extract the foreground objects of the image.

3. The method of claim 1, characterized in that step S2 specifically comprises:

S21. Segment the image into several regions using an image segmentation method. If at least 1/2 of a region's area is covered by a foreground object extracted in step S1, the region is set as a dependent perception object; if the region's saliency value is below a threshold, it is set as pure background.

S22. Extract sharpness, clarity, and color-harmony features from the remaining regions.

S23. Perform region dependency analysis with a multi-label graph cut to obtain the final dependent perception objects.

4. The method of claim 1, characterized in that the optimization objective in step S3 includes the third-point distance of the dependent perception objects, the diagonal distance of the dependent perception objects, a visual balance term, a relation term between dependent perception objects, and a constraint penalty term.

5. The method of claim 4, characterized in that the optimization objective is optimized with a heuristic method.

6. The method of claim 1, characterized in that in step S4 a content-aware method is used to complete the background of the missing parts of the optimized image.
CN201210455248.8A 2012-11-13 2012-11-13 Image enhancement method based on position reconstruction of dependent perception objects Active CN103810674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210455248.8A CN103810674B (en) 2012-11-13 2012-11-13 Image enhancement method based on position reconstruction of dependent perception objects

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210455248.8A CN103810674B (en) 2012-11-13 2012-11-13 Image enhancement method based on position reconstruction of dependent perception objects

Publications (2)

Publication Number Publication Date
CN103810674A CN103810674A (en) 2014-05-21
CN103810674B true CN103810674B (en) 2016-09-21

Family

ID=50707396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210455248.8A Active CN103810674B (en) 2012-11-13 2012-11-13 Image enhancement method based on position reconstruction of dependent perception objects

Country Status (1)

Country Link
CN (1) CN103810674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107707818B (en) * 2017-09-27 2020-09-29 努比亚技术有限公司 Image processing method, image processing apparatus, and computer-readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1525401A (en) * 2003-02-28 2004-09-01 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
CN1957371A (en) * 2004-05-31 2007-05-02 Nokia Corporation Method and system for viewing and enhancing images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005036491A2 (en) * 2003-10-09 2005-04-21 De Beers Consolidated Mines Limited Enhanced video based surveillance system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1525401A (en) * 2003-02-28 2004-09-01 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
CN1957371A (en) * 2004-05-31 2007-05-02 Nokia Corporation Method and system for viewing and enhancing images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Piecewise Planar and Non-Planar Stereo for Urban Scene Reconstruction; David Gallup et al.; Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on; 2010-06-18; Section 3.3, lines 1-4 on p. 1420; lines 1-33 on p. 1421; equations (1)-(3) *

Also Published As

Publication number Publication date
CN103810674A (en) 2014-05-21

Similar Documents

Publication Publication Date Title
CN103177446B (en) Based on the accurate extracting method of display foreground of neighborhood and non-neighborhood smoothing prior
CN104537676B (en) Gradual image segmentation method based on online learning
CN105261017A (en) Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction
CN102436637B (en) Method and system for automatically segmenting hairs in head images
CN103839223A (en) Image processing method and image processing device
CN104899877A (en) Image foreground extraction method based on super-pixels and fast three-division graph
CN110120048A (en) In conjunction with the three-dimensional brain tumor image partition method for improving U-Net and CMF
CN104331683B (en) A kind of facial expression recognizing method with noise robustness
CN103309982B (en) A kind of Remote Sensing Image Retrieval method of view-based access control model significant point feature
CN116863319B (en) Copy-move tamper detection method based on cross-scale modeling and alternating refinement
CN105069774A (en) Object segmentation method based on multiple-instance learning and graph cuts optimization
CN104299009A (en) Plate number character recognition method based on multi-feature fusion
CN106503626A (en) Being mated with finger contours based on depth image and refer to gesture identification method
CN113034355B (en) A deep learning-based method for double chin removal in portrait images
CN104809729A (en) Robust automatic image salient region segmenting method
CN104636761A (en) Image semantic annotation method based on hierarchical segmentation
CN104299004A (en) Hand gesture recognition method based on multi-feature fusion and fingertip detecting
CN114494283B (en) A method and system for automatic segmentation of farmland
CN103295219B (en) Method and device for segmenting image
US20070286492A1 (en) Method of extracting object from digital image by using prior shape information and system executing the method
CN105118051A (en) Saliency detecting method applied to static image human segmentation
CN101866422A (en) A method of extracting image attention based on image multi-feature fusion
CN118781326A (en) An object detection method based on depth-aware RGB multi-scale fusion network
CN103810674B (en) Image enhancement method based on position reconstruction of dependent perception objects
CN102831621B (en) Video significance processing method based on spectral analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant