
CN107169487A - Salient object detection method based on superpixel segmentation and depth feature localization - Google Patents

Salient object detection method based on superpixel segmentation and depth feature localization

Info

Publication number
CN107169487A
CN107169487A (application CN201710255712.1A)
Authority
CN
China
Prior art keywords
pixel
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710255712.1A
Other languages
Chinese (zh)
Other versions
CN107169487B (en)
Inventor
肖嵩
熊晓彤
刘雨晴
李磊
王欣远
杜建超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201710255712.1A
Publication of CN107169487A
Application granted
Publication of CN107169487B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/478 Contour-based spectral representations or scale-space representations, e.g. by Fourier analysis, wavelet analysis or curvature scale-space [CSS]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469 Contour-based spatial representations, e.g. vector-coding
    • G06V10/473 Contour-based spatial representations, e.g. vector-coding using gradient analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention proposes a salient object detection method based on superpixel segmentation and depth feature localization, which solves the problem that traditional salient object detection methods segment the target poorly. The method applies linear iterative clustering based on color similarity to segment the image into superpixels, raising the processing unit from individual pixels to coherent similar regions. It fully considers image features such as color, orientation, and depth, together with the prior knowledge that the human eye attends to the center while ignoring the surrounding background, that features within the salient region are similar, and that the salient region is distinctive relative to global features. From these cues it generates a localization saliency map and a depth saliency map of the input image, then fuses them and applies boundary processing. The detected images show sharper edges, more complete background removal, and more complete segmentation of the target shape. The method can be applied in face recognition, vehicle detection, moving-object detection and tracking, military missile detection, hospital pathology detection, and other fields.

Description

Salient Object Detection Method Based on Superpixel Segmentation and Depth Feature Localization

Technical Field

The invention belongs to the technical field of image detection and relates to salient object detection methods, specifically a salient object detection method based on superpixel segmentation and depth feature localization. It can be applied in face recognition, vehicle detection, moving-object detection and tracking, military missile detection, hospital pathology detection, and other fields.

Background Art

As data volumes keep growing, the amount of data accumulated per unit time rises exponentially, and this huge volume demands better computing technology and algorithms to process and distill the information. High-resolution images emerge constantly and deliver great visual enjoyment, and human understanding of complex images has reached a very high level. Traditional image processing either treats pixels independently or analyzes the information conveyed by an image purely as a whole; faced with massive data volumes, such methods fall far short of the requirements for efficiency and real-time operation. Likewise, considering only simple features related to the human visual attention mechanism, such as color and orientation, no longer achieves the desired effect for salient object detection. Processing the images to be detected manually is difficult, stressful, and labor-intensive. How to let a computer simulate the human visual mechanism and implement a saliency attention mechanism similar to a human's for processing image information has become a pressing topic.

Some existing salient object detection methods consider only the features of the image itself to find differences between the target region and the background region, and thereby distinguish the target location from the background. Others use Markov chains to process the saliency map and search for the mutual influence between the central salient region and the surrounding background. There are also methods that convolve the amplitude spectrum with filters to suppress redundant information and finally locate the salient region, as well as various methods focusing on local and global contrast. Although these methods achieve some effectiveness in detecting salient objects, their results are unsatisfactory in edge segmentation, background removal, and extraction of the target's shape, and they have certain limitations. Moreover, most of them process image features pixel by pixel, which falls far short of current needs.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art and provide a salient object detection method based on superpixel segmentation and depth feature localization that yields sharper edges, more complete background removal, and more complete segmentation of the target shape.

The present invention is a salient object detection method based on superpixel segmentation and depth feature localization, characterized by comprising the following steps:

Step 1: Perform linear iterative clustering segmentation on the input image. Input the target image to be detected and first divide it into K regions; within the neighborhood of each region, find the local gradient minimum as the center point, and assign one label number per region. For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel. Iterate this search for the nearest center; stop when the pixels' label values no longer change, completing the superpixel segmentation.

Step 2: Construct difference-of-Gaussians to generate the localization saliency map.

2a: Filter the input original image with Gaussian functions to generate 8 scale levels of the original image.

2b: Combine the 8 constructed scale levels with the original image to form a nine-level scale pyramid. From the nine levels, extract the red-green color-difference maps and the blue-yellow color-difference maps, 18 color-difference maps in total; extract the intensity maps, 9 in total; and extract the Gabor-filtered orientation maps, 36 in total, forming three classes of feature maps.

2c: Because feature maps of the same class differ in size across the nine scale levels, the three classes of feature maps are first interpolated and then differenced.

2d: Because different classes of feature maps use different measurement scales, the different classes of features must be normalized before being fused into the localization saliency map.

Step 3: Generate the depth feature saliency map. First localize the superpixel-segmented image using the localization saliency map from step 2; then, for each segmented region and its adjacent regions, collect three classes of feature information: nearest-neighbor region information, global region information, and corner background region information. Generate the depth feature saliency map, which is used to detect the salient object.

Step 4: Take the localization saliency map determined in step 2 and the depth feature saliency map determined in step 3, fuse them, and apply boundary processing to generate the final salient object map, completing salient object detection with superpixel segmentation and depth feature localization.

Compared with the prior art, the beneficial effects of the present invention are:

1. Most existing salient object detection algorithms process image features pixel by pixel, and the edge separation between the detected target region and a complex background is unsatisfactory. The present invention preprocesses the input image with a superpixel segmentation based on linear iteration over color similarity measured by a five-dimensional Euclidean distance, which solves the problem of poor target edge segmentation in traditional salient object detection methods and provides a more intelligent, efficient, and robust salient object detection method.

2. The method of the present invention fully considers image features such as color, orientation, and depth, together with prior knowledge that attention favors the center over the surrounding background, that features within the target region are similar, and that the target region is distinctive relative to global features; it thereby achieves salient object detection and makes the computer more logical and more intelligent.

3. The detection results show that the targets the method can detect are not limited to specific features or environments. Images to be detected were captured in multiple scenes, including office scenes, campus areas, and parks; the method of the present invention detects the salient objects in all of them, and the detection results better match human visual saliency: the background is removed more completely, and the position and shape of the extracted target are more complete.

Brief Description of the Drawings

Fig. 1 is the flowchart of the method of the present invention;

Fig. 2 shows results of the superpixel segmentation in the method of the present invention, where Fig. 2(a) is the segmentation result for an office corner and Fig. 2(b) is the segmentation result for a library scene;

Fig. 3 compares, on ten selected images, the detection results of the present invention with those of other recent methods, where Fig. 3(a) shows the selected original images, Fig. 3(b) the results of the present invention, Fig. 3(c) the GS method, Fig. 3(d) the GBMR method, Fig. 3(e) the RARE method, Fig. 3(f) the HSD method, Fig. 3(g) the STD method, and Fig. 3(h) the manually labeled ground truth;

Fig. 4 shows precision-recall curves for the present invention and other recent methods on five hundred selected images.

Detailed Description

The present invention is described in detail below in conjunction with the accompanying drawings.

Embodiment 1

Some existing salient object detection methods consider only the features of the image itself to find differences between the target region and the background region, and thereby distinguish the target location from the background. Others use Markov chains to process the saliency map and search for the mutual influence between the central salient region and the surrounding background. There are also methods that convolve the amplitude spectrum with filters to suppress redundant information and finally locate the salient target region. Although these methods achieve some effectiveness in detecting salient objects, their results are unsatisfactory in edge segmentation, background removal, and extraction of the target's shape, and they have certain limitations.

Addressing these defects of the prior art, after study and innovation the present invention proposes a salient object detection method based on superpixel segmentation and depth feature localization. Referring to Fig. 1, the method includes the following steps:

Step (1): Perform linear iterative clustering segmentation on the input image. Input the target image to be detected (the original image) and first divide it into K regions; within the neighborhood of each region, find the local gradient minimum as the center point, and assign one label number per region. For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel. Iterate this process of finding the nearest center and labeling pixels until the pixels' label numbers no longer change, completing the superpixel segmentation. In this example, a 5*5 neighborhood is used when searching each region's neighborhood, and a 2S*2S neighborhood is used when searching the neighborhood around a pixel.
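
The clustering described here closely resembles SLIC superpixels. A minimal sketch using the off-the-shelf implementation in scikit-image (an illustrative analogue, not the patent's own code; the file name and parameter values are placeholders):

```python
# Sketch: SLIC-style superpixel segmentation with scikit-image.
# K plays the role of the desired region count; the compactness weight
# balances color distance against spatial distance, like m above.
from skimage import io
from skimage.segmentation import slic, mark_boundaries

image = io.imread("input.jpg")            # target image to be detected
K = 400                                   # desired number of regions
m = 10.0                                  # color-vs-spatial balance parameter

# slic() converts to CIELAB internally, seeds K cluster centers on a grid,
# and iterates nearest-center assignment until labels stabilize.
# enforce_connectivity merges undersized or isolated fragments, much like
# the clean-up step the description mentions.
labels = slic(image, n_segments=K, compactness=m,
              enforce_connectivity=True, start_label=0)

overlay = mark_boundaries(image, labels)  # visualize region boundaries
```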

Step (2): Generate the localization saliency map with the difference-of-Gaussians method:

(2a) Filter the input original image with Gaussian functions to generate 8 scale levels of the original image.

(2b) Combine the 8 constructed scale levels with the original image to form a nine-level scale pyramid. Extract the red-green color-difference maps and the blue-yellow color-difference maps of the nine levels, 18 color-difference maps in total; extract the intensity maps, 9 in total; and extract the Gabor-filtered orientation maps in the four directions 0°, 45°, 90°, and 135°, 36 in total. This forms three classes of feature maps: color-difference maps, intensity maps, and orientation maps.

(2c) Since feature maps of the same class differ in size across the nine scale levels, the three classes of feature maps must first be interpolated and then differenced.

(2d) Because different classes of feature maps use different measurement scales, a single magnitude does not reflect the importance of saliency; the different classes of features must therefore be normalized before being fused into the localization saliency map.
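
A minimal sketch of this pyramid-and-fusion pipeline, shown for the intensity feature only (an illustration in the spirit of the architecture described; the intensity-only simplification, the min-max normalization, and the chosen center-surround level pairs are assumptions, not the patent's code):

```python
# Sketch: build a 9-level Gaussian pyramid, take across-scale differences
# on the intensity feature, normalize, and fuse into one localization map.
import cv2
import numpy as np

def gaussian_pyramid(gray, levels=9):
    """Level 0 is the original image; each further level halves the size."""
    pyr = [gray]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def normalize01(m):
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo + 1e-12)

img = cv2.imread("input.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
pyr = gaussian_pyramid(gray)

h, w = gray.shape
saliency = np.zeros((h, w), np.float32)
# Center-surround difference between a fine level c and a coarser level s,
# after interpolating both back to the original size (step 2c).
for c, s in [(2, 5), (2, 6), (3, 6), (3, 7), (4, 7), (4, 8)]:
    fine = cv2.resize(pyr[c], (w, h), interpolation=cv2.INTER_LINEAR)
    coarse = cv2.resize(pyr[s], (w, h), interpolation=cv2.INTER_LINEAR)
    saliency += normalize01(np.abs(fine - coarse))   # normalize, then fuse

localization_map = normalize01(saliency)  # fused localization saliency map
```

In the full method, the color-difference and Gabor orientation channels would be processed the same way and averaged into the final map after per-channel normalization.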

Step (3): Generate the depth feature saliency map of the input image. First localize the superpixel-segmented image using the localization saliency map from step 2; then fully exploit the priors that the center is far more likely to contain the salient object than the image borders, and that salient objects are concentrated: a salient object is bound to be concentrated in a compact region and cannot be scattered over all or most of the image. Therefore, for each region segmented in step 1 and its adjacent regions, collect three classes of feature information: nearest-neighbor region information, global region information, and corner background region information. Generate the depth feature saliency map, which is used to detect the salient object.

Step (4): Take the localization saliency map determined in step 2 and the depth feature saliency map determined in step 3. To make the object segmentation more regular and the boundary between the salient object and the ignored background clearer, fuse the two maps and apply boundary processing to generate the final salient object map, completing salient object detection with superpixel segmentation and depth feature localization.

The method of the present invention fully considers image features such as color, orientation, and depth, together with prior knowledge that attention favors the center over the surrounding background, that features within the target region are similar, and that the target region is distinctive relative to global features; it thereby achieves salient object detection and makes the computer more logical and more intelligent.

Embodiment 2

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiment 1. The superpixel segmentation of the target image to be detected in step 1 of the present invention includes the following steps:

1.1 First assume the target image (the original image) has N pixels in total and is to be divided into K regions; clearly each resulting region contains about N/K pixels, and the spacing between regions is approximately S = √(N/K). The chosen center point may happen to fall exactly on an edge; to avoid this, search around the chosen center for the position with the minimum local gradient and move the center there. Assign one label number per region as a mark.

1.2 For each pixel, compute the Euclidean distance over the five-dimensional feature vector to each already-determined center point in its surrounding neighborhood, then assign the label number of the center with the smallest distance to the pixel being processed. The Euclidean distance over the five-dimensional feature vector C_i = [l_i, a_i, b_i, x_i, y_i]^T is computed by the three formulas below. In the five-dimensional vector, l_i, a_i, and b_i are the three color-component values in CIELAB space (the lightness of the color, the position between red and green, and the position between yellow and blue), and x_i, y_i are the coordinates of the pixel in the target image to be detected.

d_lab = sqrt((l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2)

d_xy = sqrt((x_k - x_i)^2 + (y_k - y_i)^2)

D_i = d_lab + (m / s) * d_xy

In the formulas above, d_lab is the Euclidean distance between pixel k and center i in the CIELAB color space; d_xy is the Euclidean distance between pixel k and center i in spatial coordinates; D_i is the criterion for judging whether pixel k and center i belong to the same label: the smaller its value, the more similar the two, and the pixel takes that center's label; m is a fixed parameter that balances the two terms; s is the spacing between regions, approximately √(N/K).

The above constitutes one iteration cycle of assigning label numbers to pixels.
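
A minimal sketch of this distance computation and one assignment pass (illustrative only; the array layout and the function names are assumptions):

```python
# Sketch: one assignment pass of the linear iterative clustering.
# centers: array of shape (n_centers, 5) holding [l, a, b, x, y] per center;
# pixel: one [l, a, b, x, y] feature vector; m and S as defined above.
import numpy as np

def slic_distance(pixel, center, m, S):
    d_lab = np.sqrt(np.sum((pixel[:3] - center[:3]) ** 2))  # CIELAB distance
    d_xy = np.sqrt(np.sum((pixel[3:] - center[3:]) ** 2))   # spatial distance
    return d_lab + (m / S) * d_xy

def assign_label(pixel, centers, labels, m, S):
    """Return the label of the nearest center within the 2S*2S neighborhood."""
    best, best_d = None, np.inf
    for center, label in zip(centers, labels):
        # Only centers within 2S of the pixel are candidates.
        if np.all(np.abs(pixel[3:] - center[3:]) <= 2 * S):
            d = slic_distance(pixel, center, m, S)
            if d < best_d:
                best, best_d = label, d
    return best
```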

1.3 Keep iterating as in step 1.2 to further refine the accuracy of each pixel's label number, until the label number of every pixel in the whole image no longer changes; typically, about 10 iterations are sufficient.

1.4 The iterative process may leave some problems: for example, a very small area may be split off as its own superpixel region, or a single pixel may be isolated into a superpixel region of its own. To eliminate such cases, assign undersized independent regions or isolated single pixels to the nearest label number, completing the superpixel segmentation of the target image.

The present invention preprocesses the input image with a superpixel segmentation based on color similarity, which solves the problem of poor target segmentation in traditional salient object detection methods and provides a more intelligent, efficient, and robust salient object detection method.

Embodiment 3

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-2. In step 3 of the present invention, the three classes of feature information (nearest-neighbor region information, global region information, and corner background region information) are collected to fully exploit the priors that attention favors the center over the surrounding background, that features within the target region are similar, and that the target region is distinctive relative to global features. The collection includes the following steps:

3.1 Considering that the center is far more likely to be salient than the surrounding background, and that a salient object is necessarily concentrated within a region of limited area, for each segmented region collect the information within its nearest neighborhood: the nearest-neighbor region information.

3.2 Considering the degree to which the region being processed influences the whole image, collect the information contained in all regions other than the current region (i.e., other than each segmented region): the global region information.

3.3 For each segmented region, collect the information of the four corner regions, which represent the background features: the corner background region information.

The collection is completed with the feature information provided by these three parts.

Embodiment 4

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-3. The generation of the depth feature saliency map described in step 3 specifically includes:

For one of the segmented regions R, its depth feature saliency is quantified as:

s(R) = Π_m s(R, ψ_m) = s(R, ψ_C) * s(R, ψ_G) * s(R, ψ_B)

where s(R) is the saliency of region R; Π denotes the product of the factors; s(R, ψ_C) involves the nearest-neighbor region information; s(R, ψ_G) the global region information; and s(R, ψ_B) the corner background region information.

When one sees an image such as a bunch of flowers on a meadow, attention instantly lands on the flowers while the surrounding background is ignored. This can be understood as follows: the green leaves, as background, appear with very high probability across the whole image, while the flowers, as the more salient target, appear with relatively low probability. A high probability attracts little attention because of its commonness, while a low probability attracts strong attention because of its uniqueness. This coincides with Shannon's information theory: low probability indicates high information content, while high probability indicates that the information carried is low. Accordingly, s(R, ψ_m) is defined as follows:

s(R, ψ_m) = -log(p(R | ψ_m))

In the formula above, s(R, ψ_m) denotes, under the depth feature, the extracted nearest-neighbor region information, global region information, or corner background region information; p is a probability value.

For the three classes of region information (nearest-neighbor, global, and corner background), the formula above is simplified using the regions' depth averages as:

s(R, ψ_m) = -log(p(d | D_m))

In the formula above, d is the depth average of the currently processed region block R, and D_m = {d_i^m : i = 1, ..., n_m} is the set of depth averages of ψ_m mentioned above, where d_i^m is the depth average of the i-th region of the m-th class of region information. n_m covers three cases: n_C is the total number of nearest-neighbor regions, n_G the total number of global regions, and n_B the total number of corner background regions.

p(d | D_m) is estimated with a Gaussian function:

p(d | D_m) = (1 / n_m) * Σ_{i=1..n_m} exp(-(d - d_i^m)^2 / (2σ^2))

In the formula above, σ is the influence factor for the depth differences between different region blocks; d, D_m, and n_m have all been explained above.
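
A minimal sketch of this depth-saliency computation (illustrative only; it follows the Gaussian kernel-density form reconstructed above, and the function names, the bandwidth value, and the epsilon guard are assumptions):

```python
# Sketch: depth feature saliency of one region R.
# d: depth average of region R; the three lists hold the depth averages of
# the nearest-neighbor, global, and corner-background regions respectively.
import numpy as np

def info_saliency(d, depth_avgs, sigma=0.1):
    """s(R, psi_m) = -log p(d | D_m), with p estimated by a Gaussian kernel."""
    diffs = np.asarray(depth_avgs, dtype=float) - d
    p = np.mean(np.exp(-(diffs ** 2) / (2 * sigma ** 2)))
    return -np.log(p + 1e-12)  # epsilon guards against log(0)

def region_depth_saliency(d, neighbors, global_regions, corners):
    # s(R) is the product of the three information terms.
    return (info_saliency(d, neighbors)
            * info_saliency(d, global_regions)
            * info_saliency(d, corners))
```

A region whose depth differs strongly from its neighbors, from the rest of the image, and from the corner background yields low kernel probabilities everywhere, hence a large product and high saliency, matching the uniqueness prior above.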

The depth feature saliency model of the present invention fully considers the human eye's tendency to attend to the center while ignoring the surrounding background, and exploits the feature similarity within the target region and its distinctiveness relative to global features, i.e., prior knowledge such as the influence factor of the depth differences between region blocks. As a result, the target detection results of the present invention show sharper edges, more complete background removal, and more complete segmentation of the target shape, making the computer more logical and more intelligent.

A more detailed example is given below to further illustrate the present invention:

Embodiment 5

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-4. Its core steps are as follows:

Step (1): Perform linear iterative clustering segmentation on the input image. First divide the input image into K regions; within the neighborhood of each region, find the local gradient minimum as the center point, and assign one label number per region, with different regions receiving different label numbers (also called label values in this field). For each pixel, find the center point with the smallest five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel; that is, set the current pixel's label to that of the nearest center. Referring to Fig. 1, iterate the process of finding the nearest center and labeling pixels: taking one K-region as the unit, set pixel labels by comparing the distances between pixels and centers, complete the label assignment of the K-region through iterative optimization, and traverse the whole image. During the iterative optimization, check whether any pixel's label value has changed: if it changed relative to the previous iteration, repeat the iteration; otherwise, if the label values no longer change relative to the previous iteration, assign undersized independent regions or isolated single pixels to the nearest label number. In this example, the label values stop changing after 10 iterations; the superpixel regions formed by isolated points are then removed, completing the superpixel segmentation. The segmentation produces a controllable number of regions, not necessarily K. In this example, a 3*3 neighborhood is used when searching each region's neighborhood, and a 2S*2S neighborhood is used when searching the neighborhood around a pixel.

Step (2): Generate the localization saliency map of the input image with the difference-of-Gaussians method:

(2a) Filter the input image with Gaussian functions to generate scale maps at 1/2, 1/4, and so on down to 1/256 of the original size, 8 scale levels in total.

(2b) Combine the 8 constructed scale levels with the original image: the 8 scale levels plus the original form a nine-level scale pyramid. Extract the red-green color-difference maps RG and the blue-yellow color-difference maps BY of the nine levels, 18 color-difference maps in total. Extract the intensity maps I of the nine levels, 9 intensity maps in total. Extract the Gabor-filtered orientation maps O of the nine levels in the four directions 0°, 45°, 90°, and 135°, 36 orientation maps in total.

The above process extracts feature maps from the nine-level pyramid in three respects: color-difference maps, intensity maps, and orientation maps.

(2c) Since same-class features among the three classes of feature maps differ in size, the same-class features must first be interpolated and then differenced; see Fig. 1.

(2d) Because different classes of feature maps use different measurement scales, a single magnitude does not reflect the importance of saliency; first normalize the different classes of features, then fuse them into the localization saliency map of the input image.

Step (3): Extract the depth feature saliency map D of the input image. First localize the superpixel-segmented image using the localization saliency map from step 2; then fully exploit the priors that the center is far more likely to contain the salient object than the image borders, and that salient objects are concentrated: a salient object is bound to be concentrated within a certain region and cannot be scattered over all or most of the image. For each region segmented in step 1 and its adjacent regions, collect three classes of feature information: nearest-neighbor region information, global region information, and corner background region information. Generate the depth feature saliency map of the input image, which is used to detect the salient object.

Step (4): To make the object segmentation more regular and the boundary between the salient object and the ignored background clearer, take the localization saliency map obtained in step 2 and the depth feature saliency map finally obtained in step 3, fuse them, and apply boundary processing to generate the final salient object map, completing salient object detection with superpixel segmentation and depth feature localization.
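
A minimal sketch of this fusion and boundary step (illustrative only; the pixel-wise weighted fusion and the morphological clean-up are assumptions about details the description leaves unspecified):

```python
# Sketch: fuse the localization and depth saliency maps, then tidy boundaries.
import cv2
import numpy as np

def fuse_and_clean(loc_map, depth_map, alpha=0.5, thresh=0.5):
    """loc_map, depth_map: float arrays in [0, 1] of equal size."""
    fused = alpha * loc_map + (1 - alpha) * depth_map   # weighted fusion
    binary = (fused >= thresh).astype(np.uint8)
    # Morphological open/close regularizes the object mask and sharpens the
    # boundary between the salient object and the background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return fused * binary                               # final salient object map
```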

The target detection of the present invention yields sharper edges, more complete background removal, and more complete segmentation of the target shape.

The technical effects of the present invention are explained in detail below in conjunction with the drawings and simulation data:

Embodiment 6

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-5. This example demonstrates and analyzes the superpixel segmentation part of the method of the present invention through simulation.

Simulation conditions: PC with AMD A8-7650K Radeon R7 (10 Compute Cores: 4C+6G), 3.3 GHz, 4 GB memory; MATLAB R2015b.

Simulation content: superpixel segmentation of an office corner and a library scene using the present invention.

Fig. 2 shows results of the superpixel segmentation in the method of the present invention, where Fig. 2(a) is the segmentation result for an office corner and Fig. 2(b) is the segmentation result for a library scene.

In Fig. 2, the images without the grid are the original images selected for the present invention, and the grid shows the results after superpixel segmentation by the method of the present invention.

Images are composed of individual pixels, but a detected target is never a single pixel: it occupies a certain area, contains many pixels, and those pixels share common properties rather than standing alone. In view of these characteristics, the present invention segments the image into superpixels: regions of shared commonality and regions that differ from one another. Replacing a huge number of individual pixels with superpixel blocks reduces the complexity of the computer's image processing.

Fig. 2(a) shows a potted plant in an office corner; apart from the potted plant, the rest of the scene is simple background. The detection results show that, for the areas other than the potted plant, whose features are uniform, the superpixel segmentation the present invention first applies to the input produces regions that are regular in both size and shape. When the computer processes this image, the like regions have already been grouped, so they need not be processed one by one in detail, which reduces the processing complexity. Where the potted plant stands, despite its varied features, the method of the present invention can also finely separate its green leaves and white pot according to their similarities and differences. Processing the image in units of like and unlike regions then improves the computer's processing speed. Fig. 2(b) is a library scene with eight exhibits placed against a single background, one of them in the very middle of the scene. Although scene (b) is much more complex than (a), a uniform wall area still exists. The results show that for the wall, with its uniform features, the segmentation is regular in both size and shape; where the exhibits are placed, the method of the present invention also clearly separates like features from unlike ones. Any image to be detected inevitably contains, to some extent, uniform areas of this kind, similar to a plain wall. By processing in units of regions, the present invention effectively reduces the complexity of the computer's image processing.

Compared with traditional superpixel segmentation methods, the clustering segmentation realized by linear iteration in the present invention yields regions that are regular in both size and shape, and the boundaries between different regions are delineated more clearly.

Embodiment 7

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-5. This example performs target detection and result analysis through simulation.

The simulation conditions are the same as in Embodiment 6.

Simulation content: on ten selected images, compare the present invention with five classes of methods on the same images: global-contrast saliency detection GCS (i.e., RARE), geodesic-distance saliency detection GS, graph-theory-based saliency detection GBMR, hierarchical saliency detection HSD, and statistical-texture saliency detection STD. The selected images include indoor and outdoor scenes, office scenes, campus areas, parks, and so on.

Referring to Fig. 3, Fig. 3(a) shows the selected original images, Fig. 3(b) the results of the present invention, Fig. 3(c) the GS method, Fig. 3(d) the GBMR method, Fig. 3(e) the RARE method, Fig. 3(f) the HSD method, Fig. 3(g) the STD method, and Fig. 3(h) the manually labeled ground truth.

For the potted plant in the office scene, the present invention and the GBMR method show advantages: they detect not only the position of the salient object but also its basic shape. The other methods roughly show the target shape as well, but their background removal is incomplete, especially for HSD and STD, which leave a large part of the background. For the basketball image in a simple scene, all six methods detect the shape of the target well, close to the manually labeled ground truth and meeting the requirements. For the single-target red lantern on a roof, the roof in this scene is also red, so its interference is quite obvious; in this scenario all the methods can detect the target, but the GS and STD methods clearly fail to remove the strongly interfering red roof. For the roof red-lantern image with two targets, since the proposed method involves the localization saliency map, it has certain limitations in multi-target detection and focuses more on detecting the left target; the method that best detects the targets in this scene is HSD, yet HSD and the other methods, although they locate the targets, remove the strongly interfering roof background incompletely, leaving residual background that is too obvious. The present invention is not strongly affected by the interfering roof, and the background around the detected single target is removed fairly cleanly. For the wall containing several similar patches of inscription, the human eye clearly attends only to the central patch, which occupies the largest area of the scene; yet for this scene the HSD method detects the small similar interfering areas above and to the right as salient targets, which is clearly unreasonable, and the RARE and STD methods behave the same way, while the GS and GBMR results are fairly good. For the signboard in the park scene, the present invention shows a clear advantage over the other methods and comes closest to the manual labeling. For the three museum images, the present invention, the GS method, and the HSD method show their strengths, detecting the shape and position of the targets well; but the GS and HSD methods, like the others, leave un-removed background areas, which are unwanted in the detection results. For the digit 9 in a relatively simple scene, apart from the unsatisfactory RARE result, the outputs of the other five algorithms are fairly close to the manually labeled image. Overall, in all kinds of scenes the target detection method proposed by the present invention outperforms the other five classes of methods in edge sharpness, background removal, and segmentation of the target shape.

Embodiment 8

The salient object detection method based on superpixel segmentation and depth feature localization is the same as in Embodiments 1-5. This example analyzes the performance of the target detection results through simulation.

The simulation conditions are the same as in Embodiment 6.

Simulation content: on five hundred selected images, compare the present invention with five classes of methods on the same images: global-contrast saliency detection GCS (i.e., RARE), geodesic-distance saliency detection GS, graph-theory-based saliency detection GBMR, hierarchical saliency detection HSD, and statistical-texture saliency detection STD. The selected images include indoor and outdoor scenes, office scenes, campus areas, parks, and so on.

Referring to Fig. 4, the detection performance of the present invention and the five classes of methods is analyzed. The performance of the algorithms is reflected by the Precision and Recall metrics, defined as follows:

TP: the intersection of the target region detected in the obtained saliency map with the target region of the manually labeled saliency map;

TN: the intersection of the non-target region of the obtained saliency map with the non-target region of the manually labeled saliency map;

FP: the intersection of the target region detected in the obtained saliency map with the non-target region of the manually labeled saliency map;

FN: the intersection of the non-target region of the obtained saliency map with the target region of the manually labeled saliency map;

It follows that:

Precision = TP / (TP + FP), Recall = TP / (TP + FN)

The Precision and Recall metrics are computed for the present invention and for the five classes of methods. It is easy to see that, among the compared salient object detection methods, the present invention shows the best performance, with an AUC (Area Under Curve) value of 0.6888; the second best is the GBMR method, with an AUC of 0.6093. As the recall rate increases, the overall precision trends downward. For recall between 0 and 0.8, the Precision of the present invention is clearly better than that of the other methods; only when recall approaches 0.8 does the Precision of the present invention drop below 0.6. This fully shows that the present invention achieves better detection, closer to the manually labeled maps, with more complete background removal and more complete segmentation of the target shape.
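
A minimal sketch of computing these metrics from binary maps (illustrative only; sweeping a threshold over the saliency map to trace the curve is an assumption about the evaluation protocol):

```python
# Sketch: precision/recall of a saliency map against a manual label map.
import numpy as np

def precision_recall(saliency, ground_truth, thresh):
    """saliency: float array in [0, 1]; ground_truth: boolean target mask."""
    detected = saliency >= thresh
    tp = np.sum(detected & ground_truth)
    fp = np.sum(detected & ~ground_truth)
    fn = np.sum(~detected & ground_truth)
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + fn + 1e-12)
    return precision, recall

# Sweeping the threshold traces one precision-recall curve per image;
# averaging the curves over the test set gives plots like those in Fig. 4.
```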

In short, the present invention designs and proposes a salient object detection method based on superpixel segmentation and depth feature localization. Using a superpixel segmentation based on linear iteration over color similarity measured by a five-dimensional Euclidean distance, the processing unit of the image is raised from individual pixels to coherent similar regions, so that the detected target can be clearly separated from complex background edges, solving the problem of poor target edge segmentation in traditional salient object detection methods. The method fully considers image features such as color, orientation, and depth, combined with the prior knowledge that the human eye attends to the center while ignoring the surrounding background, that features within the salient region are similar, and that the salient region is distinctive relative to global features. The algorithm models these characteristics to generate the localization saliency map and the depth feature saliency map of the input image, fuses them, and applies boundary processing to produce a final saliency map with an attention mechanism similar to the human eye's. This makes the computer more logical and more intelligent, with human-like understanding, or in other words more intelligent, efficient, and robust. Images detected by the present invention show sharper edges, more complete background removal, and more complete segmentation of the target shape. The method can be applied in face recognition, vehicle detection, moving-object detection and tracking, military missile detection, hospital pathology detection, and other fields.

Claims (4)

1. A salient object detection method based on superpixel segmentation and depth feature localization, characterized by comprising the following steps:
Step 1: Input an image and perform linearly iterative cluster segmentation on it. The target image to be detected is first divided into K regions; the point of minimum local gradient within each region's neighborhood is taken as the center point, and the same region is assigned a label number. For each pixel to be processed, find the center point with the minimum five-dimensional Euclidean distance within the pixel's neighborhood and assign that center's label to the pixel. Iterate this search for the nearest center until the label value of every pixel no longer changes, completing the superpixel segmentation;
Step 2: Build a difference-of-Gaussians pyramid to generate the localization saliency map.
2a: Filter the input image with Gaussian functions to generate 8 scale maps of increasing depth;
2b: Combine the 8 scale maps with the original image to form a nine-layer scale pyramid. Extract the red-green and blue-yellow color difference maps of the nine layers, 18 color difference maps in total; extract the intensity maps of the nine layers, 9 intensity maps in total; extract the Gabor-filtered orientation maps of the nine layers, 36 orientation maps in total, forming three classes of feature maps;
2c: Because homogeneous features of the nine layers differ in size, first interpolate the three classes of feature maps to a common size, then compute their differences;
2d: Because different types of feature maps differ in magnitude, first normalize the different feature types, then fuse them into the localization saliency map;
Step 3: Generate the depth-feature saliency map. Using the localization saliency map from Step 2, first apply a localization step to the superpixel-segmented image; then, for each segmented region and its adjacent regions, gather three classes of feature information, namely nearest-neighbor region information, global region information, and corner background region information, and generate the depth-feature saliency map for salient object detection;
Step 4: Fuse the localization saliency map determined in Step 2 with the depth-feature saliency map determined in Step 3 and apply boundary processing to generate the final salient object map, completing superpixel-segmentation-based salient object detection.
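For illustration only (not part of the claims), Step 2's multi-scale feature construction might be sketched as follows; the scale count, sigma values, and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def scale_pyramid(image, n_scales=8):
    """Original image plus eight Gaussian-blurred, downsampled layers:
    the nine-layer scale pyramid of Step 2."""
    layers = [image.astype(float)]
    for _ in range(n_scales):
        blurred = gaussian_filter(layers[-1], sigma=1.0)
        layers.append(zoom(blurred, 0.5, order=1))  # halve each dimension
    return layers

def center_surround(pyramid, c, s):
    """Across-scale difference between a fine layer c and a coarse layer s:
    interpolate the coarse layer back to the fine layer's size (Step 2c),
    then take the absolute difference (the difference-of-Gaussians map)."""
    fine, coarse = pyramid[c], pyramid[s]
    factors = [f / g for f, g in zip(fine.shape, coarse.shape)]
    coarse_up = zoom(coarse, factors, order=1)
    return np.abs(fine - coarse_up)
```

Applying center_surround to the red-green, blue-yellow, intensity, and Gabor orientation channels at several (c, s) pairs yields the three classes of feature maps, which after normalization (Step 2d) are fused into the localization saliency map.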
2. The salient object detection method based on superpixel segmentation and depth feature localization according to claim 1, characterized in that the superpixel segmentation of the target image to be detected in Step 1 comprises the following steps:
1.1 Assume the target image has N pixels in total and the desired number of segmented regions is K; each region then contains about N/K pixels, and the spacing between the centers of different regions is approximately $\sqrt{N/K}$. Since a center point placed in this way may fall on an edge, the position of minimum local gradient is sought around the initial center, and the center is moved to that local gradient minimum. Each region is assigned a label number as its mark;
1.2 For each pixel, compute the Euclidean distance of its five-dimensional feature vector to each fixed center point in the surrounding neighborhood, then assign the label number of the center with the minimum value to the pixel being processed. The Euclidean distance of the five-dimensional feature vector $C_i = [l_i, a_i, b_i, x_i, y_i]^T$ is computed as in the three formulas below; in the five-dimensional feature vector, $l_i, a_i, b_i$ denote the three CIELAB color components, namely lightness, position between red and green, and position between yellow and blue, while $x_i, y_i$ denote the pixel's coordinate position in the target image to be detected.
$$d_{lab} = \sqrt{(l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2}$$

$$d_{xy} = \sqrt{(x_k - x_i)^2 + (y_k - y_i)^2}$$

$$D_i = d_{lab} + \frac{m}{s}\, d_{xy}$$
In the formulas above, $d_{lab}$ is the Euclidean distance between pixel k and center point i in the CIELAB color space; $d_{xy}$ is the Euclidean distance between pixel k and center point i in terms of spatial coordinate position; $D_i$ is the criterion for judging whether pixel k and center point i belong to the same label: the smaller its value, the more similar the two are and the more likely their labels agree; m is a fixed parameter used to balance the two terms; s is the spacing between different regions, approximately $\sqrt{N/K}$;
The above constitutes one iteration cycle of assigning pixels their label numbers.
1.3 Iterate the operation of step 1.2 repeatedly to further refine the accuracy of each pixel's label number, until the label number of every pixel in the whole image no longer changes.
1.4 Through the iterative process, isolated regions of overly small size and isolated single pixels are assigned to nearby labels, completing the superpixel segmentation of the target image.
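As a rough illustration of steps 1.1-1.4, the assignment step under the distance $D_i$ might look like the following sketch; the search window and variable names are assumptions in the spirit of SLIC-style clustering rather than the exact claimed procedure.

```python
import numpy as np

def assign_labels(lab, centers, m, s):
    """One assignment pass: give each pixel the label of the nearest center
    under the five-dimensional distance D = d_lab + (m / s) * d_xy.

    lab     -- H x W x 3 CIELAB image
    centers -- list of (l, a, b, x, y) cluster centers
    m       -- fixed balance parameter between color and space
    s       -- grid spacing, about sqrt(N / K)
    """
    h, w, _ = lab.shape
    labels = np.zeros((h, w), dtype=int)
    best = np.full((h, w), np.inf)
    for k, (cl, ca, cb, cx, cy) in enumerate(centers):
        # Search only a window of roughly 2s x 2s around each center.
        y0, y1 = max(0, int(cy - s)), min(h, int(cy + s) + 1)
        x0, x1 = max(0, int(cx - s)), min(w, int(cx + s) + 1)
        patch = lab[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        d_lab = np.sqrt(((patch - [cl, ca, cb]) ** 2).sum(axis=2))
        d_xy = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        d = d_lab + (m / s) * d_xy
        closer = d < best[y0:y1, x0:x1]
        best[y0:y1, x0:x1][closer] = d[closer]
        labels[y0:y1, x0:x1][closer] = k
    return labels
```

Alternating this pass with recomputation of each center as the mean of its assigned pixels, until labels stop changing, corresponds to the iteration of step 1.3.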
3. The salient object detection method based on superpixel segmentation and depth feature localization according to claim 1, characterized in that collecting the three classes of feature information described in Step 3 comprises the following steps:
3.1 For each region obtained after superpixel segmentation, gather the information of the regions closest to it in distance, i.e. the nearest-neighbor region information;
3.2 For each segmented region, gather the information contained in all other regions, excluding the current region, i.e. the global region information;
3.3 For each segmented region, gather the information of the four corner regions that represent the background characteristics, i.e. the corner background region information.
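A minimal sketch of claim 3's three context sets, assuming each superpixel is summarized by its mean depth; the helper name, the neighbor count, and the centroid-distance neighbor test are hypothetical.

```python
import numpy as np

def context_sets(labels, depth, region_id, k_nearest=8):
    """Return the three context sets for one superpixel: mean depths of its
    nearest-neighbor regions, of all other regions, and of the corner regions.

    labels -- H x W integer superpixel label map
    depth  -- H x W depth map
    """
    ids = np.unique(labels)
    means = {i: depth[labels == i].mean() for i in ids}
    cents = {i: np.argwhere(labels == i).mean(axis=0) for i in ids}

    others = [i for i in ids if i != region_id]
    # 3.1 Nearest neighbors: the k regions whose centroids are closest.
    ranked = sorted(others, key=lambda i: np.linalg.norm(cents[i] - cents[region_id]))
    nearest = [means[i] for i in ranked[:k_nearest]]
    # 3.2 Global: every region except the current one.
    global_ = [means[i] for i in others]
    # 3.3 Corner background: the labels covering the four image corners.
    h, w = labels.shape
    corners = {labels[0, 0], labels[0, w - 1], labels[h - 1, 0], labels[h - 1, w - 1]}
    corner = [means[i] for i in corners]
    return nearest, global_, corner
```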
4. The salient object detection method based on superpixel segmentation and depth feature localization according to claim 1, characterized in that generating the depth-feature saliency map from the three classes of region information described in Step 3 specifically comprises:
For a segmented region R, its depth-feature saliency is quantified as:
$$S(R) = \prod_{m = C, G, B} s(R, \psi^m)$$
Here, $S(R)$ denotes the saliency of region R; $\prod$ denotes the product of multiple factors; $s(R, \psi^C)$ is the term from the nearest-neighbor region information; $s(R, \psi^G)$ the term from the global region information; $s(R, \psi^B)$ the term from the corner background region information;
Low probability corresponds to high information content, while high probability means the information carried is low; accordingly, $s(R, \psi^m)$ is defined as:

$$s(R, \psi^m) = -\log\big(p(R \mid \psi^m)\big)$$
In this formula, $s(R, \psi^m)$ represents the saliency term computed from the nearest-neighbor region information, the global region information, or the corner background region information extracted under the depth feature, and p is a probability value.
For the nearest-neighbor region information, the global region information, and the corner background region information, the formula above is simplified by using the mean depth of each region:
$$p(R \mid \psi^m) = \hat{p}(d \mid D^m)$$
In this formula, d denotes the mean depth of the region block R currently being processed; $D^m = \{d_1^m, \ldots, d_{n_m}^m\}$ is the set of mean depths of the regions $\psi^m$ mentioned above, where $d_i^m$ denotes the mean depth of the i-th region of the class-m region information, and $n_m$ covers three cases: $n_C$ is the total number of nearest-neighbor regions, $n_G$ the total number of global regions, and $n_B$ the total number of corner background regions;
$\hat{p}(d \mid D^m)$ is realized by Gaussian kernel density estimation:
$$\hat{p}(d \mid D^m) = \frac{1}{n_m} \sum_{i=1}^{n_m} e^{-\frac{\|d - d_i^m\|^2}{2(\sigma_d^m)^2}}$$
In this formula, $\sigma_d^m$ denotes the influence factor of the depth differences between different region blocks.
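Putting claim 4 together, a minimal sketch follows under the assumption that each context set is a list of region mean depths (for example, as gathered in the sketch after claim 3); the bandwidth value is hypothetical.

```python
import numpy as np

def region_saliency(d, contexts, sigma=0.1):
    """Depth-feature saliency S(R) of one region with mean depth d.

    contexts -- dict mapping 'C', 'G', 'B' to lists of mean depths of the
                nearest-neighbor, global, and corner background regions
    """
    s = 1.0
    for depths in contexts.values():
        depths = np.asarray(depths, dtype=float)
        # Gaussian kernel density estimate of p(d | D^m)
        p = np.mean(np.exp(-(d - depths) ** 2 / (2 * sigma ** 2)))
        # Self-information: low probability means high saliency
        s *= -np.log(max(p, 1e-12))
    return s
```

Evaluating this quantity over all superpixels and normalizing gives the depth-feature saliency map that is fused with the localization saliency map in Step 4.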

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710255712.1A CN107169487B (en) 2017-04-19 2017-04-19 Salient object detection method based on superpixel segmentation and depth feature positioning

Publications (2)

Publication Number Publication Date
CN107169487A true CN107169487A (en) 2017-09-15
CN107169487B CN107169487B (en) 2020-02-07

Family

ID=59812347

Country Status (1)

Country Link
CN (1) CN107169487B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050185838A1 (en) * 2004-02-23 2005-08-25 Luca Bogoni System and method for toboggan based object segmentation using divergent gradient field response in images
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN105760886A (en) * 2016-02-23 2016-07-13 北京联合大学 Image scene multi-object segmentation method based on target identification and saliency detection
CN106296695A (en) * 2016-08-12 2017-01-04 西安理工大学 Adaptive threshold natural target image based on significance segmentation extraction algorithm

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019071976A1 (en) * 2017-10-12 2019-04-18 北京大学深圳研究生院 Panoramic image saliency detection method based on regional growth and eye movement model
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic picture conspicuousness detection method with eye movement model is increased based on region
CN107730515B (en) * 2017-10-12 2019-11-22 北京大学深圳研究生院 Saliency detection method for panoramic images based on region growing and eye movement model
CN109960977B (en) * 2017-12-25 2023-11-17 大连楼兰科技股份有限公司 Saliency preprocessing method based on image layering
CN109960977A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Saliency preprocessing method based on image layering
CN108154129A (en) * 2017-12-29 2018-06-12 北京华航无线电测量研究所 Method and system are determined based on the target area of vehicle vision system
CN108427931A (en) * 2018-03-21 2018-08-21 合肥工业大学 The detection method of barrier before a kind of mine locomotive based on machine vision
CN110610184B (en) * 2018-06-15 2023-05-12 阿里巴巴集团控股有限公司 Method, device and equipment for detecting salient targets of images
CN110610184A (en) * 2018-06-15 2019-12-24 阿里巴巴集团控股有限公司 Method, device and equipment for detecting salient object of image
CN109035252A (en) * 2018-06-29 2018-12-18 山东财经大学 A kind of super-pixel method towards medical image segmentation
CN109118493A (en) * 2018-07-11 2019-01-01 南京理工大学 A kind of salient region detecting method in depth image
CN109118493B (en) * 2018-07-11 2021-09-10 南京理工大学 Method for detecting salient region in depth image
CN109472259B (en) * 2018-10-30 2021-03-26 河北工业大学 Image collaborative saliency detection method based on energy optimization
CN109472259A (en) * 2018-10-30 2019-03-15 河北工业大学 Image co-saliency detection method based on energy optimization
CN109522908A (en) * 2018-11-16 2019-03-26 董静 Image significance detection method based on area label fusion
CN109636784A (en) * 2018-12-06 2019-04-16 西安电子科技大学 Saliency object detection method based on maximum neighborhood and super-pixel segmentation
CN109636784B (en) * 2018-12-06 2021-07-27 西安电子科技大学 Image saliency object detection method based on maximum neighborhood and superpixel segmentation
CN109886267A (en) * 2019-01-29 2019-06-14 杭州电子科技大学 A saliency detection method for low-contrast images based on optimal feature selection
CN109977767A (en) * 2019-02-18 2019-07-05 浙江大华技术股份有限公司 Object detection method, device and storage device based on super-pixel segmentation algorithm
CN109977767B (en) * 2019-02-18 2021-02-19 浙江大华技术股份有限公司 Target detection method and device based on superpixel segmentation algorithm and storage device
WO2021000302A1 (en) * 2019-07-03 2021-01-07 深圳大学 Image dehazing method and system based on superpixel segmentation, and storage medium and electronic device
CN110796650A (en) * 2019-10-29 2020-02-14 杭州阜博科技有限公司 Image quality evaluation method and device, electronic equipment and storage medium
CN112990226A (en) * 2019-12-16 2021-06-18 中国科学院沈阳计算技术研究所有限公司 Salient object detection method based on machine learning
CN111259936A (en) * 2020-01-09 2020-06-09 北京科技大学 A method and system for image semantic segmentation based on single pixel annotation
CN112149688A (en) * 2020-09-24 2020-12-29 北京汽车研究总院有限公司 Image processing method and apparatus, computer readable storage medium, computer equipment
CN112700438A (en) * 2021-01-14 2021-04-23 成都铁安科技有限责任公司 Ultrasonic damage judging method and system for inlaid part of train axle
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN118691667A (en) * 2024-08-29 2024-09-24 成都航空职业技术学院 A machine vision positioning method
CN118691667B (en) * 2024-08-29 2024-10-22 成都航空职业技术学院 Machine vision positioning method

Similar Documents

Publication Publication Date Title
CN107169487A (en) The conspicuousness object detection method positioned based on super-pixel segmentation and depth characteristic
CN106447676B (en) An Image Segmentation Method Based on Fast Density Clustering Algorithm
CN102609934B (en) A Multi-target Segmentation and Tracking Method Based on Depth Image
CN103337072B (en) A kind of room objects analytic method based on texture and geometric attribute conjunctive model
CN107092871B (en) Remote sensing image building detection method based on multiple dimensioned multiple features fusion
CN114926699B (en) Method, device, medium and terminal for semantic classification of indoor 3D point cloud
CN115035260A (en) Indoor mobile robot three-dimensional semantic map construction method
CN108257139A (en) RGB-D three-dimension object detection methods based on deep learning
Shen et al. A polygon aggregation method with global feature preservation using superpixel segmentation
CN102316352B (en) Stereo video depth image manufacturing method based on area communication image and apparatus thereof
CN108399361A (en) A kind of pedestrian detection method based on convolutional neural networks CNN and semantic segmentation
CN101692224A (en) High-resolution remote sensing image search method fused with spatial relation semantics
CN110084243A (en) It is a kind of based on the archives of two dimensional code and monocular camera identification and localization method
JP2010511215A (en) How to indicate an object in an image
CN108376232A (en) A kind of method and apparatus of automatic interpretation for remote sensing image
CN109886267A (en) A saliency detection method for low-contrast images based on optimal feature selection
CN109344842A (en) A Pedestrian Re-identification Method Based on Semantic Region Representation
CN109559273A (en) A kind of quick joining method towards vehicle base map picture
CN107730536A (en) A kind of high speed correlation filtering object tracking method based on depth characteristic
CN111915617A (en) An anti-overlap segmentation method for single leaf of plant point cloud
CN107808140B (en) A Monocular Vision Road Recognition Algorithm Based on Image Fusion
CN108829711A (en) A kind of image search method based on multi-feature fusion
KR101507732B1 (en) Method for segmenting aerial images based region and Computer readable storage medium for storing program code executing the same
CN109816051A (en) A method and system for matching characteristic points of dangerous chemical goods
CN111754618A (en) An object-oriented method and system for multi-level interpretation of real 3D model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant