
CN114926391B - Perspective transformation method based on improved RANSAC algorithm - Google Patents

Perspective transformation method based on improved RANSAC algorithm

Info

Publication number
CN114926391B
CN114926391B (application CN202210352267.1A)
Authority
CN
China
Prior art keywords
image
pairs
matching
transformation
matching points
Prior art date
Legal status
Active
Application number
CN202210352267.1A
Other languages
Chinese (zh)
Other versions
CN114926391A (en)
Inventor
舒征宇
姚景岩
汪俊
许欣慧
高健
翟二杰
黄志鹏
李镇翰
张洋
李�浩
马聚超
Current Assignee
Changshan Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Changshan Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Changshan Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical Changshan Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202210352267.1A
Publication of CN114926391A
Application granted granted Critical
Publication of CN114926391B
Legal status: Active


Classifications

    • G06T7/0004 Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/22 Matching criteria, e.g. proximity measures (G06F18/00 Pattern recognition; G06F18/20 Analysing)
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map (G06T3/00 Geometric image transformations in the plane of the image)
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/187 Segmentation; Edge detection involving region growing, region merging, or connected component labelling
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 Determination of colour characteristics
    • G06T2207/10024 Color image
    • G06T2207/20221 Image fusion; Image merging
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The perspective transformation method based on the improved RANSAC algorithm uses the SIFT algorithm to obtain the coordinates of the feature matching points of the reference image and the auxiliary image, so that a transformation matrix can be obtained by randomly selecting 4 pairs of feature matching points and the auxiliary image can be perspective-transformed with that matrix. Since only 4 pairs of matching points are needed to calculate the transformation matrix, an improved RANSAC algorithm is provided. The invention helps intelligent inspection robots better identify the operating state of pressure plates in images and improves their anti-interference capability.

Description

Perspective transformation method based on improved RANSAC algorithm

Technical Field

The present invention relates to the technical field of intelligent power grid inspection, and in particular to a perspective transformation method based on an improved RANSAC algorithm.

Background Art

The continuous development of communication and artificial intelligence technology has driven the rapid construction of smart grids, and intelligent inspection has gradually become an important auxiliary operation and maintenance method for unattended substations. Relay protection pressure plates, important protective devices of the secondary equipment in power systems, are numerous; traditional manual inspection of them is laborious and complicated, and false or missed detections occur from time to time, so inspecting them with intelligent robots has become a research hotspot. At present, image processing technology in power systems is applied mainly to primary equipment, yet in actual production the switching of the operating state of secondary equipment is hard to notice and requires attention and real-time monitoring.

Observation of the collected pressure plate images shows that, because most protection cabinets use glass doors, the images suffer local highlight interference under uneven lighting. In severe cases the highlights cause such a serious loss of texture information that the image becomes unrecognizable, producing false or missed detections of the plate state.

There are three main causes of glass cabinet door reflections in substations:

(1) natural light shining through the windows onto the glass cabinet door;

(2) indoor lighting reflected by the glass cabinet door;

(3) the camera flash reflected by the cabinet door when the light is dim.

Highlight region detection based on the two-dimensional OTSU algorithm:

Among the various image threshold segmentation algorithms, the maximum between-class variance method (OTSU) is widely used for its simple computation and stable performance. The traditional OTSU algorithm uses the bimodal histogram of pixel gray values to divide the image into two classes, foreground and background, taking as threshold the value that maximizes the between-class variance. However, it considers only the gray-level information provided by the histogram and ignores the spatial position information of the image, even though pixels within the same region of an image are strongly consistent and correlated in both position and gray level.

Summary of the Invention

The present invention proposes an image fusion based method for identifying the operating state of protection pressure plates, used in the intelligent inspection of secondary relay protection pressure plates in substations. The method uses threshold segmentation to detect highlight regions and performs image restoration on that basis, effectively removing the light and shadow interference in relay protection pressure plate images, so as to better assist intelligent inspection robots in identifying the operating state of the plates and to improve their anti-interference capability.

The technical solution adopted by the present invention is as follows:

The image fusion based method for identifying the operating state of a protection pressure plate comprises the following steps:

Step 1: Highlight region detection in the protection pressure plate image:

Step 1.1: An intelligent robot inspects the relay protection pressure plates and photographs them;

Step 1.2: Because the whole pressure plate area must be captured when photographing, the captured pressure plate images are input into a computer image processing system and screened: images that do not capture the complete area are deleted to avoid affecting subsequent detection results. The screened images are then compressed so that the computer can process them faster.

The computer image processing system is implemented on the Microsoft Visual Studio development platform with OpenCV, Python, and other components.

Step 1.3: Convert the pressure plate image to grayscale to enlarge the difference between the highlight region and the background;

Step 1.4: Use the two-dimensional OTSU algorithm to find the optimal threshold and segment the grayscale pressure plate image by that threshold, thereby quickly detecting the highlight regions in the image.

Step 2: Highlight region removal based on image fusion:

Step 2.1: Feature point detection:

The SIFT algorithm is used to detect feature points in the reference image and the auxiliary image and to generate feature description vectors; the nearest neighbor method is then used to match features between the two images;

Step 2.2: Perspective transformation based on the improved RANSAC algorithm:

The perspective transformation matrix is computed from the obtained feature matching points, and a perspective transformation adjusts the viewing angle and size of the auxiliary image so that it matches the reference image.

Step 2.3: Image restoration:

According to the position of the highlight region detected in the reference image, the corresponding region of the perspective-transformed auxiliary image is used to fill in and repair the texture, thereby removing the highlight from the reference image.

Step 3: Protection pressure plate state identification:

Step 3.1: Connected region extraction:

The reference image after highlight removal undergoes color region screening, binarization, and morphological processing, and the connected regions are extracted using 8-connectivity.

Step 3.2: Valid pressure plate region screening:

Area, size, and shape are analyzed according to morphological features, and the valid pressure plate regions are accurately extracted from the connected regions.

Step 3.3: Identification of the pressure plate on/off state:

The screened valid pressure plate regions are identified. After the on/off state of each valid plate is recognized, the plates are sorted by centroid coordinates from left to right and top to bottom, finally yielding a state sequence containing only 0s and 1s.

Step 2.2 specifically comprises the following steps:

Step 1: Perform initial matching of the valid feature points extracted by the SIFT algorithm using the nearest neighbor method, with an initial Euclidean distance threshold of 0.6. Divide the image into 4 equal regions and check whether each region contains more than 4 pairs of feature matching points; if so, proceed to the next step, otherwise increase the Euclidean distance threshold by 0.1 and rematch.

Step 2: From each of the 4 regions select the 4 matching pairs with the smallest Euclidean distance, 16 pairs in total. Combine these 16 pairs into groups of 4, sort the groups 1, 2, …, N in ascending order of the sum of the Euclidean distances of their 4 pairs, and keep the first 50 groups.

Step 3: Take the 4 matching pairs of group 1, in sequence-number order, and compute the transformation matrix H. Verify all matching pairs in the image with H and check whether the inliers exceed 50% of all matching pairs; if so, the current H is the optimal transformation matrix, otherwise take the next group of 4 pairs in sequence-number order and recompute H.

The image fusion based method for identifying the operating state of a protection pressure plate according to the present invention has the following technical effects:

1) The method can be widely applied to the intelligent inspection of relay protection pressure plates in substations, better assisting inspection robots in checking the on/off state of the plates, improving the identification accuracy of the relay protection plate state, reducing the labor intensity of inspection personnel, reducing misoperation in grid operation, avoiding economic losses, and ensuring safe and stable grid operation.

2) The main function of the invention is to better assist intelligent inspection robots in checking the on/off state of the pressure plates, improving the identification accuracy of the relay protection plate state, reducing the labor intensity of inspection personnel, reducing misoperation in grid operation, avoiding economic losses, and ensuring safe and stable grid operation.

3) The detection of highlight regions in the reference image by the computer image processing technology of the invention mainly comprises: converting the collected reference image to grayscale and using the improved two-dimensional OTSU threshold segmentation method to quickly detect highlight interference in the image, reducing the influence of noise and making the detection of highlight regions in the reference image more precise.

4) The invention uses the SIFT algorithm to detect feature points in the reference image and the auxiliary image and matches them with the nearest neighbor method; an improved RANSAC algorithm is introduced to remove false matching points and obtain the optimal perspective transformation matrix, and the auxiliary image is perspective-transformed onto the main image to repair its highlight regions. On the basis of the restored image, the operating state of each plate is judged from the inclination angle obtained by edge detection of the plate. The invention helps intelligent inspection robots better identify the operating state of the pressure plates in images and improves their anti-interference capability.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is the two-dimensional histogram.

Figure 2 shows the perspective transformation process.

Figure 3 is the flowchart of highlight region detection in the protection pressure plate image.

Figure 4 is the flowchart of highlight removal based on image fusion.

Figure 5 shows the steps of the perspective transformation based on the improved RANSAC algorithm.

Figure 6 is the flowchart of protection pressure plate state identification.

Figure 7 shows the overall structure of the method of the present invention.

DETAILED DESCRIPTION

The image fusion based method for identifying the operating state of a protection pressure plate comprises:

I. Highlight region detection based on the optimized OTSU algorithm:

1. Pressure plate image feature analysis:

The invention uses a threshold segmentation method to detect the highlight regions and performs image restoration on that basis, laying the foundation for identifying the operating state of the pressure plates.

2. Highlight region detection based on the two-dimensional OTSU algorithm:

To better separate the foreground from the background and improve the algorithm's robustness to noise, the invention extends the traditional one-dimensional OTSU algorithm to two dimensions. The specific steps are as follows:

Step 1: Let there be an image I whose gray values I(x, y) take L levels; the neighborhood average gray level of image I then also takes L levels.

Step 2: Let f(x, y) be the gray value of pixel (x, y) and g(x, y) the average gray value within the K×K neighborhood centered on pixel (x, y). Setting f(x, y) = i and g(x, y) = j forms a pair (i, j).

Step 3: Let the number of occurrences of the pair (i, j) be f_ij and compute the corresponding probability density P_ij = f_ij / N, i, j = 1, 2, …, L, where N is the total number of pixels in the image.

Step 4: Choose an arbitrary threshold vector (s, t) that divides the two-dimensional histogram of the image into 4 regions: regions B and C represent the foreground and background of the image, and regions A and D represent noise points, as shown in Figure 1.

Step 5: Let the probabilities of the background and the foreground be ω₁ and ω₂, with corresponding mean vectors μ₁ and μ₂, and let μ be the mean vector of the whole image:

$$\omega_1=\sum_{i=1}^{s}\sum_{j=1}^{t}P_{ij} \qquad (1)$$

where ω₁ is the probability of the background and P_ij is the probability density of the pair (i, j).

$$\omega_2=\sum_{i=s+1}^{L}\sum_{j=t+1}^{L}P_{ij} \qquad (2)$$

where ω₂ is the probability of the foreground.

$$\mu_1=(\mu_{1i},\mu_{1j})^{T}=\left(\sum_{i=1}^{s}\sum_{j=1}^{t}\frac{i\,P_{ij}}{\omega_1},\ \sum_{i=1}^{s}\sum_{j=1}^{t}\frac{j\,P_{ij}}{\omega_1}\right)^{T} \qquad (3)$$

where μ₁ is the mean vector of the background.

$$\mu_2=(\mu_{2i},\mu_{2j})^{T}=\left(\sum_{i=s+1}^{L}\sum_{j=t+1}^{L}\frac{i\,P_{ij}}{\omega_2},\ \sum_{i=s+1}^{L}\sum_{j=t+1}^{L}\frac{j\,P_{ij}}{\omega_2}\right)^{T} \qquad (4)$$

where μ₂ is the mean vector of the foreground.

$$\mu=(\mu_i,\mu_j)^{T}=\left(\sum_{i=1}^{L}\sum_{j=1}^{L}i\,P_{ij},\ \sum_{i=1}^{L}\sum_{j=1}^{L}j\,P_{ij}\right)^{T} \qquad (5)$$

where μ is the mean vector of the whole image.

Step 6: Compute the discrete measure tr(S_(s,t)) of the image from the discrete measure matrix S_(s,t):

$$S_{(s,t)}=\omega_1(\mu_1-\mu)(\mu_1-\mu)^{T}+\omega_2(\mu_2-\mu)(\mu_2-\mu)^{T} \qquad (6)$$

where S_(s,t) is the discrete measure matrix of the image.

$$tr(S_{(s,t)})=\omega_1\left[(\mu_{1i}-\mu_i)^{2}+(\mu_{1j}-\mu_j)^{2}\right]+\omega_2\left[(\mu_{2i}-\mu_i)^{2}+(\mu_{2j}-\mu_j)^{2}\right] \qquad (7)$$

where tr(S_(s,t)) is the discrete measure of the image.

Step 7: The larger the discrete measure, the larger the between-class variance; the maximum discrete measure therefore gives the optimal threshold (s*, t*):

$$(s^{*},t^{*})=\arg\max\{tr(S_{(s,t)})\} \qquad (8)$$

where (s*, t*) is the optimal threshold of the image.

After the optimal threshold is obtained by the above steps, it is used to binarize the grayscale image of 0 to 255 brightness levels, separating the foreground region from the background region; the foreground region is then the highlight region.
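As a minimal illustration of this search, and not the patent's own code, the following Python/OpenCV sketch builds the two-dimensional histogram of equations (1) to (5) and maximizes the trace of equation (7) under the usual simplification that the noise quadrants A and D are negligible (so ω₂ ≈ 1 − ω₁); the 3×3 neighborhood size and the file name in the usage note are assumptions.

```python
import cv2
import numpy as np

def otsu_2d_threshold(gray: np.ndarray, k: int = 3) -> tuple:
    """Brute-force 2-D OTSU: return the (s*, t*) of eq. (8)."""
    mean = cv2.blur(gray, (k, k))                 # g(x, y): K x K neighborhood mean
    hist, _, _ = np.histogram2d(gray.ravel(), mean.ravel(),
                                bins=256, range=[[0, 256], [0, 256]])
    P = hist / gray.size                          # P_ij of step 3

    # cumulative sums let every candidate (s, t) be scored in O(1)
    w1 = P.cumsum(axis=0).cumsum(axis=1)          # omega_1, eq. (1)
    idx = np.arange(256, dtype=np.float64)
    mi = (P * idx[:, None]).cumsum(axis=0).cumsum(axis=1)  # partial sums of i*P_ij
    mj = (P * idx[None, :]).cumsum(axis=0).cumsum(axis=1)  # partial sums of j*P_ij
    mu_i, mu_j = mi[-1, -1], mj[-1, -1]           # global mean vector, eq. (5)

    w1c = np.clip(w1, 1e-12, 1 - 1e-12)
    # tr(S) of eq. (7), rewritten with omega_2 = 1 - omega_1
    tr = ((mu_i * w1c - mi) ** 2 + (mu_j * w1c - mj) ** 2) / (w1c * (1 - w1c))
    s, t = np.unravel_index(np.argmax(tr), tr.shape)
    return int(s), int(t)

# usage (hypothetical file name): pixels above both thresholds form the
# bright foreground, i.e. the highlight region
# gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
# s, t = otsu_2d_threshold(gray)
# highlight = (gray > s) & (cv2.blur(gray, (3, 3)) > t)
```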

II. Highlight region removal based on image fusion:

Image fusion combines two or more captured images of the same target into a new image, effectively improving the utilization of image information so that the fused image describes the target more completely and clearly.

After the SIFT algorithm generates the feature description vectors, nearest neighbor feature matching depends too heavily on a preset threshold and leaves false matching pairs. The invention therefore introduces the RANSAC algorithm to further eliminate false matching pairs, complete multi-view image feature matching, and obtain the optimal perspective transformation matrix. At the same time, to avoid the drawback of random selection in the traditional RANSAC algorithm and to reduce unnecessary iterations and run time, the invention improves the RANSAC algorithm so that image perspective transformation and highlight removal work better.

1. Feature point detection:

The most classic feature point descriptor at present is the SIFT (Scale-Invariant Feature Transform) algorithm, which is widely used for feature point detection and for generating feature description vectors because it is invariant to rotation, scale change, and brightness change. The specific steps are as follows:

Step 1: Repeatedly downsample the input image I(x, y) to obtain a series of images of different sizes, and stack them from large to small, bottom to top, into a pyramid. Each level is then convolved with a two-dimensional Gaussian function G(x, y, σ) of continuously varying scale to obtain the scale space L(x, y, σ) of the image:

$$L(x,y,\sigma)=G(x,y,\sigma)*I(x,y) \qquad (9)$$

$$G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}e^{-(x^{2}+y^{2})/2\sigma^{2}} \qquad (10)$$

where * denotes the convolution operation and σ is the scale.

Step 2: The multiple images at each level of the scale space are collectively called an octave; subtracting adjacent layers within the same octave gives the difference-of-Gaussian images. Each pixel of every difference-of-Gaussian layer in an octave (except the top and bottom layers) is compared with 26 pixels: its 8 neighbors in the same layer and the 9×2 pixels in the adjacent layers above and below. When its value is the maximum or the minimum, the pixel is an extremum point. The difference-of-Gaussian function is:

$$D(x,y,\sigma)=L(x,y,k\sigma)-L(x,y,\sigma) \qquad (11)$$

where k is a fixed coefficient.

Step 3: The detected extrema are extrema of the discrete space, not the true feature points of the continuous space, so the scale-space difference-of-Gaussian function is curve-fitted to recompute the extremum coordinates, i.e., expanded with the Taylor formula:

$$D(X)=D+\frac{\partial D^{T}}{\partial X}X+\frac{1}{2}X^{T}\frac{\partial^{2}D}{\partial X^{2}}X \qquad (12)$$

where D(X) is the difference-of-Gaussian function and X = (x, y, σ)ᵀ.

Differentiating and setting the equation to zero gives the offset of the extremum:

$$\hat{X}=-\left(\frac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\frac{\partial D}{\partial X} \qquad (13)$$

The corresponding value of the extremum equation is:

$$D(\hat{X})=D+\frac{1}{2}\frac{\partial D^{T}}{\partial X}\hat{X} \qquad (14)$$

where D(X̂) is the value of the extremum equation at the offset X̂.

The original extremum coordinates plus the offset give the new extremum coordinates; the pixel value at the new coordinates is compared with the preset contrast threshold, extrema of low contrast are discarded, and the remaining extrema are the feature points.

Step 4: To make the descriptor rotation invariant, each feature point must be assigned an orientation. The gradient method is used to obtain the gradient magnitude and direction of the pixels in the neighborhood of each feature point; the gradient magnitude m(x, y) and direction θ(x, y) are:

$$m(x,y)=\sqrt{\left(L(x+1,y)-L(x-1,y)\right)^{2}+\left(L(x,y+1)-L(x,y-1)\right)^{2}} \qquad (15)$$

$$\theta(x,y)=\tan^{-1}\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)} \qquad (16)$$

where the scale used for L is the scale at which each feature point lies.

A histogram of these orientations is then accumulated, and the direction with the highest amplitude in the histogram is taken as the dominant orientation of the feature point. To make matching more robust, every direction whose amplitude exceeds 80% of the dominant amplitude is kept as an auxiliary orientation of the feature point.

Step 5: Take a 16×16 pixel window centered on the feature point and divide it into 4×4 subregions. In each subregion accumulate the gradient magnitudes in 8 directions with an orientation histogram, so that each subregion is represented by an 8-dimensional feature description vector; each feature point is thus described by a 4×4×8 = 128-dimensional feature vector. To obtain illumination invariance, this 128-dimensional vector is normalized and the gradient magnitudes are clipped at a threshold, which effectively reduces the influence of uneven illumination on the matching results.

Once the SIFT feature vectors of the reference image and the auxiliary image are generated, they are matched with the nearest neighbor method: a ratio threshold is set, and if the nearest Euclidean distance between two feature point descriptors divided by the second-nearest Euclidean distance is below this ratio threshold, the two feature points are considered correctly matched.
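As an illustrative sketch of steps 1 to 5 and the ratio test, not the patent's own implementation, the matching can be realized with OpenCV's SIFT; the file paths are placeholders, and the 0.6 ratio mirrors the initial Euclidean distance threshold used later by the improved RANSAC procedure.

```python
import cv2

def match_sift(ref_path: str, aux_path: str, ratio: float = 0.6):
    """Detect SIFT features in both images and keep matches that pass
    the nearest-neighbor ratio test described above."""
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    aux = cv2.imread(aux_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref, None)
    kp_aux, des_aux = sift.detectAndCompute(aux, None)

    # two nearest neighbors per reference descriptor
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_ref, des_aux, k=2)

    good = []
    for pair in knn:
        # ratio test: nearest distance / second-nearest distance < ratio
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    pts_ref = [kp_ref[m.queryIdx].pt for m in good]   # (x', y') coordinates
    pts_aux = [kp_aux[m.trainIdx].pt for m in good]   # (u, v) coordinates
    return pts_ref, pts_aux
```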

2. Perspective transformation based on the improved RANSAC algorithm:

Because image fusion for highlight removal requires the auxiliary image to match the reference image in viewing angle and size, the auxiliary image must be adjusted with a perspective transformation. The perspective transformation formula is as follows:

$$S\begin{bmatrix}x'\\ y'\\ 1\end{bmatrix}=H\begin{bmatrix}u\\ v\\ 1\end{bmatrix} \qquad (17)$$

where (x', y') are the coordinates of a feature matching point in the reference image, (u, v) the coordinates of the corresponding feature matching point in the auxiliary image, S the transformation coefficient between the images, and H the 3×3 transformation matrix:

$$H=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix} \qquad (18)$$

where T₁ = [a₁₁ a₁₂; a₂₁ a₂₂] can rotate, scale, and distort the image, T₂ = [a₁₃ a₂₃]ᵀ can translate the image, and T₃ = [a₃₁ a₃₂] produces the image perspective transformation, as shown in Figure 2.

Because the coordinates of the feature matching points of the reference image and the auxiliary image have already been obtained with the SIFT algorithm, randomly selecting 4 pairs of feature matching points suffices to determine the transformation matrix H, with which the auxiliary image can be perspective-transformed. The transformed auxiliary image pixel coordinates (x, y) are:

$$x=\frac{a_{11}u+a_{12}v+a_{13}}{a_{31}u+a_{32}v+a_{33}},\qquad y=\frac{a_{21}u+a_{22}v+a_{23}}{a_{31}u+a_{32}v+a_{33}} \qquad (19)$$

where (x, y) are the pixel coordinates of the transformed auxiliary image.
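A minimal sketch of equations (17) to (19) with OpenCV follows, assuming 4 matching pairs have already been chosen; the point coordinates, file name, and reference image size below are placeholders, not values from the patent.

```python
import cv2
import numpy as np

# 4 corresponding point pairs determine H exactly (eq. 17-18)
aux_pts = np.float32([[112, 80], [605, 95], [590, 410], [120, 398]])    # (u, v)
ref_pts = np.float32([[100, 100], [600, 100], [600, 400], [100, 400]])  # (x', y')

H = cv2.getPerspectiveTransform(aux_pts, ref_pts)

aux = cv2.imread("auxiliary.jpg")              # hypothetical file name
h, w = 480, 640                                # assumed size of the reference image
warped = cv2.warpPerspective(aux, H, (w, h))   # applies eq. (19) to every pixel
```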

Because nearest neighbor matching of two feature points depends too heavily on the preset threshold, the threshold cannot be judged precisely. When the threshold is large, many false matching pairs appear; when it is small, false matches are reduced but the number of matching pairs drops markedly, seriously affecting the optimal selection of the transformation matrix H. The RANSAC algorithm is therefore introduced to further remove false matching pairs and obtain the optimal transformation matrix.

The idea of the RANSAC algorithm is to fit an estimated model to a randomly selected subset of the data and to test the other data with the estimated model: data that fit the model are classified as inliers. If enough points are classified as hypothetical inliers, the estimated model is considered reasonable enough; all hypothetical inliers are then used to re-estimate the model, and the model is evaluated by the error rate of the inliers against it. This process is repeated a fixed number of times, and each model produced is either discarded because it has too few inliers or adopted because it is better than the existing model.

Since only 4 pairs of matching points are needed to compute the transformation matrix H, the invention proposes an improved RANSAC algorithm with the following steps:

Step 1: Perform initial matching of the valid feature points extracted by the SIFT algorithm with the nearest neighbor method; the initial Euclidean distance threshold is 0.6.

Step 2: Divide the image into 4 equal regions and check whether each of the 4 regions contains more than 4 pairs of feature matching points; if so, proceed to the next step, otherwise add 0.1 to the Euclidean distance threshold and return to the previous step to rematch.

Step 3: From each of the 4 regions select the 4 matching pairs with the smallest Euclidean distance, 16 pairs in total.

Step 4: Combine the 16 pairs into groups of 4, sort the groups 1, 2, …, N in ascending order of the sum of the Euclidean distances of their 4 pairs, and keep the first 50 groups.

Step 5: Take the 4 matching pairs of group 1, in sequence-number order, and compute the transformation matrix H.

Step 6: Verify all matching pairs in the image with the matrix H; when the inliers exceed 50% of all matching pairs, the currently computed H is taken as the optimal transformation matrix, otherwise return to the previous step and compute H from the next group of 4 pairs in sequence-number order.

The auxiliary image is adjusted with the optimal transformation matrix, and its corresponding region is masked over the highlight region of the reference image, yielding the highlight-free reference image.
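The ordered selection of steps 3 to 6 can be sketched as follows. This is an illustration, not the patent's code: the reprojection tolerance `tol` is an assumption, since the patent states the 50% inlier criterion but not the pixel tolerance of the inlier test, and the per-quadrant selection of the 16 strongest pairs is assumed to have been done upstream.

```python
from itertools import combinations

import cv2
import numpy as np

def improved_ransac(pairs, all_ref, all_aux, tol=3.0):
    """pairs: the 16 strongest matches (4 per image quadrant) as
    (ref_xy, aux_xy, distance) triples; all_ref/all_aux: float32 Nx2
    arrays of every matched point, used for the inlier check."""
    # step 4: order all 4-pair groups by their distance sum, keep the first 50
    groups = sorted(combinations(pairs, 4),
                    key=lambda g: sum(p[2] for p in g))[:50]

    for g in groups:                                  # steps 5-6, in order
        src = np.float32([p[1] for p in g])           # auxiliary-image points
        dst = np.float32([p[0] for p in g])           # reference-image points
        H = cv2.getPerspectiveTransform(src, dst)

        # project every auxiliary point with H and measure the residual
        proj = cv2.perspectiveTransform(all_aux.reshape(-1, 1, 2), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - all_ref, axis=1)
        if np.mean(err < tol) > 0.5:                  # inlier ratio above 50%
            return H
    return None                                       # no group reached 50%
```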

III. Identification of the pressure plate on/off state:

To accurately identify the state of the pressure plates in the reference image after highlight removal, the invention adopts a relay protection pressure plate state recognition method based on image processing and morphological feature analysis. First, to improve the accuracy of overall feature extraction, the highlight-free reference image undergoes color region screening, binarization, and morphological processing, and the connected regions are extracted with 8-connectivity; then area, size, and shape are analyzed from the morphological features to accurately extract the valid plate regions from all regions; finally, the state of each valid region is identified from the orientation angle of the plate in its on and off positions, and the valid plates are sorted by centroid coordinates to obtain the state sequence of all valid plates.

1. Extraction of connected regions in the pressure plate image:

To better reflect the global and local feature information of the image and accurately extract the connected regions of the pressure plate image, the following steps are taken:

Step 1: The reference image after highlight removal is a color image: valid pressure plates are red and yellow overall, spare plates are camel and red, and the background region is white, so the red and yellow regions can be screened out by setting RGB thresholds. Since other components, labels, and similar interference in the picture could cause mis-screening, extensive experiments show that for red and yellow pixels the difference between the maximum and minimum of the R, G, and B channel values is no less than 40; pixels meeting this condition are retained, and the remaining pixels are set to equal R, G, and B values, i.e., turned black.

Step 2: To increase processing speed, the reference image after red-yellow screening is converted to grayscale, a binarization threshold is obtained with the OTSU algorithm, and the grayscale image is binarized with that threshold.

Step 3: Binarization produces some rough edges, and holes appear where the plates connect, which would seriously degrade subsequent feature extraction. The binary image is therefore processed morphologically: the holes are filled with dilation and erosion operations, the connected regions are extracted with 8-connectivity (a pixel is connected to a neighboring pixel above, below, left, right, or at any of the four diagonal corners), and the N extracted connected regions are numbered 1, 2, …, N.
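A sketch of steps 1 to 3 with OpenCV follows; the 5×5 closing kernel is an assumption, and the channel-spread rule implements the max minus min of at least 40 criterion found experimentally above.

```python
import cv2
import numpy as np

def extract_plate_regions(bgr: np.ndarray):
    """Red/yellow screening, OTSU binarization, morphological hole
    filling, and 8-connected region labeling (steps 1-3)."""
    # step 1: keep pixels with max(R,G,B) - min(R,G,B) >= 40, blacken the rest
    spread = bgr.max(axis=2).astype(np.int16) - bgr.min(axis=2).astype(np.int16)
    screened = bgr.copy()
    screened[spread < 40] = 0

    # step 2: grayscale, then OTSU threshold and binarization
    gray = cv2.cvtColor(screened, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # step 3: dilation + erosion (closing) fills holes; label 8-connected regions
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        closed, connectivity=8)
    return n, labels, stats, centroids
```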

2. Valid pressure plate region screening:

To accurately screen out the valid pressure plate regions, morphological features are analyzed in three respects: area, size, and shape. The specific steps are as follows:

Step 1: Area analysis. The image after connected region extraction may contain invalid spare plates, labels, and the like; observation shows that the connected regions containing them are small. An area threshold V_area-thre is therefore set to remove such interference regions:

$$V_{area\text{-}thre}=0.3\times\frac{1}{5}\sum_{i=1}^{5}V_{area}(i) \qquad (20)$$

where the threshold V_area-thre is 0.3 times the average area of the five largest pixel regions in the binary image, and V_area(i) is the area of the i-th region when the regions are sorted from largest to smallest.

According to formula (20), a connected region whose pixel area is greater than the threshold V_area-thre is judged a candidate valid plate region; otherwise it is an interference region.

Step 2: Size analysis. A valid plate in the extracted image has a certain size: in the X and Y directions, the boundary length of a valid plate region bears a certain ratio to the pixel size of the image. The pixel size thresholds X_width-thre and Y_width-thre are therefore set from the image pixel counts P_X and P_Y in the X and Y directions (formula (21)).

According to formula (21), a connected region whose boundary lengths in both the X and Y directions are greater than the corresponding thresholds is judged a candidate valid plate region; otherwise it is an interference region.

Step 3: Shape analysis. A valid plate in the extracted image has a certain shape, so a valid plate region has a certain equivalent aspect ratio in the image. To eliminate other interference information of similar shape, the equivalent aspect ratio threshold S_ratio-thre is set to 2 < S_ratio-thre < 5; if the equivalent aspect ratio of a connected region lies within this range, the region is judged a candidate valid plate region, otherwise an interference region.

Step 4: Search all connected regions in the image, repeating steps 1-3 until the N-th region has been examined; a region judged a candidate valid plate region under all three analyses is a final valid plate region.
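The three analyses can be sketched as below. Because the patent does not state the exact ratios of formula (21) or how the equivalent aspect ratio is computed, the size thresholds and the bounding-box ratio here are illustrative assumptions.

```python
import numpy as np

def screen_valid_plates(stats: np.ndarray, img_w: int, img_h: int):
    """Area/size/shape screening (steps 1-4) over the region statistics
    returned by cv2.connectedComponentsWithStats."""
    areas = stats[1:, 4]                       # column 4 is area; row 0 is background
    top5 = np.sort(areas)[::-1][:5]
    v_thre = 0.3 * top5.mean()                 # formula (20)

    x_thre, y_thre = img_w / 20, img_h / 20    # assumed stand-ins for formula (21)
    valid = []
    for i, (x, y, w, h, area) in enumerate(stats[1:], start=1):
        if area <= v_thre:                     # step 1: area analysis
            continue
        if w <= x_thre or h <= y_thre:         # step 2: size analysis
            continue
        ratio = max(w, h) / max(min(w, h), 1)  # equivalent aspect ratio (assumed)
        if not (2 < ratio < 5):                # step 3: shape analysis
            continue
        valid.append(i)                        # region i passes all three analyses
    return valid
```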

3. Identification of the pressure plate on/off state:

To accurately identify the on/off state of the plates in the highlight-free protection plate image after valid region screening, the invention uses the orientation angle of the plate in its two states: the orientation angle is ±90° when the plate is switched in and ±45° when it is switched out, with a margin of ±10°. The criterion is:

$$state=\begin{cases}1, & 80^{\circ}\le|\theta|\le 90^{\circ}\\ 0, & 35^{\circ}\le|\theta|\le 55^{\circ}\end{cases} \qquad (22)$$

where the switched-in state is labeled 1 and the switched-out state 0. After the on/off state of each valid plate is identified, the plates are sorted by centroid coordinates from left to right and top to bottom, finally yielding a state sequence containing only 0s and 1s.
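A sketch of this final step is given below. The patent does not specify how the orientation angle is computed, so central image moments are used here as one plausible choice, and the 50-pixel row granularity used for sorting is an assumption.

```python
import cv2
import numpy as np

def plate_states(labels: np.ndarray, valid_ids, centroids):
    """Classify each valid plate region by inclination angle (formula 22)
    and return the 0/1 sequence sorted left-to-right, top-to-bottom."""
    results = []
    for i in valid_ids:
        mask = (labels == i).astype(np.uint8)
        m = cv2.moments(mask, binaryImage=True)
        # major-axis orientation from central moments, in degrees
        theta = 0.5 * np.degrees(np.arctan2(2 * m["mu11"],
                                            m["mu20"] - m["mu02"]))
        if 80 <= abs(theta) <= 90:
            state = 1                          # switched in: +-90 deg, +-10 margin
        elif 35 <= abs(theta) <= 55:
            state = 0                          # switched out: +-45 deg, +-10 margin
        else:
            continue                           # outside both tolerance bands
        cx, cy = centroids[i]
        results.append((cy, cx, state))

    # top-to-bottom rows first, then left-to-right within each row
    results.sort(key=lambda r: (round(r[0] / 50), r[1]))
    return [s for _, _, s in results]
```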

In summary, the invention takes the protection pressure plate images photographed during intelligent robot inspection as its basis, detects the highlight regions in the image with a threshold segmentation method, and on that basis repairs the image by image fusion, eliminating the interference of on-site light sources with plate state identification in substations; finally, on the basis of the restored image, the operating state is judged from the inclination angle obtained by edge detection of the plate. This better assists inspection robots in checking the on/off state of the plates, improves the identification accuracy of the relay protection plate state, reduces the labor intensity of inspection personnel, reduces misoperation in grid operation, avoids economic losses, and ensures safe and stable grid operation.

Claims (1)

1. The perspective transformation method based on the improved RANSAC algorithm is characterized by comprising the following steps of:
The perspective transformation formula is as follows:

$$S\begin{bmatrix}x'\\ y'\\ 1\end{bmatrix}=H\begin{bmatrix}u\\ v\\ 1\end{bmatrix}$$

wherein (x', y') is the coordinate value of the feature matching point of the reference image, (u, v) is the coordinate value of the corresponding feature matching point of the auxiliary image, S is the transformation coefficient between images, and H is the 3×3 transformation matrix, namely:

$$H=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}$$

wherein T₁ = [a₁₁ a₁₂; a₂₁ a₂₂] can subject the image to rotation, scaling, and distortion transformation, T₂ = [a₁₃ a₂₃]ᵀ can subject the image to translation transformation, and T₃ = [a₃₁ a₃₂] can generate the image perspective transformation;

Because the coordinates of the feature matching points of the reference image and the auxiliary image are obtained by using the SIFT algorithm, a transformation matrix H can be obtained by randomly selecting 4 pairs of feature matching points, and perspective transformation can be carried out on the auxiliary image by using the transformation matrix; the transformed auxiliary image pixel coordinates (x, y) are formulated as follows:

$$x=\frac{a_{11}u+a_{12}v+a_{13}}{a_{31}u+a_{32}v+a_{33}},\qquad y=\frac{a_{21}u+a_{22}v+a_{23}}{a_{31}u+a_{32}v+a_{33}}$$
Only 4 pairs of matching points are needed when calculating the transformation matrix H, and an improved RANSAC algorithm is provided, which comprises the following steps:

Step ①: first, perform initial matching of the valid feature points extracted by the SIFT algorithm using the nearest neighbor method, with an initially selected Euclidean distance threshold of 0.6;

Step ②: divide the image into 4 equal regions and judge whether the number of feature matching point pairs in each of the 4 regions is greater than 4; if so, proceed to the next step, otherwise add 0.1 to the Euclidean distance threshold and return to the previous step for re-matching;

Step ③: screen out the 4 pairs of matching points with the smallest Euclidean distance from each of the 4 regions, 16 pairs in total;

Step ④: combine the 16 pairs of matching points into groups of 4 pairs, sort the groups 1, 2, …, N from small to large according to the sum of the Euclidean distances of the 4 pairs in each group, and select the first 50 groups;

Step ⑤: first take the 4 pairs of matching points with sequence number 1 to calculate the transformation matrix H;

Step ⑥: check all matching point pairs in the image with the matrix H; when the proportion of inliers among all matching point pairs is greater than 50%, consider the currently calculated matrix H the optimal transformation matrix, otherwise return to the previous step and select the next group of 4 pairs of matching points in sequence-number order to calculate the transformation matrix H.
CN202210352267.1A 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm Active CN114926391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210352267.1A CN114926391B (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210352267.1A CN114926391B (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm
CN202010631727.5A CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based operating state identification method of protective platen

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010631727.5A Division CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based operating state identification method of protective platen

Publications (2)

Publication Number Publication Date
CN114926391A CN114926391A (en) 2022-08-19
CN114926391B true CN114926391B (en) 2024-10-25

Family

ID=73227207

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202210352274.1A Active CN114926392B (en) 2020-07-03 2020-07-03 Highlight area removal method based on image fusion
CN202210352267.1A Active CN114926391B (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm
CN202010631727.5A Active CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based operating state identification method of protective platen

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210352274.1A Active CN114926392B (en) 2020-07-03 2020-07-03 Highlight area removal method based on image fusion

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010631727.5A Active CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based operating state identification method of protective platen

Country Status (1)

Country Link
CN (3) CN114926392B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560713A (en) * 2020-12-18 2021-03-26 广东智源机器人科技有限公司 Image recognition method, device, equipment and cooking system
CN114998581B (en) * 2020-12-22 2024-11-22 三峡大学 Extraction method of effective pressure plate area of protective pressure plate based on multi-threshold and K-means clustering
CN113096120A (en) * 2021-04-30 2021-07-09 随锐科技集团股份有限公司 Method and system for identifying on-off state of protection pressing plate
CN113361548B (en) * 2021-07-05 2023-11-14 北京理工导航控制科技股份有限公司 Local feature description and matching method for highlight image
CN114120282B (en) * 2021-11-29 2025-06-27 神思电子技术股份有限公司 A method, device and storage medium for removing interference from images taken by a CTC system
CN114913370B (en) * 2022-05-06 2025-06-20 国网河北省电力有限公司石家庄供电分公司 State automatic detection method and device based on deep learning and morphological fusion
CN114919792B (en) * 2022-06-01 2023-09-12 中迪机器人(盐城)有限公司 System and method for detecting abnormality of film sticking of steel belt
CN115861652A (en) * 2022-09-19 2023-03-28 广州品唯软件有限公司 Page element positioning method, device, computer equipment and storage medium
CN116342511A (en) * 2023-03-08 2023-06-27 杭州电子科技大学 PWS automatic area selection scoring method based on deep learning and color difference evaluation
CN117036674B (en) * 2023-07-12 2025-04-11 国网湖北省电力有限公司超高压公司 A method for positioning and identifying hard pressure plates in intelligent substations
CN118464385B (en) * 2024-05-07 2024-10-25 浙江博来电子科技有限公司 PCB lamp bead defect detection method, device and storage medium
CN118657779B (en) * 2024-08-20 2024-11-19 湖南苏科智能科技有限公司 Intelligent detection method, system, equipment and medium for construction process of power distribution station
CN120198741B (en) * 2025-05-23 2025-08-05 国网上海市电力公司 A multi-modal fusion protection device pressure plate state recognition method and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150043697A (en) * 2013-10-15 2015-04-23 한국과학기술연구원 Texture-less object recognition using contour fragment-based features with bisected local regions
CN104867137A (en) * 2015-05-08 2015-08-26 中国科学院苏州生物医学工程技术研究所 Improved RANSAC algorithm-based image registration method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5791336B2 (en) * 2011-04-01 2015-10-07 キヤノン株式会社 Image processing apparatus and control method thereof
CN102289676B (en) * 2011-07-30 2013-02-20 山东鲁能智能技术有限公司 Method for identifying mode of switch of substation based on infrared detection
EP2648157A1 (en) * 2012-04-04 2013-10-09 Telefonaktiebolaget LM Ericsson (PUBL) Method and device for transforming an image
JP6056319B2 (en) * 2012-09-21 2017-01-11 富士通株式会社 Image processing apparatus, image processing method, and image processing program
CN107424181A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of improved image mosaic key frame rapid extracting method
CN107253485B (en) * 2017-05-16 2019-07-23 北京交通大学 Foreign matter invades detection method and foreign matter invades detection device
CN107481201A (en) * 2017-08-07 2017-12-15 桂林电子科技大学 A kind of high-intensity region method based on multi-view image characteristic matching
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm
CN110728282A (en) * 2019-10-11 2020-01-24 哈尔滨理工大学 Adaptive Calibration Method Based on Dynamic Measurement


Also Published As

Publication number Publication date
CN114926392B (en) 2025-06-13
CN114926392A (en) 2022-08-19
CN111915544A (en) 2020-11-10
CN114926391A (en) 2022-08-19
CN111915544B (en) 2022-05-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240626

Address after: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen Wanzhida Enterprise Management Co.,Ltd.

Country or region after: China

Address before: 443002 No. 8, University Road, Xiling District, Yichang, Hubei

Applicant before: CHINA THREE GORGES University

Country or region before: China

TA01 Transfer of patent application right

Effective date of registration: 20240918

Address after: No. 2 Wenfeng East Road, Tianma Street, Changshan County, Quzhou City, Zhejiang Province, China 324299

Applicant after: Changshan power supply company of State Grid Zhejiang Electric Power Co.,Ltd.

Country or region after: China

Address before: 1003, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Tongsheng Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Applicant before: Shenzhen Wanzhida Enterprise Management Co.,Ltd.

Country or region before: China

GR01 Patent grant