CN106372552B - Human body target recognition and positioning method
- Publication number: CN106372552B (application CN201610755695.3A)
- Authority: CN (China)
- Prior art keywords: human body, body target, positioning, positioning result, target
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06K7/10009: Methods or arrangements for sensing record carriers by electromagnetic radiation, using wavelengths larger than 0.1 mm, e.g. radio waves or microwaves
- G06V40/10: Recognition of human or animal bodies in image or video data, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
Abstract
The present invention provides a human body target recognition and positioning method, comprising: obtaining a first positioning result for a human body target using an ultra-high-frequency (UHF) radio-frequency identification (RFID) system; obtaining a second positioning result for the human body target using a computer vision system; fusing the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target; and optimizing the positioning accuracy of the third positioning result with a preset fusion optimization algorithm to obtain a final positioning result for the human body target. The invention achieves recognition and positioning of human body targets with high efficiency and high accuracy.
Description
TECHNICAL FIELD

The present invention relates to the technical field of security identification and positioning, and in particular to a human body target recognition and positioning method and device.
BACKGROUND

The demand for security, and the security industry arising from it, has accompanied the development of human civilization. The application of closed-circuit television (CCTV) to urban security in New York in the 1960s marked the birth of the modern security industry. Since then, with the development of technology and increasingly severe public-safety risks, many technologies have been applied to the security industry, giving rise to a series of security devices. Traditionally, a complete security system consists of three parts: a monitoring module, a risk-assessment module, and a response module. Although the spread of intelligent technology has blurred the boundaries between the three, the monitoring module is undoubtedly still the front end of the entire system. The sensitivity of the monitoring equipment and its ability to gather information determine the accuracy and robustness of the whole security system.

Because the three commonly used detection technologies (closed-circuit television, i.e., computer vision systems; motion detectors; and ultra-high-frequency (UHF) radio-frequency identification (RFID) systems) each have technical weaknesses, no single monitoring system can meet the modern security industry's requirements for efficiency, accuracy, and automation. In recent years, in addition to research on new monitoring methods, various schemes for fusing multiple monitoring systems have been proposed. Most of these solutions focus on fusing the final results of the individual monitoring subsystems, and such high-level fusion usually cannot fully exploit the strengths of each subsystem. It is therefore necessary to propose a deeper fusion of multiple monitoring means to overcome the limitations of existing schemes and improve the efficiency of the fused system.

In view of this, how to provide an efficient and accurate human body target recognition and positioning method has become a technical problem that needs to be solved.
SUMMARY OF THE INVENTION

To solve the above technical problems, the present invention provides a human body target recognition and positioning method that can recognize and position human body targets with high efficiency and high accuracy.
In a first aspect, the present invention provides a human body target recognition and positioning method, comprising:

obtaining a first positioning result for a human body target using an ultra-high-frequency (UHF) radio-frequency identification (RFID) system;

obtaining a second positioning result for the human body target using a computer vision system;

fusing the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target; and

optimizing the positioning accuracy of the third positioning result with a preset fusion optimization algorithm to obtain a final positioning result for the human body target.
Optionally, obtaining the first positioning result for the human body target using the UHF RFID system comprises:

using an angle-of-arrival (AOA) positioning algorithm based on a passive tag column to locate the position of the tag column on the human body target in the UHF RFID system, thereby obtaining the first positioning result for the human body target;

wherein the UHF RFID system comprises a tag column on the human body target and at least three UHF RFID devices arranged at intervals along a straight line; any two adjacent UHF RFID devices are separated by a preset first distance, and the tag column comprises two electronic tags separated by a preset second distance.

Optionally, the preset second distance is greater than zero and less than λ/4, where λ is the wavelength of the carrier wave transmitted by the UHF RFID device.
Optionally, using the passive-tag-column AOA positioning algorithm to locate the tag column on the human body target in the UHF RFID system, thereby obtaining the first positioning result, comprises:

obtaining, for each UHF RFID device, the angle of arrival of the echo signal returned by the tag column and received through the device's antenna; and

performing two-dimensional positioning of the tag-column position according to the geometric relationship among the angles of arrival, the position of the tag column, and the antenna positions of the UHF RFID devices, thereby obtaining the first positioning result for the human body target.
Optionally, obtaining the second positioning result for the human body target using the computer vision system comprises:

obtaining images of the human body target captured by a camera placed at a preset position in the computer vision system;

using a Gaussian mixture model (GMM) algorithm to track the variation of the value of each pixel in the images;

using a histogram-of-oriented-gradients (HOG) algorithm to detect the human body target in the images; and

performing a coordinate-system transformation on the human body target detected in the images to obtain its real-world coordinates, thereby obtaining the second positioning result for the true position of the human body target.

Optionally, performing the coordinate-system transformation on the human body target detected in the images to obtain its real-world coordinates, thereby obtaining the second positioning result, comprises:

establishing a three-dimensional Cartesian coordinate system with the position of the camera as the origin, expressing the unit coordinate vectors of the world coordinate system in terms of the three unit coordinate vectors of this Cartesian coordinate system, and on this basis computing a transfer matrix that maps the pixel coordinates of the human body target to its actual plane coordinates; the real-world coordinates of the human body target are then calculated from the transfer matrix, thereby obtaining the second positioning result for the true position of the human body target.
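The patent names a transfer matrix but does not give its form. One common way to realize such a pixel-to-ground-plane mapping, assuming the target stands on a flat floor, is a planar homography fitted from known calibration points; the sketch below (function names and point values are illustrative, not from the patent) fixes H[2][2] = 1 and solves the resulting linear system:

```python
def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(pixel_pts, world_pts):
    """Transfer matrix H (with H[2][2] fixed to 1) mapping pixel (u, v)
    to ground-plane (x, y), from four point correspondences."""
    A, b = [], []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v])
        b.append(x)
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v])
        b.append(y)
    h = _solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def pixel_to_world(H, u, v):
    """Apply the transfer matrix and de-homogenize."""
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / w,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / w)

# Hypothetical calibration: four floor points seen at known pixel positions.
pixel_pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
world_pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
H = fit_homography(pixel_pts, world_pts)
x, y = pixel_to_world(H, 0.5, 0.5)  # centre of the pixel square
```

In practice the four calibration pairs would be measured once for the fixed camera; the same H then converts every detected foot point to floor coordinates.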
Optionally, using the HOG algorithm to detect the human body target in the images comprises:

converting the image to a grayscale image;

normalizing the pixel values of the grayscale image using a gamma-correction algorithm;

computing the gradient direction and magnitude at each pixel;

dividing the normalized grayscale image into square cells of equal size and accumulating the gradient direction and magnitude of each pixel within a cell, thereby obtaining a feature vector for each cell;

combining several adjacent cells into a rectangular block and normalizing the feature vectors within the block to obtain the block's feature descriptor; and

combining the feature descriptors of all blocks to obtain the HOG feature vector of the image, from which the human body target in the image is detected.
Optionally, fusing the first positioning result and the second positioning result with the preset fusion algorithm to obtain the third positioning result comprises:

performing a preliminary fusion of the first positioning result and the second positioning result using a variance-weighted average algorithm; and

using a Kalman filter algorithm to estimate the true value of the signal from the measured data, further improving the accuracy of the preliminarily fused positioning result and obtaining the third positioning result for the human body target.
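The patent names these algorithms without giving their equations; the standard forms of an inverse-variance (variance-weighted) average and a one-dimensional constant-position Kalman filter are sketched below. All variable names, the example positions, and the noise parameters are illustrative assumptions:

```python
def variance_weighted_average(z1, var1, z2, var2):
    """Inverse-variance weighting: fused estimate and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

class Kalman1D:
    """Scalar constant-position Kalman filter refining fused measurements."""
    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0  # state estimate and its variance
        self.q, self.r = q, r    # process and measurement noise variances

    def step(self, z):
        p_pred = self.p + self.q            # predict
        k = p_pred / (p_pred + self.r)      # Kalman gain
        self.x = self.x + k * (z - self.x)  # update with measurement z
        self.p = (1.0 - k) * p_pred
        return self.x

# Illustrative fusion: RFID says 2.0 m (var 0.04), vision says 2.2 m (var 0.01).
z, var = variance_weighted_average(2.0, 0.04, 2.2, 0.01)
kf = Kalman1D(x0=z, p0=var, q=1e-4, r=var)
```

The fused variance is always below either input variance, which is why the preliminary fusion already improves on each subsystem before the Kalman stage smooths the sequence over time.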
Optionally, the preset fusion optimization algorithm comprises an initialization process, a loop-tracking process, and a display process;

the initialization process is used to initialize a newly captured human body target;

the loop-tracking process is used to continuously update and track the position of the human body target;

the display process is used to draw the loop-tracking result on each image frame;

wherein the initialization process and the loop-tracking process each comprise a preprocessing module, a data-comparison module, and an accuracy-enhancement module, wherein:

the preprocessing module estimates the region where the human body target may currently be located, based on the target position given by the first positioning result or the target's position in the previous frame;

the data-comparison module compares the target positions given by the first positioning result and the second positioning result with the previous target position, selects the most likely target position, and passes it to the accuracy-enhancement module; and

the accuracy-enhancement module optimizes the accuracy of the third positioning result according to the most likely target position selected by the data-comparison module, obtaining the final positioning result for the human body target.
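The three modules can be read as a simple gated tracking loop. The sketch below is only a structural illustration: the gate radius and the averaging refinement rule are invented for the example and are not taken from the patent:

```python
import math

class FusionTracker:
    """Gated tracking loop: preprocessing (gate around the previous position),
    data comparison (pick the nearer candidate), accuracy enhancement (blend)."""

    def __init__(self, init_pos, gate=1.0):
        self.pos = init_pos  # initialization: first captured target position
        self.gate = gate     # radius of the region the target may occupy

    @staticmethod
    def _dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def update(self, rfid_pos, vision_pos):
        # data comparison: candidate closest to the previous position
        candidates = [p for p in (rfid_pos, vision_pos) if p is not None]
        if not candidates:
            return self.pos
        best = min(candidates, key=lambda p: self._dist(p, self.pos))
        # preprocessing gate: reject candidates far outside the expected region
        if self._dist(best, self.pos) > self.gate:
            return self.pos
        # accuracy enhancement: blend with the previous position (illustrative)
        self.pos = ((self.pos[0] + best[0]) / 2.0,
                    (self.pos[1] + best[1]) / 2.0)
        return self.pos
```

The display process would simply draw `tracker.pos` on each frame after every `update` call.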
It can be seen from the above technical solutions that the human body target recognition and positioning method of the present invention fuses, via a preset fusion algorithm, the first positioning result obtained with the UHF RFID system and the second positioning result obtained with the computer vision system, and then optimizes the positioning accuracy of the fused result with a preset fusion optimization algorithm to obtain the final positioning result for the human body target. The method thereby achieves recognition and positioning of human body targets with high efficiency and high accuracy.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic flowchart of the human body target recognition and positioning method provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of the basic model of the passive-tag-column angle-of-arrival (AOA) positioning algorithm provided by an embodiment of the present invention;

FIG. 3 is a schematic diagram of the geometric relationship among the angle of arrival, the tag-column position, and the antenna positions in the passive-tag-column AOA positioning algorithm provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of the antenna placement in the experiment verifying and evaluating the first positioning result obtained by locating the tag column on the human body target with the passive-tag-column AOA positioning algorithm according to an embodiment of the present invention;

FIG. 5a is a schematic flowchart of a specific implementation of the human body target recognition and positioning method provided by an embodiment of the present invention;

FIG. 5b is a schematic flowchart of a specific implementation of the human body target recognition and positioning method provided by an embodiment of the present invention.
DETAILED DESCRIPTION

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Table 1 compares the advantages and disadvantages of UHF RFID systems, computer vision systems, and motion detectors. As Table 1 shows, only the computer vision system and the UHF RFID system are capable of locating human body targets over a large area. This common capability means that the two systems can be deeply fused by sharing the position information of human body targets.

Table 1
FIG. 1 shows a schematic flowchart of the human body target recognition and positioning method provided by an embodiment of the present invention. As shown in FIG. 1, the method of this embodiment comprises the following steps 101-104.

101. Obtain a first positioning result for a human body target using a UHF RFID system.

102. Obtain a second positioning result for the human body target using a computer vision system.

103. Fuse the first positioning result and the second positioning result with a preset fusion algorithm to obtain a third positioning result for the human body target.

104. Optimize the positioning accuracy of the third positioning result with a preset fusion optimization algorithm to obtain a final positioning result for the human body target.
In the method of this embodiment, the first positioning result obtained with the UHF RFID system and the second positioning result obtained with the computer vision system are fused by a preset fusion algorithm, and the positioning accuracy of the fused result is optimized by a preset fusion optimization algorithm to obtain the final positioning result for the human body target. The method thereby achieves recognition and positioning of human body targets with high efficiency and high accuracy.
In a specific application, step 101 may comprise:

using an angle-of-arrival (AOA) positioning algorithm based on a passive tag column to locate the position of the tag column on the human body target in the UHF RFID system, thereby obtaining the first positioning result for the human body target;

wherein the UHF RFID system comprises a tag column on the human body target and at least three UHF RFID devices arranged at intervals along a straight line; any two adjacent UHF RFID devices are separated by a preset first distance, and the tag column comprises two electronic tags separated by a preset second distance.

The preset second distance is greater than zero and less than λ/4, where λ is the wavelength of the carrier wave transmitted by the UHF RFID device.

In a specific application, the UHF RFID device is a UHF RFID reader.
Specifically, using the passive-tag-column AOA positioning algorithm to locate the tag column on the human body target, thereby obtaining the first positioning result, may further comprise steps 101a and 101b (not shown in the figures):

101a. Obtain, for each UHF RFID device, the angle of arrival of the echo signal returned by the tag column and received through the device's antenna.

Specifically, step 101a may comprise steps S1 and S2 (not shown in the figures):

S1. Obtain the arrival-phase offset values of the two echo signals returned respectively by the two electronic tags in the tag column, as received by each UHF RFID device through its antenna.

S2. Obtain, from the arrival-phase offset values of the two echo signals, the wavelength of the carrier wave transmitted by the UHF RFID device, and the preset second distance, the angle of arrival at which each UHF RFID device receives, through its antenna, the echo signal returned by the tag column.

Further, step S2 may specifically comprise:
obtaining, from the arrival-phase offset values φ_A and φ_B of the two echo signals, the wavelength λ of the carrier wave transmitted by the UHF RFID device, and the preset second distance d, the angle of arrival θ at which each UHF RFID device receives, through its antenna, the echo signal returned by the tag column, using a first formula;

wherein the first formula is:

θ = arccos(λ(φ_A - φ_B) / (4πd))  (1)
101b. Perform two-dimensional positioning of the tag-column position according to the geometric relationship among the angles of arrival, the tag-column position, and the antenna positions of the UHF RFID devices, thereby obtaining the first positioning result for the human body target.

Specifically, formula (1) above is derived as follows:
Referring to FIG. 2, which shows the basic model of the passive-tag-column AOA positioning algorithm, the model contains the antenna C of any one UHF RFID device in the system and the tag column, i.e., the two electronic tags A and B separated by the preset second distance d. Let the coordinates of tag A be (0, 0), those of tag B be (d, 0), and those of antenna C be (x, h). The difference between the distances from tags A and B to antenna C is then:

AC - BC = sqrt(x^2 + h^2) - sqrt((x - d)^2 + h^2)  (2)
Since d is much smaller than the distance from antenna C to tags A and B, replacing the right-hand side of formula (2) by its first-order expansion in d gives:

AC - BC ≈ d · x / sqrt(x^2 + h^2) = d · cos θ  (3)
Considering the linear relationship between the arrival-phase offset of the echo signal received by each UHF RFID device through its antenna and the round-trip propagation distance, we have:

φ_A - φ_B = (4π/λ)(AC - BC) mod 2π  (4)

where φ_A and φ_B are the arrival-phase offset values of the two echo signals returned by tags A and B, respectively, as received by the UHF RFID device through antenna C.
Letting Δφ = φ_A - φ_B, formulas (3) and (4) can be combined to give:

cos θ = λΔφ / (4πd)  (5)

from which formula (1) follows.
Since the tag column introduces a new degree of freedom into the localization of the target position (the orientation of the tag column), step 101b requires three or more antennas to achieve two-dimensional positioning of the tag-column position. The two-dimensional coordinates of the tag column can then be computed from the obtained angles of arrival by simple geometry; FIG. 3 is a schematic diagram of the geometric relationship among the angles of arrival, the tag-column position, and the antenna positions.
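A numerical sketch of formulas (1)-(5) follows. For simplicity it assumes the tag column is parallel to the antenna baseline, so each measured angle directly gives the bearing from the tag column to an antenna and two antennas suffice; the patent's general case needs three antennas because the column orientation is unknown. All numbers and function names are illustrative:

```python
import math

def angle_of_arrival(phi_a, phi_b, wavelength, d):
    """Formula (1): theta = arccos(lambda * dphi / (4 * pi * d)),
    with the phase difference wrapped into [-pi, pi)."""
    dphi = (phi_a - phi_b + math.pi) % (2.0 * math.pi) - math.pi
    c = wavelength * dphi / (4.0 * math.pi * d)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

def locate_tag(ant1, theta1, ant2, theta2):
    """Intersect two bearing lines tag = ant_i - r_i * (cos t_i, sin t_i),
    where t_i is the angle between the column axis and the sight line."""
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    bx, by = ant1[0] - ant2[0], ant1[1] - ant2[1]
    det = -u1[0] * u2[1] + u2[0] * u1[1]
    r1 = (-bx * u2[1] + u2[0] * by) / det  # Cramer's rule for r1
    return ant1[0] - r1 * u1[0], ant1[1] - r1 * u1[1]
```

With d = 8 cm and λ = 32 cm, a measured phase difference of π/2 corresponds to cos θ = 0.5, i.e. θ = 60°, matching formula (1).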
It should be noted that the preset second distance d is selected as follows:

On the one hand, cos θ in formula (5) ranges over [-1, 1], while the measured phase difference Δφ is only known within a 2π interval; formula (5) therefore determines θ unambiguously for every Δφ if and only if d < λ/4. In a specific application, the carrier wavelength of a UHF RFID device is about 32 cm, so d should be smaller than 8 cm. On the other hand, the error analysis below shows that the random positioning error is inversely proportional to the tag spacing d, so d should be as large as possible to reduce the random error. In this embodiment, therefore, d should preferably be close to 8 cm.
To verify and evaluate the passive-tag-column AOA algorithm, this embodiment conducted experiments with a UHF RFID reader of model IPJ-REV-R420-GX21M and antennas of model E911011PCR, with a directional gain of 11 dBic and a beamwidth of 40°; the antenna placement is shown in FIG. 4. A1 and A2 are the antennas, and b and b' are the antennas' reference lines. The tag columns were placed inside the rectangle D1D2D3D4, with a grid spacing of 0.5 m between tag-column positions. The experimental results show a maximum positioning deviation of 0.43 m and a minimum positioning deviation of 0.07 m. This result meets the theoretical expectation, and the detection accuracy of the method of this embodiment is high.
In a specific application, step 102 may comprise steps 102a-102d (not shown in the figures):

102a. Obtain images of the human body target captured by a camera placed at a preset position in the computer vision system.

102b. Use a Gaussian mixture model (GMM) algorithm to track the variation of the value of each pixel in the images.

It should be noted that in the GMM algorithm, K Gaussian distribution models are established for each pixel and used to track the variation of that pixel's value; that is, each pixel in the image is represented by K Gaussian distribution models describing the probability of its value over the time domain.
The probability that the current pixel takes the value x_t = [x_{1,t}, x_{2,t}, ..., x_{n,t}] can be expressed as:

P(x_t) = Σ_{k=1}^{K} w_{k,t} · η(x_t, μ_{k,t}, Σ_{k,t})  (6)

where η(·) denotes the Gaussian probability density function, K is the number of Gaussian models, w_{k,t} is the weight of the k-th Gaussian model of the current pixel at time t, μ_{k,t} is its mean, Σ_{k,t} is its covariance matrix, and n is a positive integer. For simplicity, the three channels of the RGB or YUV color space are usually assumed to be mutually independent, i.e.

Σ_{k,t} = σ_{k,t}^2 · I  (7)

where I is the 3×3 identity matrix and σ_{k,t} is the standard deviation of each color component of the k-th Gaussian model of the current pixel at time t.
The Gaussian models of each pixel are sorted by w_{k,t}/σ_{k,t}; this value characterizes the probability that the corresponding Gaussian model represents the background. If a pixel value does not match any of the Gaussian models, a new Gaussian model is created and the lowest-ranked model is discarded.
To further simplify the computation, at time t the pixel value x_t is assigned to the k-th Gaussian model if and only if it satisfies the following formula (8):

|x_t - μ_{k,t-1}| < D · σ_{k,t-1}  (8)

where D is a user-defined parameter, usually taken as 2.5.
In that case, the parameters of the matched Gaussian model are updated recursively as follows:

w_{k,t} = (1 - ρ) · w_{k,t-1} + ρ  (9)

μ_{k,t} = (1 - ρ) · μ_{k,t-1} + ρ · x_t  (10)

σ_{k,t}^2 = (1 - ρ) · σ_{k,t-1}^2 + ρ · (x_t - μ_{k,t})^T (x_t - μ_{k,t})  (11)

where ρ is the learning rate, taking values between 0 and 1; the value of ρ determines how quickly the Gaussian model is updated.
未匹配的高斯分布模型的权值按照下述公式(12)更新:The weights of the unmatched Gaussian distribution model are updated according to the following formula (12):
wi,t=(1-α)wi,t-1 i≠k (12)w i,t =(1-α)wi ,t-1 i≠k (12)
If the pixel value matches none of the Gaussian components, a new component is created whose mean is set to the pixel value x_t, while its weight and standard deviation are set to preset default values.
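As an illustration, the matching test (8) and the updates (9), (10) and (12) can be sketched for a single grayscale pixel as follows. This is a minimal sketch: the parameter values D, ρ, α and the default weight and standard deviation are illustrative assumptions, not values fixed by the method.

```python
import numpy as np

def update_mixture(x, w, mu, sigma, D=2.5, rho=0.05, alpha=0.05,
                   w0=0.05, sigma0=30.0):
    """One background-model update for a single (grayscale) pixel.

    x            : current pixel value
    w, mu, sigma : weight, mean and std arrays of the K Gaussian components
    """
    matched = np.abs(x - mu) < D * sigma        # matching test, formula (8)
    if matched.any():
        k = int(np.argmax(matched))             # first matching component
        for i in range(len(w)):
            if i == k:
                w[i] = (1 - rho) * w[i] + rho   # matched weight, formula (9)
            else:
                w[i] = (1 - alpha) * w[i]       # unmatched weights, formula (12)
        mu[k] = (1 - rho) * mu[k] + rho * x     # mean update, formula (10)
    else:
        # no match: replace the lowest-ranked component (smallest w/sigma)
        # with a new one centred on x and given default weight and std
        k = int(np.argmin(w / sigma))
        w[k], mu[k], sigma[k] = w0, float(x), sigma0
    return w, mu, sigma
```

Pixels whose value is explained only by low-weight, high-variance components are then treated as foreground.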
102c. Detect the human body target in the image using the histogram of oriented gradients (HOG) algorithm.

Specifically, step 102c may include steps A1–A6, which are not shown in the figure:

A1. Convert the image to a grayscale image.

A2. Normalize the pixel values of the grayscale image using a Gamma correction algorithm.

In a specific application, the Gamma correction algorithm is expressed as:
I'(x,y) = I(x,y)^Γ (13)

where I(x,y) is the input pixel value, I'(x,y) is the output pixel value, and Γ is the gamma parameter, with a value range of (0,1) and a typical value of 0.5.

It will be appreciated that step A2 reduces the influence of illumination changes and random noise.
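For example, formula (13) can be applied to an 8-bit grayscale image as follows. This is a sketch; scaling to [0, 1] before the power law is an implementation assumption, since the power law presupposes normalized intensities.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Apply I'(x, y) = I(x, y)**gamma (formula (13)) to an 8-bit image."""
    norm = img.astype(np.float64) / 255.0        # normalise to [0, 1]
    out = norm ** gamma                          # power-law compression
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```

With Γ = 0.5 dark regions are brightened relative to bright ones, which evens out illumination differences before the gradients are computed.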
A3. Compute the gradient direction and magnitude at each pixel.

In a specific application, the gradients G_x and G_y of each pixel are computed as:

G_x(x,y) = H(x+1,y) - H(x-1,y) (14)
G_y(x,y) = H(x,y+1) - H(x,y-1)

where G_x(x,y) is the horizontal gradient component at the pixel with coordinates (x,y), G_y(x,y) is the vertical gradient component at that pixel, and H(x,y) is the pixel value at coordinates (x,y).

The gradient magnitude G(x,y) is:

G(x,y) = sqrt(G_x(x,y)² + G_y(x,y)²) (15)

and the gradient direction θ'(x,y) is:

θ'(x,y) = arctan(G_y(x,y)/G_x(x,y)) (16)
Typically, this step can be implemented as a two-dimensional convolution of the image matrix with an edge detection operator such as the Canny operator.
A4. Divide the normalized grayscale image into square cells of equal size, and accumulate the gradient directions and magnitudes of the pixels in each cell to obtain a feature vector for each square cell.

A5. Group several adjacent square cells into a rectangular block, and normalize the feature vectors within the block to obtain the block's feature descriptor.

A6. Concatenate the feature descriptors of all blocks to obtain the gradient histogram feature vector of the image, and use it to detect the human body target in the image.
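Steps A3–A6 can be sketched as follows. This is a simplified illustration: cells are 8×8 pixels with 9 unsigned orientation bins, and each cell histogram is L2-normalized directly instead of through the block-wise grouping of step A5; all of these sizes are illustrative assumptions.

```python
import numpy as np

def hog_descriptor(gray, cell=8, bins=9):
    """Toy HOG feature vector for a grayscale image (steps A3-A6)."""
    gray = gray.astype(np.float64)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]          # horizontal gradient
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]          # vertical gradient
    mag = np.hypot(gx, gy)                            # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned direction
    h, w = gray.shape
    feats = []
    for i in range(0, h - cell + 1, cell):            # step A4: square cells
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # normalise
    return np.concatenate(feats)                      # step A6: concatenate
```

The resulting vector would then be fed to a classifier (typically a linear SVM in HOG-based pedestrian detection) to decide whether a detection window contains a person.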
102d. Perform a coordinate system transformation on the human body target detected in the image to obtain its real-world coordinates, and thereby obtain the second positioning result for the true position of the human body target.

Specifically, step 102d may include:

establishing a three-dimensional Cartesian coordinate system with the camera position as the origin; expressing the unit coordinate vectors of the world coordinate system in terms of the three unit coordinate vectors of this Cartesian coordinate system; on this basis, deriving the transfer matrix that maps the pixel coordinates of the human body target to its actual plane coordinates; and computing the real-world coordinates of the human body target from the transfer matrix, thereby obtaining the second positioning result for the true position of the human body target.
It should be noted that, in this embodiment, the angle between the camera direction and the horizontal plane can be regarded as the angle between the camera direction and the straight line determined by the camera position and a point at infinity along the vector V. The distance L from an image point to the image center (also called the vanishing point) is proportional to the tangent of the angle between the camera direction and the line from the vanishing point to the imaged point [4]. This is expressed by the following formulas (17) and (18):

x' = f·tan α (17)
z' = f·tan β (18)

where α is the horizontal angle between the imaging center and the imaged point, β is the pitch angle between the imaging center and the imaged point, x' and z' are the horizontal and vertical pixel distances from the image point to the center of the imaging plane, and f is the proportionality constant (the focal length in pixels).
As described above, the key to estimating the camera direction is to find the pixel distance between the image center and the "point" at infinity on an axis of the world coordinate system. The position of this "point" at infinity in the image can be obtained by computing the "intersection" in the image of two straight lines that are parallel in reality and lie in a vertical or horizontal plane of the world coordinate system.

In indoor monitoring, the joints between floor tiles or ceiling panels can usually be regarded as ideal parallel reference lines, so the image position of the "point" at infinity can be obtained by the following steps:
a) Use the Hough transform to find all straight lines in the image;
b) select the lines whose length meets the requirement;
c) filter out all lines whose slope does not meet the requirement, according to the prior attitude information from the pan-tilt unit;
d) merge any two lines that are too close to each other into one;
e) find the point through which three or more of the remaining lines pass.
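Step e) amounts to finding the point best shared by several lines. A minimal sketch, with each already filtered and merged line given in the form a·x + b·y + c = 0, is a least-squares intersection; the Hough detection and filtering of steps a)–d) are assumed to have been done beforehand (e.g. with OpenCV's HoughLines).

```python
import numpy as np

def common_point(lines):
    """Least-squares intersection of lines (a, b, c) with a*x + b*y + c = 0.

    For three or more lines that nearly pass through one point, the returned
    point minimises the sum of squared line-equation residuals.
    """
    A = np.array([(a, b) for a, b, _ in lines], dtype=np.float64)
    rhs = np.array([-c for _, _, c in lines], dtype=np.float64)
    point, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return point
```

Applied to the surviving parallel reference lines, the returned point is the image of the "point" at infinity, i.e. the vanishing point.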
A three-dimensional Cartesian coordinate system with the camera position as its origin is established, called the camera coordinate system. Let the camera direction be the coordinate axis V, let the horizontal axis pointing to the right be the coordinate axis U, and let the coordinate axis W be given by U×V. The three axes of the three-dimensional world coordinate system are X, Y and Z. From the camera direction information obtained above, the unit coordinate vectors of the world coordinate system can be expressed in terms of the three unit vectors of the camera coordinate system as:
Thus:
On this basis, the transfer matrix that maps the target's pixel coordinates to its actual plane coordinates is further derived.

Assume that the Z coordinate of the human body target is a fixed value h. Since the mapping from camera coordinates to world coordinates is linear, the horizontal plane z = h in the world coordinate system must map to a plane in the camera coordinate system. Suppose this plane is
v = a·u + b·w + c (22)
From formula (20) we obtain

Then

Clearly, the right-hand side of formula (24) must be independent of u and w, which yields the equations

Solving these gives:
The actual coordinates [x, y]^T can therefore be expressed in terms of the image coordinates [u, w]^T as

Likewise, the image coordinates can be expressed in terms of the actual coordinates as

so that the actual coordinates of the identified person are obtained by matrix computation.
In a specific application, the above step 103 may include steps 103a and 103b, which are not shown in the figure:

103a. Preliminarily fuse the first positioning result and the second positioning result using a variance-weighted averaging algorithm.

In a specific application, the variance-weighted averaging algorithm may proceed as follows:

Assume that the position measurement equations of the UHF RFID system and the computer vision system are, respectively:
B = X + U (31)

C = X + V (32)
where B is the position vector measured by the UHF RFID system, C is the position vector measured by the computer vision subsystem, X is the actual position vector, and U and V are noise vectors satisfying:

where Q and R are both positive definite matrices.
Let Z = K·B + (I - K)·C (35)
where K is the coefficient matrix that minimizes the covariance matrix of Z, called the optimal coefficient matrix.

The offset between Z and the actual position X is:

The covariance matrix of Z is:

If an offset coefficient matrix δK is added to the optimal coefficient matrix K, then
δP = δK·W + (δK·W)^T + δK(Q + R)δK^T (39)
where:

W = Q·K^T - R(I - K)^T (40)

By the above assumption, (Q + R) is positive definite, so whatever the value of δK, the third term of formula (39) is always positive; when W ≠ 0, the sign of the first two terms of formula (39) changes with δK. Hence the covariance P is minimal if and only if W ≡ 0, i.e.
K = R(Q + R)^-1 (41)

(I - K) = Q(Q + R)^-1 (42)

Z and its covariance matrix are then:

Z = R(Q + R)^-1·B + Q(Q + R)^-1·C (43)

P = (Q + R)^-1(QRQ + RQR)(Q + R)^-1 (44)
The determinant of the covariance matrix is:

|P| is smaller than both |Q| and |R|.
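Formulas (41)–(44) translate directly into code. This is a minimal sketch; the particular Q and R used in the test below are arbitrary, illustrative covariances.

```python
import numpy as np

def fuse(B, C, Q, R):
    """Variance-weighted fusion of two position estimates, formulas (41)-(44).

    B, C : position vectors from the UHF RFID and vision subsystems
    Q, R : their (positive definite) noise covariance matrices
    """
    S = np.linalg.inv(Q + R)
    Z = R @ S @ B + Q @ S @ C                 # fused estimate, formula (43)
    P = S @ (Q @ R @ Q + R @ Q @ R) @ S       # fused covariance, formula (44)
    return Z, P
```

In the scalar case this reduces to the familiar inverse-variance weighting Z = (r·B + q·C)/(q + r) with variance qr/(q + r), which is smaller than both q and r, matching the determinant property stated above.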
103b. Use a Kalman filtering algorithm to estimate the true value of the signal from the measured data, further improving the accuracy of the preliminarily fused positioning result and obtaining the third positioning result for the human body target.

In a specific application, the Kalman filtering algorithm may proceed as follows:

Define the state vector at time t_k as X_k. X_k is driven by the random noise sequence W_k-1 according to the driving equation:
X_k = Φ_k,k-1·X_k-1 + Γ_k-1·W_k-1 (46)
The measurement equation is:

Z_k = H_k·X_k + V_k (47)

where Φ_k,k-1 is the transition matrix from time t_k-1 to time t_k, Γ_k-1 is the driving matrix, H_k is the measurement matrix, and V_k is the random measurement noise.
The random noise sequences W_k and V_k satisfy the following relations:

The estimate of the state vector X_k is obtained by the following recursions:

One-step state estimate:

State estimate:

Filter gain:

One-step state estimation variance:

State estimation variance:
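The recursion above can be sketched as a single filter step. This is a minimal sketch using the standard Kalman formulas that the five listed quantities correspond to; matrix names follow the driving and measurement equations (46)–(47).

```python
import numpy as np

def kalman_step(x_est, P_est, z, Phi, Gamma, H, Q, R):
    """One Kalman recursion: predict with (46), update with measurement (47).

    Q, R : covariances of the driving noise W and the measurement noise V
    """
    x_pred = Phi @ x_est                                    # one-step state estimate
    P_pred = Phi @ P_est @ Phi.T + Gamma @ Q @ Gamma.T      # one-step variance
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # filter gain
    x_new = x_pred + K @ (z - H @ x_pred)                   # state estimate
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred          # estimation variance
    return x_new, P_new
```

Fed with the preliminarily fused positions Z from step 103a as measurements, repeated application of this step smooths the track and yields the third positioning result.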
In a specific application, the preset fusion optimization algorithm in the above step 104 may include an initialization process, a loop tracking process, and a display process, wherein:

the initialization process is used to initialize a newly captured human body target;

the loop tracking process is used to continuously update and track the position of the human body target;

the display process is used to draw the loop tracking result in each image frame;

and both the initialization process and the loop tracking process contain a preprocessing module, a data comparison module, and an accuracy enhancement module, wherein:

the preprocessing module estimates the region in which the human body target is currently likely to be, from the target position given by the first positioning result or from the target's position in the previous frame;

the data comparison module compares the target position given by the first positioning result and the target position given by the second positioning result with the previous target position, selects the most likely target position, and sends it to the accuracy enhancement module;

the accuracy enhancement module optimizes the accuracy of the third positioning result according to the most likely target position selected by the data comparison module, to obtain the final positioning result for the human body target.
To evaluate the effectiveness of the preset fusion optimization algorithm, this embodiment conducted experiments on a surveillance video recorded by a commercial-grade closed-circuit surveillance camera (HIKVISION DS-2DC2202-DE3/W). Because the complexity of the electromagnetic environment at the experimental site made it difficult to meet the requirements of UHF RFID positioning, simulated results were used for the UHF RFID positioning data. The video is about 6 seconds long, comprising 130 frames. The results show that false detections were completely eliminated, while the positioning accuracy improved by about an order of magnitude: the root mean square error decreased from 0.0673 m for the UHF RFID system and 0.1226 m for the computer vision system to 0.0107 m. The output of the Kalman filter shows a further reduction of the absolute error compared with the preliminary fusion result, with a root mean square value of 0.0071 m, an improvement of about 30%, and the target position is accurately marked.
Fig. 5a and Fig. 5b each show a specific schematic flowchart of the human body target recognition and positioning method provided by an embodiment of the present invention.
In the human body target recognition and positioning method of this embodiment, the first positioning result, obtained by locating the human body target with a UHF RFID system, and the second positioning result, obtained by locating the human body target with a computer vision system, are fused by a preset fusion algorithm, and the positioning accuracy of the fusion result is optimized by a preset fusion optimization algorithm to obtain the final positioning result for the human body target. The method thus achieves recognition and positioning of human body targets with high efficiency and high accuracy.
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present application. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes that element. The orientations or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention. Unless otherwise expressly specified and limited, the terms "installed", "connected", and "coupled" should be understood in a broad sense: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two components. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
Numerous specific details are set forth in the description of the present invention. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description. Similarly, it should be understood that, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention. It should be noted that, in the absence of conflict, the embodiments of the present application and the features of the embodiments may be combined with each other. The invention is not limited to any single aspect or embodiment, nor to any combination and/or permutation of these aspects and/or embodiments. Moreover, each aspect and/or embodiment of the invention may be used alone or in combination with one or more other aspects and/or embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention; they should all be covered by the scope of the claims and description of the present invention.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610755695.3A CN106372552B (en) | 2016-08-29 | 2016-08-29 | Human body target recognition positioning method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106372552A CN106372552A (en) | 2017-02-01 |
| CN106372552B true CN106372552B (en) | 2019-03-26 |
Family
ID=57901811
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610755695.3A Expired - Fee Related CN106372552B (en) | 2016-08-29 | 2016-08-29 | Human body target recognition positioning method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106372552B (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106911915B (en) * | 2017-02-28 | 2020-05-29 | 潍坊恩源信息科技有限公司 | Commodity information acquisition system based on augmented reality technology |
| CN107273799A (en) * | 2017-05-11 | 2017-10-20 | 上海斐讯数据通信技术有限公司 | A kind of indoor orientation method and alignment system |
| CN107608541B (en) * | 2017-10-17 | 2021-03-05 | 宁波视睿迪光电有限公司 | Three-dimensional attitude positioning method and device and electronic equipment |
| CN107782304B (en) * | 2017-10-26 | 2021-03-09 | 广州视源电子科技股份有限公司 | Mobile robot positioning method and device, mobile robot and storage medium |
| SG10201913005YA (en) * | 2019-12-23 | 2020-09-29 | Sensetime Int Pte Ltd | Method, apparatus, and system for recognizing target object |
| CN111833397B (en) * | 2020-06-08 | 2022-11-29 | 西安电子科技大学 | A data conversion method and device for direction finding target positioning |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101559600A (en) * | 2009-05-07 | 2009-10-21 | Shanghai Jiao Tong University | Service robot grasp guidance system and method thereof |
| CN101661098A (en) * | 2009-09-10 | 2010-03-03 | Shanghai Jiao Tong University | Multi-robot automatic locating system for robot restaurant |
| CN102848388A (en) * | 2012-04-05 | 2013-01-02 | Shanghai University | Multi-sensor based positioning and grasping method for service robot |
| CN104330771A (en) * | 2014-10-31 | 2015-02-04 | Fushihuizhi Technology (Shanghai) Co., Ltd. | Indoor RFID precise positioning method and device |
| CN105180943A (en) * | 2015-09-17 | 2015-12-23 | Nanjing Zhongda Dongbo Information Technology Co., Ltd. | Ship positioning system and ship positioning method |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102011005513A1 (en) * | 2011-03-14 | 2012-09-20 | Kuka Laboratories Gmbh | Robot and method for operating a robot |
Non-Patent Citations (1)
| Title |
|---|
| Research on Moving Target Detection and Tracking Algorithms in Video Sequence Images; Song Jiasheng; China Doctoral Dissertations Full-text Database, Information Science & Technology; 20141215; full text |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190326; Termination date: 20200829 |