CN102801927A - High-speed image acquiring method
- Publication number: CN102801927A
- Authority: CN (China)
- Legal status: Granted
Description
Technical Field

The invention relates to the field of high-speed imaging in image detection, and in particular to a high-speed image acquisition method.
Background Art

The high sensitivity of the compound eye is generally defined as high temporal resolution, that is, a high number of sampling frames per second. According to the theory of evolution, insects adopted a multi-aperture evolutionary strategy to enlarge their field of view; at the same time, because the focal length of each sub-eye is small, a small image is formed, which reduces the energy consumed by image processing. To obtain images with higher temporal resolution, an insect can selectively sample the images of the sub-eyes in different channels at high speed. Compared with global image acquisition, this targeted acquisition increases the number of sampling frames and thus improves the insect's sensitivity.

At present, high-speed imaging acquisition systems are realized mainly with either a high-speed camera or multiple coded low-speed cameras.

In the process of realizing the present invention, the inventor found that the prior art has at least the following disadvantages and deficiencies:

A high-speed camera relies on a high-speed acquisition and buffering system, which is costly and consumes a large amount of energy; moreover, the exposure time of each frame is short, so underexposure is unavoidable.

A system of multiple coded low-speed cameras relies mainly on the image acquisition process, the setting of the exposure instants of the different cameras, and image reconstruction afterwards. Although the cost is low and the exposure time is sufficient, high-speed images cannot be obtained in real time, and a certain degree of distortion appears after image reconstruction.
Summary of the Invention

The present invention provides a high-speed image acquisition method that acquires high-speed images in real time, reduces cost and energy consumption, and avoids image distortion, as described in detail below.

A high-speed image acquisition method comprises the following steps:
(1) establishing a one-to-one correspondence between each sub-eye and its imaging area on the image sensor, where the number of sub-eyes and the number of imaging areas are both N;

(2) establishing a global low-speed windowing mode through the correspondence;

(3) establishing a local high-speed basic tracking mode through the correspondence;

(4) establishing an adaptive object tracking mode through the correspondence;

(5) according to a third preset condition, acquiring high-speed image sequences with the global low-speed windowing mode together with the local high-speed basic tracking mode, or with the global low-speed windowing mode together with the adaptive object tracking mode; the mode whose high-speed image sequence has the higher sampling frequency is taken as the final mode, and the image sequence obtained with the final mode is taken as the final high-speed image sequence.
The global low speed specifically means that the image sensor works at a sampling frequency and over a sampling interval that satisfy a first preset condition.

Establishing the global low-speed windowing mode through the correspondence specifically comprises:

1) obtaining an image sensor whose total pixel count, maximum sampling frequency and number of windows are determined;

2) converting the angular velocity of a moving object into the moving speed of the object's imaging area on the image sensor through the correspondence;

3) obtaining the range of the object's moving speed under a second preset condition;

4) obtaining the number of windowing layers according to the range of the moving speed;

5) obtaining the windowing mode from the number of windowing layers, the total pixel count and the maximum sampling frequency;

where the windowing mode comprises the sampling frequency within the window range and the window position.
The local high speed specifically means that the size and position of the window change with time and that the sampling frequency is higher than the sampling frequency at the global low speed.

Establishing the local high-speed basic tracking mode through the correspondence specifically comprises:

1) obtaining the initial imaging position of the object on the image sensor at the global low speed;

2) determining the basic tracking mode with the current window containing the initial imaging position as the center;

where the basic tracking mode specifically consists of the sampling frequencies of the current window and of its neighboring windows.
Establishing the adaptive object tracking mode through the correspondence specifically comprises:

1) obtaining the initial imaging position of the object on the image sensor at the global low speed;

2) determining the windowed region for the next moment from the centroid coordinates of the initial imaging position, the windowed region being sector-shaped;

3) determining the radius r of the sector, its opening angle, and the angle θ between the bisector of the opening angle and the horizontal axis;

4) determining the size and position of the windowed region.
The beneficial effects of the technical solution provided by the present invention are as follows. Compared with the prior art, the method uses the windowing capability of the image sensor to multiply the image acquisition frequency while keeping a certain idle rate; it effectively maintains a high signal-to-noise ratio, acquires high-speed images in real time, and reduces cost and energy consumption; the non-overlapping fields of view of the sub-eyes avoid image distortion; in addition, the adaptive object tracking mode has a strong learning ability and can predict the size and position of the window at the next moment from the windowed region in which the object was imaged at the previous moment.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of global windowing provided by the present invention;

Fig. 2 is a schematic diagram of edge windowing provided by the present invention;

Fig. 3 is a graph of the relationship between the target's moving speed and the number of windowing layers provided by the present invention;

Fig. 4 is a schematic diagram of the three parameters of the windowed region (the radius r, the angle θ and the opening angle) provided by the present invention;

Fig. 5 is a schematic diagram of the adaptive windowed-region estimation provided by the present invention;

Fig. 6 is a flowchart of a high-speed image acquisition method provided by the present invention.
Detailed Description of the Embodiments

To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

In both civilian and military fields, the requirements on the ability to capture high-speed targets are increasingly demanding, and special applications require image or video acquisition that is simultaneously high-speed, miniaturized, energy-saving and low-distortion.

To acquire high-speed images in real time, reduce cost and energy consumption, and avoid image distortion, an embodiment of the present invention provides a high-speed image acquisition method; see Figs. 1 to 6 and the description below.
101: Establish a one-to-one correspondence between each sub-eye and its imaging area on the image sensor; the number of sub-eyes and the number of imaging areas are both N.

Before the one-to-one correspondence is established, it is ensured that the fields of view of the sub-eyes neither overlap nor leave blind areas. The specific method of establishing the correspondence is well known to those skilled in the art and is not described further here.
102: Establish the global low-speed windowing mode through the correspondence.

Here, global low speed means that the image sensor works at a sampling frequency and over a sampling interval that satisfy the first preset condition.

This step specifically comprises:

1) Obtain an image sensor whose total pixel count, maximum sampling frequency and number of windows are determined.

Consider one image-sensor area and assume the sensor is square, with M×M pixels in the horizontal and vertical directions (M = 1024 in this example). The upper limit of the sampling frequency of the high-speed data interface used is F (in fps), so the upper limit of the bandwidth is F×M² pixel·fps (F = 20 fps here). Assume the windows are also square, of size l×l pixels (l = 256), and that their positions can be adjusted adaptively; then k windows can be opened along each of the horizontal and vertical directions (k = M/l = 4). The total pixel count of the image sensor can therefore be expressed as k²l².
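The following Python sketch simply checks the arithmetic above with the example values from the description (M = 1024, F = 20 fps, l = 256); the variable names are ours.

```python
# Example values from the description; the variable names are ours.
M = 1024                      # sensor side length, pixels
F = 20                        # upper limit of the data-interface sampling frequency, fps
l = 256                       # window side length, pixels

k = M // l                    # windows per side: 4
total_pixels = M * M          # equals k**2 * l**2
bandwidth_limit = F * M * M   # upper limit of the bandwidth, pixel * fps

print(k, total_pixels == k ** 2 * l ** 2, bandwidth_limit)
# 4 True 20971520
```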
2) Convert the angular velocity of the moving object into the moving speed of the object's imaging area on the image sensor through the correspondence.

After the imaging system has been calibrated, the speed v of the object on the image sensor is expressed uniformly in "windows traversed per second".

3) Obtain the range of the object's moving speed under the second preset condition.

4) Obtain the number of windowing layers according to the range of the moving speed.

First, assume that the field of view of the compound-eye lens is 180°, corresponding to four windows in the horizontal or the vertical direction, and make a rough estimate of the object's angular velocity.

When the object's moving speed is greater than Fk/2 (i.e. 40 windows/s, corresponding to an object-space speed of 1800°/s), a global windowing strategy is used to monitor the motion of the object, as shown in Fig. 1.

When the object's moving speed is less than Fk/2, an edge-windowing strategy is used, as shown in Fig. 2. The number of edge-windowing layers is (k − Fk²/(4v)) rounded up to the nearest integer. Here the angular velocity is taken as 45°/s (i.e. 1 window/s), slightly above the upper limit of the human eye's angular-velocity resolution (35°/s); in this case one windowing layer is used.
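As a rough illustration of the layer-count rule, a minimal Python sketch follows, using the example values k = 4 and F = 20 fps. Clamping the result to at least one layer and at most k/2 layers, and falling back to global windowing above Fk/2, are our assumptions; the description only gives the ceiling formula and the single-layer example.

```python
import math

def edge_window_layers(v, k=4, F=20):
    """Number of edge-windowing layers for an object moving at v (windows
    traversed per second) on a sensor with k x k windows and a data
    interface limited to F fps.

    Implements the stated rule ceil(k - F*k**2 / (4*v)); clamping to the
    range [1, k//2] and the fallback above F*k/2 are assumptions.
    """
    if v >= F * k / 2:                  # too fast for edge windowing
        return k // 2                   # fall back to global windowing (assumption)
    layers = math.ceil(k - F * k * k / (4 * v))
    return min(max(layers, 1), k // 2)

# 45 deg/s in object space is 1 window/s here (180 deg field of view, 4 windows),
# which the description says needs a single edge layer.
print(edge_window_layers(1.0))    # -> 1
print(edge_window_layers(26.0))   # -> 1  (below the 26.7 windows/s threshold)
print(edge_window_layers(30.0))   # -> 2
```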
As Fig. 3 shows, one windowing layer is generally sufficient; more than one layer is needed only when the object moves quickly. The correspondence between the number of windowing layers and the object speed (both the speed on the sensor and the angular velocity) is shown in Table 1.

Table 1: Correspondence between the number of windowing layers and the object speed (k = 4)

As Table 1 shows, when the object's moving speed is below 26.7 windows/s, a single layer of edge windows is enough to meet the detection requirement.

5) Obtain the windowing mode from the number of windowing layers, the total pixel count and the maximum sampling frequency.

The windowing mode includes the sampling frequency within the window range, the window positions, and so on.
103: Establish the local high-speed basic tracking mode through the correspondence.

Here, local high speed means that the size and position of the window change with time and that the sampling frequency is higher than the sampling frequency at the global low speed.

1) Obtain the initial imaging position of the object on the image sensor at the global low speed.

2) Determine the basic tracking mode with the current window containing the initial imaging position as the center.

The basic tracking mode specifically consists of the sampling frequencies of the current window and of its neighboring windows.

The sampling frequency of each neighboring window is adjusted according to its distance from the current window: the closer the window, the higher its sampling frequency, and the farther the window, the lower its sampling frequency. In this way the direction in which the object may leave the current window and enter the next one, and the corresponding sampling frequency, can be estimated and balanced reasonably well.
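The description does not give a concrete weighting function for the neighboring windows, only the rule that nearer windows are sampled faster. The sketch below uses an inverse-distance weighting as one possible reading; the peak rate of 320 fps is our extrapolation (reading a single 256×256 window over the F×M² bandwidth, i.e. F×k² with F = 20 and k = 4), not a figure stated in the patent.

```python
import math

def neighbor_sampling_rates(current, k=4, f_max=320.0):
    """Assign a sampling frequency to every window of a k x k grid so that
    windows nearer the current window are sampled faster.

    The inverse-distance weighting and the f_max value are assumptions; the
    description only states "the closer the window, the higher the frequency".
    """
    ci, cj = current
    rates = {}
    for i in range(k):
        for j in range(k):
            d = math.hypot(i - ci, j - cj)      # grid distance to the current window
            rates[(i, j)] = f_max / (1.0 + d)   # current window gets f_max, others less
    return rates

rates = neighbor_sampling_rates(current=(1, 2))
print(round(rates[(1, 2)]), round(rates[(1, 1)]), round(rates[(3, 0)]))
# 320 160 84  -- the current window is sampled fastest, distant windows slowest
```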
The main choices here are the size and the number of the windows. In general, the smaller the windows and the larger their number, the better the tracking performance and the finer the description of detail. However, since the image of the object on the sensor cannot be made arbitrarily small, the number of windows is generally positively correlated with cost. The window must be larger than the object's imaging area on the sensor; for this method it is recommended that the window be 81-100 times the size of the imaging area.

104: Establish the adaptive object tracking mode through the correspondence.

1) Obtain the initial imaging position of the object on the image sensor at the global low speed.

Because a compound-eye optical lens usually has a short focal length, the magnification of the object is small; when the object is far from the lens, the imaging area of the target in the image at each moment can be approximately regarded as a single point.
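As an illustration of this point approximation, the sketch below reduces the object's imaging region in a window to its intensity-weighted centroid; the thresholding scheme and the use of NumPy are our assumptions, since the description does not specify how the centroid is extracted.

```python
import numpy as np

def imaging_centroid(window):
    """Reduce the object's imaging region in a window to a single point:
    the intensity-weighted centroid of the pixels above a crude threshold.

    `window` is a 2-D array of pixel intensities; the thresholding scheme is
    an illustrative assumption, not something the description specifies.
    """
    mask = window > window.mean() + 3 * window.std()   # rough foreground detection
    if not mask.any():
        return None                                    # no object in this window
    ys, xs = np.nonzero(mask)
    w = window[ys, xs].astype(float)
    return (float((xs * w).sum() / w.sum()),           # x_i
            float((ys * w).sum() / w.sum()))           # y_i
```

The returned coordinates play the role of the centroid Oi(xi, yi) used in the next step.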
2) The windowed region for the next moment is determined from the centroid coordinates of the initial imaging position, and the windowed region is sector-shaped.

Let the imaging coordinates at moment i be Oi(xi, yi). The object's coordinates at the next moment, Oi+1(xi+1, yi+1), fall within a sector-shaped region with an opening angle, a radius r > 0, and an angle θ between the bisector of the opening angle and the horizontal axis, where θ takes absolute values from 0° to 360°. The windowed region at the next moment is determined by the centroid coordinates of the images at several preceding moments, and its shape is a sector whose center is the centroid position Oi(xi, yi) of the image at the previous moment. The three parameters r, θ and the opening angle therefore completely describe the shape of the sector-shaped windowed region, which is illustrated in Fig. 4.

To analyze the adaptive windowing strategy for the sector-shaped window, the three parameters r, θ and the opening angle are analyzed separately:

(1) r is the upper limit of the distance the object can move between the previous moment and the next one, and largely determines the size of the windowed region;

(2) the opening angle describes the upper limit of the object's change of direction between the previous moment and the next one; together with r, it determines the size of the window, which is approximately the area of the sector;

(3) θ describes the center line of the probability distribution of the region in which the object may appear; the object is equally likely to appear on either side of this line.

The region and trend of the object's appearance at the next moment are not completely determined by the previous moment. In this process, the region at moment i+1 is determined by fitting the velocities at moments i, i−1, i−2 and so on, and a certain margin must be kept (the margin is an empirical value chosen according to the distribution of the object's speed fluctuations) to ensure that the object will appear within the sector-shaped region at the next moment.
3) Determine the radius r of the sector, its opening angle, and the angle θ between the bisector of the opening angle and the horizontal axis.

Calculation of θ:

It is generally accepted that at a high sampling frequency, i.e. with the shortest possible interval between successive moments, the object's direction of motion does not change significantly. θ is therefore taken as the angle of the vector joining the centroids at moments i−1 and i.

Calculation of r:

r is obtained from the object's velocity at several preceding moments multiplied by the intended exposure interval Δt (velocity rather than displacement is used because the number of sampling frames differs between windows). Concretely, r should be fitted and estimated from the magnitudes of the last few velocities.

Calculation of the opening angle:

The opening angle is usually estimated from a prior on the object's direction of motion plus the margin needed to keep the algorithm robust. For example, when the object moves along a track with a single direction (such as a long, straight highway), the opening angle can be small, but a margin of roughly 10° should normally be kept to increase the robustness of the algorithm.
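Putting the three rules together, the following sketch estimates r, θ and the opening angle from the last few centroids. The simple averaging used as the "fit", the name phi for the opening angle, and the specific prior and margin values are assumptions for illustration; the description only calls for a fit over the recent velocities plus an empirical margin.

```python
import math

def sector_parameters(centroids, step_dt, dt,
                      phi_prior_deg=20.0, margin_deg=10.0, r_margin=1.2):
    """Estimate the sector window for moment i+1 from the last few centroids.

    `centroids` holds (x, y) positions at moments ..., i-1, i (sensor pixels),
    `step_dt` is the interval between stored centroids and `dt` the intended
    exposure interval. The averaging used as the "fit", the name phi for the
    opening angle, and the prior/margin values are illustrative assumptions.
    """
    (x1, y1), (x2, y2) = centroids[-2], centroids[-1]
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0   # direction of the last step

    speeds = [math.dist(a, b) / step_dt
              for a, b in zip(centroids[:-1], centroids[1:])]    # recent speeds, pixels/s
    v = sum(speeds) / len(speeds)                                # simple average as the "fit"
    r = r_margin * v * dt                                        # bound on the next displacement

    phi = phi_prior_deg + 2 * margin_deg                         # opening angle with margin
    return r, theta, phi

r, theta, phi = sector_parameters([(100, 100), (110, 104), (121, 109)],
                                  step_dt=1 / 320, dt=1 / 320)
print(round(r, 1), round(theta, 1), phi)   # -> 13.7 24.4 40.0
```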
4) Determine the size and position of the windowed region.

In practice, the readout region of an image-sensor window is usually rectangular, so the windowed region is taken as the minimum bounding rectangle of the sector.
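A minimal sketch of this final step, assuming that "minimum bounding rectangle" means the axis-aligned box expected by a sensor readout window:

```python
import math

def sector_bounding_box(cx, cy, r, theta_deg, phi_deg):
    """Axis-aligned bounding rectangle of a sector with apex (cx, cy), radius r,
    opening angle phi and bisector direction theta (degrees).

    Sensor readout windows are axis-aligned, so "minimum bounding rectangle" is
    taken to mean the axis-aligned box; the sketch also assumes the arc does not
    wrap across 0/360 degrees.
    """
    a0, a1 = theta_deg - phi_deg / 2, theta_deg + phi_deg / 2
    # candidate extremes: the apex, the two arc endpoints, and any axis-aligned
    # directions that fall inside the arc
    angles = [a0, a1] + [a for a in (0, 90, 180, 270, 360) if a0 <= a <= a1]
    xs = [cx] + [cx + r * math.cos(math.radians(a)) for a in angles]
    ys = [cy] + [cy + r * math.sin(math.radians(a)) for a in angles]
    return min(xs), min(ys), max(xs), max(ys)

print(sector_bounding_box(121, 109, r=13.7, theta_deg=24.4, phi_deg=40.0))
# -> roughly (121, 109, 134.7, 118.6)
```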
105: According to the third preset condition, acquire high-speed image sequences with the global low-speed windowing mode together with the local high-speed basic tracking mode, or with the global low-speed windowing mode together with the adaptive object tracking mode; take the mode whose high-speed image sequence has the higher sampling frequency as the final mode, and take the image sequence obtained with the final mode as the final high-speed image sequence.
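A sketch of the selection rule in step 105; the two acquisition callables are placeholders for the mode pairings described above, and only the "keep the higher sampling frequency" rule comes from the description.

```python
def select_final_mode(acquire_basic, acquire_adaptive):
    """Keep whichever mode pairing yields the higher sampling frequency.

    `acquire_basic` and `acquire_adaptive` are placeholder callables standing in
    for the two pairings described above; each returns
    (image_sequence, achieved_sampling_frequency_fps).
    """
    seq_basic, f_basic = acquire_basic()      # global low-speed windowing + basic tracking
    seq_adapt, f_adapt = acquire_adaptive()   # global low-speed windowing + adaptive tracking
    return ("adaptive", seq_adapt) if f_adapt > f_basic else ("basic", seq_basic)

mode, seq = select_final_mode(
    acquire_basic=lambda: (["frame0", "frame1"], 160.0),               # dummy data
    acquire_adaptive=lambda: (["frame0", "frame1", "frame2"], 320.0),  # dummy data
)
print(mode)   # -> adaptive
```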
The first, second and third preset conditions are determined by the actual application; the embodiment of the present invention places no restriction on their specific implementation.

In summary, the embodiment of the present invention provides a high-speed image acquisition method. The method uses the windowing capability of the image sensor to multiply the image acquisition frequency while keeping a certain idle rate; it effectively maintains a high signal-to-noise ratio, acquires high-speed images in real time, and reduces cost and energy consumption; the non-overlapping fields of view of the sub-eyes avoid image distortion; in addition, the adaptive object tracking mode has a strong learning ability and can predict the size and position of the window at the next moment from the windowed region in which the object was imaged at the previous moment.

Those skilled in the art will understand that the accompanying drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Patent Information

- Application: CN201210229855.2A, filed 2012-07-04
- Publication: CN102801927A, published 2012-11-28
- Grant: CN102801927B, granted 2014-08-27
- Claims: 6
- Status: Expired - Fee Related (patent right terminated 2021-07-04 for non-payment of the annual fee)