CN107773248B - Eye movement instrument and image processing method - Google Patents
- Publication number
- CN107773248B (application CN201710940623.0A)
- Authority
- CN
- China
- Prior art keywords: camera, image, eye, target, dimensional
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb (under A61B5/103, measuring devices for testing the shape, pattern, colour, size or movement of the body for diagnostic purposes)
- A61B5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens (under A61B5/0059, measuring for diagnostic purposes using light)
- A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes

All within A61B5/00 (Measuring for diagnostic purposes; identification of persons), subclass A61B (Diagnosis; surgery; identification), class A61 (Medical or veterinary science; hygiene), section A (Human necessities).
Abstract
Description
Technical Field
The invention belongs to the technical field of eye movement instruments, and in particular relates to an eye tracker and an image processing method.
Background Art
An eye tracker is a device that uses machine vision and camera technology to track and record eye movements. It is an important instrument for research in psychology, advertising, and industrial engineering, and is also widely used for eye-movement-based control. Eye trackers record the trajectory of the eyes while a person processes visual information, and are widely used in research on attention, visual perception, and reading.
Because existing eye trackers have only a single front camera, they can neither accurately measure the distance from the eye to a target object nor perform three-dimensional reconstruction of the target scene. Current eye trackers are particularly limited in automatically defining regions of interest, and manually annotating regions of interest is very time-consuming and laborious. This significantly hinders research in psychology, visual perception, reading, and related fields.
Summary of the Invention
In view of the above problems, the present invention provides an eye tracker that can perform three-dimensional reconstruction of a target scene and/or measure the distance to objects in the target scene and/or automatically define regions of interest, with a wider acquisition range and field of view. The eye tracker of the embodiments of the present invention also has a simple structure, is easy to use, and is comfortable, safe, and reliable to wear.
In a first aspect, the eye tracker proposed by the present invention includes: a head-mounted body; a plurality of front cameras arranged on the head-mounted body at intervals along the circumferential direction, at least two of which can simultaneously capture image information of a target scene in front of the head-mounted body; and a control system arranged on the head-mounted body and connected to the front cameras, so that the control system can perform three-dimensional reconstruction of the target scene based on the image information and/or measure the distance to at least one object in the target scene based on the image information.
Further, the eye tracker of this embodiment also includes a nose pad arranged at the front end of the head-mounted body. Among the plurality of front cameras, some are located on one side of the nose pad and the others on the opposite side.
Further, the nose pad is detachably connected to the head-mounted body.
Based on any of the above eye tracker embodiments, the front cameras may further be detachably connected to the head-mounted body.
Based on any of the above eye tracker embodiments, each front camera may further be a visible light camera, an infrared camera, or a thermal imaging camera.
Further, the eye tracker of this embodiment also includes an eye camera for capturing image information of the subject's eyes. The eye camera is connected to the head-mounted body through a support component and to the control system, so that the control system can determine, from the captured images of the subject's eyes together with the captured images of the target scene, whether the user is focused on a target object in the target scene. In the target scene, the position of the target object relative to a preset marker is fixed; the marker includes an identification pattern and/or identification light, where the identification pattern includes a one-dimensional barcode and/or a two-dimensional code, and the identification light includes fluorescence and/or an invisible infrared light source.
In a second aspect, the present invention provides an image processing method for three-dimensional reconstruction, comprising the following steps:
Step S11: acquiring calibration image data captured synchronously by at least two camera devices of a calibration three-dimensional scene containing predetermined markers;
Step S12: generating imaging association information of the at least two camera devices for the same scene according to the calibration image data and pre-stored three-dimensional coordinate data of the predetermined markers;
Step S13: acquiring captured image data of a target three-dimensional scene photographed synchronously by the at least two calibrated camera devices;
Step S14: synthesizing the captured image data into a three-dimensional stereo image according to the imaging association information, the three-dimensional stereo image being the target three-dimensional scene after three-dimensional reconstruction.
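Once the imaging association information of step S12 is known, the reconstruction of step S14 amounts to triangulating each matched scene point from its two synchronized images. The sketch below is a minimal illustration of linear (DLT) triangulation for two calibrated pinhole cameras; the intrinsic matrix, 60 mm baseline, and scene point are illustrative assumptions, not values from the patent.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: stack the constraints x × (P X) = 0
    # from both views and take the null vector via SVD.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical front cameras: identical intrinsics, the second
# offset 60 mm along x (a baseline comparable to eye separation).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

X_true = np.array([100.0, 50.0, 1000.0])  # a scene point, in mm
x1_h = P1 @ np.append(X_true, 1.0)
x1 = x1_h[:2] / x1_h[2]
x2_h = P2 @ np.append(X_true, 1.0)
x2 = x2_h[:2] / x2_h[2]

X_est = triangulate(P1, P2, x1, x2)  # recovers X_true in this noise-free case
```

With noisy real images the same linear solve gives an initial estimate that is usually refined by minimizing reprojection error.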
Further, in the image processing method of this embodiment, the at least two camera devices are pinhole cameras.
Correspondingly, step S12 becomes:
eliminating distortion in the calibration image data according to the calibration image data and the pre-stored three-dimensional coordinate data of the predetermined markers, to obtain distortion-free calibration image data;
calibrating the at least two camera devices according to the distortion-free calibration image data and the pre-stored three-dimensional coordinate data of the predetermined markers, and generating imaging association information of the at least two camera devices for the same scene. The predetermined markers include an identification pattern and/or an identification light mark, where the identification pattern includes a one-dimensional barcode and/or a two-dimensional code, and the identification light includes fluorescence and/or an invisible infrared light source.
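One way to realize the per-camera calibration described above, once distortion has been removed, is direct linear transform (DLT) resection: the known three-dimensional coordinates of the markers and their observed image positions determine each camera's projection matrix. The sketch below is an assumption-laden illustration; the camera matrix and the six marker coordinates are invented for the example.

```python
import numpy as np

def resect_camera(X3d, x2d):
    # DLT resection: estimate a 3x4 projection matrix P from at least
    # six known marker points X3d (N,3) and their images x2d (N,2),
    # by solving the homogeneous system A p = 0 via SVD.
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical ground-truth camera and six non-coplanar marker points.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), np.array([[10.0], [-5.0], [0.0]])])
markers = np.array([
    [0, 0, 1000], [100, 0, 1000], [0, 100, 1000],
    [100, 100, 1200], [50, 50, 800], [-100, 50, 900],
], dtype=float)
images = np.array([project(P_true, X) for X in markers])

P_est = resect_camera(markers, images)
# The recovered matrix reprojects every marker onto its observed pixel.
errors = [np.linalg.norm(project(P_est, X) - x) for X, x in zip(markers, images)]
```

The per-camera matrices obtained this way, expressed in the common coordinate frame of the markers, constitute one concrete form of the "imaging association information" of step S12.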
Compared with the prior art, the image processing method for three-dimensional reconstruction proposed by the present invention recognizes predetermined markers in the image and calibrates the camera group against the coordinate reference frame provided by the markers. It thereby adaptively obtains the imaging association information of the camera group at different shooting stages and actively compensates for factors such as small shifts in camera position, improving the accuracy of three-dimensional reconstruction and reproducing target scene images or videos with a more realistic three-dimensional effect.
In a third aspect, the present invention provides an image processing method for analyzing the degree of attention to a target object, comprising the following steps:
acquiring image data captured by a front camera, the image data containing image information of a predetermined marker and of a target object whose position relative to the marker is fixed;
acquiring eye data of the subject captured by an eye camera;
determining, from the eye data, the subject's focus area in the corresponding image data;
determining, from the image information of the predetermined marker, the position area of the target object in the image data;
determining whether the position area falls within the focus area: if it does, the subject is determined to have attended to the target object; if not, the subject is determined not to have attended to it.
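Because the target's position relative to the marker is fixed, the final two steps reduce to shifting the target rectangle along with the detected marker and testing whether the gaze point lies inside it. The sketch below assumes axis-aligned pixel rectangles; the marker position, offset, and sizes are invented for illustration.

```python
def target_region(marker_xy, offset, size):
    # The target's position relative to the marker is fixed, so the
    # target rectangle follows the detected marker in every frame.
    x = marker_xy[0] + offset[0]
    y = marker_xy[1] + offset[1]
    return (x, y, x + size[0], y + size[1])

def is_attended(gaze_xy, region):
    # True when the gaze point falls inside the target's rectangle.
    x0, y0, x1, y1 = region
    return x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1

region = target_region(marker_xy=(120, 80), offset=(30, -10), size=(60, 40))
hit = is_attended((160, 90), region)    # gaze inside the target region
miss = is_attended((300, 300), region)  # gaze elsewhere in the scene
```

In practice a fixation would be compared against the region over many frames, accumulating dwell time rather than a single hit/miss.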
The image processing method for analyzing attention to a target object proposed by the present invention identifies preset markers and, using the fixed positional relationship between a marker and the target object, obtains the position of the target object in the image. This enables tracking of the target object across different pictures or continuous video, so that whether the user attended to the image region corresponding to the target object can be determined and the degree of attention to the target object evaluated.
Compared with the prior art, this method achieves fast tracking of the target object with the help of preset markers, with simple processing and a high recognition rate.
In a fourth aspect, the present invention provides an image processing method for distance measurement, comprising the following steps:
acquiring a first ranging image of a ranging target captured by a first front camera;
acquiring a second ranging image of the same ranging target captured by a second front camera;
taking the line connecting the first and second front cameras as the x-axis of a Cartesian coordinate system and using the straight-line distance T between the two cameras, the common focal length f of both cameras, and the two ranging images, obtaining the x-axis coordinate xl of the intersection of the line from the ranging target to the first front camera with the first camera's imaging plane, and the x-axis coordinate xr of the intersection of the line from the ranging target to the second front camera with the second camera's imaging plane;
obtaining the distance Z between the ranging target and the imaging plane of the first front camera from the stereo disparity relation Z = f · T / (xl − xr).
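The fourth-aspect method reduces to the standard stereo disparity relation Z = f · T / (xl − xr), which can be sketched directly. The numbers in the example (f = 800 px, T = 60 mm, a 48 px disparity) are illustrative, not values from the patent; with f in pixels and T in millimetres, Z comes out in millimetres.

```python
def stereo_depth(f, T, xl, xr):
    # Z = f * T / (xl - xr): depth from the disparity between the two
    # image x-coordinates, for parallel cameras with baseline T and
    # equal focal length f (consistent units assumed).
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: target at infinity or mismatched points")
    return f * T / disparity

# Illustrative numbers: f = 800 px, T = 60 mm, disparity 48 px.
Z = stereo_depth(f=800.0, T=60.0, xl=400.0, xr=352.0)  # 1000 mm
```

Depth resolution degrades with distance because a fixed one-pixel disparity error corresponds to a larger depth change when the disparity itself is small.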
Compared with existing distance measurement technology, the image processing method for distance measurement proposed by the present invention measures distance based on image information acquired by camera devices; it is simple, convenient, easy to operate, and highly accurate.
Brief Description of the Drawings
To illustrate the technical solutions in the specific embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. In all drawings, similar elements or parts are generally identified by similar reference numerals, and the elements or parts are not necessarily drawn to scale.
FIG. 1 is a perspective view of an eye tracker according to an embodiment of the present invention;
FIG. 2 is a front view of an eye tracker according to an embodiment of the present invention;
FIG. 3 is a flow chart of an image processing method for three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an application scenario of the image processing method for three-dimensional reconstruction;
FIG. 5 is a flow chart of an image processing method for analyzing attention to a target object according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an application scenario of the image processing method for analyzing attention to a target object;
FIG. 7 is a flow chart of an image processing method for distance measurement according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the calculation in the image processing method for distance measurement;
FIG. 9 is a schematic diagram of an application scenario of the image processing method for distance measurement.
Detailed Description
Embodiments of the technical solution of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments are only intended to illustrate the technical solution more clearly; they serve as examples and do not limit the scope of protection of the present invention.
Embodiment 1
FIG. 1 is a perspective view of an eye tracker 10 according to an embodiment of the present invention, and FIG. 2 is a front view of the same eye tracker 10. The eye tracker 10 includes a head-mounted body 1, which is roughly annular and may be of integrated or split construction. The eye tracker 10 also includes a plurality of front cameras 21 arranged on the head-mounted body 1 at intervals along the circumferential direction, at least two of which can simultaneously capture image information of the target scene in front of the head-mounted body 1. The image information includes video, images, and/or pictures.
The eye tracker 10 also includes a control system (not shown in FIG. 1 or FIG. 2) arranged on the head-mounted body 1. The installation position of the control system is not particularly limited: it may be installed at the front end, rear end, inside, or on the surface of the head-mounted body 1, and for aesthetic reasons it may be placed inside. The control system may include computing processors such as a programmable logic controller (PLC), an ASIC, or an ARM core, together with memory and electronic components connected to the processor; these are well known to those skilled in the art and are not described in detail here.
The control system is connected to the front cameras 21 so that it can perform three-dimensional reconstruction of the target scene based on the image information and/or measure the distance to at least one object in the target scene based on the image information.
In other words, the eye tracker 10 of this embodiment can simultaneously capture image information of the target scene in front of the head-mounted body 1 through at least two front cameras 21, and use this information to reconstruct the target scene in three dimensions and/or measure the distance to at least one object in it, advancing research in psychology, visual perception, reading, and related fields. Because the eye tracker 10 has multiple front cameras 21 spaced along the circumference of the head-mounted body 1, it has a wider acquisition range and field of view than existing eye trackers with a single front camera. In addition, the eye tracker 10 has a simple structure, is easy to use, and is comfortable, safe, and reliable to wear.
As shown in FIG. 2, the eye tracker 10 also includes a nose pad 3 arranged at the front end of the head-mounted body 1. The nose pad 3 rests on the user's nose, stabilizing the eye tracker 10 and preventing it from slipping or shaking. Among the front cameras 21, some are located on one side of the nose pad 3 and the others on the opposite side. If all front cameras 21 were on one side of the nose pad 3, the imaging or acquisition range of the eye tracker 10 would be too narrow; distributing them on both sides of the nose pad 3 effectively widens it.
Preferably, the eye tracker 10 may include two front cameras 21 spaced along the circumference of the head-mounted body 1, one on each side of the nose pad 3, each inclined at a certain angle, such as 120 degrees, relative to the center of the head-mounted body.
According to an embodiment of the present invention, the eye tracker 10 also includes an eye camera 22 for capturing image information of the subject's eyes. Through the control system, the front cameras 21, and the eye camera 22, the eye tracker 10 can record the eye movement trajectory of a person processing visual information, for use in research on attention, visual perception, reading, and related fields.
The eye camera 22 may be connected to the head-mounted body 1 through a support component 5, which may be a conventional support such as a curved rod, straight rod, or universal rod, and which positions the eye camera 22 opposite the subject's eye so that eye images can be captured conveniently. The eye camera 22 is connected to the control system, so that the control system can determine, from the captured images of the subject's eyes together with the captured images of the target scene, whether the user is focused on a target object in the target scene. In the target scene, the position of the target object relative to a preset marker is fixed, the marker including an identification pattern and/or identification light.
In detail, the image information of the marker is stored as reference information in the memory of the control system. When the control system detects a match between this reference information and the image information from the front cameras 21, it can locate the marker in the current image; and since the position of the target object relative to the preset marker is fixed, it can further locate the target object in the current image. By processing the eye images captured by the eye camera 22, the area on which the subject's eyes are focused in the current image can be determined. Comparing the focus area with the position of the target object then reveals whether the user is focused on the target object in the target scene.
Preferably, the eye tracker 10 may include one eye camera 22, or a plurality of eye cameras 22.
Preferably, the identification pattern includes a one-dimensional barcode and/or a two-dimensional code, and the identification light includes fluorescence and/or an invisible infrared light source. One-dimensional barcodes include EAN, UPC, Code 128, Interleaved 2 of 5, Code 39, and Codabar; two-dimensional codes include stacked and matrix two-dimensional barcodes, Microsoft Tag, and the like. Both can be presented by any product capable of displaying them, such as a printed picture, a display screen, or an LED array.
在一个实施例中,前置摄像头21与头戴式主体1之间的连接为可拆卸连接。可拆卸连接包括螺纹连接、磁铁吸附、螺栓连接或卡接等常规的可拆卸连接。这种可拆卸的方式一方面可以在前置摄像头21损坏时,方便人们更换新的前置摄像头21,另一方面可以根据具体需要将前置摄像头21更换为不同的种类的摄像头,如红外摄像头和普通摄像头(即可见光摄像头)之间的转换。其中,前置摄像头21包括可见光摄像头、红外摄像头或热成像摄像头。所谓的可见光摄像头是指能够将可见光转为图像信息的摄像头,如电脑摄像头或手机摄像头等。In one embodiment, the connection between the front camera 21 and the head-mounted body 1 is a detachable connection. The detachable connection includes conventional detachable connections such as threaded connection, magnet adsorption, bolt connection or snap connection. On the one hand, this detachable manner can facilitate people to replace the front camera 21 with a new one when the front camera 21 is damaged. On the other hand, the front camera 21 can be replaced with a different type of camera according to specific needs, such as conversion between an infrared camera and an ordinary camera (i.e., a visible light camera). Among them, the front camera 21 includes a visible light camera, an infrared camera or a thermal imaging camera. The so-called visible light camera refers to a camera that can convert visible light into image information, such as a computer camera or a mobile phone camera.
在一个优选的实施例中,鼻托3与头戴式主体1之间的连接优选为可拆卸连接,以便用户自行更换或清理鼻托3。可拆卸连接包括螺纹连接、磁铁吸附、卡接、卡箍连接或以过盈配合方式进行的插接。由于用户的面部特征是因人而异的,统一制式的鼻托3很难与所有用户相契合,会导致个别或多数用户佩戴不舒服,作为优选方案,鼻托的高度、宽度等有多种配件方案,用于适应不同性别、人种的用户和戴眼镜的用户。尤其是对于习惯佩戴眼镜的使用者,必须摘掉眼镜才能佩戴具有鼻托3的眼动仪,若是必须佩戴眼镜则还需要定制特定形状的眼镜,由此带来极大的不便。但本发明的眼动仪的鼻托3能够从头戴式主体1上拆卸下来,用户就可以根据自己的面部特征或是否戴眼镜来选择适合自己的鼻托3。用户戴眼镜时,选择一种长度较短的特殊鼻托3,鼻托3有针对眼镜的凹槽式设计,佩戴时眼镜鼻托架在特殊鼻托的上方,采用特殊鼻托3的眼动仪佩戴起来更为方便快捷。In a preferred embodiment, the connection between the nose pad 3 and the head-mounted body 1 is preferably a detachable connection, so that the user can replace or clean the nose pad 3 by himself. The detachable connection includes a threaded connection, a magnet adsorption, a clamp connection, a clamp connection, or a plug-in connection in an interference fit manner. Since the facial features of users vary from person to person, a uniform nose pad 3 is difficult to fit all users, which may cause discomfort to individual or most users. As a preferred solution, there are multiple accessory solutions for the height and width of the nose pad to adapt to users of different genders and races and users wearing glasses. Especially for users who are accustomed to wearing glasses, they must take off their glasses to wear an eye tracker with a nose pad 3. If they must wear glasses, they also need to customize glasses of a specific shape, which brings great inconvenience. However, the nose pad 3 of the eye tracker of the present invention can be detached from the head-mounted body 1, and users can choose a nose pad 3 that suits them according to their facial features or whether they wear glasses. When the user wears glasses, he/she selects a special nose pad 3 of shorter length. The nose pad 3 has a groove design for the glasses. When wearing the glasses, the nose pad is placed on top of the special nose pad. The eye tracker using the special nose pad 3 is more convenient and quick to wear.
In a preferred embodiment, the eye tracker 10 further includes a signal transceiver (not shown). The signal transceiver is disposed on the head-mounted body 1 and connected to the control system. Its location is not particularly limited; it may be installed at the front end, the rear end, inside, or on the surface of the head-mounted body 1. The signal transceiver receives control signals from a predetermined external terminal (such as a mobile phone, a computer, or a remote controller) and/or transmits information to the external terminal. It may be a short-range communication module, such as a common Bluetooth module, a WiFi module (e.g., WiFi Direct), or an infrared module; of course, it may also be a long-range communication module, such as a common radio transceiver module or an Internet communication module. The user can send control signals from the external terminal to the signal transceiver, and the signal transceiver can in turn forward the image information collected by the front camera 21 and the eye camera 22 to the external terminal, thereby increasing the interactivity of the eye tracker 10 of this embodiment; the image information collected by the eye tracker can also be used by other devices or for other purposes.
Example 2
As shown in FIG. 3, the image processing method for three-dimensional reconstruction according to an embodiment of the present invention comprises the following steps:
Step S11: acquiring calibration image data captured synchronously by at least two camera devices of a calibration three-dimensional scene containing a predetermined marker;
Step S12: generating imaging association information of the at least two camera devices for the same shooting scene according to the calibration image data and pre-stored three-dimensional coordinate data of the predetermined marker;
Step S13: acquiring collected image data of a target three-dimensional scene captured synchronously by the at least two calibrated camera devices;
Step S14: synthesizing the collected image data into a three-dimensional stereo image according to the imaging association information, the three-dimensional stereo image being the target three-dimensional scene after three-dimensional reconstruction.
It should be noted that the types and imaging parameters of the at least two camera devices may be the same or different.
Preferably, the at least two camera devices are each pinhole cameras; correspondingly, step S12 is:
eliminating distortion in the calibration image data according to the calibration image data and the pre-stored three-dimensional coordinate data of the predetermined marker, to obtain undistorted calibration image data;
calibrating the at least two camera devices according to the undistorted calibration image data and the pre-stored three-dimensional coordinate data of the predetermined marker, and generating imaging association information of the at least two camera devices for the same shooting scene.
Preferably, step S14 is: computing the disparity (visual difference) between the collected image data captured synchronously by the two camera devices according to the imaging association information, and synthesizing the collected image data into a three-dimensional stereo image according to the disparity, the three-dimensional stereo image being the target three-dimensional scene after three-dimensional reconstruction.
Preferably, the predetermined marker is marked by an identification pattern and/or identification light.
Preferably, the identification pattern includes a one-dimensional barcode and/or a two-dimensional code, and the identification light includes fluorescence and/or light whose intensity exceeds that of daylight. One-dimensional barcodes include the EAN code, UPC code, Code 128, Interleaved 2 of 5, Code 39, Codabar, and so on; two-dimensional codes include stacked two-dimensional barcodes, matrix two-dimensional barcodes, Microsoft Tag, and so on. Both one-dimensional and two-dimensional barcodes can be realized by any product capable of displaying them, such as a printed image, a display screen, or an LED array.
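The EAN codes listed above carry a standard check digit, which a marker-detection pipeline would typically verify before trusting a decoded code. The following is a minimal Python sketch of the public EAN-13 mod-10 checksum rule; it is illustrative only and not part of the patented method.

```python
def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 string with the standard mod-10 checksum:
    digits in positions 1..13 are weighted 1,3,1,3,... from the left,
    and the code is valid when the weighted sum is a multiple of 10."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code))
    return total % 10 == 0

ok = ean13_is_valid("9780306406157")   # a well-formed EAN-13 (True)
bad = ean13_is_valid("9780306406150")  # corrupted check digit (False)
```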
It should be understood that the predetermined markers may be marked in the same way or in different ways.
With a pinhole camera, pixel quality degrades with distance from the center of the sensor: there is more distortion and more noise farther from the center. For example, because of the camera's position-dependent imaging distortion, a straight line does not image as a straight line in a pinhole camera. To improve image quality, images captured by a pinhole camera therefore need to be undistorted.
A common calibration method for pinhole cameras uses a standard black-and-white chessboard to compute the camera's basic parameters, including the focal length f, the intrinsic matrix, the extrinsic matrices (such as the rotation matrix R and the translation matrix T), and the distortion coefficients. Specifically, in a chessboard image captured by a pinhole camera, the chessboard appears warped; for example, lines that are straight in reality are not straight in the camera image. By detecting the positions of the chessboard corner points in the captured image, and then exploiting the fact that these corner points should lie on straight lines, the basic parameters of the pinhole camera can be inferred.
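The distortion coefficients mentioned above usually parameterize an even radial distortion model. As an illustration only (the k1 and k2 values below are made up; a real chessboard calibration estimates them from the detected corner points), the model and a simple fixed-point inversion can be sketched as:

```python
def radial_distort(x, y, k1, k2):
    """Apply the even radial model x_d = x*(1 + k1*r^2 + k2*r^4) to a
    normalized image point (x, y); these are the kind of coefficients
    the chessboard calibration described above estimates."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def radial_undistort(xd, yd, k1, k2, iterations=10):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the distortion factor at the current estimate."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# A point far from the optical center shifts noticeably, matching the
# observation that distortion grows toward the sensor edges.
xd, yd = radial_distort(0.8, 0.0, k1=-0.2, k2=0.05)
xu, yu = radial_undistort(xd, yd, k1=-0.2, k2=0.05)
```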
For an eye tracker provided with two front cameras (i.e., a binocular camera pair), binocular camera calibration is required. Specifically, the two front cameras simultaneously capture images of the same calibration object, and the calibration is completed from the correspondence between the salient points of the calibration object as seen in the two cameras. Once the rotation matrix of the left front camera relative to the right front camera has been obtained, distortion elimination and rectification of the binocular camera pair can be completed.
The image processing method for three-dimensional reconstruction of the embodiments of the present invention can reconstruct, with a high degree of fidelity, a three-dimensional scene captured synchronously by at least two camera devices.
FIG. 4 is a schematic diagram of an application scenario in which the image processing method for three-dimensional reconstruction according to an embodiment of the present invention is applied in the eye tracker of Example 1.
A subject 305 wears an eye tracker 10 provided with two front cameras. Under the control of the subject, or remotely controlled by another person, the two calibrated front cameras synchronously photograph the target scene 301 (A), obtaining picture 302 (A1) and picture 303 (A2) respectively. The control system inside the eye tracker 10 carries out the image processing method for three-dimensional reconstruction of this embodiment, reconstructs the synchronously captured pictures 302 (A1) and 303 (A2) with high fidelity, and reproduces a target scene picture 304 (A′) with a three-dimensional effect.
The processing of video images is similar to that of still pictures and is not described again here.
It should be understood that the eye tracker may also be worn by a mannequin or fixed on a mounting frame in order to obtain target scene images or video with a three-dimensional effect.
Preferably, an eye tracker provided with multiple front cameras obtains the original views of the scene from the multiple front cameras and then eliminates distortion using pinhole-camera calibration techniques. Salient points in the original scene views are then detected by machine vision, and the multiple front cameras are calibrated from the positional relationships of these salient points across the different cameras' views.
It should be noted that a salient point may be the aforementioned predetermined marker, an infrared light source, or any object with a non-smooth (textured) surface.
The multiple front cameras can be regarded as a camera group. The calibrated camera group is then used to synthesize a three-dimensional stereo image by computing the disparities between the views captured by each front camera, completing the three-dimensional reconstruction.
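The per-view disparities mentioned above are what stereo matching computes. As a toy illustration, the sketch below runs sum-of-absolute-differences (SAD) block matching along a single scanline; production systems work on rectified two-dimensional images with far more robust cost functions, so this is only a sketch of the idea, with made-up pixel values.

```python
def disparity_1d(left, right, window=1, max_disp=4):
    """Toy block matching on one scanline: for each pixel in the left
    row, find the horizontal shift into the right row whose window has
    the smallest sum of absolute differences (SAD)."""
    disps = []
    for x in range(window, len(left) - window):
        patch = left[x - window : x + window + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - d - window < 0:
                break
            cand = right[x - d - window : x - d + window + 1]
            cost = sum(abs(a - b) for a, b in zip(patch, cand))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disps.append(best_d)
    return disps

# A bright feature at index 5 in the left row appears at index 3 in the
# right row, i.e. shifted by 2 pixels, so its disparity should be 2.
left = [0, 0, 0, 0, 0, 9, 0, 0, 0, 0]
right = [0, 0, 0, 9, 0, 0, 0, 0, 0, 0]
disps = disparity_1d(left, right)
```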
By recognizing predetermined markers in the image and calibrating the camera group against the coordinate reference frame that the markers provide, the image processing method for three-dimensional reconstruction of the embodiments of the present invention adaptively acquires the imaging association information of the camera group at different shooting stages and actively compensates for factors such as small shifts in the cameras' positions, improving the accuracy of three-dimensional reconstruction and reproducing target scene images or video with a more realistic three-dimensional effect.
Example 3
As shown in FIG. 5, the image processing method for analyzing the degree of attention paid to a target object according to an embodiment of the present invention comprises the following steps:
Step S21: acquiring image data captured by the front camera, the image data containing image information of a predetermined marker and image information of a target object, the relative positions of the predetermined marker and the target object being fixed;
Step S22: acquiring eye data of the subject captured by the eye camera;
Step S23: determining, from the eye data, the subject's focus area in the corresponding image data;
Step S24: determining the position area of the target object in the image data from the image information of the predetermined marker in the image data;
Step S25: judging whether the position area falls within the focus area; if it does, determining that the subject paid attention to the target object; if it does not, determining that the subject did not pay attention to the target object.
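Once the focus area and the target's position area are expressed as rectangles, step S25 reduces to a simple geometric containment test. A minimal sketch (the rectangle coordinates below are hypothetical):

```python
def rect_contains(outer, inner):
    """True when rectangle `inner` lies entirely inside rectangle
    `outer`. Rectangles are (x_min, y_min, x_max, y_max) in pixels."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def attended(focus_area, target_area):
    """Step S25 as a predicate: did the target object's position area
    fall within the subject's focus area?"""
    return rect_contains(focus_area, target_area)

focus = (100, 100, 300, 260)  # hypothetical gaze-centered focus area
logo = (150, 140, 210, 180)   # hypothetical target-object position area
hit = attended(focus, logo)
```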
Preferably, the predetermined marker includes an identification pattern and/or identification light. The identification pattern includes a one-dimensional barcode and/or a two-dimensional code, and the identification light includes fluorescence, light whose intensity exceeds that of daylight, and/or an invisible infrared light source. In addition, there may be one predetermined marker, several predetermined markers, or an assembly of several predetermined markers. One-dimensional barcodes include the EAN code, UPC code, Code 128, Interleaved 2 of 5, Code 39, Codabar, and so on; two-dimensional codes include stacked two-dimensional barcodes, matrix two-dimensional barcodes, Microsoft Tag, and so on. Both one-dimensional and two-dimensional barcodes can be realized by any product capable of displaying them, such as a printed image, a display screen, or an LED array.
In any vision-related study, such as user-experience research or advertising-effect evaluation, one needs to know how much attention users pay to a target object. For example, for the image-promotion clips that companies run in various media, one needs to evaluate how much attention users pay to the company's trademark or trade name. If that attention is lower than expected, the visual design of the clip needs to be adjusted to improve advertising effectiveness and increase advertising revenue.
As another example, one may need to measure the audience's attention to product placements embedded in films and television dramas in order to evaluate advertising effectiveness. The above-mentioned trademark, trade name, or embedded advertising content is precisely the target object in the picture that the subject is expected to attend to.
At present, the target object in a frame is usually determined by manually defining the target region. Since the target object's position changes frequently across frames, manual definition requires a person to select or box the region with a mouse frame by frame, which is time-consuming and labor-intensive; this enormous workload means that larger studies have had to rely on manually annotated video.
Specifically, determining the subject's focus area in the corresponding image data from the eye data may include the following steps:
performing black-and-white thresholding, face detection, eye detection, and pupil detection on the pictures in the eye data to obtain the gaze position of the eyes;
marking the position gazed at by the subject's eyes into the image, captured by the front camera device, of the test image the subject is currently viewing, so that the specific part of the real-world object the eyes have looked at is known.
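The thresholding and pupil-detection steps above can be illustrated with a toy version that locates the pupil as the centroid of dark pixels; real pipelines add face/eye detection and ellipse fitting on top of this, so treat it only as a sketch (the image values and the threshold are made up):

```python
def pupil_centroid(image, threshold=50):
    """Find the pupil in a grayscale eye image as the centroid of all
    pixels darker than `threshold`; returns (x, y) or None when no
    pixel is dark enough."""
    xs, ys, n = 0, 0, 0
    for row_idx, row in enumerate(image):
        for col_idx, value in enumerate(row):
            if value < threshold:
                xs += col_idx
                ys += row_idx
                n += 1
    if n == 0:
        return None
    return xs / n, ys / n

# 5x5 toy eye image: bright sclera (200) with a dark 2x2 pupil block.
eye = [[200] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        eye[r][c] = 10
centroid = pupil_centroid(eye)
```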
In a specific application, the predetermined marker is placed near the intended target object, after which video is recorded or pictures are taken with a camera device to obtain the image data. This image data is then used in user-experience research, for example as advertising clips or stimulus pictures for psychology experiments.
FIG. 6 is a schematic diagram of an application scenario in which the image processing method for judging the degree of attention paid to a target object according to an embodiment of the present invention is applied in the eye tracker of Example 1.
After putting on the eye tracker 10 fitted with at least one front camera and at least one eye camera, a subject 205 views display material such as pre-recorded video, pictures, web pages, and text. The at least one front camera on the eye tracker synchronously photographs the image the subject is viewing to obtain scene image data; the at least one eye camera on the eye tracker 10 synchronously photographs the movement of the subject's eyes to obtain eye data. The control system inside the eye tracker 10 carries out the image processing method of this embodiment for analyzing the degree of attention paid to a target object: from the eye data it determines the subject's focus area in the currently viewed image, and from the position of the predetermined marker in the image data it determines the target position where the target object is located; it then judges whether the target position falls within the focus area; if it does, the subject is determined to have attended to the target object, and if it does not, the subject is determined not to have attended to it.
Specifically, with reference to FIG. 6, region 201 in the figure is where the intended target object is located, and region 202 is where the predetermined marker is located. When the subject views picture C, the image processing method of this embodiment for judging attention determines that the focus area of the subject's eyes falls on the region of the apple in picture C, rather than on region 202 of the predetermined marker or region 201 of the intended target object; it is therefore determined that the subject did not attend to the target object. When the subject views picture D, the method determines that the focus area of the subject's eyes falls on region 201 of the intended target object in picture D, rather than on region 202 of the predetermined marker or on the region of the grapes; it is therefore determined that the subject attended to the target object.
It should be noted that several target objects of interest may be defined on the same picture or video material. In that case, the several target objects of interest can be grouped into a set of regions of interest. A region of interest may be square, circular, elliptical, triangular, or an arbitrary user-defined shape.
On the basis of whether the subject focused on the target object, many further variables can be computed, such as the cumulative number of times the subject focused on the target object within a video or picture region, the cumulative time the subject spent focused on the target object within that region, the time elapsed from the presentation of the picture or video to the first focus, and the cumulative focus time over an entire video.
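Given a per-sample attended/not-attended decision from step S25, the variables listed above fall out of a simple aggregation. A sketch, assuming a hypothetical fixed gaze sampling period:

```python
def attention_metrics(samples, period_ms=20):
    """Aggregate a per-sample attended trace into the variables listed
    above. `samples` is a list of booleans, one per gaze sample, True
    when the focus area covered the target; `period_ms` is an assumed
    (hypothetical) sampling period."""
    # Count rising edges: separate visits to the target.
    visits = sum(1 for prev, cur in zip([False] + samples, samples)
                 if cur and not prev)
    dwell_ms = sum(samples) * period_ms            # cumulative focus time
    first_ms = (samples.index(True) * period_ms    # time to first focus
                if True in samples else None)
    return {"visits": visits, "dwell_ms": dwell_ms, "first_ms": first_ms}

# Hypothetical trace: the subject looks at the target twice.
trace = [False, False, True, True, False, True, False]
metrics = attention_metrics(trace)
```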
The image processing method of the embodiments of the present invention for analyzing the degree of attention paid to a target object recognizes preset markers and, using the fixed relative position between the predetermined marker and the target object, obtains the target object's position in the image. The target object can thus be tracked across different pictures or through continuous video, so that it can be judged whether the user attended to the image region corresponding to the target object's position, and the degree of attention to the target object can be evaluated.
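Because the marker-to-target offset is fixed, locating the marker in a frame immediately yields the target's position area. A minimal sketch (the offset and size values are hypothetical calibration constants measured once in a reference frame):

```python
def target_region_from_marker(marker_xy, offset_xy, size_wh):
    """Derive the target object's rectangle in the current frame from
    the detected marker position, exploiting the fixed marker-to-target
    offset described above."""
    x = marker_xy[0] + offset_xy[0]
    y = marker_xy[1] + offset_xy[1]
    return (x, y, x + size_wh[0], y + size_wh[1])

# Marker detected at (400, 120) in this frame; the target (e.g. a logo)
# sits 60 px left of and 10 px below the marker and is 80x40 px.
region = target_region_from_marker((400, 120), (-60, 10), (80, 40))
```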
Compared with the prior art, the image processing method of the embodiments of the present invention for analyzing the degree of attention paid to a target object achieves fast tracking of the target object with the help of preset markers, with simple processing and a high recognition rate.
By implementing the image processing methods of the embodiments of the present invention, the control system inside the eye tracker enables the eye tracker to recognize markers automatically and to compute, automatically in code, the interaction between the eye-gaze position and the target object of interest, doing away with the traditional time-consuming and labor-intensive manual annotation.
Example 4
As shown in FIG. 7, the image processing method for distance measurement according to an embodiment of the present invention comprises the following steps:
Step S31: acquiring a first ranging image of a ranging target taken by the first front camera, and acquiring a second ranging image of the same ranging target taken by the second front camera;
Step S32: taking the line connecting the first front camera and the second front camera as the x-axis of a Cartesian coordinate system, and obtaining, from the straight-line distance T between the first and second front cameras, the focal length f of the first front camera, the focal length f of the second front camera, and the first and second ranging images, the x-coordinate xl of the intersection of the line connecting the ranging target and the first front camera with the first front camera's imaging plane, and the x-coordinate xr of the intersection of the line connecting the ranging target and the second front camera with the second front camera's imaging plane;
Step S33: obtaining the distance Z between the ranging target and the imaging plane of the first front camera according to the formula Z = fT / (xl − xr).
Specifically, as shown in FIG. 8, the measured object is treated as a point in space, denoted P, and the left and right cameras are likewise treated as points in space, denoted Ol and Or respectively (viewed perpendicular to the page, the left camera is on the left and the right camera on the right). The intersection of the line P–Ol with the imaging plane is denoted Pl, and the intersection of the line P–Or with the imaging plane is denoted Pr. The triangle formed by the points P, Pl, and Pr is similar to the triangle formed by the points P, Ol, and Or. Z is the distance to be measured from the cameras' imaging plane to the measured object, f is the camera focal length, and T is the straight-line distance between the two cameras; Z can then be computed from the similar-triangle relation.
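The similar-triangle relation described above gives the stereo depth formula of step S33, Z = fT/(xl − xr). A sketch with made-up numbers (the focal length, baseline, and image coordinates below are illustrative only):

```python
def depth_from_disparity(f, T, xl, xr):
    """Similar-triangles stereo depth: Z = f*T / (xl - xr), where xl and
    xr are the x-coordinates of the target's image in the left and right
    cameras (same units as f) and T is the baseline between cameras."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("disparity must be positive for a target in front of the rig")
    return f * T / disparity

# Hypothetical numbers: focal length 700 (pixel units), baseline 0.06 m,
# target imaged at x=420 px in the left view and x=400 px in the right.
Z = depth_from_disparity(f=700, T=0.06, xl=420, xr=400)  # 2.1 m
```

Note the inverse relationship: a larger disparity means the target is closer to the cameras.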
FIG. 9 is a schematic diagram of an application scenario in which the image processing method for distance measurement according to an embodiment of the present invention is applied in the eye tracker of Example 1.
After the user puts on an eye tracker provided with two front cameras, the two front cameras are controlled to capture ranging images of the ranging target synchronously. The control system inside the eye tracker carries out the image processing method for distance measurement of this embodiment, processes the synchronously captured ranging images, obtains the distance from the ranging target to the imaging planes of the two front cameras, and, combined with the front cameras' focal length, obtains the straight-line distance between the ranging target and the user.
Compared with existing ranging technology, the image processing method for distance measurement of the embodiments of the present invention measures distance from the image information acquired by the camera devices; it is simple, convenient, easy to operate, and highly accurate.
Although the present invention has been described with reference to preferred embodiments, various modifications may be made and parts may be replaced with equivalents without departing from its scope. In particular, the technical features mentioned in the various embodiments may be combined in any way, provided there is no structural conflict. The present invention is not limited to the specific embodiments disclosed herein, but encompasses all technical solutions falling within the scope of the claims.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710940623.0A CN107773248B (en) | 2017-09-30 | 2017-09-30 | Eye movement instrument and image processing method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN107773248A CN107773248A (en) | 2018-03-09 |
| CN107773248B true CN107773248B (en) | 2024-10-29 |
Family
ID=61434369
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110032278B (en) | 2019-03-29 | 2020-07-14 | 华中科技大学 | Method, device and system for pose recognition of objects of interest to human eyes |
| CN111265392B (en) * | 2020-02-27 | 2022-05-03 | 深圳市视界智造科技有限公司 | Amblyopia treatment system |
| CN111524175A (en) * | 2020-04-16 | 2020-08-11 | 东莞市东全智能科技有限公司 | Asymmetric multi-camera depth reconstruction and eye tracking method and system |
| CN111933275B (en) * | 2020-07-17 | 2023-07-28 | 兰州大学 | A Depression Assessment System Based on Eye Movement and Facial Expression |
| CN112767664A (en) * | 2021-02-01 | 2021-05-07 | 三维医疗科技有限公司 | Eye distance monitoring device beneficial to eyesight protection |
| CN113111745B (en) * | 2021-03-30 | 2023-04-07 | 四川大学 | Eye movement identification method based on product attention of openposition |
| CN118849729A (en) * | 2024-07-30 | 2024-10-29 | 东风商用车有限公司 | Anti-glare control method, system, device and computer-readable storage medium |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105550670A (en) * | 2016-01-27 | 2016-05-04 | 兰州理工大学 | Target object dynamic tracking and measurement positioning method |
| CN106168853A (en) * | 2016-06-23 | 2016-11-30 | 中国科学技术大学 | A kind of free space wear-type gaze tracking system |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101533523B (en) * | 2009-02-27 | 2011-08-03 | 西北工业大学 | A virtual human eye movement control method |
| CN101587542A (en) * | 2009-06-26 | 2009-11-25 | 上海大学 | Field depth blending strengthening display method and system based on eye movement tracking |
| CN102176755B (en) * | 2010-12-24 | 2013-07-31 | 海信集团有限公司 | Control method and device based on eye movement three-dimensional display angle |
| CN102819403A (en) * | 2012-08-28 | 2012-12-12 | 广东欧珀移动通信有限公司 | A human-computer interaction method for a terminal device and the terminal device thereof |
| CN103220467A (en) * | 2013-04-10 | 2013-07-24 | 广东欧珀移动通信有限公司 | A smart camera method and system for a mobile terminal |
| WO2014170760A2 (en) * | 2013-04-16 | 2014-10-23 | The Eye Tribe Aps | Systems and methods of eye tracking data analysis |
| CN104216508B (en) * | 2013-05-31 | 2017-05-10 | 中国电信股份有限公司 | Method and device for operating function key through eye movement tracking technique |
| CN106056092B (en) * | 2016-06-08 | 2019-08-20 | 华南理工大学 | Gaze Estimation Method for Head Mounted Devices Based on Iris and Pupil |
Similar Documents
| Publication | Title |
|---|---|
| CN107773248B (en) | Eye movement instrument and image processing method |
| US20220265142A1 (en) | Portable eye tracking device |
| US20210165250A1 (en) | Method and device for determining parameters for spectacle fitting |
| CN103475893B (en) | Object pickup device and object pickup method in a three-dimensional display |
| JP6332392B2 (en) | Spectacle lens design method, spectacle lens manufacturing method, spectacle lens selection device, and spectacle lens selection method |
| CN111649690A (en) | Handheld 3D information acquisition equipment and method |
| CN109416744A (en) | Improved camera calibration system, target and process |
| US9500885B2 (en) | Method and apparatus for determining the habitual head posture |
| US20100283844A1 (en) | Method and system for the on-line selection of a virtual eyeglass frame |
| CN103163663A (en) | Method and device for estimating the optical power of corrective lenses in a pair of eyeglasses worn by a spectator |
| KR102444768B1 (en) | Method and apparatus for measuring local power and/or power distribution of spectacle lenses |
| US9961307B1 (en) | Eyeglass recorder with multiple scene cameras and saccadic motion detection |
| US20200107720A1 (en) | Calibration and Image Processing Methods and Systems for Obtaining Accurate Pupillary Distance Measurements |
| CN211178345U (en) | Three-dimensional acquisition equipment |
| US20070226076A1 (en) | Sale system of optical characteristic image data |
| CN109767472A (en) | A method for measuring the FOV of an eye-mounted display |
| CN111340959A (en) | Three-dimensional model seamless texture mapping method based on histogram matching |
| CN111310661B (en) | Intelligent 3D information acquisition and measurement equipment for iris |
| US11627303B2 (en) | System and method for corrected video see-through for head-mounted displays |
| CN211375621U (en) | Iris 3D information acquisition equipment and iris identification equipment |
| CN109089106A (en) | Naked-eye 3D display system and naked-eye 3D display adjusting method |
| CN211085114U (en) | 3D information acquisition equipment with background board |
| CN111207690B (en) | Adjustable iris 3D information acquisition and measurement equipment |
| CN211085115U (en) | Standardized biological three-dimensional information acquisition device |
| JP2015123262A (en) | Sight line measurement method using corneal surface reflection image, and device for the same |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |