
CN114926501A - Method and apparatus for determining motion information, medium, and computer device - Google Patents


Info

Publication number
CN114926501A
Authority
CN
China
Prior art keywords: target object, information, track, trajectory, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210557440.1A
Other languages
Chinese (zh)
Inventor
史璇珂
王睿
王权
王超
郑龙澍
钱晨
杨奇勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing National Aquatics Center Co ltd
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing National Aquatics Center Co ltd
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing National Aquatics Center Co Ltd and Beijing Sensetime Technology Development Co Ltd
Priority to CN202210557440.1A
Publication of CN114926501A
Legal status: Pending (current)



Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30221 - Sports video; Sports image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a method and apparatus for determining motion information, a medium, and a computer device. The method includes: acquiring video frames captured by a plurality of cameras in different first fields of view; determining, based on the acquired video frames, position information of a plurality of trajectory points passed by a target object while moving within a second field of view, where the second field of view covers the first fields of view respectively corresponding to the plurality of cameras; acquiring projection information of the plurality of trajectory points on the trajectory of the target object, where the trajectory of the target object is determined based on the position information of the plurality of trajectory points and the projection information is used to represent the movement distance of the target object; and determining motion information of the target object based on the projection information.

Figure 202210557440

Description

Method and Apparatus for Determining Motion Information, Medium, and Computer Device

Technical Field

The present disclosure relates to the technical field of data processing, and in particular to a method and apparatus, medium, and computer device for determining motion information.

Background

During the movement of a target object, it is often necessary to obtain the target object's motion information in order to evaluate its motion state. In the related art, the motion information of the target object is generally estimated roughly from the straight-line distance between different trajectory points and the total time the target object takes to pass through those trajectory points, which yields low accuracy.

Summary of the Invention

In a first aspect, an embodiment of the present disclosure provides a method for determining motion information. The method includes: acquiring video frames captured by a plurality of cameras in different first fields of view; determining, based on the acquired video frames, position information of a plurality of trajectory points passed by a target object while moving within a second field of view, where the second field of view covers the first fields of view respectively corresponding to the plurality of cameras; acquiring projection information of the plurality of trajectory points on the trajectory of the target object, where the trajectory of the target object is determined based on the position information of the plurality of trajectory points and the projection information is used to represent the movement distance of the target object; and determining motion information of the target object based on the projection information.

In some embodiments, determining the trajectory of the target object based on the position information of the plurality of trajectory points includes: acquiring a predicted trajectory, where the predicted trajectory includes at least one prediction parameter; determining each of the at least one prediction parameter based on the position information of at least one trajectory point that has been acquired; and determining the trajectory of the target object according to each determined prediction parameter and the predicted trajectory.

In some embodiments, acquiring the projection information of the plurality of trajectory points on the trajectory of the target object includes: determining the distance between each group of adjacent trajectory points among the plurality of trajectory points; projecting the distance between each group of adjacent trajectory points onto the tangent direction of the trajectory to obtain the projected distance of each group of adjacent trajectory points on the trajectory; and determining the projection information of the plurality of trajectory points on the trajectory of the target object according to the projected distances, on the trajectory, of the distances between the groups of adjacent trajectory points.

In some embodiments, the method further includes: acquiring time information of each of the plurality of trajectory points. Determining the motion information of the target object based on the projection information includes: determining the motion information of the target object based on the projection information and the time information of each trajectory point.

In some embodiments, after the motion information of the target object is determined based on the projection information, the method further includes: acquiring a subsequent trajectory point located after the plurality of trajectory points; acquiring a first distance between the subsequent trajectory point and the last of the plurality of trajectory points; projecting the first distance onto the tangent direction of the trajectory to obtain a second distance; and updating the projection information based on the second distance.

In some embodiments, determining the motion information of the target object based on the projection information includes: acquiring a motion model of the target object, where the motion model is used to represent how the movement distance of the target object varies with time and includes at least one model parameter; determining each of the at least one model parameter based on the acquired projection information of at least one trajectory point and the acquired time information of each of the at least one trajectory point; and determining the motion information of the target object according to the determined model parameters and the motion model.

In some embodiments, the at least one trajectory point includes N trajectory points that have been acquired among the plurality of trajectory points, and determining the motion information of the target object according to the determined model parameters and the motion model includes: determining, according to the determined model parameters and the motion model, the motion information of the target object at the M-th of the N trajectory points, where M and N are both positive integers and M is less than N.

In some embodiments, the motion information includes acceleration information, the target object moves on the surface of a reference object, and the surface of the target object is in contact with the surface of the reference object. The method further includes: determining, based on the acceleration information, the coefficient of friction between the surface of the target object and the surface of the reference object.

In some embodiments, determining, based on the acquired video frames, the position information of the plurality of trajectory points passed by the target object while moving within the second field of view includes: determining, based on the video frames acquired by each camera, initial position information of the trajectory points within the first field of view of that camera; and fusing the initial position information of the trajectory points within the first fields of view of the respective cameras to obtain the position information of the plurality of trajectory points.

In some embodiments, the first fields of view of at least two target cameras among the plurality of cameras partially overlap. Fusing the initial position information of the trajectory points within the first fields of view of the respective cameras to obtain the position information of the plurality of trajectory points includes: determining initial position information of the trajectory points within a third field of view of each target camera, where the third field of view is the overlapping field of view of the target cameras; synchronizing the initial position information of the trajectory points within the third field of view of each target camera to obtain synchronized position information of the trajectory points within the third field of view of each target camera; and fusing the synchronized position information of the trajectory points within the third field of view of each target camera to obtain the position information of the trajectory points within the third field of view of the target cameras.

In a second aspect, an embodiment of the present disclosure provides an apparatus for determining motion information. The apparatus includes: a first acquisition module, configured to acquire video frames captured by a plurality of cameras in different first fields of view; a first determination module, configured to determine, based on the acquired video frames, position information of a plurality of trajectory points passed by a target object while moving within a second field of view, where the second field of view covers the first fields of view respectively corresponding to the plurality of cameras; a second acquisition module, configured to acquire projection information of the plurality of trajectory points on the trajectory of the target object, where the trajectory of the target object is determined based on the position information of the plurality of trajectory points and the projection information is used to represent the movement distance of the target object; and a second determination module, configured to determine motion information of the target object based on the projection information.

In some embodiments, the second acquisition module is configured to: acquire a predicted trajectory, where the predicted trajectory includes at least one prediction parameter; determine each of the at least one prediction parameter based on the position information of at least one trajectory point that has been acquired; and determine the trajectory of the target object according to each determined prediction parameter and the predicted trajectory.

In some embodiments, the second acquisition module is configured to: determine the distance between each group of adjacent trajectory points among the plurality of trajectory points; project the distance between each group of adjacent trajectory points onto the tangent direction of the trajectory to obtain the projected distance of each group of adjacent trajectory points on the trajectory; and determine the projection information of the plurality of trajectory points on the trajectory of the target object according to the projected distances, on the trajectory, of the distances between the groups of adjacent trajectory points.

In some embodiments, the apparatus further includes: a third acquisition module, configured to acquire time information of each of the plurality of trajectory points. The second determination module is configured to determine the motion information of the target object based on the projection information and the time information of each trajectory point.

In some embodiments, the apparatus further includes: a fourth acquisition module, configured to acquire a subsequent trajectory point located after the plurality of trajectory points; a fifth acquisition module, configured to acquire a first distance between the subsequent trajectory point and the last of the plurality of trajectory points; a projection module, configured to project the first distance onto the tangent direction of the trajectory to obtain a second distance; and an update module, configured to update the projection information based on the second distance.

In some embodiments, the second determination module is configured to: acquire a motion model of the target object, where the motion model is used to represent how the movement distance of the target object varies with time and includes at least one model parameter; determine each of the at least one model parameter based on the acquired projection information of at least one trajectory point and the acquired time information of each of the at least one trajectory point; and determine the motion information of the target object according to the determined model parameters and the motion model.

In some embodiments, the at least one trajectory point includes N trajectory points that have been acquired among the plurality of trajectory points, and the second determination module is configured to: determine, according to the determined model parameters and the motion model, the motion information of the target object at the M-th of the N trajectory points, where M and N are both positive integers and M is less than N.

In some embodiments, the motion information includes acceleration information, the target object moves on the surface of a reference object, and the surface of the target object is in contact with the surface of the reference object. The apparatus further includes: a third determination module, configured to determine, based on the acceleration information, the coefficient of friction between the surface of the target object and the surface of the reference object.

In some embodiments, the first determination module is configured to: determine, based on the video frames acquired by each camera, initial position information of the trajectory points within the first field of view of that camera; and fuse the initial position information of the trajectory points within the first fields of view of the respective cameras to obtain the position information of the plurality of trajectory points.

In some embodiments, the first fields of view of at least two target cameras among the plurality of cameras partially overlap, and the first determination module is configured to: determine initial position information of the trajectory points within a third field of view of each target camera, where the third field of view is the overlapping field of view of the target cameras; synchronize the initial position information of the trajectory points within the third field of view of each target camera to obtain synchronized position information of the trajectory points within the third field of view of each target camera; and fuse the synchronized position information of the trajectory points within the third field of view of each target camera to obtain the position information of the trajectory points within the third field of view of the target cameras.

In a third aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any of the embodiments.

In a fourth aspect, an embodiment of the present disclosure provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the method described in any of the embodiments.

In the embodiments of the present disclosure, the movement distance of the target object is represented by the projection information, on the target object's trajectory, of the plurality of trajectory points passed by the target object during its movement. Because the projection information characterizes the actual distance traveled by the target object fairly accurately, the motion information determined from the projection information has higher accuracy.

It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.

Brief Description of the Drawings

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.

FIG. 1 is a schematic diagram of a manner of determining motion information in the related art.

FIG. 2 is a flowchart of a method for determining motion information according to an embodiment of the present disclosure.

FIG. 3 is a schematic diagram of the fields of view of cameras according to an embodiment of the present disclosure.

FIG. 4 is a schematic diagram of the synchronization process of position information.

FIG. 5 is a schematic diagram of calculating the projected distance.

FIG. 6 is a schematic diagram of a double-ended queue.

FIG. 7 is a schematic diagram of a sudden change in the movement speed of the target object.

FIG. 8 is a block diagram of an apparatus for determining motion information according to an embodiment of the present disclosure.

FIG. 9 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure.

Detailed Description

Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.

The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of a plurality of items.

It should be understood that although the terms first, second, third, and so on may be used in the present disclosure to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".

In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present disclosure, and to make the above objects, features, and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.

During the movement of a target object, it is often necessary to obtain the target object's motion information in order to evaluate its motion state. In the related art, the motion information of the target object is generally estimated roughly from the straight-line distance between different trajectory points and the total time the target object takes to pass through those trajectory points, which yields low accuracy. For example, referring to FIG. 1, suppose the gray area is a track, the black circle represents an object 101 (i.e., the target object) moving on the track, U1 and U2 are the start line and the finish line respectively, and the dotted line S represents the trajectory of the object 101 on the track. In the related art, the motion information of the object 101 is generally determined from the distance between the start line U1 and the finish line U2 and the time the object 101 takes to move from U1 to U2. However, as can be seen from the figure, the trajectory of the object 101 is not a straight line but a curve, so the distance between the start line U1 and the finish line U2 often differs from the actual distance moved by the object 101, and the motion information determined for the object 101 is therefore inaccurate.

Based on this, the present disclosure provides a method for determining motion information. Referring to FIG. 2, the method includes the following steps (a simplified end-to-end sketch follows the step list):

Step 201: acquire video frames captured by a plurality of cameras in different first fields of view.

Step 202: based on the acquired video frames, determine position information of a plurality of trajectory points passed by the target object while moving within a second field of view, where the second field of view covers the first fields of view respectively corresponding to the plurality of cameras.

Step 203: acquire projection information of the plurality of trajectory points on the trajectory of the target object, where the trajectory of the target object is determined based on the position information of the plurality of trajectory points and the projection information is used to represent the movement distance of the target object.

Step 204: determine motion information of the target object based on the projection information.
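
The following is a minimal end-to-end sketch of steps 202 to 204 in Python with NumPy. It is only an illustration under simplifying assumptions: the track points are assumed to have already been fused into a shared world coordinate system as described in step 202, the trajectory is modeled as a quadratic curve (one of the options discussed later), and all numerical values are placeholders rather than values from the disclosure.

    import numpy as np

    # Illustrative fused track points (x, y) and the times at which the target
    # object passed them (step 202 output; values are made up).
    points = np.array([[0.0, 0.0], [1.0, 0.3], [2.0, 0.5], [3.0, 0.6]])
    times = np.array([0.0, 0.5, 1.0, 1.5])

    # Step 203: fit a quadratic trajectory y = c*x^2 + d*x + e and accumulate the
    # projections of consecutive segments onto the tangent direction of the curve.
    c, d, e = np.polyfit(points[:, 0], points[:, 1], deg=2)
    total_distance = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        tangent = np.array([1.0, 2.0 * c * x1 + d])   # tangent of the fitted curve at x1
        tangent /= np.linalg.norm(tangent)
        total_distance += abs(np.dot([x1 - x0, y1 - y0], tangent))

    # Step 204: motion information, here the average speed over the observed points.
    speed = total_distance / (times[-1] - times[0])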

In step 201, the target object may be of any type, including but not limited to a person, an animal, a vehicle, a robot, or a piece of sports equipment (for example, a curling stone or an ice hockey puck). The target object may move autonomously or be moved by an external force.

Video frames of the target object may be captured by a plurality of cameras, and the field of view of each individual camera is referred to as the first field of view of that camera. Different cameras correspond to different first fields of view, and the first fields of view of the plurality of cameras may partially overlap. The plurality of cameras may include two or more cameras, and their first fields of view may partially overlap in various ways. Several possible cases are illustrated below with reference to FIG. 3, taking three cameras as an example. Each ellipse represents the first field of view of one camera, and ellipses of different colors represent the first fields of view of different cameras: the white ellipse corresponds to the first field of view S1 of the first camera, the light gray ellipse corresponds to the first field of view S2 of the second camera, and the dark gray ellipse corresponds to the first field of view S3 of the third camera.

Case 1: the first fields of view of any two cameras partially overlap. As shown in (3-1) of FIG. 3, any two of S1, S2, and S3 partially overlap.

Case 2: the first fields of view of some cameras partially overlap, the first fields of view of other cameras do not overlap, and the first field of view of at least one camera falls completely within the first field of view of another camera. As shown in (3-2) of FIG. 3, S1 and S3 partially overlap, S2 and S3 do not overlap, and S2 falls completely within S1.

Case 3: the first fields of view of adjacent cameras partially overlap, and the first fields of view of non-adjacent cameras do not overlap. As shown in (3-3) of FIG. 3, the camera with the first field of view S1 is adjacent to the camera with the first field of view S2 and not adjacent to the camera with the first field of view S3, and the camera with the first field of view S2 is adjacent to the camera with the first field of view S3. It can be seen that in (3-3), S1 and S2 partially overlap, S2 and S3 partially overlap, and S1 and S3 do not overlap.

Case 4: the first fields of view of some cameras partially overlap, and the first field of view of at least one camera covers the total field of view of the cameras whose first fields of view overlap. As shown in (3-4) of FIG. 3, S2 and S3 partially overlap, and S1 covers the total field of view of S2 and S3.

Case 5: the first fields of view of adjacent cameras partially overlap, and the first fields of view of non-adjacent cameras also partially overlap. As shown in (3-5) of FIG. 3, the camera with the first field of view S1 is adjacent to the camera with the first field of view S2 and not adjacent to the camera with the first field of view S3, and the camera with the first field of view S2 is adjacent to the camera with the first field of view S3. It can be seen that in (3-5), S1 and S2 partially overlap, S2 and S3 partially overlap, and S1 and S3 also partially overlap.

In addition to the cases listed above, the first fields of view of the plurality of cameras may at least partially overlap in other ways, which are not enumerated one by one here. The part where the first fields of view of two or more cameras overlap is referred to as the overlapping field of view. For example, in (3-1) of FIG. 3, the area where S1, S2, and S3 overlap one another (shown as the slashed area in the figure) is the overlapping field of view of cameras C1, C2, and C3; in (3-3) of FIG. 3, the area where S1 and S2 overlap (shown as the area with horizontal bars) is the overlapping field of view of cameras C1 and C2, and the area where S2 and S3 overlap (shown as the area with vertical bars) is the overlapping field of view of cameras C2 and C3.

In some embodiments, the method of the present disclosure may be applied to determine the motion information of a target object moving in a global scene, where a global scene means that the first field of view of any single camera covers only a sub-area of the scene while the union of the first fields of view of the plurality of cameras covers the whole scene. Although the first field of view of a single camera could be made to cover the global scene by adjusting its focal length, the target object would then occupy only a small number of pixels in the captured video frames, and motion information determined from such frames would have large errors. In addition, a target seen by a single camera is easily occluded. Therefore, in a global scene, the first field of view of each camera generally covers only one sub-area of the scene.

Each of the plurality of cameras may capture video frames within its own first field of view. A captured video frame may include one or more target objects, or may include no target object.

In step 202, the position information of the plurality of trajectory points passed by the target object while moving within the second field of view may be determined based on the video frames acquired in step 201. The second field of view covers the first fields of view respectively corresponding to the plurality of cameras; that is, the second field of view may be identical to the union of the first fields of view of the cameras, or the union of the first fields of view may be a subset of the second field of view.

The manner of determining the position information of a trajectory point is illustrated below. The position information may be the position of the target object in a preset coordinate system, which may be the world coordinate system or another user-defined coordinate system. Taking the world coordinate system as an example, the pixel position of the target object in a video frame can be determined, and based on the pixel position and the transformation from the camera coordinate system to the world coordinate system, the pixel position can be converted into a physical position in the world coordinate system, yielding the position information of one trajectory point. Processing multiple video frames captured by the same camera in this way yields the position information of multiple trajectory points of the target object within that camera's first field of view. By splicing together the trajectory points within the first fields of view of the respective cameras, the position information of the plurality of trajectory points passed by the target object while moving within the second field of view can be obtained.
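
As a hedged illustration of the pixel-to-world conversion described above, the sketch below assumes a planar scene so that the camera-to-world mapping can be expressed as a 3x3 homography H obtained from calibration; the matrix values, the pixel coordinates, and the name H_c1 are placeholders, not values from the disclosure.

    import numpy as np

    def pixel_to_world(pixel_uv, H):
        # Map a pixel position to a ground-plane position via the image-to-world
        # homography H (assumed to come from camera calibration).
        u, v = pixel_uv
        p = H @ np.array([u, v, 1.0])     # homogeneous image point
        return p[:2] / p[2]               # normalized (x, y) in the world coordinate system

    # Illustrative homography for camera C1 and one detected pixel position.
    H_c1 = np.array([[0.02, 0.00, -5.0],
                     [0.00, 0.02, -3.0],
                     [0.00, 0.00, 1.0]])
    track_point = pixel_to_world((640, 360), H_c1)   # one trajectory point seen by camera C1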

For example, suppose the target object moves from the first field of view of camera C1 into the first field of view of camera C2, the second field of view includes the union of the first fields of view of cameras C1 and C2, the target object successively passes two trajectory points with position information (x1, y1) and (x2, y2) within the first field of view of camera C1, and successively passes three trajectory points with position information (x3, y3), (x4, y4), and (x5, y5) within the first field of view of camera C2. Splicing the trajectory points successively passed within the first field of view of camera C1 with those successively passed within the first field of view of camera C2 yields the trajectory points successively passed by the target object within the second field of view, namely the five trajectory points with position information (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5).

The position information of a trajectory point determined from the video frames of a single camera may not be sufficiently accurate. Therefore, the initial position information of the trajectory points within the first field of view of each camera may be determined based on the video frames acquired by that camera, and the initial position information of the trajectory points within the first fields of view of the respective cameras may then be fused to obtain the position information of the plurality of trajectory points (referred to as fused position information). Here, the initial position information is the position information of a trajectory point determined from the video frames of a single camera, which, as described above, can be obtained by converting the pixel position of the target object in a video frame into the preset coordinate system.

In some embodiments, the first fields of view of at least two target cameras among the plurality of cameras partially overlap. For example, in the case shown in (3-3) of FIG. 3, the target cameras may include cameras C1 and C2, or cameras C2 and C3. When the target object is within the overlapping field of view of the target cameras (hereinafter referred to as the third field of view), a video frame containing the target object at a given trajectory point can often be captured by several target cameras. Therefore, the initial position information of the trajectory points within the third field of view of each target camera may be determined; the initial position information of the trajectory points within the third field of view of each target camera may be synchronized to obtain synchronized position information of the trajectory points within the third field of view of each target camera; and the synchronized position information may be fused to obtain the position information of the trajectory points within the third field of view of the target cameras.

In this embodiment, the initial position information of the trajectory points within the third field of view of the respective target cameras is generally not synchronized; that is, the initial position information from different target cameras actually corresponds to trajectory points passed by the target object at different moments, because different cameras capture video frames at different times. To reduce the errors introduced when fusing position information, this embodiment first synchronizes the initial position information to obtain the synchronized position information of the trajectory points within the third field of view of each target camera, i.e., the positions of the trajectory points at the same moment, and then fuses the synchronized position information.

For example, suppose the target cameras include cameras C1 and C2 and the target object is within the overlapping field of view (i.e., the third field of view) of cameras C1 and C2. The initial position information of the trajectory points within the first field of view of camera C1 (denoted as first position information) can be determined from the video frames captured by camera C1, and the initial position information of the trajectory points within the first field of view of camera C2 (denoted as second position information) can be determined from the video frames captured by camera C2. Because camera C1 and camera C2 capture video frames at different moments, it is difficult to obtain first and second position information for the same moment. Therefore, the first position information and the second position information can be synchronized to obtain the synchronized position information of the trajectory points within the third field of view of camera C1 (denoted as third position information) and the synchronized position information of the trajectory points within the third field of view of camera C2 (denoted as fourth position information), where the third and fourth position information are the positions of the target object's trajectory point at the same moment (referred to as the reference time). Fusing the third position information and the fourth position information yields the position information of the trajectory points within the third field of view of cameras C1 and C2.

The reference time may be the time at which one of the target cameras captures a video frame. For example, when the target object is at a certain trajectory point, multiple target cameras may capture video frames one after another. The video frames captured by the respective target cameras while the target object is at the same trajectory point are referred to as a batch of video frames. The reference time may be the time corresponding to the earliest timestamp among the timestamps of the same batch of video frames.

The differences between the timestamps of the video frames in one batch are generally smaller than a preset time interval. Therefore, a plurality of video frames captured by different target cameras whose timestamps differ by less than the preset time interval can be obtained and denoted as {Q1, Q2, ..., Qn}, where Qi is the video frame captured by the i-th target camera, i and n are positive integers, i ≤ n, and n is the total number of target cameras. The time corresponding to the earliest timestamp among the timestamps of {Q1, Q2, ..., Qn} may be taken as the reference time. After the reference time is determined, for each target camera, the video frame captured by that camera before the reference time (referred to as the preceding video frame) and the video frame captured after the reference time (referred to as the following video frame) can be obtained, and the position information determined from the preceding video frame and the position information determined from the following video frame can be interpolated to obtain the position information at the reference time (i.e., the synchronized position information of the trajectory point within that camera's third field of view). Performing this processing on the video frames captured by each target camera yields the synchronized position information of the trajectory points within the third field of view of every target camera.

Referring to FIG. 4, suppose there are four video frames captured by different target cameras, shown as the gray squares in the figure and numbered 1, 2, 3, and 4 (the numbers above the squares). The video frame numbered 3 is the earliest captured of the four, so it can be taken as the reference frame, and the time t corresponding to its timestamp can be taken as the reference time. For each of the four video frames other than the reference frame (i.e., the frames numbered 1, 2, and 4), that frame can be treated as the following video frame and the frame immediately before it as the preceding video frame. Denoting the position of the target object in the preceding video frame corresponding to the frame numbered k (k = 1, 2, or 4) as (xk_pre, yk_pre), and its position in the following video frame as (xk_post, yk_post), interpolating between (xk_pre, yk_pre) and (xk_post, yk_post) yields the position (xk, yk) of the target object within the third field of view of the corresponding camera at the reference time t. For the reference frame, the position of the target object in that frame is already its position within the third field of view of the corresponding camera at the reference time t, i.e., (x3, y3) in the figure.
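
A sketch of this synchronization step is given below. Linear interpolation over capture timestamps is an assumption (the disclosure only states that the two positions are interpolated), and all timestamps, positions, and names such as interpolate_at are illustrative.

    import numpy as np

    def interpolate_at(t_ref, t_pre, pos_pre, t_post, pos_post):
        # Linearly interpolate a camera's track-point position at the reference time
        # from the positions determined in its preceding and following video frames.
        if t_post == t_pre:
            return np.asarray(pos_pre, dtype=float)
        w = (t_ref - t_pre) / (t_post - t_pre)
        return (1.0 - w) * np.asarray(pos_pre, dtype=float) + w * np.asarray(pos_post, dtype=float)

    # One batch: each target camera's frame in the batch (timestamp, position) and,
    # for interpolation, the position determined from the frame just before it.
    batch = {"C1": (0.030, (1.20, 2.10)), "C2": (0.020, (1.10, 2.05)), "C3": (0.000, (1.00, 2.00))}
    previous = {"C1": (-0.010, (0.95, 1.95)), "C2": (-0.020, (0.90, 1.90))}

    t_ref = min(t for t, _ in batch.values())           # earliest timestamp -> reference time
    synced = {}
    for cam, (t_post, pos_post) in batch.items():
        if t_post == t_ref:                              # the reference frame needs no interpolation
            synced[cam] = np.asarray(pos_post, dtype=float)
        else:
            t_pre, pos_pre = previous[cam]
            synced[cam] = interpolate_at(t_ref, t_pre, pos_pre, t_post, pos_post)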

After the synchronized position information of the trajectory points within the third field of view of each target camera is obtained, the synchronized position information of the trajectory points at the same reference time can be fused, for example by weighted averaging, to obtain the position information of the trajectory point at that reference time. Continuing the previous example, a weighted average of (x1, y1), (x2, y2), (x3, y3), and (x4, y4) gives the position information of the trajectory point at the reference time t.
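
The fusion by weighted averaging might look like the following sketch; uniform weights are assumed here, although per-camera confidences could be used instead, and the sample coordinates are illustrative.

    import numpy as np

    def fuse_positions(positions, weights=None):
        # Weighted average of the synchronized per-camera positions of one track point.
        positions = np.asarray(positions, dtype=float)     # shape (n_cameras, 2)
        if weights is None:
            weights = np.ones(len(positions))
        weights = np.asarray(weights, dtype=float)
        return (weights[:, None] * positions).sum(axis=0) / weights.sum()

    fused_point = fuse_positions([(1.00, 2.00), (1.02, 1.98), (0.99, 2.01), (1.01, 2.00)])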

In step 203, the projection information of the plurality of trajectory points on the trajectory of the target object may be acquired. The trajectory may be obtained as follows: acquire a predicted trajectory including at least one prediction parameter; determine each of the at least one prediction parameter based on the position information of at least one trajectory point that has been acquired; and determine the trajectory of the target object according to each determined prediction parameter and the predicted trajectory.

In some embodiments, the predicted trajectory may be a straight line or a curve, and the curve may be of second order or higher. When the predicted trajectory is a straight line, the abscissa and ordinate of each point on the trajectory are linearly related, and the predicted trajectory can be written as:

y = ax + b

where (x, y) are the coordinates of a point on the predicted trajectory in the preset coordinate system, x is the abscissa, y is the ordinate, and a and b are prediction parameters, both constants, with a ≠ 0.

When the predicted trajectory is a quadratic curve, the ordinate of each point on the trajectory can be expressed as a quadratic function of the abscissa of that point, so the predicted trajectory can be written as:

y = cx² + dx + e

where (x, y) are the coordinates of a point on the predicted trajectory in the preset coordinate system, x is the abscissa, y is the ordinate, and c, d, and e are prediction parameters, all constants, with c ≠ 0.

In this embodiment, a prior assumption about the predicted trajectory can be made first, i.e., the predicted trajectory is assumed to be a straight line or a quadratic curve, and the position information of the acquired trajectory points is then used to determine the prediction parameters included in the predicted trajectory, thereby obtaining the trajectory of the target object. In other embodiments, the predicted trajectory may also be assumed to be a cubic or higher-order curve, but such a prior assumption is more sensitive to noise, and in most cases the trajectory of the target object is not strictly a straight line either. Therefore, the predicted trajectory of the target object can generally be assumed to be the above quadratic curve, which both reduces the influence of noise and matches most practical application scenarios.

After the prior assumption about the trajectory is determined, the prior assumption can be solved based on the position information of one or more trajectory points to obtain each prediction parameter. In the embodiments in which position information is fused, the position information used to solve the prior assumption is the fused position information. For example, when the prior assumption is the above quadratic curve, the quadratic curve can be solved based on the three most recently acquired trajectory points to obtain the values of the prediction parameters c, d, and e, thereby determining the curve. The determined quadratic curve is the trajectory of the target object.
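
Solving the quadratic prior from the three most recently acquired track points could be done as in the sketch below; the sample points and the name fit_quadratic are illustrative.

    import numpy as np

    def fit_quadratic(points):
        # Solve y = c*x**2 + d*x + e exactly from three (x, y) track points by
        # setting up the corresponding 3x3 linear system.
        xs, ys = np.asarray(points, dtype=float).T
        A = np.stack([xs**2, xs, np.ones_like(xs)], axis=1)
        return np.linalg.solve(A, ys)        # returns the parameters (c, d, e)

    c, d, e = fit_quadratic([(0.0, 0.0), (1.0, 0.8), (2.0, 1.2)])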

During the movement of the target, this quadratic curve may change. Therefore, only the several most recently acquired trajectory points may be used to determine the prediction parameters of the predicted trajectory, and the prediction parameters may be updated at intervals based on the most recently acquired trajectory points, so that a trajectory matching the target's current actual motion is continuously obtained.

After the trajectory of the target object is obtained, the projection information of the plurality of trajectory points on the trajectory can be acquired. In some embodiments, the distance between each group of adjacent trajectory points among the plurality of trajectory points can be determined; the distance between each group of adjacent trajectory points can be projected onto the tangent direction of the trajectory to obtain the projected distance of that group of adjacent trajectory points on the trajectory; and the projection information of the plurality of trajectory points on the trajectory of the target object can be determined according to the projected distances of the distances between the groups of adjacent trajectory points.

For example, the projected distance of each group of adjacent trajectory points on the trajectory may be used directly as the projection information of that group of adjacent trajectory points on the trajectory of the target object, or the projected distances of the groups of adjacent trajectory points on the trajectory may be summed to obtain the projection information of the plurality of trajectory points on the trajectory of the target object.

In the case where the projection information is obtained by summing the projected distances, referring to FIG. 5, suppose each dot represents a trajectory point of an object moving on a track (i.e., the target object) at a different position, denoted P1, P2, ..., P7; these seven trajectory points are the plurality of trajectory points passed by the target object while moving within the second field of view. The plurality of trajectory points includes a group of adjacent trajectory points P6 and P7; the solid line represents the trajectory of the object, which, as described above, can be obtained from the predicted trajectory and the prediction parameters; the dotted line represents the tangent direction of the trajectory; and in this scenario, the x-axis represents the direction of the track and the y-axis represents the direction perpendicular to the track. For the adjacent trajectory points P6 and P7, the distance between P6 and P7 can be projected onto the tangent direction of the trajectory to obtain the projected distance Δl, where the tangent direction of the trajectory can be obtained by differentiating the trajectory. The projected distances of the other groups of adjacent trajectory points (for example, P5 and P6) on the trajectory can be obtained in a similar way, which is not repeated here. After the projected distances of the groups of adjacent trajectory points on the trajectory are obtained, they can be summed to obtain the projection information of the plurality of trajectory points on the trajectory of the target object.
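
The projection of the distance between adjacent track points onto the trajectory tangent could be computed as sketched below, reusing the quadratic coefficients c and d from the fit above. Evaluating the tangent at the newer point of each pair is an assumption; the disclosure only states that the distance is projected onto the tangent direction of the trajectory.

    import numpy as np

    def projected_distance(p_prev, p_next, c, d):
        # Project the segment between two adjacent track points onto the tangent of
        # the fitted trajectory y = c*x**2 + d*x + e (slope dy/dx = 2*c*x + d).
        p_prev = np.asarray(p_prev, dtype=float)
        p_next = np.asarray(p_next, dtype=float)
        tangent = np.array([1.0, 2.0 * c * p_next[0] + d])
        tangent /= np.linalg.norm(tangent)
        return abs(np.dot(p_next - p_prev, tangent))      # the projected distance Δl for this pair

    def cumulative_projection(points, c, d):
        # Sum Δl over consecutive pairs to obtain the projection information
        # representing the total movement distance.
        return sum(projected_distance(a, b, c, d) for a, b in zip(points, points[1:]))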

By projecting the distances in the manner of this embodiment and continuously accumulating the projected distances corresponding to the groups of adjacent trajectory points, the projection information used to represent the movement distance of the target object can be determined relatively accurately, thereby improving the accuracy of the determined motion information of the object.

It should be noted that, in the above embodiments, the trajectory points used to determine the trajectory may include, in addition to the plurality of trajectory points described in step 202 (i.e., the plurality of trajectory points used to determine the projection information in step 203), other trajectory points. For example, in the embodiment shown in FIG. 5, the plurality of trajectory points described in step 202 include trajectory points P1 to P7, and one or more other trajectory points may lie between trajectory point P1 and trajectory point P2, between trajectory point P2 and trajectory point P3, and/or between other adjacent trajectory points in FIG. 5. In other words, the trajectory can be determined from a larger set of trajectory points, while the projection information is determined based on only some of them. For example, when the trajectory of the target object is close to a straight line, when the moving speed of the target object is low, or when the frame rate of the captured video frames is high, the error introduced by using only some of the collected trajectory points is relatively small, so it is not necessary to use every collected trajectory point to determine the projection information. This reduces the amount of computation and improves processing efficiency while maintaining high computational accuracy.

In step 204, the motion information of the target object may be determined based on the projection information. In some embodiments, the motion information may include, but is not limited to, the movement speed and/or acceleration of the target object.

In some embodiments, the time information of each of the plurality of trajectory points may be acquired, and the motion information of the target object may be determined based on the projection information and the time information of each trajectory point. The time information of a trajectory point can be used to represent the time taken by the target object to travel from a certain trajectory point (for example, the first trajectory point) to that trajectory point. The time interval during which the target object passes between two trajectory points can be determined based on the difference between the time information of the two trajectory points.

In the case where, in step 203, the projected distance of each group of adjacent trajectory points on the trajectory is used directly as the projection information of that group on the trajectory of the target object, the time interval during which the target object passes between adjacent trajectory points can be determined based on the time information of the adjacent trajectory points, and the movement speed of the target object can be determined based on the ratio of the projected distance, on the trajectory, of the distance between the adjacent trajectory points to that time interval, which can be written as:

v_n = Δl / Δt;

where v_n is the movement speed of the target object when it passes the n-th trajectory point (n is a positive integer), Δl is the projected distance, on the trajectory, of the distance between the n-th trajectory point and the (n−1)-th trajectory point, and Δt is the time interval between the time at which the target object passes the n-th trajectory point and the time at which it passes the (n−1)-th trajectory point.

In some embodiments, referring to FIG. 6, the projected distances of the groups of adjacent trajectory points on the trajectory and the time information of each trajectory point can be continuously added to a double-ended queue, and the latest projected distance and time information can be continuously read from the double-ended queue to obtain the movement speed of the target object,

where:

v_n = (L_n − L_{n−1}) / (t_n − t_{n−1});

L_n and L_{n−1} respectively denote the sum of the projected distances, on the trajectory, of the first n−1 groups of adjacent trajectory points and of the first n−2 groups of adjacent trajectory points, where the first n−1 groups of adjacent trajectory points consist of the pair formed by the 1st and 2nd trajectory points, ..., up to the pair formed by the (n−1)-th and n-th trajectory points; t_n and t_{n−1} respectively denote the time information of the n-th trajectory point and of the (n−1)-th trajectory point, and n is a positive integer.
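A minimal sketch of this double-ended-queue bookkeeping follows; the stored tuple layout and class name are illustrative assumptions rather than the patent's data structures:

```python
from collections import deque

class SpeedTracker:
    """Keep (cumulative projected distance, timestamp) pairs in a deque and
    read the two most recent entries to obtain the instantaneous speed."""

    def __init__(self, maxlen=256):
        self.samples = deque(maxlen=maxlen)  # entries: (L_n, t_n)

    def add(self, cumulative_projection, timestamp):
        self.samples.append((cumulative_projection, timestamp))

    def latest_speed(self):
        if len(self.samples) < 2:
            return None
        (l_prev, t_prev), (l_curr, t_curr) = self.samples[-2], self.samples[-1]
        return (l_curr - l_prev) / (t_curr - t_prev)  # v_n = (L_n - L_{n-1}) / (t_n - t_{n-1})
```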

However, the movement speed obtained in the above manner fluctuates considerably. This is because noise in the processing pipeline means that the position information of the target object determined from the video frames is often not accurate enough, so the determined projected distances are inaccurate.

To address this problem, the present disclosure may sum the projected distances of the groups of adjacent trajectory points on the trajectory to obtain the projection information of the plurality of trajectory points on the trajectory of the target object. On this basis, a motion model of the target object may first be obtained, the motion model representing how the movement distance of the target object varies with time and including at least one model parameter; each of the at least one model parameter is then determined based on the acquired projection information of at least one trajectory point and the acquired time information of each of the at least one trajectory point; and the motion information of the target object is determined according to the determined model parameters and the motion model.

The motion model may be a uniform-velocity motion model, a uniform-acceleration motion model, or another motion model. For example, in a uniform-velocity motion model, the movement distance of the target object varies linearly with time, which can be written as:

L = r·t;

where r is a model parameter, L denotes the movement distance of the target object, which can be approximated by the projection information of the plurality of trajectory points on the trajectory of the target object, and t denotes the total time taken by the target object to pass through the plurality of trajectory points, which can be determined based on the time information of the plurality of trajectory points.

In other embodiments, in the motion model of the target object, the movement distance is a quadratic function of time, which can be written as:

L = r_1·t² + r_2·t + r_3;

where r_1, r_2 and r_3 are model parameters, L denotes the movement distance of the target object, which can be approximated by the projection information of the plurality of trajectory points on the trajectory of the target object, and t denotes the total time taken by the target object to pass through the plurality of trajectory points, which can be determined based on the time information of the plurality of trajectory points.

In other embodiments, other motion models may also be used according to the actual situation, and they are not enumerated here.

After the motion model is determined, each of the at least one model parameter can be determined based on the acquired projection information of at least one trajectory point and the acquired time information of each of the at least one trajectory point, so as to determine the motion information of the target object.

In some embodiments, the at least one trajectory point includes N trajectory points already acquired among the plurality of trajectory points; for example, the N trajectory points may be the N most recently acquired trajectory points. After the model parameters are determined based on the N trajectory points, the motion information of the target object at the M-th of the N trajectory points may be determined according to the model parameters and the motion model, where M and N are positive integers and M is less than N. For example, in a specific numerical embodiment, N is 60 and M is 50; that is, the motion information of the 50th of the 60 most recently acquired trajectory points can be determined based on the projection information and time information corresponding to those 60 trajectory points. The reason the motion information obtained is not that of the most recently collected trajectory point (the 60th in this numerical embodiment) is that the motion information of the target object may change abruptly during its motion.
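The window selection can be sketched as follows; N, M and the plain unweighted quadratic fit are illustrative stand-ins, and the Gaussian-weighted fit described below would replace np.polyfit in practice:

```python
import numpy as np

def motion_info_at_mth(times, distances, N=60, M=50):
    """Fit L = r1*t^2 + r2*t + r3 on the N most recent (time, cumulative distance)
    samples and report speed/acceleration at the M-th point of that window (M < N)."""
    t = np.asarray(times[-N:], dtype=float)
    L = np.asarray(distances[-N:], dtype=float)
    r1, r2, r3 = np.polyfit(t, L, deg=2)
    t0 = t[M - 1]                  # time of the M-th trajectory point in the window
    speed = 2.0 * r1 * t0 + r2     # first derivative of the motion model
    acceleration = 2.0 * r1        # second derivative of the motion model
    return speed, acceleration
```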

For example, referring to FIG. 7, the target object hits an obstacle while moving in a certain direction and then continues to move in the opposite direction. The arrows indicate the moving direction of the target object, and v1 and v2 denote the movement speed before and after hitting the obstacle, respectively. When the motion information changes abruptly at a certain trajectory point A, determining the motion information of trajectory point A based only on the trajectory points collected before A would make the obtained motion information deviate considerably from the actual situation. Therefore, in the above embodiment, several trajectory points collected after trajectory point A are also used to determine the motion information of trajectory point A, so as to improve the accuracy of the motion information obtained when the motion information changes abruptly.

In some embodiments, after the above model parameters are obtained, Gaussian-weighted least squares may be used to solve the motion model containing those model parameters, yielding the motion information corresponding to the M-th target trajectory point. The equations to be solved are as follows:

min over r_1, r_2, r_3 of: Σ_i ω_i · (r_1·t_i² + r_2·t_i + r_3 − L_i)², where L_i denotes the projection information (cumulative projected distance) corresponding to the i-th trajectory point;

ω_i = exp(−(t_i − t_0)² / (2·σ_t²)) · exp(−(v_i − v_0)² / (2·σ_v²));

where ω_i is the weight corresponding to the i-th trajectory point, which includes a time weight and a speed weight, both of which follow a Gaussian distribution; t_i and v_i respectively denote the time information and movement speed corresponding to the i-th trajectory point; σ_t and σ_v respectively denote the variance corresponding to the time weight and the variance corresponding to the speed weight; and t_0 and v_0 respectively denote the time information and movement speed corresponding to the M-th trajectory point.

The tangential speed v of the target object at the M-th trajectory point can then be obtained by differentiating the motion model, which is expressed as:

v = 2·r_1·t_0 + r_2;

The tangential acceleration Acc of the target at the M-th trajectory point can be obtained by taking the second derivative of the motion model, which is expressed as:

Acc = 2·r_1.
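The following sketch shows the Gaussian-weighted quadratic fit together with the derivative-based speed and acceleration; the exact weight expression is an assumption reconstructed from the description of the time and speed weights above:

```python
import numpy as np

def gaussian_weighted_fit(times, distances, speeds, t0, v0, sigma_t, sigma_v):
    """Weighted quadratic fit L = r1*t^2 + r2*t + r3.

    Each sample's weight combines a Gaussian time weight centred on t0 and a
    Gaussian speed weight centred on v0 (assumed form, based on the text above).
    """
    t = np.asarray(times, dtype=float)
    L = np.asarray(distances, dtype=float)
    v = np.asarray(speeds, dtype=float)
    w = (np.exp(-(t - t0) ** 2 / (2 * sigma_t ** 2))
         * np.exp(-(v - v0) ** 2 / (2 * sigma_v ** 2)))
    # np.polyfit weights the residuals, so pass sqrt(w) to weight squared residuals by w
    r1, r2, r3 = np.polyfit(t, L, deg=2, w=np.sqrt(w))
    speed = 2.0 * r1 * t0 + r2   # v = 2*r1*t0 + r2
    acc = 2.0 * r1               # Acc = 2*r1
    return speed, acc
```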

Further, in the case where the target object moves on the surface of a reference object and the surface of the target object is in contact with the surface of the reference object, the friction coefficient μ between the surface of the target object and the surface of the reference object can also be determined based on the acceleration information; the friction coefficient μ can be determined based on the ratio between the acceleration information and the gravitational acceleration, which can be expressed as:

μ = Acc / g;

where g is the gravitational acceleration, whose value can be approximated as 9.8 m/s².
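For completeness, the friction estimate follows directly from the fitted tangential acceleration; taking the magnitude of Acc is an assumption here, since a decelerating object yields a negative fitted value:

```python
G = 9.8  # gravitational acceleration in m/s^2

def friction_coefficient(acc):
    """Estimate the sliding friction coefficient from the tangential acceleration."""
    return abs(acc) / G  # mu = Acc / g, using the magnitude of the deceleration
```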

In practical applications, since the trajectory points are collected one by one as the object moves, after the motion information of the target object is determined based on the projection information, a subsequent trajectory point located after the plurality of trajectory points can also be acquired; a first distance between the subsequent trajectory point and the last of the plurality of trajectory points is obtained; the first distance is projected onto the tangent direction of the trajectory to obtain a second distance; and the projection information is updated based on the second distance. After the updated projection information is obtained, the process can return to the step of determining the motion information of the target object based on the projection information, so as to continuously obtain the motion information of the target object during its movement.

Still referring to the embodiment shown in FIG. 5, since the trajectory points P1 to P7 are acquired in sequence, assuming that P1 is the starting trajectory point, once the two trajectory points P1 and P2 have been acquired, the projected distance Δl_1 of the distance between P1 and P2 on the trajectory can first be determined and taken as the projection information. After P3 is acquired, the projected distance Δl_2 of the distance between P2 and P3 on the trajectory can be determined and added to Δl_1 to obtain the updated projection information. Continuing in this manner, the projection information of the target over the whole movement process can be obtained, and the step of determining the motion information of the target object based on the projection information is repeated continuously.
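An incremental version of this bookkeeping might look like the following sketch; the running state kept here is a minimal assumption rather than the patent's data structure:

```python
class ProjectionAccumulator:
    """Accumulate the projected distance as trajectory points arrive one by one."""

    def __init__(self, project_fn):
        # project_fn: callable(prev_point, curr_point) -> projected distance,
        # e.g. projected_distance from the earlier sketch with the trajectory
        # parameters bound via functools.partial
        self.project_fn = project_fn
        self.last_point = None
        self.total = 0.0  # cumulative projection information

    def add_point(self, point):
        if self.last_point is not None:
            self.total += self.project_fn(self.last_point, point)
        self.last_point = point
        return self.total
```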

Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.

Referring to FIG. 8, an embodiment of the present disclosure further provides a data processing apparatus for determining motion information of a target moving within the field of view of a camera; the apparatus includes:

a first acquisition module 801, configured to acquire video frames captured by a plurality of cameras in different first fields of view;

a first determination module 802, configured to determine, based on the acquired video frames, position information of a plurality of trajectory points passed by the target object while moving within a second field of view, the second field of view covering the first fields of view respectively corresponding to the plurality of cameras;

a second acquisition module 803, configured to acquire projection information of the plurality of trajectory points on the trajectory of the target object, the trajectory of the target object being determined based on the position information of the plurality of trajectory points, and the projection information being used to represent the movement distance of the target object; and

a second determination module 804, configured to determine motion information of the target object based on the projection information.

In some embodiments, the functions of, or the modules included in, the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.

The embodiments of this specification further provide a computer device, which includes at least a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method described in any of the foregoing embodiments when executing the program.

FIG. 9 shows a more specific schematic diagram of the hardware structure of a computing device provided by an embodiment of this specification. The device may include a processor 901, a memory 902, an input/output interface 903, a communication interface 904 and a bus 905, where the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively connected to one another within the device through the bus 905.

The processor 901 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification. The processor 901 may further include a graphics card, which may be an Nvidia titan X graphics card, a 1080Ti graphics card, or the like.

The memory 902 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 902 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented through software or firmware, the relevant program code is stored in the memory 902 and invoked and executed by the processor 901.

The input/output interface 903 is used to connect an input/output module to enable information input and output. The input/output module may be configured in the device as a component (not shown in the figure) or may be externally connected to the device to provide corresponding functions. The input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and the output device may include a display, a speaker, a vibrator, an indicator light, etc.

The communication interface 904 is used to connect a communication module (not shown in the figure) to enable communication between this device and other devices. The communication module may communicate in a wired manner (for example, USB or a network cable) or in a wireless manner (for example, a mobile network, WiFi or Bluetooth).

The bus 905 includes a path for transferring information between the components of the device (for example, the processor 901, the memory 902, the input/output interface 903 and the communication interface 904).

It should be noted that although the above device only shows the processor 901, the memory 902, the input/output interface 903, the communication interface 904 and the bus 905, in a specific implementation the device may also include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figure.

An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, and the program, when executed by a processor, implements the method described in any of the foregoing embodiments.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.

From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the parts contributing to the prior art, can be embodied in the form of a software product; the computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments, or in certain parts of the embodiments, of this specification.

The systems, apparatuses, modules or units described in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are basically similar to the method embodiments, they are described relatively simply, and reference may be made to the relevant descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative, and the modules described as separate components may or may not be physically separate; when implementing the solutions of the embodiments of this specification, the functions of the modules may be implemented in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.

The above are merely specific implementations of the embodiments of this specification. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the embodiments of this specification, and these improvements and refinements should also be regarded as falling within the protection scope of the embodiments of this specification.

Claims (13)

1. A method for determining motion information, the method comprising:
acquiring video frames acquired by a plurality of cameras in different first visual field ranges respectively;
determining position information of a plurality of track points which are passed by the target object in the process of moving in a second visual field range based on the acquired video frames; the second visual field range covers the first visual field ranges respectively corresponding to the plurality of cameras;
acquiring projection information of the plurality of track points on the track of the target object, wherein the track of the target object is determined based on the position information of the plurality of track points; the projection information is used for representing the movement distance of the target object;
determining motion information of the target object based on the projection information.
2. The method of claim 1, wherein determining the trajectory of the target object based on the position information of the plurality of trajectory points comprises:
obtaining a predicted trajectory, wherein the predicted trajectory comprises at least one prediction parameter;
determining each prediction parameter of the at least one prediction parameter based on the acquired position information of the at least one track point;
and determining the track of the target object according to each determined prediction parameter and the predicted track.
3. The method according to claim 1 or 2, wherein the obtaining of the projection information of the plurality of track points on the track of the target object comprises:
determining the distance between each group of adjacent track points in the plurality of track points;
projecting the distance between each group of adjacent track points to the tangential direction of the track to obtain the projection distance of each group of adjacent track points on the track;
and determining the projection information of the plurality of track points on the track of the target object according to the projection distances of the groups of adjacent track points on the track.
4. A method according to any of claims 1 to 3, characterized in that the method further comprises:
acquiring time information of each track point in the plurality of track points;
the determining motion information of the target object based on the projection information includes:
and determining the motion information of the target object based on the projection information and the time information of each track point.
5. The method of any of claims 1 to 4, wherein after determining motion information of the target object based on the projection information, the method further comprises:
acquiring a subsequent track point located after the plurality of track points;
acquiring a first distance between the subsequent track point and the last track point in the plurality of track points;
projecting the first distance to the tangential direction of the track to obtain a second distance;
updating the projection information based on the second distance.
6. The method of any of claims 1-5, wherein the determining motion information of the target object based on the projection information comprises:
obtaining a motion model of the target object, wherein the motion model is used for representing how the motion distance of the target object varies with time, and the motion model comprises at least one model parameter;
determining each model parameter in the at least one model parameter based on the acquired projection information of the at least one track point and the acquired time information of each track point in the at least one track point;
and determining the motion information of the target object according to the determined model parameters and the motion model.
7. The method of claim 6, wherein the at least one track point comprises N track points acquired from the plurality of track points; the determining the motion information of the target object according to the determined model parameters and the motion model includes:
determining the motion information of the Mth track point of the target object in the N track points according to the determined model parameters and the motion model;
M and N are positive integers, and M is less than N.
8. The method according to any one of claims 1-7, wherein the motion information comprises acceleration information, the target object moves on a surface of a reference object, and the surface of the target object is in contact with the surface of the reference object; the method further comprises:
determining a coefficient of friction between a surface of the target object and a surface of the reference object based on the acceleration information.
9. The method according to any one of claims 1-8, wherein determining position information of a plurality of track points that the target object passes through during the movement in the second field of view based on the acquired video frames comprises:
determining initial position information of track points in a first visual field range of each camera based on a video frame acquired by each camera;
and fusing initial position information of the track points in the first view range of each camera to obtain the position information of the plurality of track points.
10. The method of claim 9, wherein the first fields of view of at least two target cameras of the plurality of cameras partially overlap; and the fusing of the initial position information of the track points in the first field of view of each camera to obtain the position information of the plurality of track points comprises:
determining initial position information of track points in a third visual field range of each target camera, wherein the third visual field range is an overlapped visual field range of each target camera;
synchronizing initial position information of the track points in the third view range of each target camera to obtain synchronous position information of the track points in the third view range of each target camera;
and fusing the synchronous position information of the track points in the third view range of each target camera to obtain the position information of the track points in the third view range of each target camera.
11. An apparatus for determining motion information, the apparatus comprising:
the first acquisition module is used for acquiring video frames acquired by the plurality of cameras in different first visual field ranges;
the first determining module is used for determining the position information of a plurality of track points which are passed by the target object in the process of moving in the second visual field range based on the acquired video frame; the second visual field range covers the first visual field ranges respectively corresponding to the plurality of cameras;
the second acquisition module is used for acquiring projection information of the plurality of track points on the track of the target object, and the track of the target object is determined based on the position information of the plurality of track points; the projection information is used for representing the movement distance of the target object;
a second determination module to determine motion information of the target object based on the projection information.
12. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 10.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 10 when executing the program.