CN119850668B - Spherical object tracking method - Google Patents
- Publication number
- CN119850668B (application number CN202510329601.5A)
- Authority
- CN
- China
- Prior art keywords
- sphere
- target
- identified
- ball
- determining
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of this application provide a sphere target tracking method in the technical field of image processing. After a court image is collected, sphere targets and human body targets are identified from the detection results of the image. If no sphere target is identified in the court image, the field angle used to collect the court image is enlarged according to a preset field-angle expansion rule, and the sphere target is searched for in the field-angle-enlarged image within a first search area. If the sphere target is still not identified, a tracked player target is determined from the identified human body targets, and the sphere target is searched for based on that player target and a second search area. Once a sphere target is identified, its position is determined. This method effectively mitigates losing track of the ball and frame jitter, and improves sphere target tracking efficiency.
Description
Technical Field
The application relates to the technical field of image processing, in particular to a sphere target tracking method.
Background
With the continuous development of computer technology, and in particular the rise of deep learning, image processing and analysis capabilities have improved markedly, providing a technical basis for detecting the football in ball games.
In the related art, to optimize ball tracking, the panorama of the court image is divided into a plurality of candidate areas, the candidate area containing the target is determined, and an interpolated transition is then performed.
However, with this approach, when the ball is occluded by a player or flies far out of view, the ball is easily lost and the output frame tends to jitter.
Disclosure of Invention
The application provides a sphere target tracking method that effectively mitigates losing track of the ball and frame jitter, and improves sphere target tracking efficiency.
In a first aspect, the present application provides a method for tracking a spherical object, comprising:
After collecting a court image, identifying a sphere target and a human body target in the court image according to a detection result of the court image;
In response to the fact that the ball target is not recognized in the court image, expanding the field angle for collecting the court image according to a preset field angle expansion rule to obtain the court image with the enlarged field angle, and searching the ball target in the court image with the enlarged field angle according to a first search area, wherein the first search area is determined according to the position of the ball target recognized last time and a first search time;
If the ball target is still not recognized, determining a tracked player target from the recognized human body targets, and searching for the ball target according to the tracked player target and a second search area, wherein the second search area is determined according to the position of the tracked player target and a second search time;
In response to identifying a sphere target, determining a position of the sphere target according to a preset sphere target tracking algorithm.
The ball target tracking method, apparatus and image processing device provided by the application identify the ball target and the human body targets from the detection results of the court image; when no ball is found or the ball detections are very noisy, the search area is enlarged to look for the ball target, and when the ball target still cannot be found, tracking continues using a player's position. This joint person-and-ball tracking effectively handles ball loss (for example, when the ball is occluded for a long time, kicked out of the field, or cannot be identified because of noise), enabling efficient tracking of the ball target. By switching between ball tracking and person tracking, it also avoids the large jitter or frozen frames that occur when ball-only tracking continues while no ball is present (for example, when a cheering squad is on the field).
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of a possible application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a sphere target tracking method according to an embodiment of the present application;
FIG. 3 is a flowchart of another sphere target tracking method according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for tracking a spherical object according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the process of switching from person following to sphere target tracking in an embodiment of the application;
FIG. 6 is a flow chart of a method for tracking a spherical object according to an exemplary embodiment of the present application;
Fig. 7 is a schematic structural diagram of a spherical object tracking device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
In one possible application scenario, the sphere target tracking scheme provided by this embodiment can be applied to football detection and tracking. Optionally, a football event may use a wide-angle device to capture panoramic video of the field. Because the viewing angle is large, the captured images are strongly distorted, and the targets (ball or players) in the video are very small relative to the whole field. To make the video easier to watch, a region of interest can be cropped out for display, improving the viewing experience; the region of interest can be located manually or by artificial intelligence. After the region of interest is located, the cropped video can be corrected for the distortion introduced by the wide-angle device and the camera mounting posture before being displayed. The overall processing flow, shown in fig. 1, comprises four modules: wide-angle image acquisition, AI ball detection, smooth tracking, and image cropping. After wide-angle image acquisition, AI ball detection is performed, adaptive tracking is realized by the algorithm, and the cropped region is then corrected by the image correction module; the present application corresponds to the smooth tracking module 101. The AI ball detection module detects all balls (including noise balls) and people in the scene and assigns each of them an ID; the same ball keeps the same ID while it moves continuously within a small range. When the ball moves a large distance or is occluded by a player, it disappears, and when it reappears it is assigned a new ID.
A large number of noise balls can appear during AI ball detection and adaptive tracking. The noise balls mainly arise from the following causes: 1) the main ball is occluded by a player for a long time, or flies far out of the field; 2) the ball is very small, becomes even smaller after the image is downscaled, and is hard for the AI module to detect, so the target is lost; 3) there is no main ball in play, for example when a cheering squad is on the field; 4) the AI falsely detects white shoes, heads with little hair, or even bright patches of lawn as a ball, which is more likely under low illumination or when the camera is mounted far from the court; 5) there are stationary balls at the edge of the field, or non-participating players are playing with a ball that is captured by the wide-angle camera; 6) the same field is divided into several areas in which multiple games are played simultaneously, etc.
In view of the above problems, embodiments of the present application provide a sphere target tracking method, apparatus and image processing device. They identify the sphere target and the human body targets from the detection results of the court image, enlarge the search area to look for the sphere target when no ball is found or the ball detections are very noisy, and track using a player's position while the sphere target cannot be found. This joint person-and-ball tracking effectively handles the cases where the ball is occluded for a long time, kicked out of the field, or too noisy to identify, enabling efficient ball tracking; switching between ball and person tracking also avoids the large jitter or frozen frames caused by the lack of a clear target when the ball is out of view or undetected. When a ball is present, the main ball with the lowest noise is selected for smooth tracking, which further ensures the stability of sphere target tracking and of the expanded viewing angle, and improves main-ball tracking accuracy.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flow chart of a sphere target tracking method according to an embodiment of the present application. A possible execution subject of the method is an image processing device (this embodiment does not limit its specific form or type; for example, it may be a terminal device or a server). As shown in fig. 2, the method may include the following steps S201 to S204:
Step S201, after collecting a court image, identifying a sphere target and a human body target in the court image according to a detection result of the court image.
Alternatively, the court image may be a wide-angle image captured by a wide-angle device and transmitted to an image processing device for detection in real time, where the image processing device performs target detection on a person or a ball in the court image according to a preset detection algorithm (such as YOLO algorithm, convolutional neural network CNN, etc.).
In a ball game, the ball target is the ball contested by the players of the two teams, as distinguished from balls placed or moving outside the field.
And step S202, in response to the fact that the spherical object is not recognized in the court image, expanding the field angle for collecting the court image according to a preset field angle expansion rule so as to obtain the court image with the enlarged field angle, and searching the spherical object in the court image with the enlarged field angle according to a first search area, wherein the first search area is determined according to the position of the spherical object recognized last time and the first search time.
In this embodiment, a sphere target is considered not identified when, for example, no ball is detected in the current frame, or the detected balls are too noisy (as in the following embodiments, the sphere target scoring result is below a minimum scoring threshold, which may be determined by those skilled in the art from experimental or empirical data, or in some embodiments from actual or prior data).
In this embodiment, the field angle expansion rule gradually enlarges the field angle to increase the probability that the ball falls into the region to be cropped. The larger the field angle, the greater this probability; this embodiment also takes into account that the horizontal and vertical cropping areas are limited by the deployment and cannot be expanded indefinitely. The field angle expansion rule is designed as follows:
P_sx = P_osx − a·t1;  P_ex = P_oex + a·t1;
P_sy = P_osy − a·t1;  P_ey = P_oey + a·t1;
where P_sx and P_ex are the start and end positions of the expanded view in the x-axis direction, P_osx and P_oex are the start and end positions of the previously expanded view in the x-axis direction, P_sy, P_ey, P_osy and P_oey are the corresponding positions along y, a is the expansion speed of the field angle, and t1 is the elapsed time.
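As an illustration only, the following Python sketch implements this expansion rule under stated assumptions: the function name, the clamping to deployment limits (x_limit, y_limit) and the numeric values are not from the patent.

```python
# Hypothetical sketch of the field-angle expansion rule above; names and the
# clamping to deployment limits are assumptions, not the patent's reference code.
def expand_view(p_osx, p_oex, p_osy, p_oey, a, t1, x_limit, y_limit):
    """Grow the crop window symmetrically at rate a over elapsed time t1,
    clamped to the deployable limits of the panorama."""
    p_sx = max(0.0, p_osx - a * t1)      # new start position along x
    p_ex = min(x_limit, p_oex + a * t1)  # new end position along x
    p_sy = max(0.0, p_osy - a * t1)      # new start position along y
    p_ey = min(y_limit, p_oey + a * t1)  # new end position along y
    return p_sx, p_ex, p_sy, p_ey

# Example: expand a crop by 50 px/s after 2 s without the ball, within a 4K panorama.
print(expand_view(600, 1000, 400, 700, a=50, t1=2, x_limit=3840, y_limit=2160))
```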
It will be appreciated that in response to a condition or state being relied upon to indicate an operation being performed, one or more operations being performed may be in real-time or with a set delay when the condition or state being relied upon is satisfied, and that there is no restriction in the order in which the operations are performed without specifically being described.
Then, the ball target is searched for by searching the course image with the enlarged view angle using the first search area, which is determined based on the position where the ball target was last identified (i.e., the ball loss position) and the first search time, which can be adaptively determined by the user based on the actual application or the prior data.
In one example, the position where the sphere target was last identified includes a horizontal-axis coordinate and a vertical-axis coordinate. The first search area is determined as follows: a horizontal-axis search interval that gradually expands with the first search time is determined from the horizontal-axis coordinate and the ball speed calculated when the sphere target was last identified; a vertical-axis search interval that gradually expands with the first search time is determined from the vertical-axis coordinate and the same ball speed; and the first search area is determined from the horizontal-axis and vertical-axis search intervals. The ball speed is obtained from the pixels by which the last identified sphere target moved per unit time, and the first search time is the time that has elapsed, in real time, since the sphere target stopped being identified, up to a search time threshold.
In this example, to improve sphere target tracking efficiency while avoiding the jitter caused by enlarging the field angle too much, the embodiment exploits the fact that shortly after the ball is lost it is very likely still near the position where it was lost. The search range is therefore limited to a search area that is gradually enlarged over time, with the enlargement speed tied to the ball speed, which further improves tracking efficiency.
Next, the horizontal-axis and vertical-axis search intervals are described in more detail. The horizontal-axis search interval may be determined as follows: a first horizontal-axis boundary is obtained by subtracting from the horizontal-axis coordinate the product of the speed calculated when the sphere target was last identified, the first search time and a preset expansion coefficient; a second horizontal-axis boundary is obtained by adding that product to the horizontal-axis coordinate; and the horizontal-axis search interval is formed by the first and second horizontal-axis boundaries.
In this embodiment, the speed calculated when the sphere target was last identified and the first search time give the distance the sphere may have moved within the first search time, and multiplying by the expansion coefficient adjusts and corrects this distance (for example, the search area should be somewhat larger than the computed distance to make the search easier). Subtracting this product from the current horizontal-axis position gives the first horizontal-axis boundary, the farthest position the sphere target may have moved to the left; similarly, the second horizontal-axis boundary is the farthest position it may have moved to the right. Combining the two boundaries yields the horizontal-axis search interval.
The vertical-axis search interval is determined analogously: a first vertical-axis boundary is obtained by subtracting the same product from the vertical-axis coordinate, a second vertical-axis boundary by adding it, and the vertical-axis search interval is formed by these two boundaries.
It will be appreciated that the vertical axis search area is similar to the above-mentioned calculation principle of the horizontal axis search area, and the description thereof will not be repeated here.
Through the calculation of the first transverse axis boundary and the second transverse axis boundary, the size and the position of the search area can be dynamically adjusted by combining the historical motion information and the expansion coefficient so as to adapt to the historical motion trend and possible future change of the sphere target, and the success rate of repositioning the sphere target in a complex scene is improved, especially under the condition that the sphere target is possibly accelerated or the path is changed. In addition, the flexibility of the expansion coefficients allows optimization according to specific application requirements, thereby further improving sphere target tracking efficiency or accuracy. In an alternative example, the ball speed calculation method is denoted as V for the last identified pixel of the ball target that changed per unit time (i.e., the pixel that changed per unit time with the history ball ID unchanged). The time is from the beginning of the ball loss (i.e. the starting time of not identifying the ball target, which can be determined according to the time interval between the last time of identifying the ball, if the time interval exceeds the duration of the last time of failing to track the ball set by the system, the ball loss time is determined), and until each main ball searching frame, namely the first searching time is marked as t2, the specific searching area can be calculated as follows:
x − 2·v_x·t2 (first horizontal-axis boundary) < D_x < x + 2·v_x·t2 (second horizontal-axis boundary);
y − 2·v_y·t2 (first vertical-axis boundary) < D_y < y + 2·v_y·t2 (second vertical-axis boundary);
where D_x is the search interval in the horizontal x direction based at the last ball-loss position, D_y is the search interval in the vertical y direction based at the last ball-loss position, v_x is the historical ball speed in the x direction, and v_y is the historical ball speed in the y direction. The factor 2 expresses an expansion interval of twice the speed (an optional expansion coefficient; other coefficients may be used in some embodiments and adapted to the actual application), which effectively reduces the probability of losing the ball.
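A minimal sketch of this search-interval computation is given below; the expansion coefficient of 2 and the per-axis speeds follow the formulas above, while the helper names and example values are assumptions.

```python
# Hypothetical sketch of the gradually expanding search area; only the bound
# p ± k*v*t2 comes from the text, the rest is illustrative.
def search_area(x, y, v_x, v_y, t2, k=2.0):
    """Return (x_min, x_max, y_min, y_max) around the last ball-loss position
    (x, y), expanding with the elapsed search time t2."""
    dx = k * v_x * t2  # farthest plausible travel along x
    dy = k * v_y * t2  # farthest plausible travel along y
    return x - dx, x + dx, y - dy, y + dy

def in_search_area(px, py, area):
    x_min, x_max, y_min, y_max = area
    return x_min < px < x_max and y_min < py < y_max

area = search_area(x=850, y=420, v_x=30, v_y=12, t2=1.5)
print(area, in_search_area(880.0, 430.0, area))
```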
With this way of computing the search area, target detection is concentrated in a localized region, reducing dependence on the whole field of view. This effectively prevents the jitter caused by rapid switching of the expanded viewing angle, and precise positioning of the search area based on the last identified sphere target also prevents the expanded viewing angle from frequently switching between match areas when several regional games are played on the same court.
As a further example, to further improve ball target tracking efficiency, cached data can be used to help relocate the football after it is lost, since the image processing device typically caches future data during tracking, i.e., predicted ball positions for future frames. A cached-search-frame time is obtained from the ball position predicted for the future frame corresponding to the field-angle-enlarged court image, and this cached-search-frame time is determined as the search time threshold.
Continuing the search-area example above, the search time threshold t2 may be determined from the initial search time when the search range is not extended to cached balls; this initial threshold is the time from the start of ball loss to each main-ball search frame. In a further example, the search range may be extended to cached balls in the future data buffered by the system, and when it is extended to a cached frame, t2 may be the time from ball loss to that cached search frame.
It will be appreciated that, in practical applications, the future frame, i.e. the detected frame following the current frame, is typically buffered according to historical data to predict the position in the future frame of the sphere, and the description thereof will not be repeated here.
Enlarging the search range to include cached frames, using the predicted ball positions in the cache to assist relocation, and determining the search time threshold from the cached-search-frame time can further improve the search success rate.
It should be noted that in this embodiment the terms first search area and second search area are merely used to distinguish similar objects and carry no special meaning; they may cover the same range or different ranges, and the same applies to the first and second search times.
Continuing with fig. 2, step S203: if the ball target is still not recognized, a tracked player target is determined from the recognized human body targets, and the ball target is searched for according to the tracked player target and a second search area, where the second search area is determined from the position of the tracked player target and a second search time.
If the sphere target is still not found after the field angle and the search area have been enlarged, this indicates that no sphere target is available (for example, a cheering squad is on the field); continuing to search for the ball in the above manner would likely cause the picture to jitter heavily or freeze. This embodiment therefore switches from ball target tracking to person following, which improves tracking efficiency while avoiding heavy jitter or frozen tracking. Optionally, considering that in a dynamic scene the target may be temporarily undetected because of brief occlusion, fast motion or other reasons, a waiting time is introduced so that the system does not switch to the following mode immediately after a short detection failure, reducing unnecessary mode switches and improving tracking efficiency. Specifically, when the ball target (hereinafter the main ball) is not found in search mode, a waiting time is counted (its length can be set empirically); when the waiting time exceeds a time threshold (for example 30 s), tracking switches from the ball target to person following, i.e., a tracked player target is determined from the recognized human body targets and the ball target is searched for based on that tracked player target.
Illustratively, the ball target is searched for according to the tracked player target and the second search area, and the ball target may be searched for in the second search area with the position of the tracked player target as a base point. It should be noted that, in this embodiment, the determining manner of the second search area may refer to the determining manner of the first search area, and the position and speed of the last identified sphere are replaced with the position and speed of the target of the tracking player, so that the second search area with the tracking player as the target may be calculated, and the second search time is the same, and the description thereof will not be repeated.
And step S204, in response to the identification of the sphere target, determining the position of the sphere target according to a preset sphere target tracking algorithm.
When the spherical target is identified, that is, the main sphere is identified (for example, the spherical target meets the noise condition, and the scoring result of the spherical target reaches the lowest scoring threshold), a smooth spherical target tracking algorithm or other spherical target tracking algorithms (which can be adjusted or determined by a person skilled in the art in combination with practical application) can be utilized to determine the position of the spherical target, so as to realize the tracking of the spherical target.
Illustratively, the step S204 determines the position of the sphere target according to a preset sphere target tracking algorithm, and may be implemented by determining a smooth position of the sphere target with respect to a current frame according to a position where the sphere target appears predicted by the current position of the sphere target corresponding to a future frame, determining a moving step of the sphere target with respect to the current frame according to the smoothed position and the number of future frames, and obtaining the position of the sphere target according to the smooth position and the moving step.
In this example, optionally, the determination of the smoothed position of the spherical object with respect to the current frame based on the position of the spherical object at which the spherical object is predicted to appear in the future frame may be performed by calculating a weighted average of the positions of the spherical object at which the spherical object is predicted to appear in each of the future frames based on a weighted average algorithm, and determining the smoothed position of the spherical object with respect to the current frame.
The weighted average algorithm is a method of calculating an average value by assigning a weight to a data point that reflects the importance of the data point in the overall average calculation. In the embodiment, the weighted average algorithm is utilized, and the multi-frame buffer memory is utilized to weight the positions of the main balls, so that the positions of the future main balls after the multi-frame buffer memory can be smoothed, and the smoothed positions are more accurate.
The smoothed position can be obtained by weighting the main-ball positions in the multi-frame cache to smooth the future main-ball position, for example as follows:
BP = Σ_{n=1}^{nmax} ( n · BP(n) ) / Σ_{n=1}^{nmax} n
This formula computes a weighted average position, i.e., the smoothed position BP, where BP(n) is the position predicted for the n-th future cached frame and the weight n reflects the importance of each cached frame in predicting the future position; optionally, the larger n is, the greater that frame's influence on the future position.
By weighted averaging the positions of the buffered frames, the future positions of the main ball can be smoothed out, reducing errors due to single frame fluctuations, which helps to reduce prediction errors due to single frame fluctuations or noise.
Alternatively, the step of moving the spherical object with respect to the current frame may be determined according to the smoothed position and the number of future frames, and the step of moving the spherical object with respect to the current frame may be determined according to a ratio between a product of the smoothed position and a preset speed control parameter and the number of future frames.
It will be appreciated that the movement of the current frame is stepped, i.e. the distance or amount of positional adjustment that needs to be moved in the current frame, in order to maintain accurate tracking of the sphere target.
For example, the movement step of the current frame may be computed as:
BP_step = ( c · BP ) / nmax
where BP_step is the movement step of the current frame, i.e., the step by which the sphere target should move in the current frame, nmax is the number of future frames, i.e., the total number of cached frames, and c is the speed-control parameter, which determines the size of the movement step; a larger c means a faster response.
The calculation mode ensures that the motion stepping of the current frame is proportional to the predicted position of the buffered frames, so that consistent stepping behavior is maintained under the condition of different numbers of buffered frames, and the technical effect of smooth sphere target tracking is achieved. Furthermore, in each frame, the position of the sphere can be updated according to the calculated movement steps, and the new position is obtained by adding the current smooth position to the movement steps, so that smooth tracking of the sphere is realized, and the picture jitter is reduced.
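The smoothing and stepping described above can be sketched as follows; it assumes that the weight of the n-th cached frame is n itself and that positions are expressed relative to the current frame so the step can be added directly to the current position. These interpretations, and all names, are assumptions made for illustration.

```python
# Hypothetical sketch of multi-frame weighted smoothing and the per-frame step.
def smoothed_position(buffered):
    """Weighted average BP of the cached future positions BP(1)..BP(nmax),
    with weight n for the n-th cached frame."""
    num = sum(n * p for n, p in enumerate(buffered, start=1))
    den = sum(range(1, len(buffered) + 1))
    return num / den

def move_step(bp, nmax, c):
    """Per-frame movement step BP_step = c * BP / nmax."""
    return c * bp / nmax

buffered = [4.0, 6.5, 9.0, 12.0]     # predicted x-offsets for four future frames
bp = smoothed_position(buffered)
step = move_step(bp, nmax=len(buffered), c=1.2)
current_x = 850.0
print(bp, step, current_x + step)    # position after applying one frame's step
```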
In an alternative implementation, the speed-control parameter may be determined from the average magnitude of the relative position changes between the sphere target's positions in the history frames corresponding to its current position, with the speed-control parameter proportional to that average.
In this embodiment, the magnitude of the relative position change between the positions of the ball target in the history frame, that is, the magnitude of the frame difference between the history frames, may be used to indicate the speed of movement of the ball target, for example, the greater the average value of the relative position change magnitudes, the faster the speed of movement of the ball, whereas the smaller the average value of the relative position change magnitudes, the smaller the speed of movement of the ball.
In an example, the relative position change amplitude between the positions of the spherical targets in the history frames can be obtained by calculating the position difference of the spherical targets in each pair of continuous history frames (i.e. two continuous history frames), for example, by obtaining the position coordinates of the spherical targets in each two frames, performing difference calculation by using the position coordinates of the spherical targets to obtain the position difference between each two frames, and then accumulating the position differences between each two frames and performing mean calculation to obtain the average value of the relative position change amplitude between the positions of the spherical targets in the history frames.
In another example, the magnitude of the relative position change between the sphere target's positions in the history frames may also be obtained by computing, for each pair of consecutive history frames, the pixel-value difference at the sphere target's position, e.g., the absolute difference D(x, y) = |I_t(x, y) − I_{t−1}(x, y)|, where D(x, y) is the pixel value at position (x, y) in the difference image between each pair of consecutive history frames, and I_t(x, y) and I_{t−1}(x, y) are the pixel values at position (x, y) in the current and previous frames, respectively. The frame differences of all history frames are accumulated (for example summed or averaged) to obtain an overall frame-difference magnitude. A larger frame difference indicates that the sphere target is moving faster, so the value of c can be dynamically increased to improve responsiveness; if the frame difference is small, c can be dynamically decreased to improve stability.
Alternatively, the speed control parameter c may be dynamically adjusted based on the magnitude of the frame difference by a linear formula, or the speed control parameter may be increased based on the same ratio of the magnitude of the frame difference, which is not particularly limited in the present embodiment.
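For instance, one possible adaptation of c from the history-frame differences could look like the sketch below; the linear mapping, its bounds and the gain are assumptions, since the text only requires that c grow with the frame difference.

```python
# Hypothetical sketch: derive the speed-control parameter c from the average
# position change between consecutive history frames.
def frame_difference(positions):
    """Average magnitude of change between consecutive history-frame positions."""
    diffs = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

def speed_control(avg_diff, c_min=0.5, c_max=2.0, gain=0.05):
    """c grows linearly with the average change magnitude, clamped to a range."""
    return min(c_max, max(c_min, c_min + gain * avg_diff))

history_x = [800, 806, 815, 828, 845]  # recent ball x positions
print(speed_control(frame_difference(history_x)))
```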
By dynamically adjusting the speed control parameters, the movement change of the spherical object can be adaptively responded in the process of tracking the smooth spherical object, the tracking accuracy is improved, the stepping is increased to improve the response speed when the spherical object moves faster, and the stability can be improved by reducing the stepping when the object moves slower. In other words, the performance of the target tracking algorithm can be remarkably improved by adjusting the speed control parameters by utilizing the frame difference information of the historical frames, so that the target tracking algorithm can keep good tracking effect under different motion conditions.
It should be noted that, besides the determination described above, the parameter may also be set manually based on empirical values.
Therefore, the buffer frame weighting is used in the ball target tracking process, the position of a future time point is predicted to smooth the current frame position, and then the current moving step is calculated according to the position of the future time point so as to determine the main ball position, so that the picture can be obviously prevented from shaking violently along with the ball.
Fig. 3 is a schematic flow chart of another sphere target tracking method according to an embodiment of the present application. Building on the above embodiment, this embodiment considers that the detection algorithm may detect a plurality of balls (for example, objects misrecognized as balls) or that the playing field may contain other balls (non-main balls, i.e., not the sphere target), and aims to improve sphere target tracking accuracy. In addition to steps S201 to S204, searching for the sphere target in the field-angle-enlarged court image according to the first search area in step S202 includes:
Step S2021, in response to the ball target not being identified in the course image, expanding the field angle for collecting the course image according to a preset field angle expansion rule, so as to obtain the course image after expanding the field angle.
It should be noted that, the process of expanding the angle of view is described in the above embodiments, and the description thereof will not be repeated here.
Step S2022: if a plurality of balls to be identified are found in the first search area, a sphere target score is calculated for each ball to be identified from at least one of its motion index, its ball confidence, the distance between its position and the position where the sphere target was last identified, and the time taken to find it, to obtain a sphere target scoring result, where the motion index is determined from the ball's absolute motion amount and its relative inter-frame motion amount.
It can be understood that, in the present embodiment, the plurality of spheres to be identified, that is, the plurality of spheres detected by the detection algorithm in the image processing device, but only one target sphere, that is, the main sphere, is usually used in the event, and noise spheres are denoised by combining data such as a movement index of the sphere to be identified, a sphere confidence level, a distance between a sphere position and a last sphere target position identified, and a time for searching the sphere to be identified, so as to improve accuracy of sphere identification and tracking.
In one embodiment, the motion index may be determined by determining an absolute motion amount of the sphere to be identified according to a position difference between a start position and an end position of the sphere to be identified, determining a relative motion amount of the sphere to be identified according to an average value of position differences between a current frame position and a previous frame position of the sphere to be identified for each frame, and obtaining the motion index according to a product between the absolute motion amount and the relative motion amount.
Alternatively, the motion index mindx may be determined according to the absolute motion amount and the relative inter-frame relative motion amount of the sphere to be identified, and the calculation method may be as follows:
mindx = |Ps - Pe| * Σ(|P(n)-P(n-1)|)
Wherein P (n) is the position of the current frame, P (n-1) is the position of the previous frame, ps is the start position of the ball, and Pe is the end position of the ball. Wherein, ps-Pe is absolute motion quantity, P (n) -P (n-1) is relative motion quantity between relative frames, and the absolute motion quantity tracked by the motion index sphere target is proportional to the relative motion quantity between the relative frames as can be seen from the above formula. It will be appreciated that the start position Ps refers to the position at which the ball was successfully detected the last time before the target was lost, and the end position Pe refers to the position after the ball was re-detected, in which case the end position can be determined by prediction or estimation until the ball is again detected.
In this embodiment, the absolute motion amount provides the overall displacement information, the relative inter-frame motion amount provides the detail motion information, and the motion index is calculated by combining the absolute motion amount information and the relative motion information of the sphere target, so that the overall evaluation of the motion behavior of the sphere target can be provided, the motion characteristic of the sphere target is quantized, and the accuracy of identifying the sphere target is improved.
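A small sketch of the motion index computation follows; the 1-D track and function name are assumptions, only the formula mindx = |Ps − Pe| * Σ|P(n) − P(n−1)| comes from the text.

```python
# Hypothetical sketch of the motion index for one candidate ball track.
def motion_index(track):
    """track lists the detected positions of a candidate ball, oldest first."""
    ps, pe = track[0], track[-1]                                   # start / end positions
    absolute = abs(pe - ps)                                        # overall displacement
    relative = sum(abs(b - a) for a, b in zip(track, track[1:]))   # frame-to-frame motion
    return absolute * relative

print(motion_index([100, 104, 110, 118, 130]))  # larger for balls that really move
```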
Illustratively, the step S2022 calculates the sphere target score of each sphere to be identified according to at least one of the movement index, the sphere confidence coefficient, the distance between the sphere position and the last sphere target position, and the time of searching for the sphere to be identified, which may be implemented by determining, for each sphere to be identified, first confidence information of the sphere to be identified according to the product between the movement index and the sphere confidence coefficient of the sphere to be identified, determining second confidence information of the sphere to be identified according to the ratio between the first confidence information and the distance between the sphere position of the sphere to be identified and the last sphere target position, and determining the sphere target score of the sphere to be identified according to the ratio between the second confidence information and the time of searching for the sphere to be identified, so as to obtain the sphere target score of each sphere to be identified.
In this example, there are a plurality of balls in the first search area (e.g., the current frame search area in which the search time threshold is determined by the current frame or the buffered frame search area in which the search time threshold is determined by the buffered frame as in the above-described embodiment), and selection of the main ball is performed to improve accuracy of main ball identification. The method for selecting the main ball uses a comprehensive scoring method, the ball with the highest comprehensive score is selected as the main ball, the score is related to the movement index, the confidence level of the ball and the position of the ball, and the calculation method can be as follows:
score = (conf * mindx) / diff / t2;
where score is the composite score, i.e., the ball target score; conf is the computed confidence of the ball (for example, a ball candidate in the spectator area has low confidence and one in the match area has high confidence, which can be determined from the position of the match area); mindx is the ball's motion index; diff is the straight-line distance between the ball and the ball-loss position (i.e., the position where the ball target was last identified); and t2 is the time elapsed since the ball was lost. Here conf·mindx is the first confidence information and (conf·mindx)/diff is the second confidence information. It can be seen from the formula that the higher the ball's confidence, the larger its motion index, the closer it is to the ball-loss position and the shorter the time since the ball was lost, the higher its score.
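Main-ball selection by this composite score can be sketched as below; the candidate dictionary fields and example numbers are assumptions, the scoring formula is the one given above.

```python
# Hypothetical sketch of selecting the main ball by score = (conf * mindx) / diff / t2.
def ball_score(conf, mindx, diff, t2, eps=1e-6):
    return (conf * mindx) / max(diff, eps) / max(t2, eps)  # guard against division by zero

def select_main_ball(candidates):
    """Return the candidate ball with the highest composite score."""
    return max(candidates, key=lambda c: ball_score(c["conf"], c["mindx"],
                                                    c["diff"], c["t2"]))

candidates = [
    {"id": 3, "conf": 0.9, "mindx": 240.0, "diff": 35.0, "t2": 0.8},   # close to the loss point
    {"id": 7, "conf": 0.6, "mindx": 400.0, "diff": 120.0, "t2": 0.8},  # far away, lower confidence
]
print(select_main_ball(candidates)["id"])
```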
And step S2023, selecting the sphere with the highest score from the spheres to be identified according to the scoring result of the sphere target, and determining the sphere target.
By selecting the main ball through sphere target scoring, the tracking process can discard noisy balls reported by the detection algorithm (for example, white shoes, heads with little hair, or even bright patches of lawn misdetected as balls, which happens more easily under low illumination or when the camera is mounted far from the court, as well as stationary balls on the field or balls played by non-participating players that are captured by the camera). This gives better noise resistance, selects a more reliable main ball, and further improves sphere target tracking accuracy.
It should be noted that the above sphere identification process can also be applied to identifying the sphere target in the court image before the field angle is enlarged and to identifying it during person following; the description is not repeated here.
Fig. 4 is a flow chart of another sphere target tracking method according to an embodiment of the present application. Building on the above embodiment, this embodiment determines the tracked player target and its position by single-frame clustering, smooths the tracked player position by multi-frame weighting, and searches for the ball, further improving the accuracy of main-ball identification and tracking. Specifically, the above step S203 includes the following steps S2031 to S2034.
Step S2031: if no sphere target is identified yet, determine a position weight for each human body target from the court layout information and the target's position, and select as human targets to be clustered those whose position weight is greater than a preset weight threshold.
The court layout information is information known in advance, and may include the size, boundaries, key areas (such as goals and penalty areas), audience, and the like of the court.
Step S2032, clustering human targets to be clustered according to the preset clustering quantity, and obtaining a clustering result, wherein the clustering quantity is determined according to the number of competition areas of the court.
Taking football match as an example, because players may be scattered, even two ball games of a single course may occur, in the selection of the initial position of the tracking player, the accurate target and position of the tracking player are determined by clustering the players according to the positions of the players. The predetermined number of clusters can be determined according to the number of the areas of the field, for example, the field usually has only two ball games at maximum in football match, moreover, too many centroids have little influence on the selection of the initial position, two centroids can be selected, namely, the number of the two clusters is determined, and the clustering of the human body targets is performed, so that the clustering results corresponding to the two centroids are obtained, wherein the clustering mode can adopt K-means clustering.
Step S2033: determining a tracked player target according to the clustering result and the position where the sphere target was last identified.
For each cluster, the distance from its centroid to the position where the sphere target was last identified is computed; the centroid closer to the main-ball position of the last frame is selected as the initial position, and the player corresponding to that centroid (or the player closest to it) is taken as the tracked player target. It will be appreciated that in cluster analysis the centroid is the center point of a cluster and can be computed as the average of all data points in the cluster (i.e., the individual human target positions).
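The single-frame clustering step can be sketched as follows; the plain two-centroid k-means below, its initialization, and the example coordinates are assumptions made for illustration.

```python
# Hypothetical sketch: cluster player positions into two groups and pick the
# centroid closest to the last identified ball position as the initial position.
def kmeans_2d(points, k=2, iters=20):
    centroids = points[:k]  # naive initialisation from the first k players
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2 + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def initial_player_position(players, last_ball):
    cs = kmeans_2d(players)
    return min(cs, key=lambda c: (c[0] - last_ball[0]) ** 2 + (c[1] - last_ball[1]) ** 2)

players = [(100, 200), (120, 210), (110, 190), (900, 500), (920, 520)]
print(initial_player_position(players, last_ball=(150, 220)))
```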
Step S2034, searching for a sphere target according to the tracking player target and the second search area.
After the tracked player target is determined, the ball target can be searched for in the corresponding second search area in person-following mode, using the tracked player as the base point. This single-frame clustering approach quickly locates where the players gather and accurately identifies the tracked player target, further improving the efficiency and accuracy of sphere tracking and identification.
To further improve the accuracy and smoothness of the main-ball search, this embodiment combines single-frame clustering with multi-frame weighting for player tracking and main-ball search, as shown in fig. 5. Specifically, step S2034 may include: determining a transition position of the tracked player according to the initial position of the tracked player and the position where the sphere target was last identified; obtaining a multi-frame clustered position of the tracked player for the current frame from the transition position and the positions where the tracked player is predicted to appear in future frames; and searching for the sphere target according to the multi-frame clustered position of the tracked player and the second search area.
As mentioned above, the initial position selects the center of mass of the player closer to the position of the last frame main ball as the initial position. However, since the position of the player is different from the position of the ball, the present embodiment uses the following formula to transition the position of the ball of the last frame (i.e., the position where the ball target was last identified) to the position of the player:
P = (BPlast *(maxT – t3) + PPinit * t3) / maxT;(t3 <= maxT)
where P is the transition position, BPlast is the main-ball position of the last frame, PPinit is the initial player position found by clustering (i.e., the initial position of the tracked player), t3 is the elapsed time, and maxT is the transition duration.
After the ball is lost, directly tracking a fast-moving target would cause the picture to shake, since the ball generally moves faster and less regularly than a player. By transitioning from the position where the main ball was last identified to the player's position, this embodiment exploits the player's relatively smooth movement to smooth the change of the picture and reduce jitter.
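The position transition can be sketched as follows; the 2-D handling and the example numbers are assumptions, the interpolation formula is the one above.

```python
# Hypothetical sketch of P = (BPlast*(maxT - t3) + PPinit*t3) / maxT, t3 <= maxT.
def transition_position(bp_last, pp_init, t3, max_t):
    t3 = min(t3, max_t)  # clamp to the transition window
    return tuple((b * (max_t - t3) + p * t3) / max_t
                 for b, p in zip(bp_last, pp_init))

# Halfway through a 2 s transition the crop centre lies midway between ball and player.
print(transition_position(bp_last=(850, 420), pp_init=(700, 380), t3=1.0, max_t=2.0))
```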
During person following, the search for the main ball continues. If the main ball is not detected, the players are clustered within each single frame and combined with multi-frame weighting in the person-following mode, and the cluster closer to the last tracking center is selected as the tracking target. The multi-frame person-following mode can be as follows:
The clustered player positions are cached over multiple frames and weighted to smooth the player position across the frames:
PP = Σ_{n=1}^{nmax} ( n · PP(n) ) / Σ_{n=1}^{nmax} n
where PP is the position for the current frame (i.e., the multi-frame clustered position of the tracked player for the current frame) and PP(n) is the tracked player position predicted for the n-th cached frame. Smooth following is then achieved by stepping the current frame forward:
PP_step = ( c · PP ) / nmax
where PP_step is the step by which the current frame needs to move, nmax is the total number of cached frames, and c is the speed-control parameter.
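Since the player-following branch reuses the same weighted smoothing, a brief sketch is given below; PP(n) is assumed to be the clustered player position of the n-th cached frame expressed relative to the current crop centre, and all names are illustrative.

```python
# Hypothetical sketch of the multi-frame person-following step PP_step = c * PP / nmax.
def player_step(buffered_pp, c):
    nmax = len(buffered_pp)
    pp = sum(n * p for n, p in enumerate(buffered_pp, start=1)) / sum(range(1, nmax + 1))
    return c * pp / nmax  # how far the crop should move this frame

print(player_step([3.0, 4.5, 6.0], c=1.0))
```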
During following, if the main ball is found, tracking can switch smoothly back to the main-ball position; the smooth transition is similar to the one used when switching from sphere target tracking to following, and is not repeated here.
By buffering the historical position data of the plurality of frames, a smooth motion trail can be calculated. Compared with single frame data, the method can better capture the overall trend of motion, and is not influenced by noise or abnormality possibly existing in a single frame, so that the captured motion of the sphere becomes more stable, and the image shake caused by rapid or irregular motion is reduced.
For facilitating understanding of the embodiment of the present application, as shown in fig. 6, the following flow is included:
Ball target tracking can be started at any time, for example once the match begins and court images are collected, or partway through the match;
judging whether the main ball is identified, namely whether a sphere target is detected, and if the main ball is identified, executing main ball tracking;
If the main ball is not identified, expanding the expanded view angle, searching the main ball in a first search area, selecting the main ball meeting the requirements (such as selecting the ball with the highest score in the ball target scoring result) from a plurality of identified balls in the process of searching the main ball, and tracking the main ball;
The first search area expands gradually over time. Whether the main ball has been identified is checked while a waiting counter runs: if the main ball is not identified and the search has not timed out (i.e., the search time threshold has not been reached), the first search area is further expanded and ball detection continues; if the main ball is not identified and the search has timed out, tracking switches from the ball target to person following;
in person-following mode, the main ball is searched for in a second search area that also expands over time, and a main ball meeting the requirements is selected. If the main ball is identified, tracking switches from person following back to sphere target tracking and main-ball tracking is performed; otherwise person following continues and the main ball keeps being searched for in the time-expanding second search area until it is found and tracked.
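A compact sketch of the mode switching in this flow is given below; the state names, the Boolean inputs and the 30 s timeout are assumptions used only to illustrate the transitions described above.

```python
# Hypothetical sketch of the tracking-mode transitions in fig. 6.
def next_mode(mode, ball_found, waited, timeout=30.0):
    """Return the next tracking mode given the current mode and detections."""
    if ball_found:
        return "track_ball"                      # main ball identified: (re)track it
    if mode == "track_ball":
        return "search_ball"                     # ball lost: expand the field angle and search area
    if mode == "search_ball" and waited > timeout:
        return "follow_player"                   # search timed out: follow the player cluster
    return mode                                  # otherwise keep searching / following

print(next_mode("search_ball", ball_found=False, waited=45.0))  # -> follow_player
```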
From the above technical solution, the joint person-and-ball tracking effectively prevents the expanded viewing angle from behaving erratically when the ball is occluded for a long time, kicked out of the field, too noisy to identify, or absent (for example, when a cheering squad is on the field), situations in which ball-only tracking would cause heavy jitter or frozen frames. After the ball is lost, the field angle and the search area are enlarged gradually, and the main ball is selected using composite indicators such as the motion index, confidence, positional relation and temporal relation, which effectively reduces the ball-loss rate and yields a more accurate main ball. Computing the search area in the described way effectively prevents rapid switching of the expanded viewing angle and avoids frequent switching between match areas when the same court hosts several separate games. Whether tracking the ball or following a player, weighting the cached frames to predict the position at a future time point, smoothing the current frame position with it, and computing the current movement step from that future position noticeably prevents the picture from shaking violently with the ball. In addition, the weighted clustering used during following effectively reduces the influence of spectators at the edge of the field, handles scattered players during the transition, and improves the accuracy of using the player centroid in place of the main-ball position when two games are played on the same court.
Fig. 7 is a schematic structural diagram of a sphere target tracking device according to an embodiment of the present application, as shown in fig. 7, the device includes a joint identification module 701, a first search module 702, a second search module 703, and a sphere target tracking module 704, wherein,
A joint recognition module 701, configured to identify a sphere target and a human body target in a court image according to a detection result of the court image after the court image is collected;
a first search module 702, configured to, in response to the sphere target not being identified in the court image, expand the field angle for collecting the court image according to a preset field angle expansion rule to obtain a court image with an expanded field angle, and search for the sphere target in the court image with the expanded field angle according to a first search area, wherein the first search area is determined according to the position where the sphere target was last identified and a first search time;
a second search module 703 configured to determine a tracked player target from the identified human targets if the ball target is not yet identified, and search for the ball target based on the tracked player target and a second search area, wherein the second search area is determined based on the tracked player target location and a second search time;
a sphere target tracking module 704 configured to determine a position of the sphere target according to a preset sphere target tracking algorithm in response to identifying the sphere target.
In one embodiment, the last identified location of the sphere target includes a horizontal axis coordinate location and a vertical axis coordinate location, and the first search module 702 includes:
A first section determining unit configured to determine a horizontal axis search section that gradually expands with a first search time, based on the horizontal axis coordinate position and a sphere speed calculated by identifying a sphere target last time;
a second section determining unit configured to determine a vertical axis search section that gradually expands with the first search time, based on the vertical axis coordinate position and a sphere speed calculated by identifying the sphere target last time;
a region determining unit configured to determine the first search region based on the horizontal axis search section and the vertical axis search section;
The first search time is the real-time change time from the starting time of the ball target which is not recognized to the search time threshold value.
In one embodiment, the first section determining unit is specifically configured to: determine a first horizontal axis boundary of the horizontal axis search section as the difference between the horizontal axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and a preset expansion coefficient; determine a second horizontal axis boundary of the horizontal axis search section as the sum of the horizontal axis coordinate position and that product; and obtain the horizontal axis search section from the first horizontal axis boundary and the second horizontal axis boundary.
In one embodiment, the second section determining unit is specifically configured to: determine a first vertical axis boundary of the vertical axis search section as the difference between the vertical axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and the preset expansion coefficient; determine a second vertical axis boundary of the vertical axis search section as the sum of the vertical axis coordinate position and that product; and obtain the vertical axis search section from the first vertical axis boundary and the second vertical axis boundary.
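Under the reading above (boundaries at the last known position plus or minus speed × search time × coefficient), a minimal sketch of the first search area might look like the following. The symmetric treatment of both axes, the default coefficient value, and all names are illustrative assumptions.

```python
def first_search_area(last_x: float, last_y: float,
                      ball_speed: float, search_time: float,
                      expansion_coeff: float = 1.5):
    """Return ((x_min, x_max), (y_min, y_max)) describing the first search area.

    last_x, last_y  -- coordinates where the sphere target was last identified
    ball_speed      -- speed computed when the sphere target was last identified
    search_time     -- first search time, clamped elsewhere to the search time threshold
    expansion_coeff -- preset expansion coefficient controlling how fast the area grows
    """
    radius = ball_speed * search_time * expansion_coeff
    x_section = (last_x - radius, last_x + radius)   # first and second horizontal axis boundaries
    y_section = (last_y - radius, last_y + radius)   # first and second vertical axis boundaries
    return x_section, y_section
```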
In one embodiment, the first search module 702 includes:
The sphere scoring unit is configured to, if a plurality of spheres to be identified are searched in the first search area, respectively calculate a sphere target score of each sphere to be identified according to at least one of a movement index, a sphere confidence coefficient, a distance between a sphere position and a last sphere target position identified and a time of searching the sphere to be identified, so as to obtain a sphere target scoring result;
a scoring selection unit, configured to select, according to the sphere target scoring result, the sphere with the highest score from the spheres to be identified and determine it as the sphere target;
The motion index is determined according to the absolute motion quantity of the sphere to be identified and the inter-frame relative motion quantity.
In one embodiment, the device further comprises an index determining module for determining the motion index. The index determining module comprises: a first motion amount determining unit, configured to determine the absolute motion quantity of the sphere to be identified according to the position difference between the starting position and the end position of the sphere to be identified; and a second motion amount determining unit, configured to determine the relative motion quantity of the sphere to be identified according to the average of the position differences between the current-frame position and the previous-frame position of the sphere to be identified over the frames, and to obtain the motion index as the product of the absolute motion quantity and the relative motion quantity.
In one embodiment, the sphere scoring unit is specifically configured to: for each sphere to be identified, determine first credibility information of the sphere to be identified as the product of its motion index and its sphere confidence; determine second credibility information as the ratio of the first credibility information to the distance between the position of the sphere to be identified and the position where the sphere target was last identified; and determine the sphere target score of the sphere to be identified as the ratio of the second credibility information to the time at which the sphere to be identified was found, thereby obtaining a sphere target score for each sphere to be identified.
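A hedged sketch of this candidate scoring is given below. The use of Euclidean distance, the small epsilon guard, and all helper names are assumptions for illustration; the patent does not specify them.

```python
import math

def motion_index(track):
    """track: per-frame (x, y) positions of one candidate ball."""
    if len(track) < 2:
        return 0.0
    absolute = math.dist(track[0], track[-1])                        # start-to-end displacement
    steps = [math.dist(track[i], track[i - 1]) for i in range(1, len(track))]
    relative = sum(steps) / len(steps)                               # mean frame-to-frame displacement
    return absolute * relative                                       # motion index = product of the two

def ball_score(track, confidence, last_ball_pos, search_time, eps=1e-6):
    first = motion_index(track) * confidence                         # first credibility information
    second = first / (math.dist(track[-1], last_ball_pos) + eps)     # divide by distance to the last known ball
    return second / (search_time + eps)                              # divide by the time at which it was found

# The candidate with the highest score is then selected as the main ball, e.g.:
# best = max(candidates, key=lambda c: ball_score(c.track, c.conf, last_pos, c.found_time))
```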
In one embodiment, the apparatus further comprises:
The buffer time obtaining unit is configured to obtain a cache retrieval frame time according to the predicted position of the sphere target in the court image of the corresponding future frame after the field angle is expanded, wherein the cache retrieval frame time is used for predicting the occurrence time of the sphere target;
and a time threshold determining unit, configured to determine the cache retrieval frame time as the search time threshold.
In one embodiment, the second search module 703 includes:
The weight acquisition unit is configured to determine a position weight of each human body target according to court layout information and the positions of the human body targets, and to acquire, as human targets to be clustered, those human targets whose position weight is greater than a preset weight threshold;
the single-frame clustering unit is used for clustering human targets to be clustered according to a preset clustering quantity, so as to obtain a clustering result, wherein the clustering quantity is determined according to the number of competition areas of the court;
and the player determining unit is used for determining the tracking player target according to the clustering result and the position of the last identified sphere target.
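A minimal sketch of this weighted clustering and player selection follows. KMeans from scikit-learn is used here only as a convenient stand-in for the unspecified clustering method, the weight threshold value is assumed, and returning the cluster center closest to the last known ball position is one plausible reading of "determining the tracking player target according to the clustering result and the position of the last identified sphere target."

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in clustering method; not specified by the source

def pick_tracked_player(player_positions, position_weights, last_ball_pos,
                        n_match_areas, weight_threshold=0.5):
    """player_positions: (N, 2) array; position_weights: (N,) layout-based weights."""
    pts = np.asarray(player_positions, dtype=float)
    w = np.asarray(position_weights, dtype=float)
    candidates = pts[w > weight_threshold]               # drop low-weight targets, e.g. spectators at the edge
    km = KMeans(n_clusters=n_match_areas, n_init=10).fit(candidates)
    centers = km.cluster_centers_
    # choose the cluster center closest to where the sphere target was last identified
    dists = np.linalg.norm(centers - np.asarray(last_ball_pos, dtype=float), axis=1)
    return centers[int(np.argmin(dists))]
```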
In one embodiment, the second search module 703 includes:
a transition unit, configured to determine a transition position of the tracked player according to the initial position of the tracked player and the position where the ball target was last identified;
a multi-frame clustering unit, configured to obtain multi-frame clustered positions of the tracked player relative to the current frame according to the transition position of the tracked player and the positions at which the tracked player is predicted to appear in the corresponding future frames;
and a search unit, configured to search for the sphere target according to the multi-frame clustered positions of the tracked player and the second search area.
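The following sketch shows one plausible reading of these units: the transition position as a linear interpolation from the last known ball position toward the tracked player, and the multi-frame position as a weighted average of the transition position with the predicted future player positions. The interpolation factor, the uniform weights and all names are assumptions.

```python
import numpy as np

def transition_position(last_ball_pos, player_pos, alpha):
    """Move from the last known ball position toward the tracked player; alpha in [0, 1]."""
    b = np.asarray(last_ball_pos, dtype=float)
    p = np.asarray(player_pos, dtype=float)
    return b + alpha * (p - b)

def multi_frame_position(transition_pos, future_player_preds, weights=None):
    """Weighted average of the transition position and the predicted future player positions."""
    pts = np.vstack([np.asarray(transition_pos, dtype=float),
                     np.asarray(future_player_preds, dtype=float)])
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    return np.average(pts, axis=0, weights=w)
```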
In one embodiment, the sphere target tracking module 704 includes:
a smoothing unit, configured to determine a smoothed position of the sphere target relative to the current frame according to the current position of the sphere target and the positions at which the sphere target is predicted to appear in the corresponding future frames;
a step determining unit arranged to determine a moving step of the sphere target with respect to the current frame based on the smoothed position and the number of future frames;
and a tracking unit configured to obtain a position of the sphere target based on the smoothed position and the moving step.
In one embodiment, the smoothing unit is specifically configured to calculate, based on a weighted average algorithm, a weighted average between the current position of the sphere target and the position predicted for the sphere target in each future frame, thereby determining the smoothed position of the sphere target relative to the current frame; and/or the step determining unit is specifically configured to determine the moving step of the sphere target relative to the current frame as the ratio of the product of the smoothed position and a preset speed control parameter to the number of future frames.
In one embodiment, the step determining unit is specifically configured to determine the speed control parameter according to the average relative position change amplitude between the positions of the sphere target in the history frames corresponding to the current position of the sphere target, wherein the speed control parameter is proportional to that average relative position change amplitude.
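A hedged sketch of the smoothing-based step computation is shown below. Here the "smoothed position" is interpreted as an offset relative to the current frame, which is one plausible reading; the uniform weights, the gain factor and the guard for short histories are illustrative assumptions.

```python
import numpy as np

def smoothed_offset(current_pos, future_preds, weights=None):
    """Weighted average of the current and predicted future positions, as an offset from the current frame."""
    pts = np.vstack([np.asarray(current_pos, dtype=float),
                     np.asarray(future_preds, dtype=float)])
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    return np.average(pts, axis=0, weights=w) - np.asarray(current_pos, dtype=float)

def speed_control_param(history_positions, gain=1.0):
    """Proportional to the mean frame-to-frame change amplitude over the history frames."""
    hist = np.asarray(history_positions, dtype=float)
    if len(hist) < 2:
        return gain
    amplitudes = np.linalg.norm(np.diff(hist, axis=0), axis=1)
    return gain * float(amplitudes.mean())

def next_position(current_pos, future_preds, history_positions):
    offset = smoothed_offset(current_pos, future_preds)
    step = offset * speed_control_param(history_positions) / len(future_preds)  # moving step per frame
    return np.asarray(current_pos, dtype=float) + step
```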
The sphere target tracking device provided in the above embodiment may be used to execute the sphere target tracking method in any of the above method embodiments, and its implementation principle and technical effects are similar, and will not be described herein.
Fig. 8 is an image processing apparatus according to an embodiment of the present application, as shown in fig. 8, the image processing apparatus includes a processor 802, and a memory 801 and a display 803 communicatively connected to the processor;
the memory 801 stores computer-executable instructions;
the processor 802 executes computer-executable instructions stored in the memory 801 to implement the sphere target tracking method as provided in any of the first aspects above. The display 803 is used to display an image after image processing.
The image processing device provided in the foregoing embodiments may be used to execute the sphere target tracking method in any of the foregoing method embodiments, and its implementation principle and technical effects are similar, and are not described herein again.
The embodiment of the application correspondingly provides a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and the computer executable instructions are used for realizing the sphere target tracking method provided in the first aspect when being executed by a processor.
The computer readable storage medium described above may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory, electrically erasable programmable read-only memory, magnetic memory, flash memory, magnetic disk or optical disk. A readable storage medium can be any available medium that can be accessed by a general purpose or special purpose computer.
In the alternative, a readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. In the alternative, the readable storage medium may be integral to the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). The processor and the readable storage medium may also reside as discrete components in a device.
The computer readable storage medium provided in the foregoing embodiments may be used to perform the sphere target tracking method in any of the foregoing method embodiments, and its implementation principle and technical effects are similar, and are not repeated here.
The embodiment of the application also provides a computer program product, which comprises a computer program, the computer program is stored in a computer readable storage medium, at least one processor can read the computer program from the computer readable storage medium, and the technical scheme provided by any one of the method embodiments can be realized when the at least one processor executes the computer program.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" can mean A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the preceding and following associated objects; in a formula, the character "/" indicates a "division" relationship. "At least one of the following" or similar expressions means any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b or c" can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may each be single or plural.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence number of each process does not mean the sequence of the execution sequence, and the execution sequence of each process should be determined by the function and the internal logic, and should not limit the implementation process of the embodiment of the present application in any way.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (13)
1. A sphere target tracking method, comprising:
After collecting a court image, identifying a sphere target and a human body target in the court image according to a detection result of the court image;
In response to the fact that the ball target is not recognized in the court image, expanding the field angle for collecting the court image according to a preset field angle expansion rule to obtain the court image with the expanded field angle, and searching the ball target in the court image with the expanded field angle according to a first search area, wherein the first search area is determined according to the position of the ball target recognized last time and a first search time, and the first search time is the real-time change time from the starting time when the ball target is not recognized to a search time threshold value;
If the sphere target is still not identified, determining, after a preset time length, a tracking player target from the identified human body targets, and searching for the sphere target according to the tracking player target and a second search area, wherein the second search area is determined according to the position of the tracking player target and a second search time;
In response to identifying a sphere target, determining a position of the sphere target according to a preset sphere target tracking algorithm.
2. The method of claim 1, wherein the last identified location of the sphere target comprises a horizontal axis coordinate location and a vertical axis coordinate location, and wherein the determining the first search area comprises:
determining a transverse axis retrieval interval which gradually expands along with the first search time according to the transverse axis coordinate position and the sphere speed calculated by identifying the sphere target last time;
determining a longitudinal axis retrieval interval which is gradually expanded along with the first search time according to the longitudinal axis coordinate position and the sphere speed calculated by the last time of identifying the sphere target;
Determining the first search area according to the horizontal axis search interval and the vertical axis search interval;
wherein the sphere velocity is derived from the pixel change per unit time of the sphere target when it was last identified.
3. The method of claim 2, wherein determining a lateral search interval that progressively expands with a first search time based on the lateral coordinate location and a ball velocity calculated last time a ball target was identified, comprises:
Determining a first transverse axis boundary of the transverse axis search interval as the difference between the transverse axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and a preset expansion coefficient;
determining a second transverse axis boundary of the transverse axis search interval as the sum of the transverse axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and the expansion coefficient;
and obtaining the cross-axis search interval according to the first cross-axis boundary and the second cross-axis boundary.
4. The method of claim 2, wherein determining a longitudinal axis search interval that progressively expands with a first search time based on the longitudinal axis coordinate position and a sphere velocity calculated last time the sphere target was identified, comprises:
determining a first vertical axis boundary of the vertical axis search interval as the difference between the vertical axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and a preset expansion coefficient;
determining a second vertical axis boundary of the vertical axis search interval as the sum of the vertical axis coordinate position and the product of the sphere speed calculated when the sphere target was last identified, the first search time and the expansion coefficient;
and obtaining the longitudinal axis retrieval interval according to the first longitudinal axis boundary and the second longitudinal axis boundary.
5. The method according to any one of claims 1 to 4, wherein the searching for the sphere target in the court image with the expanded field angle according to the first search area comprises:
If a plurality of spheres to be identified are searched in the first search area, calculating the sphere target score of each sphere to be identified according to at least one of the movement index, the sphere confidence coefficient, the distance between the sphere position and the last sphere target position identified and the time of searching the sphere to be identified, and obtaining a sphere target scoring result;
Selecting the sphere with the highest score from the spheres to be identified according to the scoring result of the sphere target, and determining the sphere target;
The motion index is determined according to the absolute motion quantity of the sphere to be identified and the inter-frame relative motion quantity.
6. The method of claim 5, wherein the means for determining the motion index comprises:
Determining the absolute motion quantity of the sphere to be identified according to the position difference between the initial position and the final position of the sphere to be identified;
determining the relative motion quantity of the sphere to be identified according to the average value of the position difference between the current frame position and the last frame position of each frame of the sphere to be identified;
and obtaining the motion index according to the product between the absolute motion quantity and the relative motion quantity.
7. The method of claim 5, wherein calculating the sphere target score for each sphere to be identified based on at least one of the movement index of each sphere to be identified, the sphere confidence, the distance between the sphere position and the last identified sphere target position, and the time the sphere to be identified was searched, comprises:
For each sphere to be identified, determining first credibility information of the sphere to be identified according to the product between the movement index of the sphere to be identified and the sphere credibility;
determining second credibility information of the sphere to be identified according to the ratio of the first credibility information to the distance between the sphere position of the sphere to be identified and the last sphere target position identified;
And determining the sphere target score of the sphere to be identified according to the ratio between the second credibility information and the time for searching the sphere to be identified, so as to obtain the sphere target score of each sphere to be identified.
8. The method of any one of claims 2-4, further comprising:
Obtaining a cache retrieval frame time according to the predicted position of the sphere target in the court image of the corresponding future frame after the field angle is expanded, wherein the cache retrieval frame time is used for predicting the occurrence time of the sphere target;
and determining the cache retrieval frame time as the search time threshold.
9. The method of claim 1, wherein said determining a tracking player target from the identified human body targets comprises:
determining the position weight of each human target according to the court layout information and the positions of the human targets, and acquiring the human targets to be clustered, wherein the position weight of the human targets is greater than a preset weight threshold;
clustering human targets to be clustered according to a predetermined clustering quantity to obtain a clustering result, wherein the clustering quantity is determined according to the number of competition areas of a court;
and determining the target of the tracking player according to the clustering result and the position of the last identified sphere target.
10. The method according to claim 1 or 9, wherein the searching for the sphere target according to the tracking player target and the second search area comprises:
Determining the transition position of the tracking player according to the initial position of the tracking player and the position of the ball target identified last time;
obtaining multi-frame clustering positions of the tracking player relative to the current frame according to the transition positions of the tracking player and the positions of the tracking player predicted to appear according to the future frames;
and searching a sphere target according to the multi-frame clustering position of the tracking player and the second searching area.
11. The method of claim 1, wherein determining the location of the sphere target according to a preset sphere target tracking algorithm comprises:
determining a smooth position of the sphere target relative to the current frame according to the current position of the sphere target and the positions at which the sphere target is predicted to appear in the corresponding future frames;
determining a moving step of the sphere target relative to the current frame according to the smooth position and the number of future frames;
and obtaining the position of the sphere target according to the smooth position and the moving step.
12. The method of claim 11, wherein the determining a smooth position of the sphere target relative to the current frame according to the current position of the sphere target and the positions at which the sphere target is predicted to appear in the future frames comprises:
Calculating a weighted average value between the current position of the sphere target and the position of the sphere target predicted to appear in each frame in the future frames based on a weighted average algorithm, and determining the smooth position of the sphere target relative to the current frame;
and/or, the determining a moving step of the sphere target relative to the current frame according to the smooth position and the number of future frames comprises:
And determining the moving step of the sphere target relative to the current frame according to the ratio of the product of the smooth position and the preset speed control parameter to the number of the future frames.
13. The method of claim 12, wherein the determining the speed control parameter comprises:
Determining a speed control parameter according to an average value of the relative position change amplitude between the positions of the sphere targets in the historical frame corresponding to the current position of the sphere targets;
Wherein the speed control parameter is proportional to an average value of the relative position change amplitude.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510329601.5A CN119850668B (en) | 2025-03-20 | 2025-03-20 | Spherical object tracking method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510329601.5A CN119850668B (en) | 2025-03-20 | 2025-03-20 | Spherical object tracking method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119850668A CN119850668A (en) | 2025-04-18 |
| CN119850668B true CN119850668B (en) | 2025-07-08 |
Family
ID=95371113
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510329601.5A Active CN119850668B (en) | 2025-03-20 | 2025-03-20 | Spherical object tracking method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119850668B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2023161440A (en) * | 2022-04-25 | 2023-11-07 | キヤノン株式会社 | Video processing device, control method for the same, and program |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004046647A (en) * | 2002-07-12 | 2004-02-12 | Univ Waseda | Method and device for tracking moving object based on dynamic image data |
| ITRM20060110A1 (en) * | 2006-03-03 | 2007-09-04 | Cnr Consiglio Naz Delle Ricerche | METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION OF EVENTS IN SPORTS ENVIRONMENT |
| KR101291765B1 (en) * | 2013-05-15 | 2013-08-01 | (주)엠비씨플러스미디어 | Ball trace providing system for realtime broadcasting |
| CN104881882A (en) * | 2015-04-17 | 2015-09-02 | 广西科技大学 | Moving target tracking and detection method |
| JP6641163B2 (en) * | 2015-12-02 | 2020-02-05 | 日本放送協会 | Object tracking device and its program |
| EP3694205B1 (en) * | 2017-10-05 | 2024-03-06 | Panasonic Intellectual Property Management Co., Ltd. | Mobile entity tracking device and method for tracking mobile entity |
| CN107767392A (en) * | 2017-10-20 | 2018-03-06 | 西南交通大学 | A Ball Track Tracking Method Adapting to Occlusion Scenes |
| CN111242977B (en) * | 2020-01-09 | 2023-04-25 | 影石创新科技股份有限公司 | Object tracking method for panoramic video, readable storage medium and computer equipment |
| CN115104137A (en) * | 2020-02-15 | 2022-09-23 | 利蒂夫株式会社 | Method of operating server for providing platform service based on sports video |
| US11710316B2 (en) * | 2020-08-13 | 2023-07-25 | Toca Football, Inc. | System and method for object tracking and metric generation |
| DE202022101862U1 (en) * | 2022-04-07 | 2022-05-17 | Aziz Makandar | System for identifying players and tracking multiple targets using an extended Gaussian mixture model |
| TWI822380B (en) * | 2022-10-06 | 2023-11-11 | 財團法人資訊工業策進會 | Ball tracking system and method |
| WO2024116179A1 (en) * | 2022-11-30 | 2024-06-06 | Track160 Ltd | System and method for tracking ball movement during a sport game |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2023161440A (en) * | 2022-04-25 | 2023-11-07 | キヤノン株式会社 | Video processing device, control method for the same, and program |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119850668A (en) | 2025-04-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11594029B2 (en) | Methods and systems for determining ball shot attempt location on ball court | |
| US20220027632A1 (en) | Methods and systems for multiplayer tagging using artificial intelligence | |
| WO2021120157A1 (en) | Light weight multi-branch and multi-scale person re-identification | |
| WO2021016901A1 (en) | Player trajectory generation via multiple camera player tracking | |
| US10909688B2 (en) | Moving body tracking device, moving body tracking method, and moving body tracking program | |
| US10803598B2 (en) | Ball detection and tracking device, system and method | |
| JP2015079502A (en) | Object tracking method, object tracking device, and tracking feature selection method | |
| CN112907618B (en) | Multi-target sphere motion trail tracking method and system based on rigid body collision characteristics | |
| WO2021016902A1 (en) | Game status detection and trajectory fusion | |
| KR20210146265A (en) | Method, device and non-transitory computer-readable recording medium for estimating information about golf swing | |
| JP7246005B2 (en) | Mobile tracking device and mobile tracking method | |
| JP4621910B2 (en) | Dominance level determination device and dominance level determination method | |
| KR20250068574A (en) | Method, system and non-transitory computer-readable recording medium for estimating information on golf swing pose | |
| KR101703316B1 (en) | Method and apparatus for measuring velocity based on image | |
| CN116797961A (en) | Picture acquisition method and device for moving sphere, computer equipment and storage medium | |
| JP7212998B2 (en) | Distance estimation device, distance estimation method and program | |
| US20220109795A1 (en) | Control apparatus and learning apparatus and control method | |
| CN119850668B (en) | Spherical object tracking method | |
| KR102563764B1 (en) | Method, system and non-transitory computer-readable recording medium for estimating information on golf swing | |
| CN115589532B (en) | Anti-shake processing method, device, electronic device and readable storage medium | |
| CN112990159B (en) | Video interesting segment intercepting method, electronic equipment and storage medium | |
| JP2023161440A (en) | Video processing device, control method for the same, and program | |
| KR102835721B1 (en) | Apparatus, method and recording medium for providing text label for game video analysis | |
| US20250168483A1 (en) | Method, device, computer device, and storage medium for controlling gimbal recorder | |
| CN120378750B (en) | Tracking shooting method, device, equipment and storage medium for ball sports |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||