Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," "target," and the like in the description and claims of the present application and in the above figures are used to distinguish between similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, system, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a method for detecting an aquatic target based on a radar chart according to an embodiment of the present application. The method is applicable to predicting the relative positional relationship between a target to be detected and a ship stationary on the water. The method may be performed by an aquatic target detection apparatus based on a radar chart, which may be implemented in hardware and/or software and may be configured in an electronic device having data processing capability. As shown in Fig. 1, the method includes:
S110, determining a radar chart according to echo signals obtained through detection by the radar on the water stationary ship, and determining track data of the target to be detected according to the radar chart.
The radar can be arranged at any position on the water stationary ship; the installation position is not limited by the embodiments of the application. The radar may be a millimeter-wave radar, such as a 94 GHz millimeter-wave radar. The radar chart is an image reflecting the reflected-echo intensity data at each position in the radar detection area. In the embodiments of the application, the radar chart actually reflects the position information of the target to be detected within a circular area centered on the water stationary ship with the radar detection distance as its radius. The target to be detected can be any of various targets, such as fishing boats, yachts, and personal watercraft. The track data may reflect the position, speed, and direction of travel of the target to be detected.
Specifically, if a target to be detected is present in the radar detection area, the reflected-echo intensity data at the position of the target differs from that at other positions, so the pixel value at the position corresponding to the target on the radar chart differs from the pixel values at other positions; accordingly, the position of the target to be detected can be determined from the radar chart. Further, at least two radar charts are acquired to determine a plurality of positions through which the target to be detected passes, and the navigation speed and navigation direction of the target are determined from these positions. For example, the track data may be represented as (id_n, x_n, y_n, vx_n, vy_n), where id_n identifies the nth target to be detected, x_n is the abscissa of the target, y_n is the ordinate of the target, vx_n is the speed of travel of the target in the x direction, and vy_n is the speed of travel in the y direction. Obviously, the higher the acquisition frequency of the radar chart, the more accurate the track data, and hence the heading of the target determined in the subsequent steps.
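As a minimal illustration (not part of the embodiment itself; the function name and sampling interval are hypothetical), the velocity components of such a track tuple can be estimated from two successive radar-chart positions:

```python
def track_from_positions(p_prev, p_curr, dt):
    """Estimate one target's track tuple (x_n, y_n, vx_n, vy_n) from two
    successive positions p_prev and p_curr (metres), sampled dt seconds
    apart; a hypothetical helper for illustration only."""
    x0, y0 = p_prev
    x1, y1 = p_curr
    # Finite-difference estimate of the speed of travel in x and y.
    vx = (x1 - x0) / dt
    vy = (y1 - y0) / dt
    return (x1, y1, vx, vy)
```

A higher radar-chart acquisition frequency shortens dt and, as noted above, improves the accuracy of the estimated track data.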
S120, if it is determined according to the track data of the target to be detected that the distance between the target and the water stationary ship is greater than the first preset distance and less than the second preset distance, determining whether the heading of the target faces the water stationary ship according to the track data.
The specific values of the first preset distance and the second preset distance can be determined according to actual conditions; the embodiments of the application are not limited in this respect. The heading is the direction of travel of the target to be detected and may be determined according to the track data of the target.
Illustratively, as shown in Fig. 2, the first preset distance may be r, which may be determined adaptively based on the length of the water stationary ship, and the second preset distance may be R. If the distance between the target to be detected and the water stationary ship is greater than the first preset distance and less than the second preset distance, a collision risk may exist, and it is necessary to determine whether the heading of the target faces the water stationary ship. For example, the distance L between the target to be detected and the water stationary ship can be obtained from the track data by the formula:

L = sqrt(x_n^2 + y_n^2)

where (x_n, y_n) are the coordinates of the track point of the target in the rectangular coordinate system with the water stationary ship as the origin.
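The distance test of step S120 can be sketched as follows (a hedged illustration; the function name is an assumption, and the ship is taken as the coordinate origin as in the text):

```python
import math

def in_risk_annulus(x_n, y_n, r, R):
    """Return True when the track point (x_n, y_n) lies strictly between
    the first preset distance r and the second preset distance R from the
    water stationary ship at the origin."""
    L = math.hypot(x_n, y_n)  # L = sqrt(x_n**2 + y_n**2)
    return r < L < R
```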
In the embodiment of the present application, optionally, determining whether the heading of the target to be measured faces the water stationary ship according to the track data includes steps A1-A3:
A1, determining the track point of the target to be detected according to the track data, and determining the reference vector pointing from the track point of the target to the water stationary ship.
A2, determining the components of the reference vector along a first direction and along a second direction, and determining the components of the heading of the target to be detected along the first direction and along the second direction, wherein the first direction and the second direction are perpendicular.
A3, if the component of the reference vector along the first direction is consistent in direction with the component of the heading of the target along the first direction, and the component of the reference vector along the second direction is consistent in direction with the component of the heading along the second direction, determining that the heading of the target to be detected faces the water stationary ship.
The track point may be the latest position point of the target to be detected. The direction of the reference vector is the direction in which the track point of the target points toward the water stationary ship. Illustratively, as shown in Fig. 3, the first direction may be the x-axis direction of a Cartesian coordinate system and the second direction may be the y-axis direction. If the track point of the target is in the first quadrant, the components of the reference vector along the first and second directions are both smaller than 0; if the components of the heading of the target along the first and second directions are likewise both smaller than 0, it is determined that the heading of the target faces the water stationary ship. The judgment for the remaining quadrants is analogous to that for the first quadrant and is not repeated in the embodiments of the application. It should be noted that if the track point lies on the x-axis or the y-axis, the heading of the target is determined to face the water stationary ship when the heading is consistent with the direction of the reference vector.
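Steps A1 to A3 can be sketched in Python as follows (a simplified illustration; the function name is an assumption, and sign agreement of each component pair implements the quadrant-by-quadrant judgment described above, with the on-axis case handled by requiring matching zero components):

```python
def heading_toward_vessel(x_n, y_n, vx_n, vy_n):
    """Return True when the heading (vx_n, vy_n) of the target at track
    point (x_n, y_n) faces the water stationary ship at the origin.
    The reference vector from the track point to the ship is (-x_n, -y_n);
    the heading faces the ship when each heading component agrees in sign
    (or in being zero) with the matching reference-vector component."""
    ref_x, ref_y = -x_n, -y_n

    def same_direction(a, b):
        # Components are "consistent" when both positive, both negative,
        # or both zero (the on-axis case noted in the text).
        return (a > 0 and b > 0) or (a < 0 and b < 0) or (a == 0 and b == 0)

    return same_direction(ref_x, vx_n) and same_direction(ref_y, vy_n)
```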
S130, if the heading of the target to be detected faces the water stationary ship, determining course line information of the target according to the track data of the target, and predicting the relative positional relationship between the target and the water stationary ship according to the course line information.
The course line information can reflect the travel path of the target to be detected over a future period of time and can be estimated from the track data of the target. The relative positional relationship may reflect whether the target will affect the safety of the water stationary ship over that period: for example, if the course line information indicates that the target may collide with the water stationary ship, the relative positional relationship is a collision relationship; if it indicates that the target will move away from the water stationary ship, the relative positional relationship is a collision-free relationship.
Specifically, since the heading of the target to be detected faces the water stationary ship, a safety accident such as a ship collision may occur. The course line information of the target therefore needs to be estimated from its track data so that the relative positional relationship between the target and the water stationary ship can be predicted and safety accidents avoided.
In the embodiment of the application, optionally, course line information of the target to be detected is determined according to the course line data of the target to be detected, and the relative position relation between the target to be detected and the water stationary ship is predicted according to the course line information of the target to be detected.
In this scheme, a safety area is delimited as a circle centered on the water stationary ship with the first preset distance as its radius. If the course line of the target to be detected passes through this circle, the target may collide with the water stationary ship, and it can be determined that the target is traveling toward the water stationary ship. Because the safety area is a full circle of radius equal to the first preset distance, a course line approaching the water stationary ship from any direction can be identified, which improves the safety of the water stationary ship.
According to the technical scheme, a radar chart is determined from the echo signals obtained through detection by the radar on the water stationary ship, and the track data of the target to be detected are determined from the radar chart. If it is determined from the track data that the distance between the target and the water stationary ship is greater than the first preset distance and less than the second preset distance, whether the heading of the target faces the water stationary ship is determined from the track data. If the heading faces the water stationary ship, the course line information of the target is determined from its track data, and the relative positional relationship between the target and the water stationary ship is predicted from the course line information. Because the target is detected by radar, the detection range is not limited by weather and the detection efficiency is higher than that of manual observation, so that accurate detection of targets on the water is realized and the safety of the operating ship is improved.
Example 2
Fig. 4 is a flowchart of a method for detecting a target on the water based on a radar chart according to a second embodiment of the present application; the method is optimized on the basis of the foregoing embodiment.
As shown in fig. 4, the method in the embodiment of the present application specifically includes the following steps:
S210, determining a radar chart according to echo signals obtained through detection by the radar on the water stationary ship, and determining track data of the target to be detected according to the radar chart.
S220, if it is determined according to the track data of the target to be detected that the distance between the target and the water stationary ship is greater than the first preset distance and less than the second preset distance, determining whether the heading of the target faces the water stationary ship according to the track data.
S230, determining the course line slope of the target to be detected according to the track data of the target to be detected.
Specifically, the navigation direction of the target to be detected at the track point is determined according to the track data. Since this direction approximates the direction in which the target will travel over a short future period, the slope of this direction is taken as the slope of the course line of the target.
S240, determining whether the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius, according to the course line slope and the safe course line slopes.
As shown in Fig. 5, a safe course line slope is the slope of a tangent line, passing through the track point, of the circle centered on the water stationary ship with the first preset distance as radius. Specifically, as shown in Fig. 5, there are two safe course line slopes, that is, two safe course lines; if the course line lies between the two safe course lines, it will pass through the circle range with the first preset distance as radius.
In the embodiment of the application, optionally, the tangent lines, passing through the track point, of the circle centered on the water stationary ship with the first preset distance as radius comprise a first tangent line and a second tangent line, and the coordinate system of the track point is a rectangular coordinate system with the water stationary ship as the origin. Determining whether the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius according to the course line slope and the safe course line slopes then includes the following cases:
Case 1: when the first tangent line does not exist and the ordinate of the track point is not zero, the course line is determined to pass through the circle range centered on the water stationary ship with the first preset distance as radius if either of the following conditions holds: the slope of the second tangent line is greater than or equal to zero and the slope of the course line is greater than or equal to the slope of the second tangent line; or the slope of the second tangent line is less than zero and the slope of the course line is less than or equal to the slope of the second tangent line.
In this scheme, as shown in Fig. 6, the first tangent line of the target A to be detected does not exist. Illustratively, let the slope of the course line be k; then

k = vy_n / vx_n
where vy_n is the navigation speed of the target to be detected along the y direction at the track point, and vx_n is the navigation speed along the x direction at the track point.
Let the slope of the second tangent line be k3:

k3 = (y_n^2 - r^2) / (2 * sign(x_n) * r * y_n)
Where sign () is a sign function, sign (x n) represents a sign for x, r is a first preset distance, x n represents the track point abscissa, and y n represents the track point ordinate.
If either of the following conditions is met, it is determined that the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius:

k3 >= 0 and k >= k3; or k3 < 0 and k <= k3.
Case 2: when both the first tangent line and the second tangent line exist, the course line is determined to pass through the circle range centered on the water stationary ship with the first preset distance as radius if either of the following conditions holds: the product of the slopes of the two tangent lines is greater than zero and the slope of the course line is greater than or equal to the minimum, and less than or equal to the maximum, of the two tangent slopes; or the product of the slopes of the two tangent lines is less than zero and the slope of the course line is less than or equal to the minimum, or greater than or equal to the maximum, of the two tangent slopes.
For example, as shown in Fig. 6, let the minimum of the first and second tangent slopes be denoted k1 and the maximum be denoted k2; then:
If either of the following conditions is met, it is determined that the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius:

k1 * k2 > 0 and k1 <= k <= k2; or k1 * k2 < 0 and (k <= k1 or k >= k2).
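The tangent-slope cases above reduce to asking whether the straight course line through the track point enters the safety circle. As a hedged cross-check (an equivalent point-to-line distance test, not the tangent-slope procedure of the embodiment; the function name is illustrative):

```python
import math

def course_crosses_safety_circle(x_n, y_n, vx_n, vy_n, r):
    """Return True when the course line through track point (x_n, y_n)
    with direction (vx_n, vy_n) passes within the circle of radius r
    centred on the water stationary ship at the origin.  The perpendicular
    distance from the origin to the line is |x_n*vy_n - y_n*vx_n| / |v|,
    which avoids the undefined-slope (vertical tangent) special case."""
    d = abs(x_n * vy_n - y_n * vx_n) / math.hypot(vx_n, vy_n)
    return d <= r
```

This check agrees with the slope comparisons above wherever the slopes are defined, and also covers a vertical course line (vx_n = 0).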
S250, if it is determined that the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius, determining that the target to be detected is traveling toward the water stationary ship.
According to the technical scheme, a radar chart is determined from the echo signals obtained through detection by the radar on the water stationary ship, and the track data of the target to be detected are determined from the radar chart. If it is determined from the track data that the distance between the target and the water stationary ship is greater than the first preset distance and less than the second preset distance, whether the heading of the target faces the water stationary ship is determined from the track data. The course line slope of the target is then determined from its track data, and whether the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius is determined from the course line slope and the safe course line slopes; if so, the target is determined to be traveling toward the water stationary ship. By comparing the course line slope with the safe course line slopes, whether the course line passes through the circle range centered on the water stationary ship with the first preset distance as radius is determined accurately, and the relative positional relationship between the target and the water stationary ship is obtained.
Example 3
Fig. 7 is a flowchart of a method for detecting a target on the water based on a radar chart according to a third embodiment of the present application; the method is optimized on the basis of the above embodiments.
As shown in fig. 7, the method in the embodiment of the present application specifically includes the following steps:
S310, performing intensity interpolation between adjacent position points according to the intensity data of the echo signals at adjacent position points equidistant from the radar in adjacent detection directions of the radar, and determining the intensity data at interpolation position points equidistant from the radar.
According to the technical scheme, targets are detected by radar; the radar position is shown in Fig. 8, and the radar may be a single-transmitter mechanically scanned millimeter-wave radar. The radar rotates about its center, continuously transmitting and receiving frequency-modulated radio waves. The divergent dotted lines emitted from the radar in the figure represent its detection signals; the directions corresponding to two adjacent detection signals are adjacent detection directions, and adjacent position points are points equidistant from the radar in adjacent detection directions, such as points A and B in Fig. 9. A reflected echo is the echo received by the radar after a transmitted detection signal is reflected, and the intensity data of the reflected echo can be obtained by the radar. The reflected-echo intensity data corresponding to a position in the environment can indicate whether a target exists at that position, and the position, size, shape, and other information of the target can be determined from the intensity data. The detection direction of each transmitted detection signal can be characterized by an azimuth angle, which may be the horizontal angle, measured clockwise, from the line pointing due north from the radar to the detection direction. For each azimuth detected, the radar obtains one intensity datum at each position point at a different distance from the radar along that azimuth, so each azimuth yields a one-dimensional array of intensity data; as the radar rotates through one full scan, two-dimensional intensity data corresponding to position points expressed in polar coordinates are formed.
In the embodiment of the application, different weights may be selected for the intensity data of the adjacent position points to determine the intensity data at the interpolation position point. For example, if the intensities at the adjacent position points are S_A and S_B with respective weights w1 and w2, the intensity data at the interpolation position point is S_C = S_A * w1 + S_B * w2.
In the embodiment of the application, step S310 is refined as follows. Intensity interpolation is performed on the concentric arc whose endpoints are the adjacent position points, yielding the interpolation position points; a concentric arc is an arc centered on the radar whose radius is the distance from the radar to those points. The ratio of the length of the concentric arc from the first of the adjacent position points to the interpolation position point, to the length of the concentric arc between the adjacent position points, is used as the weight of the intensity data of the second adjacent position point; the ratio of the length of the concentric arc from the second adjacent position point to the interpolation position point, to the length of the concentric arc between the adjacent position points, is used as the weight of the intensity data of the first adjacent position point. The intensity data of the adjacent position points are then weighted by these two weights and summed to obtain the intensity data at the interpolation position point.
Specifically, in the interpolation mode of the embodiment of the application, the intensity interpolation between adjacent position points is performed on the arc, centered on the radar, whose endpoints are the adjacent position points, to obtain the interpolation position points, as shown in Fig. 10. The interpolation position points C and D are obtained by interpolating on the arc AB centered on the radar, and they may or may not be equally spaced. As shown in Fig. 10, points A and B are adjacent position points, points C and D are interpolation position points located between A and B, and A, B, C, and D are all at the same distance from the radar. Accordingly, the intensity data at point C is S_C = S_A * (arc CB / arc AB) + S_B * (arc AC / arc AB), and the intensity data at point D is S_D = S_A * (arc DB / arc AB) + S_B * (arc AD / arc AB).
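The arc-length weighting above amounts to linear interpolation along the arc. A minimal sketch (the parameter frac, representing the fraction arc AC / arc AB, is an illustrative name):

```python
def interp_intensity(S_A, S_B, frac):
    """Arc-length weighted intensity at an interpolation point between
    adjacent points A and B: A is weighted by the remaining fraction of
    the arc (1 - frac) and B by the covered fraction frac, matching
    S_C = S_A * (arc CB / arc AB) + S_B * (arc AC / arc AB)."""
    return S_A * (1.0 - frac) + S_B * frac
```

With equally spaced interpolation points C and D as in Fig. 10, frac is 1/3 for C and 2/3 for D.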
For example, an initial matrix is constructed by using the intensity data of the echoes reflected at position points at different distances from the radar in the same detection direction as the row elements of the matrix, and the intensity data of the echoes reflected at position points equidistant from the radar in different detection directions as the column elements of the matrix.
In the embodiment of the application, because the intensity data at the interpolation position points are calculated from the intensity data at adjacent position points, the intensity data of the echoes reflected at position points at different distances from the radar in the same detection direction are used as the row elements of the matrix, and the second, third, fourth, and subsequent rows of the initial matrix are formed in sequence according to the order in which the detection direction changes. The intensity data for one detection direction is denoted as the sequence {a_n' | n' ∈ [1, N]}, where a_n' is the intensity at the position point n'·δ meters from the radar, N is the number of samples, and δ is the range resolution of the radar. Each row thus represents the intensity data of position points at successively increasing distances from the radar in one detection direction, and each column represents the intensity data of position points equidistant from the radar in different detection directions; the detection directions corresponding to adjacent elements in each column are adjacent detection directions, and the detection directions corresponding to the first and last elements of a column are also adjacent. The initial matrix A is as follows:

A = [ a_11  a_12  ...  a_1N
      a_21  a_22  ...  a_2N
      ...
      a_M1  a_M2  ...  a_MN ]
where M is the number of radar detection directions and N is the number of samples, that is, the number of intensity data obtained in one detection direction. a_11 is the intensity data at the position point closest to the radar in the initial detection direction; a_12 is the intensity data at the second position point in the initial detection direction, farther from the radar than that of a_11; a_21 is the intensity data at the position point closest to the radar in the second detection direction; a_31 is the intensity data at the position point closest to the radar in the third detection direction; and so on.
Further, intensity interpolation is performed on the column elements within each column of the initial matrix to obtain an interpolation matrix, and the intensity data at the interpolation position points equidistant from the radar are determined from the interpolation matrix.
For example, if the size of the initial matrix is M × N, then with one interpolation position point inserted between adjacent position points the size of the interpolation matrix is 2M × N, and with two interpolation position points the size is 3M × N. In the embodiment of the application, the intensity data at an interpolation position point is calculated from the intensity data at adjacent position points; in the initial matrix, two adjacent elements in a column are the intensity data of adjacent position points, for example a_31 and a_41. It should be noted that, within the same column of the initial matrix, the first and last elements also correspond to adjacent position points.
In the embodiment of the application, position points equidistant from the radar are equally spaced, and performing intensity interpolation on the column elements within each column of the initial matrix to obtain the interpolation matrix includes the following:
Based on the following formula, the values of the elements of the interpolation matrix can be determined by equally spaced intensity interpolation on the column elements within each column of the initial matrix:

b_ij = (1 - u/T) * a_sj + (u/T) * a_tj,  with  s = ⌊(i-1)/T⌋ + 1,  u = (i-1) % T,  t = s % M + 1
where b_ij is the element in row i and column j of the interpolation matrix, the a terms are elements of the initial matrix, ⌊·⌋ denotes rounding down, % denotes the remainder operation, T is the number of interpolation position points between adjacent position points plus one, and M is the number of detection directions in one radar scan. Note that b_ij determined by the above formula may not be an integer; to make each element of the interpolation matrix an integer, b_ij may be rounded, rounded up, or rounded down, and the specific rounding mode is not limited. In addition, if the intensity data were not normalized in a previous step, the data may be normalized to [0, 255] and then rounded.
In this scheme, to facilitate calculation of the intensity data at the interpolation position points, position points equidistant from the radar are set to be equally spaced; that is, as shown in Fig. 10, A and B are adjacent position points, C and D are interpolation position points, A, B, C, and D are all at the same distance from the radar, and the arcs AC, CD, and DB are equal in length.
Specifically, if interpolation is performed as shown in Fig. 10 (two interpolation position points between adjacent position points, so T = 3), the interpolation matrix B is:

B = [ a_1
      (2·a_1 + a_2) / 3
      (a_1 + 2·a_2) / 3
      a_2
      ...
      a_M
      (2·a_M + a_1) / 3
      (a_M + 2·a_1) / 3 ]

where a_i denotes the i-th row of the initial matrix A.
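The column-wise interpolation, including the wrap-around between the last and first detection directions, can be sketched as follows (a plain-Python illustration under the equal-spacing assumption; names are illustrative):

```python
def interpolate_matrix(A, T):
    """Build the (T*M) x N interpolation matrix from the M x N initial
    matrix A (a list of rows), inserting T - 1 equally spaced interpolation
    points between adjacent detection directions.  The last block wraps
    around to the first row, because the first and last detection
    directions are adjacent."""
    M, N = len(A), len(A[0])
    B = []
    for i in range(T * M):
        base, offset = divmod(i, T)   # source direction and position within the gap
        w = offset / T                # fraction of the way toward the next direction
        nxt = (base + 1) % M          # wrap: direction M is adjacent to direction 1
        B.append([(1 - w) * A[base][j] + w * A[nxt][j] for j in range(N)])
    return B
```

For A = [[0.0], [3.0]] and T = 3 this yields column values 0, 1, 2, 3, 2, 1, matching the equally spaced interpolation of Fig. 10.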
S320, determining the position point corresponding to each pixel point in the radar chart.
The position points comprise the position points corresponding to the intensity data after intensity interpolation.
Specifically, the radar is set at the image center of the radar chart; the pixel coordinates of each pixel point in the radar chart are converted into Cartesian coordinates according to the actual radar scanning area and the real-space positions corresponding to the pixels of the radar chart; the polar coordinates corresponding to each pixel point are determined according to the conversion relation between Cartesian and polar coordinates; and the position point corresponding to each pixel point in the radar chart is determined according to the position points corresponding to the polar coordinates.
Illustratively, suppose the radar chart F includes P rows and Q columns of pixel points:

F = ( f_pq ),  p = 1, ..., P;  q = 1, ..., Q
where f_pq represents the gray value of the pixel at position (p, q). F may be mapped onto a rectangular region of PΔ × QΔ, where each pixel corresponds to a square region of side length Δ meters in real space. For example, if F includes 3000 rows and 2000 columns of pixels and maps to a radar detection region of 600 × 400 m, each pixel corresponds to a square region of side length 0.2 m. In the embodiment of the application, P and Q may be odd so that the radar position lies at the exact center of the radar chart.
Specifically, if the radar position is set at the exact center of the radar map, the pixel point (p, q) is converted into the Cartesian coordinate system as follows:
According to the formula, the Cartesian coordinates corresponding to each pixel point can be obtained, and according to the conversion relation between the Cartesian coordinates and the polar coordinates, the polar coordinates corresponding to each pixel point can be determined, wherein the conversion relation is as follows:
(γ, θ) is the polar coordinate, and "|" denotes the "or" relationship between the branches of the piecewise formula. Each pixel point obtains its corresponding polar coordinates through the above formula. The position point corresponding to each pixel point in the radar map can then be determined from the position point corresponding to its polar coordinates. For example, the polar coordinate (10, 0°) corresponds to the position 10 meters from the radar in the radar's initial detection direction.
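The pixel-to-polar conversion described above can be sketched as follows. The axis conventions are assumptions: the radar sits at the center pixel of a P-row, Q-column map (P, Q odd), each pixel covers a Δ × Δ square, rows increase downward, and the azimuth θ is measured clockwise from the initial detection direction, taken here as straight up.

```python
import math

def pixel_to_polar(p, q, P, Q, delta):
    # Cartesian coordinates with the radar at the origin
    x = (q - (Q + 1) / 2) * delta
    y = ((P + 1) / 2 - p) * delta
    gamma = math.hypot(x, y)                       # distance to the radar
    theta = math.degrees(math.atan2(x, y)) % 360   # clockwise azimuth from +y
    return gamma, theta

# e.g. the centre pixel (3, 3) of a 5x5 map maps to the radar itself
center = pixel_to_polar(3, 3, 5, 5, 1.0)
```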
S330, according to the intensity data of each position point, determining the gray value of the radar image pixel point corresponding to each position point, and generating a radar image according to the gray value.
In the embodiment of the application, the position points comprise adjacent position points before interpolation and interpolation position points after interpolation, and correspondingly, the intensity data comprise the intensity data of the adjacent position points before interpolation and the intensity data of the interpolation position points after interpolation. After determining the position points corresponding to the pixel points in the radar chart, the gray value of each pixel point can be determined according to the intensity data of the position points, and then the radar chart is drawn according to the gray value. The target may be various objects of the radar detection area including, but not limited to, fishing boats, yachts, water signs, etc.
The gray value of the radar image pixel point is determined based on the following formula:
where f_pq represents the gray value of the pixel point with pixel coordinate (p, q) in the radar map, round denotes rounding, θ represents the azimuth angle of the current detection direction, θ_1 represents the azimuth angle of the initial detection direction, σ represents the deflection angle between two adjacent position points at an equal radar distance after intensity interpolation, γ represents the distance between a position point and the radar, and δ represents the minimum detection distance of the radar. For example, if no interpolation is performed, σ = 360°/M; if the number of interpolation position points between adjacent position points is 2, σ = 360°/(3M). It should be noted that, before the gray value of a radar-map pixel point is determined according to the above formula, b_ij is normalized to [0, 255] so that it can represent a gray value.
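As a sketch only, the lookup of a pixel's gray value from the interpolated intensity matrix, consistent with the symbols listed above, might look as follows. The index expressions and the range-step symbol ρ (`rho`) are assumptions (the text does not name the range resolution), the patent's exact formula is not reproduced here, and B is assumed to be already normalized to [0, 255].

```python
def gray_value(B, gamma, theta, theta1, sigma, delta_min, rho):
    """B: interpolated intensity matrix, one row per detection direction
    after interpolation. theta1: azimuth of the initial detection
    direction, sigma: deflection between adjacent same-distance position
    points after interpolation, delta_min: minimum detection distance,
    rho: range step between position points (assumed symbol)."""
    i = round(((theta - theta1) % 360) / sigma)   # direction index
    j = round((gamma - delta_min) / rho)          # range index
    rows, cols = len(B), len(B[0])
    if 0 <= j < cols:
        return B[i % rows][j]
    return 0   # outside the radar's detectable range
```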
S340, determining track data of the target to be detected according to the radar chart.
S350, if the distance between the object to be detected and the water stationary ship is determined to be greater than the first preset distance and smaller than the second preset distance according to the track data of the object to be detected, determining whether the heading of the object to be detected faces the water stationary ship according to the track data.
And S360, if the course of the target to be detected faces the water stationary ship, determining course line information of the target to be detected according to the course data of the target to be detected, and predicting the relative position relationship between the target to be detected and the water stationary ship according to the course line information.
According to the technical scheme, intensity interpolation is carried out between adjacent position points according to intensity data of echo reflected at the position points with the same distance as the radar in the adjacent detection direction of the radar, intensity data at the interpolation position points with the same distance as the radar are determined, position points corresponding to all pixel points in a radar image are determined, gray values of radar image pixel points corresponding to all the position points are determined according to the intensity data of all the position points, and the radar image is generated according to the gray values. According to the technical scheme, the intensity data of the interpolation position points are obtained through interpolation of the adjacent position points, the number of the position points detected by the radar and the number of the intensity data are expanded, the gray value of the corresponding radar image pixel point is determined according to the intensity data of each position point, the radar image is rapidly drawn to perform target detection, the target detection of the area not detected by the radar is realized, and the target detection accuracy is improved.
Example 4
Fig. 11 is a flowchart of a method for detecting an object on water based on a radar chart according to a fourth embodiment of the present application, which is optimized based on the foregoing embodiment.
As shown in fig. 11, the method in the embodiment of the present application specifically includes the following steps:
S410, determining a radar map according to echo signals obtained by detection of the radar on the water stationary ship.
S420, aiming at the pixel points to be identified in the radar chart, determining target detection position points corresponding to the pixel points to be identified mapped in the radar detection area and a preset signal intensity probability distribution model corresponding to the radar when the radar scans the target detection position points.
The pixel points to be identified may be pixel points to be detected in the radar chart. The target detection position points can be detection positions in a radar detection area corresponding to pixel points to be identified in the radar chart, and each pixel point in the radar chart has a one-to-one correspondence with each target detection position point in the radar detection area.
Specifically, the radar detection area is scanned by the radar to obtain a radar map of the radar detection area, and the target detection position point corresponding to each pixel point to be identified in the radar map is determined in the radar detection area. This ensures that each pixel point of the radar map accurately corresponds to its target detection position point, so that the position corresponding to a target detection position point can be accurately recovered after the pixel point to be identified is analyzed and processed, which facilitates processing of the target detection position points. Meanwhile, the preset signal intensity probability distribution model corresponding to each target detection position point is also determined, which facilitates determining the probability distribution type corresponding to the pixel point to be identified.
S430, detecting a matching result of the pixel point to be identified against the preset signal intensity probability distribution model, wherein the preset signal intensity probability distribution model is used for describing the signal intensity probability distribution of the radar echo signal when the target detection position point is scanned under the condition that the radar detection area does not include a foreground.
Specifically, after the radar map is acquired, the value of the pixel point to be identified in the radar map is substituted into each normal distribution model in the preset signal intensity probability distribution model to judge whether the value matches the preset signal intensity probability distribution model; if the value matches any one normal distribution model in the preset signal intensity probability distribution model, the value of the pixel point to be identified matches the preset signal intensity probability distribution model.
In a possible embodiment, the detecting the matching result of the preset signal strength probability distribution model corresponding to the pixel point to be identified and the target detection position point may include the following steps B1-B3:
And step B1, detecting whether the value of the pixel point to be identified and at least one normal distribution model in the preset signal intensity probability distribution model corresponding to the target detection position point meet a preset matching condition, wherein the preset matching condition includes that the value of the pixel point to be identified and the mean of the normal distribution model satisfy a preset Laida criterion.
And step B2, if at least one normal distribution model meeting the preset matching condition exists, determining that the pixel point to be identified belongs to a background pixel in the radar map.
And step B3, if a normal distribution model meeting a preset matching condition does not exist, determining that the pixel point to be identified belongs to a foreground pixel in the radar map.
The preset matching condition may be used to judge whether the value of the pixel point to be identified matches at least one normal distribution model in the preset signal intensity probability distribution model corresponding to the target detection position point. The preset Laida (3σ) criterion may be expressed by the following formula:
|x_ij − μ| ≤ 3σ
where x_ij is the value of the pixel point to be identified, μ is the mean in the preset signal intensity probability distribution model corresponding to the target detection position point, and σ² is the variance in the preset signal intensity probability distribution model corresponding to the target detection position point.
Specifically, a radar image is obtained through radar scanning of a radar detection area, values of pixel points to be identified in the radar image are input into a probability distribution model corresponding to preset signal intensity of target detection position points, whether at least one normal distribution model in the probability distribution model corresponding to the preset signal intensity of the target detection position points meets preset matching conditions is judged, if at least one normal distribution model meeting the preset matching conditions exists, the pixel points to be identified are determined to belong to background pixels in the radar image, and if the normal distribution model meeting the preset matching conditions does not exist, the pixel points to be identified are determined to belong to foreground pixels in the radar image.
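Steps B1-B3 above can be sketched as a per-pixel test. Representing the preset model as a list of (mean, standard deviation) pairs is an assumption; the Laida (3σ) criterion is applied against each normal distribution model in turn.

```python
def classify_pixel(value, models):
    """value: gray value of the pixel point to be identified.
    models: (mean, std) pairs of the normal distribution models in the
    preset model of the corresponding target detection position point."""
    for mean, std in models:
        if abs(value - mean) <= 3 * std:   # Laida (3-sigma) criterion
            return "background"            # at least one model matches
    return "foreground"                    # no model matches
```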
S440, separating the foreground and the background in the radar chart according to the matching result, and determining the pixel area of the target to be detected according to the separated foreground image.
Specifically, the values of all the pixel points to be identified in the radar chart are input into preset signal intensity probability distribution models corresponding to all the target detection position points, whether all the pixel points to be identified belong to background pixels or foreground pixels is determined by judging whether at least one normal distribution model in the preset signal intensity probability distribution models corresponding to the target detection position points meets preset matching conditions or not, and then separation of the foreground and the background in the current radar image can be achieved, and a separated foreground image is obtained.
Further, the foreground image may include objects other than the target to be detected, for example floating garbage on the water surface; contour detection can therefore be performed on these objects, and the pixel area of the target to be detected is determined according to the contour features of the target to be detected.
In another possible embodiment of the application, separating the foreground and background of the radar image includes: performing image difference processing on the radar image and a preset average background image to obtain a difference image; performing binarization processing on the difference image; and taking the binarized image as the foreground image.
The average background image may be an image reflecting a radar detection area in a state without a target to be detected, and may be obtained by averaging a plurality of background images. The background image may be a radar map acquired without a target to be detected in the radar detection area. The image difference processing may be to perform a difference processing on each pixel value of the two images. The binarization process may be that each pixel on the image has only two possible values or gray scale states, i.e. the gray scale value of any pixel point in the image is 0 or 255, representing black and white, respectively.
In the embodiment of the application, the average background image avoids the problem that a single background image possibly has abnormal individual pixel points, and improves the fault tolerance.
Specifically, at least two background images are obtained by using the radar. Each background image may be denoted F, having P rows and Q columns, i.e., a gray image formed by P × Q pixel points, expressed as the matrix F = (f_pq), p = 1…P, q = 1…Q.
Further, the gray value of each pixel point of the average background image is obtained by averaging the gray values of the corresponding pixel point across the background images. Taking pixel point f_11 as an example, the gray value of f_11 in the average background image is the average of the gray values of f_11 in each background image; in this process, f_11 is the target pixel point. Traversing all target pixel points yields the average background image F̄, expressed by the formula:
F̄ = (1/U) Σ_{i=1}^{U} F_i
where F_i is the gray matrix of the i-th background image and U is the number of background images.
In the embodiment of the application, the microwave radar image and the average background image are subjected to image difference processing to obtain a difference image F_Δ, which can be expressed as:
F_Δ = |F − F̄|
where F is the gray matrix of the microwave radar image.
Specifically, the binarization processing can be performed by the following formula: f′_ij = 255 if f^Δ_ij ≥ S, and f′_ij = 0 otherwise,
where f′_ij is the gray value of the corresponding pixel point after binarization of the radar image, f^Δ_ij is the gray value of the corresponding pixel in the difference image, and S is a preset gray value, namely the critical value at which the gray value of the corresponding pixel is converted to 0 or 255: when the gray value of the corresponding pixel in the difference image is greater than or equal to the preset gray value, it is converted to 255; otherwise it is converted to 0. The preset gray value may be determined according to the actual situation, which is not limited in the embodiment of the present application.
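The averaging, differencing, and binarization steps above can be sketched on plain nested lists as follows (a minimal sketch; the demo threshold S = 30 is arbitrary):

```python
def average_background(backgrounds):
    """Pixel-wise mean of U background images of equal size."""
    U = len(backgrounds)
    P, Q = len(backgrounds[0]), len(backgrounds[0][0])
    return [[sum(F[p][q] for F in backgrounds) / U for q in range(Q)]
            for p in range(P)]

def binarized_foreground(F, F_bar, S):
    """Difference image |F - F_bar|, thresholded: >= S -> 255, else 0."""
    return [[255 if abs(f - b) >= S else 0
             for f, b in zip(row_f, row_b)]
            for row_f, row_b in zip(F, F_bar)]

bgs = [[[10, 10]], [[12, 14]]]                        # two 1x2 backgrounds
F_bar = average_background(bgs)                       # [[11.0, 12.0]]
fg = binarized_foreground([[11, 200]], F_bar, S=30)   # [[0, 255]]
```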
S450, tracking the target to be detected according to the pixel area of the target to be detected, and determining track data of the target to be detected.
In the embodiment of the present application, optionally, tracking the target to be measured according to the pixel area of the target to be measured, and determining the track data of the target to be measured may include steps C1 to C9:
and step C1, performing morphological dilation operation on the foreground image according to the separated foreground image to obtain an image A.
The radar map includes both the background of the radar detection area and the target to be detected. After the background is removed, a foreground image is obtained, in which a single region may be erroneously split into several small regions. The foreground image is therefore subjected to a morphological dilation operation to obtain image A, which eliminates internal holes in the small regions and the gaps between neighboring regions.
And C2, performing morphological corrosion operation on the image A to obtain an image B.
Since the regions become larger after dilation, a morphological erosion operation is performed on image A to obtain image B, restoring the region areas to their size before dilation.
And C3, performing Gaussian smoothing on the image B to obtain an image C.
Image B may contain noise, so Gaussian smoothing is performed on image B to obtain image C, eliminating some of the small noise. Noise removal methods include, but are not limited to, mean filtering, Gaussian filtering, and median filtering.
And C4, carrying out Canny edge detection on the image C to obtain an image D.
In this step, the peripheral outline of each region is acquired by edge detection.
And C5, extracting the pixel coordinates of the outer boundary inflection point of each region in the image D to obtain a set of the pixel coordinates of the outer boundary inflection point.
The set of all outer boundary inflection pixel coordinates is denoted D, as follows:
D = {D_1, D_2, …, D_n}
where D_i represents the set of outer-boundary inflection pixel coordinates of the i-th region, and (r_m^i, c_m^i) are the row and column pixel indices of the m-th inflection point of the outer boundary of the i-th region.
In the above steps, each image processing algorithm used is the prior art, and specific details are not repeated in the embodiment of the present application.
Step C6, calculating the geometric center pixel coordinates of each region:
where (r_i, c_i) is the geometric center pixel coordinate of the i-th region.
Step C7, converting the geometric center pixel coordinates of each region into Cartesian coordinate system coordinates:
where P and Q are the numbers of rows and columns of the radar map, whose values may be odd so as to place the radar at the coordinate center, and Δ is the width of the actual detection area corresponding to one pixel point in the radar map. Traversing each region yields the center coordinate set X of the regions,
X = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}.
And C8, clustering the central coordinate sets of the areas to obtain a clustered coordinate set X'.
And step C9, determining the position of the target to be detected at each moment from at least two radar maps, determining the speed and direction of the target to be detected, determining the latest position of the target to be detected from the most recently acquired radar map, and thus representing the track data of one target to be detected as (id_n, x_n, y_n, vx_n, vy_n).
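Steps C6 and C7 above can be sketched as follows. Treating the geometric center as the arithmetic mean of the outer-boundary inflection points is an assumption consistent with the description; P, Q, and Δ have the same meaning as above.

```python
def region_center(inflections):
    """Geometric center (mean) of a region's outer-boundary inflection
    points, given as (row, column) pixel coordinates."""
    m = len(inflections)
    r = sum(pt[0] for pt in inflections) / m
    c = sum(pt[1] for pt in inflections) / m
    return r, c

def center_to_cartesian(r, c, P, Q, delta):
    """Convert a center pixel coordinate to radar-centred Cartesian
    coordinates (P, Q odd so the radar sits at the center pixel)."""
    x = (c - (Q + 1) / 2) * delta
    y = ((P + 1) / 2 - r) * delta
    return x, y

r, c = region_center([(1, 1), (1, 3), (3, 1), (3, 3)])  # a square region
x, y = center_to_cartesian(r, c, P=5, Q=5, delta=0.2)
```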
S460, if the distance between the object to be detected and the water stationary ship is determined to be greater than a first preset distance and less than a second preset distance according to the track data of the object to be detected, determining whether the heading of the object to be detected faces the water stationary ship according to the track data;
and S470, if the heading of the target to be detected faces the water stationary ship, determining the heading line information of the target to be detected according to the heading data of the target to be detected, and predicting the relative position relationship between the target to be detected and the water stationary ship according to the heading line information.
According to the technical scheme of this embodiment, whether a pixel point to be identified belongs to the foreground or the background is determined by detecting whether the pixel point to be identified matches the preset signal intensity probability distribution model corresponding to its target detection position point; the background and foreground in the current radar map are separated according to the matching result of each pixel point to be identified to obtain a foreground image; the pixel area of the target to be detected is determined; and the target to be detected is tracked to determine its track data. According to this technical scheme, the target to be detected is determined quickly and accurately from the radar maps, and its track data is determined according to at least two radar maps.
Example 5
Fig. 12 is a schematic structural diagram of a radar chart-based on-water target detection device according to a fifth embodiment of the present application, where the device may execute the radar chart-based on-water target detection method according to any embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution method. As shown in fig. 12, the apparatus includes:
The track data determining module 510 is configured to determine a radar map according to an echo signal obtained by detecting a radar on a stationary marine vessel, and determine track data of a target to be detected according to the radar map;
the distance determining module 520 is configured to determine, if the distance between the target to be detected and the water stationary ship is greater than a first preset distance and less than a second preset distance according to the track data of the target to be detected, whether the heading of the target to be detected is towards the water stationary ship according to the track data;
The position relationship prediction module 530 is configured to determine course information of the target to be detected according to the course data of the target to be detected if the course of the target to be detected faces the water stationary ship, and predict a relative position relationship between the target to be detected and the water stationary ship according to the course information.
Optionally, the distance determining module 520 includes:
the track point determining unit is used for determining the track point of the target to be detected according to the track data and determining the reference vector of the track point of the target to be detected pointing to the water stationary ship;
the component determining unit is used for determining the component of the reference vector along the first direction and the component along the second direction, and determining the component of the heading of the target to be detected along the first direction and the component along the second direction, wherein the first direction and the second direction are vertical;
And the orientation determining unit is used for determining that the heading of the target to be detected faces the water static ship if the component of the reference vector along the first direction is consistent with the component of the heading of the target to be detected along the first direction and the component of the reference vector along the second direction is consistent with the component of the heading of the target to be detected along the second direction.
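A minimal sketch of the orientation test above, reading "consistent" as agreement in sign between each component of the reference vector and the corresponding heading (velocity) component; the coordinate conventions are assumptions:

```python
def heading_towards_ship(track_point, ship_pos, velocity):
    """track_point, ship_pos: (x, y) positions; velocity: (vx, vy) heading
    components of the target to be detected."""
    ref = (ship_pos[0] - track_point[0], ship_pos[1] - track_point[1])
    same_sign = lambda a, b: (a >= 0) == (b >= 0)   # per-axis consistency
    return same_sign(ref[0], velocity[0]) and same_sign(ref[1], velocity[1])
```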
Optionally, the location relationship prediction module 530 includes:
the range determining unit is used for determining whether the course passes through a circle range taking the water stationary ship as a circle center and taking a first preset distance as a radius according to the course information of the target to be detected;
And the driving direction determining unit is used for determining that the target to be detected is travelling toward the water stationary ship if the course line passes through the circle range.
Optionally, the range determining unit includes:
the course slope determining subunit is used for determining the course slope of the target to be detected according to the course data of the target to be detected;
And the range determining subunit is used for determining whether the course passes through a circle range taking the water stationary ship as a circle center and taking the first preset distance as a radius according to the course slope and the safety course slope, wherein the safety course slope is the tangential slope of the circle passing through the track point and taking the water stationary ship as the circle center and taking the first preset distance as the radius.
Optionally, the tangent line of the circle passing through the track point and taking the water stationary ship as the center of a circle and taking the first preset distance as the radius comprises a first tangent line and a second tangent line;
further, the range determination subunit is specifically configured to:
In the case where the first tangent line does not exist and the ordinate of the track point is not zero, the course line is determined to pass through a circle range taking the water stationary ship as the circle center and taking the first preset distance as the radius when either of the following conditions is met:
the slope of the second tangent line is greater than or equal to zero, and the slope of the heading line is greater than or equal to the slope of the second tangent line;
the slope of the second tangent line is smaller than zero, and the slope of the heading line is smaller than or equal to the slope of the second tangent line;
Under the condition that the first tangent line and the second tangent line exist, when any one of the following conditions is met, determining that the course line passes through a circle range taking the water stationary ship as a circle center and taking a first preset distance as a radius:
if the product of the slope of the first tangent line and the slope of the second tangent line is greater than zero, the slope of the heading line is greater than or equal to the minimum value of the slope of the first tangent line and the slope of the second tangent line, and is less than or equal to the maximum value of the slope of the first tangent line and the slope of the second tangent line;
if the product of the slope of the first tangent line and the slope of the second tangent line is less than zero, the slope of the heading line is less than or equal to the minimum value of the slope of the first tangent line and the slope of the second tangent line, or is greater than or equal to the maximum value of the slope of the first tangent line and the slope of the second tangent line.
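The tangent-slope conditions above can be sketched as follows, assuming the water stationary ship at the origin, the track point (x0, y0) outside the circle of radius R, and the course line passing through the track point with slope k. The tangent slopes m from the track point satisfy (x0² − R²)m² − 2·x0·y0·m + (y0² − R²) = 0; the compact `course_enters_circle` test (perpendicular distance from the ship to the course line at most R) is used here as an equivalent formulation of the piecewise slope conditions.

```python
import math

def tangent_slopes(x0, y0, R):
    """Slopes of the two tangents from (x0, y0) to the circle x^2+y^2=R^2.
    A vertical tangent ("does not exist" in the text) is reported as None;
    the degenerate case a = b = 0 is not handled in this sketch."""
    a = x0 * x0 - R * R
    b = -2.0 * x0 * y0
    c = y0 * y0 - R * R
    if abs(a) < 1e-12:                 # one tangent is vertical
        return None, -c / b
    d = math.sqrt(max(b * b - 4.0 * a * c, 0.0))
    return (-b - d) / (2.0 * a), (-b + d) / (2.0 * a)

def course_enters_circle(x0, y0, k, R):
    """True if the course line through (x0, y0) with slope k passes
    through the circle of radius R centred on the stationary ship."""
    return abs(y0 - k * x0) / math.hypot(k, 1.0) <= R
```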
Optionally, the track data determining module 510 includes:
the interpolation unit is used for carrying out intensity interpolation between adjacent position points according to the intensity data of echo signals at the adjacent position points with the same distance as the radar in the radar adjacent detection direction, and determining the intensity data at the interpolation position points with the same distance as the radar;
The position point determining unit is used for determining the position points corresponding to the pixel points in the radar map, wherein the position points include the position points corresponding to the intensity data after intensity interpolation;
And the radar map generating unit is used for determining the gray value of the radar map pixel point corresponding to each position point according to the intensity data of each position point and generating a radar map according to the gray value.
Optionally, the interpolation unit includes:
The interpolation subunit is used for carrying out intensity interpolation on a concentric arc taking the adjacent position points as endpoints to obtain the interpolation position points, wherein the concentric arc is an arc centered on the radar, with the distance from the adjacent position points to the radar as its radius;
A first weight value determining subunit, configured to use a ratio of a concentric arc length from a first position point to an interpolation position point in the adjacent position points to a concentric arc length between the adjacent position points as a first weight value of intensity data of a second position point in the adjacent position points;
a second weight value determining subunit, configured to use a ratio of a concentric arc length from the second position point to the interpolation position point in the adjacent position points to a concentric arc length between the adjacent position points as a second weight value of the intensity data of the first position point in the adjacent position points;
And the intensity data determining subunit is used for carrying out weighted summation on the intensity data of the adjacent position points according to the first weight value and the second weight value to be used as the intensity data of the interpolation position points.
Optionally, the track data determining module 510 includes:
The model determining unit is used for determining a target detection position point corresponding to the pixel point to be identified mapped in the radar detection area and a preset signal intensity probability distribution model corresponding to the radar when scanning the target detection position point aiming at the pixel point to be identified in the radar map;
The matching result detection unit is used for detecting a matching result of the value of the pixel point to be identified against the preset signal intensity probability distribution model, wherein the preset signal intensity probability distribution model is used for describing the signal intensity probability distribution of the radar echo signal when the target detection position point is scanned under the condition that the radar detection area does not include a foreground;
the separation unit is used for separating the foreground from the background in the radar image according to the matching result, and determining the pixel area of the target to be detected according to the foreground image obtained by separation;
And the track data determining unit is used for tracking the target to be detected according to the pixel area of the target to be detected and determining the track data of the target to be detected.
The device for detecting the target on water based on the radar map provided by the embodiment of the application can execute the method for detecting the target on water based on the radar map provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method.
Example 6
Fig. 13 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 13, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including an input unit 16, such as a keyboard, mouse, etc., an output unit 17, such as various types of displays, speakers, etc., a storage unit 18, such as a magnetic disk, optical disk, etc., and a communication unit 19, such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the radar-map-based method for detecting a target on water.
In some embodiments, the radar-map-based method for detecting a target on water may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the radar-map-based method for detecting a target on water described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the radar-map-based method for detecting a target on water in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described herein), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), a blockchain network, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.