CN112805989A - Time domain filtering method and device based on global motion estimation and storage medium - Google Patents
- Publication number
- CN112805989A (application CN202080005224.5A)
- Authority
- CN
- China
- Prior art keywords
- frame image
- image
- row
- global
- column direction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
Abstract
A time domain filtering method, device and storage medium based on global motion estimation. The method includes: acquiring a current frame image shot by a camera (S101); based on the global image mean histograms in the row and column directions, performing global motion estimation on the current frame image with a reference frame image through a local image optimal matching algorithm to obtain global motion vectors in the row and column directions (S102); shifting the reference frame image by the global motion vectors in the row and column directions to obtain a shifted reference frame image, and determining a time domain filter coefficient (S103); and filtering the current frame image according to the shifted reference frame image and the time domain filter coefficient to obtain a filtered current frame image (S104).
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a time-domain filtering method and apparatus based on global motion estimation, and a storage medium.
Background
In recent years, consumer and industrial unmanned aerial vehicles have come into wide use, and the requirements on the imaging quality of the optoelectronic devices they carry, such as cameras and infrared cameras, keep rising; the noise level is one of the key factors determining video quality.
For video noise filtering, time domain noise filtering is more effective than spatial noise filtering: it uses multi-frame information to markedly remove temporal noise from a video without losing spatial image detail. However, conventional time domain filtering without motion compensation suffers from smearing to some extent and fails entirely when the camera device undergoes global motion; motion-compensated time domain filtering, on the other hand, yields inaccurate motion estimates when the image is motion-blurred or the edges of scene targets are unclear, which introduces additional artifacts.
Disclosure of Invention
Based on this, the application provides a temporal filtering method and device based on global motion estimation, and a storage medium.
In a first aspect, the present application provides a temporal filtering method based on global motion estimation, applied to a movable platform including an image capturing apparatus, the method including:
acquiring a current frame image shot by the camera device;
based on the global image mean histograms in the row and column directions of the current frame image and a reference frame image, performing global motion estimation on the current frame image with the reference frame image through a local image optimal matching algorithm, to obtain global motion vectors in the row and column directions;
shifting the reference frame image by using the global motion vector in the row and column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation;
and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image.
In a second aspect, the present application provides a temporal filtering apparatus based on global motion estimation, applied to a movable platform including an image capturing apparatus, the apparatus including: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring a current frame image shot by the camera device;
based on the global image mean histograms in the row and column directions of the current frame image and a reference frame image, performing global motion estimation on the current frame image with the reference frame image through a local image optimal matching algorithm, to obtain global motion vectors in the row and column directions;
shifting the reference frame image by using the global motion vector in the row and column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation;
and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image.
In a third aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to implement the temporal filtering method based on global motion estimation as described above.
The embodiments of the application provide a time domain filtering method, a time domain filtering device and a storage medium based on global motion estimation. The method includes: acquiring a current frame image shot by a camera device; based on the global image mean histograms in the row and column directions of the current frame image and a reference frame image, performing global motion estimation on the current frame image with the reference frame image through a local image optimal matching algorithm to obtain global motion vectors in the row and column directions; shifting the reference frame image by the global motion vectors in the row and column directions to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation; and filtering the current frame image according to the shifted reference frame image and the time domain filter coefficient to obtain the filtered current frame image. In the prior art, both feature-point-based matching and macro-block-based matching require a large amount of computation; by contrast, computing the global motion estimation from the row- and column-direction global image mean histograms of the current frame image and the reference frame image keeps the computation amount small, gives strong real-time performance, and enables real-time processing of high-resolution video. Traditional motion-compensated time domain filtering produces inaccurate motion estimates when the image is motion-blurred or the edges of scene targets are unclear, causing additional artifacts; because the reference frame image is shifted by the global motion vectors in the row and column directions, smearing is avoided as far as possible during time domain filtering. Determining the time domain filter coefficient based on the global motion estimation provides technical support for adjusting the coefficient in time during filtering, for ensuring the accuracy and reliability of the time domain filtering, and for further enhancing its adaptability.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flowchart illustrating an embodiment of a temporal filtering method based on global motion estimation according to the present application;
FIG. 2 is a schematic size diagram of a frame image in the temporal filtering method based on global motion estimation according to the present application;
FIG. 3 is a schematic diagram of a global image mean histogram of the row direction of the frame image of FIG. 2;
FIG. 4 is a schematic diagram of a column-wise global image mean histogram of the frame image of FIG. 2;
FIG. 5 is a flowchart illustrating another embodiment of the temporal filtering method based on global motion estimation according to the present application;
FIG. 6 is a schematic flowchart illustrating a temporal filtering method based on global motion estimation according to another embodiment of the present application;
FIG. 7 is a schematic diagram illustrating an embodiment of dividing a matching calculation length of a reference frame image into a plurality of region segments in the temporal filtering method based on global motion estimation according to the present application;
FIG. 8 is a diagram illustrating an embodiment of a sliding operation of a reference frame image in the temporal filtering method based on global motion estimation according to the present application;
FIG. 9 is a schematic flowchart illustrating a temporal filtering method based on global motion estimation according to another embodiment of the present application;
FIG. 10 is a flowchart illustrating a temporal filtering method based on global motion estimation according to another embodiment of the present application;
FIG. 11 is a flowchart illustrating a temporal filtering method based on global motion estimation according to another embodiment of the present application;
FIG. 12 is a diagram illustrating an embodiment of shifting a reference frame image in the temporal filtering method based on global motion estimation according to the present invention;
fig. 13 is a schematic structural diagram of an embodiment of a temporal filtering apparatus based on global motion estimation according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Consumer and industrial unmanned aerial vehicles are widely used, and the requirements on the imaging quality of the camera devices they carry keep rising; the noise level is one of the key factors of video quality. For video noise filtering, temporal noise filtering is more effective than spatial noise filtering. However, traditional time domain filtering without motion compensation suffers from smearing to some extent and fails when the camera device undergoes global motion, while motion-compensated temporal filtering involves a large amount of computation that is difficult to perform in real time, and its motion estimation becomes inaccurate when the image is motion-blurred or the edges of scene targets are unclear, causing additional artifacts.
The method of the embodiments comprises: obtaining a current frame image shot by the camera device; based on the global image mean histograms in the row and column directions of the current frame image and a reference frame image, performing global motion estimation on the current frame image with the reference frame image through a local image optimal matching algorithm to obtain global motion vectors in the row and column directions; shifting the reference frame image by the global motion vectors in the row and column directions to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation; and filtering the current frame image according to the shifted reference frame image and the time domain filter coefficient to obtain the filtered current frame image. In the prior art, both feature-point-based matching and macro-block-based matching require a large amount of computation, whereas the embodiments of the application compute the global motion estimation from the row- and column-direction global image mean histograms of the current frame image and the reference frame image, so the computation amount is small, real-time performance is strong, and high-resolution video can be processed in real time. Traditional motion-compensated time domain filtering produces inaccurate motion estimates when the image is motion-blurred or the edges of scene targets are unclear, causing additional artifacts; because the reference frame image is shifted by the global motion vectors in the row and column directions, smearing is avoided as far as possible during time domain filtering. Determining the time domain filter coefficient based on the global motion estimation provides technical support for adjusting the coefficient in time during filtering, for ensuring the accuracy and reliability of the time domain filtering, and for further enhancing its adaptability.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of a temporal filtering method based on global motion estimation according to the present application, where the method of the present embodiment is applied to a movable platform including an image capturing device, and the movable platform may refer to various platforms that can move automatically or under controlled conditions, for example: unmanned aerial vehicles, unmanned vehicles, ground based robots, unmanned boats, and the like.
The method comprises the following steps: step S101, step S102, step S103, and step S104.
Step S101: and acquiring the current frame image shot by the camera device.
Step S102: and performing global motion estimation on the current frame image by using the reference frame image through a local image optimal matching algorithm, based on the global image mean histograms of the current frame image and the reference frame image in the row and column directions, to obtain global motion vectors in the row and column directions.
The reference frame image in this embodiment may be a frame image that needs to be referred to when performing temporal filtering on the current frame; specifically, the reference frame image may be the previous frame image or one of several previous frame images. In most applications, the reference frame image is the previous frame image.
The row-column-direction global image mean histograms include a row-direction global image mean histogram and a column-direction global image mean histogram. The row-direction global image mean histogram is a histogram in which the value of each point is the gray-level mean of the pixels of one row: if the abscissa represents the row positions of the image, the ordinate represents the gray-level mean of each row's pixels (and vice versa if the axes are swapped); each row position of the image is one point of the histogram, and the largest position coordinate corresponds to the width of the image frame. The column-direction global image mean histogram is a histogram in which the value of each point is the gray-level mean of the pixels of one column: if the abscissa represents the column positions of the image, the ordinate represents the gray-level mean of each column's pixels (and vice versa); each column position of the image is one point of the histogram, and the largest position coordinate corresponds to the length of the image frame.
For example, referring to fig. 2, the size of the frame image is m × n, where m is the length of the frame image (i.e., the length of the column-direction global image mean histogram) and n is the width of the frame image (i.e., the length of the row-direction global image mean histogram). (1) With the abscissa representing the row positions and the ordinate representing the gray-level mean of each row's pixels, a row-direction global image mean histogram of length n is obtained, as shown in fig. 3. The points on the abscissa (the row positions) are 1, 2, 3, …, n, and the ordinate values are: the mean for row 1 equals (gray value of the pixel in row 1, column 1 + gray value of the pixel in row 1, column 2 + … + gray value of the pixel in row 1, column m) divided by m; the mean for row 2 equals (gray value of the pixel in row 2, column 1 + gray value of the pixel in row 2, column 2 + … + gray value of the pixel in row 2, column m) divided by m; …; the mean for row n equals (gray value of the pixel in row n, column 1 + gray value of the pixel in row n, column 2 + … + gray value of the pixel in row n, column m) divided by m.
(2) With the abscissa representing the column positions and the ordinate representing the gray-level mean of each column's pixels, a column-direction global image mean histogram of length m is obtained, as shown in fig. 4. The points on the abscissa (the column positions) are 1, 2, 3, …, m, and the ordinate values are: the mean for column 1 equals (gray value of the pixel in column 1, row 1 + gray value of the pixel in column 1, row 2 + … + gray value of the pixel in column 1, row n) divided by n; the mean for column 2 equals (gray value of the pixel in column 2, row 1 + gray value of the pixel in column 2, row 2 + … + gray value of the pixel in column 2, row n) divided by n; …; the mean for column m equals (gray value of the pixel in column m, row 1 + gray value of the pixel in column m, row 2 + … + gray value of the pixel in column m, row n) divided by n.
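The row- and column-direction mean histograms described above amount to per-row and per-column gray-level averages. The following minimal NumPy sketch illustrates this (the function name `mean_histograms` is ours, not from the patent):

```python
import numpy as np

def mean_histograms(frame):
    """Row- and column-direction global image mean histograms.

    frame: 2-D array of gray values with n rows and m columns.
    Returns (row_hist, col_hist): row_hist[i] is the gray-level mean of
    row i (length n); col_hist[j] is the gray-level mean of column j
    (length m).
    """
    frame = np.asarray(frame, dtype=np.float64)
    row_hist = frame.mean(axis=1)  # average over the m columns of each row
    col_hist = frame.mean(axis=0)  # average over the n rows of each column
    return row_hist, col_hist
```

Each histogram reduces a 2-D matching problem to two cheap 1-D ones, which is the source of the method's low computation cost.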
In the present embodiment, the motion vector includes velocity, displacement, acceleration, and the like.
In this embodiment, the global motion vectors in the row and column directions (a row-direction global motion vector and a column-direction global motion vector) are obtained by performing global motion estimation on the current frame image with the reference frame image through a local image optimal matching algorithm. A plurality of local motion vectors in the row and column directions (local motion vectors in the row direction and in the column direction) correspond to the optimal matching degrees of a plurality of local image mean histograms of the current frame image and the reference frame image in the row and column directions, and the final global motion vectors in the row and column directions are obtained by a statistical calculation over these local motion vectors. For example: the local motion vectors in the row and column directions may be summed and averaged, and the resulting average motion vector taken as the final global motion vector; or the maximum and minimum of the local motion vectors may be removed and the remaining local motion vectors averaged, with the average taken as the final global motion vector; or the local motion vector with the largest statistical count among the local motion vectors may be taken as the final global motion vector; and so on.
The specific matching algorithm includes, but is not limited to: the Sum of Absolute Differences of corresponding pixels (SAD), the Sum of Squared Differences (SSD), Normalized Cross-Correlation of images (NCC), and the like.
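The three matching costs named above are standard formulations and can be sketched as follows (this is illustrative code, not an implementation from the patent):

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: lower is a better match."""
    return float(np.abs(a - b).sum())

def ssd(a, b):
    """Sum of Squared Differences: lower is a better match."""
    return float(((a - b) ** 2).sum())

def ncc(a, b):
    """Normalized Cross-Correlation: closer to 1 is a better match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

SAD is the cheapest of the three, which suits the real-time goal stated above; NCC is more robust to brightness changes at a higher computational cost.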
Step S103: and shifting the reference frame image by using the global motion vector in the row and column directions to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation.
In applications of a camera device on a movable platform (such as the camera device of an unmanned aerial vehicle), static scenes are relatively rare and the camera device undergoes a large amount of global motion relative to the target. Shifting the reference frame image by the global motion vectors in the row and column directions to obtain the shifted reference frame image compensates for the global motion of the camera device and ensures a good temporal noise reduction effect.
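One possible way to apply the row/column global motion vector to the reference frame is the sketch below; the function name and the choice of filling the uncovered border (rather than wrapping pixels around) are our assumptions, not specified by the patent:

```python
import numpy as np

def shift_reference(ref, dx, dy, fill=0):
    """Shift ref by dy rows and dx columns (positive = down/right).

    The border uncovered by the shift is filled with a constant so that
    no wrapped-around pixels leak into the temporal filter.
    """
    shifted = np.full_like(ref, fill)
    h, w = ref.shape
    ys = slice(max(dy, 0), h + min(dy, 0))       # destination rows
    xs = slice(max(dx, 0), w + min(dx, 0))       # destination columns
    ys_src = slice(max(-dy, 0), h + min(-dy, 0))  # source rows
    xs_src = slice(max(-dx, 0), w + min(-dx, 0))  # source columns
    shifted[ys, xs] = ref[ys_src, xs_src]
    return shifted
```

In practice the filter could also fall back to the current frame's pixels in the filled border region, since no valid reference data exists there.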
Since there are many specific implementation manners for performing global motion estimation in step S102, and there are also many results of global motion estimation for these various specific implementation manners, determining a temporal filter coefficient based on the global motion estimation can enable the temporal filter coefficient to be flexibly adjusted according to different specific implementation manners of global motion estimation, and enable the temporal filter coefficient to be flexibly adjusted according to different results of global motion estimation for specific implementation manners, thereby ensuring accuracy, reliability, and strong adaptability of temporal filtering.
Step S104: and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image.
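Step S104 can be illustrated with a first-order recursive blend, a common form of temporal filtering; the patent does not fix the exact filter equation, so this is only a hedged sketch:

```python
import numpy as np

def temporal_filter(cur, ref_shifted, alpha):
    """First-order recursive temporal filter.

    Blends the current frame with the motion-compensated (shifted)
    reference frame. alpha in (0, 1] is the time domain filter
    coefficient: a larger alpha trusts the current frame more, i.e.
    less temporal smoothing.
    """
    return alpha * cur + (1.0 - alpha) * ref_shifted
```

Determining alpha from the global motion estimation, as step S103 describes, would mean raising alpha (less smoothing) when the motion estimate is unreliable and lowering it when the compensation is trustworthy.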
The method comprises the steps of obtaining a current frame image shot by the camera device; based on the global image mean histogram of the line direction of the current frame image and the reference frame image, performing global motion estimation on the current frame image by using the reference frame image through a local image optimal matching algorithm to obtain a global motion vector of the line direction; shifting the reference frame image by using the global motion vector in the row and column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation; and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image. Compared with the prior art that feature point-based matching or macro block-based matching requires a large amount of calculation, the embodiment of the application calculates the global motion estimation based on the global image mean histogram in the row and column directions of the current frame image and the reference frame image, so that the calculation amount is small, the real-time performance is strong, and the real-time processing of the high-resolution video can be realized; compared with the traditional non-motion-compensated time-domain filtering, the motion estimation is inaccurate when the image has motion blur or the edge of a scene target is not clear, so that the additional defect is caused; because the reference frame image is shifted by using the global motion vectors in the row and column directions, the problem of smear can be avoided as much as possible during time domain filtering; the time-domain filter coefficient is determined based on the global motion estimation, so that technical support is further provided for timely adjusting the time-domain filter coefficient during filtering, further technical support is further provided for 
ensuring accuracy and reliability of time-domain filtering, and technical support is further provided for further enhancing adaptability of time-domain filtering.
If the global image mean histogram in the row-column direction of the current frame image and the reference frame image is not obtained in advance, before step S102, the method may include: step S105.
Step S105: and determining a global image mean histogram of the current frame image and the reference frame image in the row-column direction, wherein each point of the global image mean histogram is the gray mean of pixels in each column or each row.
Referring to fig. 5, in an embodiment, step S102 may specifically include: substep S1021 and substep S1022.
Substep S1021: and determining a plurality of displacement offsets corresponding to a plurality of optimal matching degrees of a plurality of local image mean value histograms in the row and column directions of the current frame image and the reference frame image through a local optimal matching algorithm based on the global image mean value histograms in the row and column directions of the current frame image and the reference frame image.
In the present embodiment, the motion vector takes the relatively common form of a displacement offset. During matching, a plurality of local image mean histograms in the row and column directions of the current frame image may be determined and the displacement offsets of the optimal matching degrees searched for on the reference frame image; or a plurality of local image mean histograms in the row and column directions of the reference frame image may be determined and the displacement offsets of the optimal matching degrees searched for on the current frame image.
Substep S1022: and taking the displacement offset corresponding to the maximum statistical times in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
Generally speaking, if there is local motion, the displacement offset corresponding to the local motion region is inconsistent with the displacement offsets corresponding to other regions without local motion, but the statistical number of the displacement offsets corresponding to the local motion region is less than that of the displacement offsets corresponding to other regions without local motion; and taking the displacement offset corresponding to the maximum statistical times in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction, so that the influence of local motion can be obviously eliminated, and the accurate estimation of the global motion displacement is realized.
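The selection described in sub-step S1022, taking the displacement offset with the largest statistical count, can be sketched in a few lines (illustrative only; the helper name is ours):

```python
from collections import Counter

def global_offset(local_offsets):
    """Return the displacement offset with the highest count.

    Offsets contributed by locally moving objects occur less often than
    the dominant global offset, so a majority vote suppresses them.
    """
    return Counter(local_offsets).most_common(1)[0][0]
```

This is run once for the row direction and once for the column direction to obtain the final global motion vector.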
In order to further simplify the operation and the calculation amount, the local image mean histograms may be adjacent local image mean histograms, and referring to fig. 6, the sub-step S1021 specifically may include: substep S10211, substep S10212, and substep S10213.
Substep S10211: and determining the matching calculation length according to the statistical length and the maximum calculable displacement offset of the global image mean histogram in the row and column direction.
The statistical length is less than or equal to the length or width of the frame image; for convenience of calculation, the statistical lengths of the row- and column-direction global image mean histograms may be the same, and the matching calculation length is less than the statistical length. Specifically, given the statistical length z and the maximum calculable displacement offset dz, the matching calculation length t satisfies t = z − 2 × dz. Generally, the larger the statistical length, the higher the estimation accuracy of the global motion estimation and the more time is required; likewise, the larger the maximum calculable displacement offset, the higher the estimation accuracy and the more time is required. The values of z and dz are chosen according to the specific practical application.
Substep S10212: dividing the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image into a plurality of region segments, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in matching calculation so as to obtain a plurality of displacement offsets corresponding to multiple sliding.
The matching calculation length is divided into a plurality of region segments (equally or unequally), and the plurality of region segments correspond to a plurality of adjacent local image mean histograms. Specifically, the matching calculation length t of the global image mean histogram in the row-column direction of the reference frame image is divided into r region segments, and the global image mean histogram in the row-column direction of the current frame image slides from left to right over the length z at the corresponding position, intercepting a length t to participate in the matching calculation, so as to obtain k displacement offsets corresponding to k slides, where k = 2 × dz + 1. Referring to fig. 7, the matching calculation length t (shown schematically at the middle position in the figure) on the global image mean histogram in the row (or column) direction of the reference frame image is divided into r region segments (3 region segments in the figure). Referring to fig. 8, the global image mean histogram in the row (or column) direction of the current frame image is intercepted at length t, sliding from left to right over the length z at the corresponding position, to participate in the matching calculation, and the k displacement offsets corresponding to the k slides can be obtained, where k = 2 × dz + 1.
Substep S10213: and obtaining a plurality of displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region section and the plurality of corresponding sections of the current frame image when the plurality of region sections of the reference frame image slide for a plurality of times.
And obtaining r displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region segment and the corresponding segments of the current frame image when the r region segments of the reference frame image slide k times. Referring to fig. 7 and 8, the plurality of region segments of the reference frame image include region segment 1, region segment 2, and region segment 3; over the k slides, region segment 1 has k corresponding segments in the current frame image, and likewise region segment 2 and region segment 3 each have k corresponding segments. There are k matching degrees between region segment 1 and its k corresponding segments; the corresponding segment with the optimal matching degree among them is corresponding segment ab, and the displacement offset corresponding to this optimal matching degree is displacement offset 1, between region segment 1 and corresponding segment ab. Similarly, there are k matching degrees between region segment 2 and its k corresponding segments; the corresponding segment with the optimal matching degree is corresponding segment bc, giving displacement offset 2 between region segment 2 and corresponding segment bc. There are k matching degrees between region segment 3 and its k corresponding segments; the corresponding segment with the optimal matching degree is corresponding segment cd, giving displacement offset 3 between region segment 3 and corresponding segment cd. The plurality of displacement offsets in the row (or column) direction thus include: displacement offset 1, displacement offset 2, and displacement offset 3.
In order to avoid the matched region of the reference frame image and the current frame image being missing after the subsequent shift, the most middle matching calculation length is selected. That is, in sub-step S10212, dividing the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image into a plurality of region segments may include: dividing the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction into a plurality of region segments. Specifically, the most middle matching calculation length t on the global image mean histogram of the reference frame image in the row-column direction is divided into r region segments.
The optimal matching degree may be measured by the minimum sum of absolute differences (SAD) of pixel values or the minimum sum of squared differences (SSD). That is, in sub-step S10213, obtaining a plurality of displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region segment and the plurality of corresponding segments of the current frame image when the plurality of region segments of the reference frame image slide a plurality of times may specifically include:
obtaining the plurality of displacement offsets in the row-column direction according to the minimum SAD or the minimum SSD between each region segment and the plurality of corresponding segments of the current frame image when the plurality of region segments of the reference frame image slide a plurality of times.
Specifically, the r displacement offsets in the row-column direction are obtained according to the displacement offsets corresponding to the minimum SAD value or the minimum SSD value between each region segment and its corresponding segments of the current frame image when the r region segments of the reference frame image slide k times.
For example: during calculation, the matching calculation length is t and is divided into r region segments, so the length of each region segment is t/r. During window sliding, each region segment performs an SAD calculation against each of its corresponding segments, namely:

SAD(d) = Σ_i | h_ref(i) − h_cur(i + d) |

where h_ref and h_cur denote the values of the local image mean histograms of the reference frame and the current frame over the region segment, the sum runs over the points of the region segment, and d is the current sliding displacement.
When the r region segments slide k times, the local image mean histogram of each region segment (for example, the local image mean histograms of region segments 1, 2, and 3 in fig. 7) yields k SAD coefficients and their corresponding displacement offsets; the displacement offset corresponding to the minimum of the k SAD coefficients is the displacement offset with the optimal matching degree for that region segment, so that r displacement offsets are obtained. A statistical histogram is taken over the r displacement offsets to obtain the displacement offset with the maximum statistical number of times, which is used as the result of the local SAD displacement offset calculation. For example: if the statistical number of displacement offset 1 is 10, the statistical number of displacement offset 2 is 14, and the statistical number of displacement offset 3 is 12, then displacement offset 2, with a statistical number of 14, is the result of the local SAD displacement offset calculation. By this method, the influence of local motion can be obviously eliminated, and an accurate estimation of the global motion displacement is realized.
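The local best-match voting described above can be sketched as follows. This is a simplified illustration assuming NumPy arrays and equal-length region segments (any remainder of t/r is ignored); the function name is not from the patent:

```python
import numpy as np

def local_sad_offset(ref_hist, cur_hist, dz, r):
    """Local-SAD displacement estimate over 1-D global image mean histograms.

    ref_hist/cur_hist: histograms of length z; dz: maximum calculable offset;
    r: number of region segments.  Each segment votes for the offset of its
    best (minimum-SAD) match; the mode of the votes is returned together
    with its statistical count."""
    z = len(ref_hist)
    t = z - 2 * dz                       # matching calculation length
    k = 2 * dz + 1                       # number of sliding positions
    ref_win = ref_hist[dz:dz + t]        # most-middle length-t window
    seg = t // r                         # equal-length region segments
    votes = []
    for s in range(r):                   # each region segment votes once
        sads = [np.abs(ref_win[s * seg:(s + 1) * seg] -
                       cur_hist[d + s * seg:d + (s + 1) * seg]).sum()
                for d in range(k)]       # SAD at each of the k slides
        votes.append(int(np.argmin(sads)) - dz)   # signed best-match offset
    vals, counts = np.unique(votes, return_counts=True)
    return int(vals[np.argmax(counts)]), int(counts.max())
```

With a purely global shift every segment votes for the same offset; a small locally moving region can only sway its own segments, so the mode still reflects the global motion.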
In order to ensure that the estimated global motion vector in the row-column direction is as close to the actual situation as possible, and to ensure the accuracy and reliability of the subsequent time-domain filtering, when the maximum statistical number of times obtained by the local image optimal matching algorithm exceeds a certain threshold, the result of the local image optimal matching algorithm can be considered credible; otherwise it is not credible. When at least one of the finally determined displacement offsets corresponding to the maximum statistical number of times (i.e., the global motion vectors in the row and column directions) is equal to the maximum calculable displacement offset, the image motion may exceed the maximum calculable offset, and the offset calculation result at this time is unreliable.
That is, in the sub-step S1022, the step of taking the displacement offset corresponding to the maximum statistical number of times among the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction may specifically include:
and if the maximum statistical frequency is greater than or equal to a statistical frequency threshold value, and the displacement offset corresponding to the maximum statistical frequency is smaller than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum statistical frequency in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
Wherein the method further comprises: and if the displacement offset with the maximum statistics times is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
Further, if the maximum statistical number of times is smaller than the statistical-times threshold, the local image optimal matching algorithm is not reliable and other algorithms may be used. In an embodiment, before step S103, the method may further include: step S106.
Step S106: if the maximum statistical frequency is smaller than the statistical frequency threshold, performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm based on the global image mean histogram of the current frame image and the reference frame image in the row and column direction to obtain a global motion vector in the row and column direction.
In this embodiment, the global motion vector in the row-column direction is obtained by performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm, and may be a row-column direction global motion vector (including a row-direction global motion vector and a column-direction global motion vector) corresponding to the optimal matching degree of the global image mean histograms in the row-column direction of the current frame image and the reference frame image.
Wherein the specific matching algorithm includes but is not limited to: the Sum of Absolute Differences (SAD) of corresponding pixels, the Sum of Squared Differences (SSD), the Normalized Cross-Correlation (NCC) of images, and the like.
In an embodiment, the global motion vector in the row-column direction is estimated by a global image correlation method; that is, step S106 may specifically include: performing global motion estimation on the current frame image by using the reference frame image through the global image correlation method based on the global image mean histograms of the current frame image and the reference frame image in the row-column direction, so as to obtain the global motion vector in the row-column direction.
When local motion does not exist, the global motion vector in the row and column direction calculated by adopting a global image correlation method is accurate, and the global image correlation method based on the global image mean histogram has strong adaptability to uniform scenes and scenes with motion blur, so that the accuracy of global motion estimation can be ensured.
Wherein, step S106 may specifically further include: substeps 1061 and substep S1062.
Substep S1061: and determining the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in correlation calculation so as to obtain a plurality of correlation coefficients and a plurality of displacement offsets corresponding to multiple sliding.
Specifically, a matching calculation length t on the global image mean histogram in the row-column direction of the reference frame image is determined, and the global image mean histogram in the row-column direction of the current frame image is intercepted at length t, sliding from left to right over the length z at the corresponding position, to participate in the correlation calculation, so as to obtain k correlation coefficients and k displacement offsets corresponding to k slides, where k = 2 × dz + 1. Referring to fig. 7, the matching calculation length t on the global image mean histogram in the row (or column) direction of the reference frame image (shown schematically at the middle position in the figure) is obtained; referring to fig. 8, the global image mean histogram in the row (or column) direction of the current frame image is intercepted at length t, sliding from left to right over the length z at the corresponding position, to participate in the correlation calculation, and the k correlation coefficients and k displacement offsets corresponding to the k slides can be obtained, where k = 2 × dz + 1.
The cross-correlation coefficient is calculated by the following formula:

r_xy = Σ_i (x_i − x̄)(y_i − ȳ) / sqrt( Σ_i (x_i − x̄)² × Σ_i (y_i − ȳ)² )

where x_i is the value (row i or column i) of the global image mean histogram of the current frame image participating in the correlation calculation, x̄ is the mean of the current frame image's histogram segment participating in the calculation, y_i is the value (row i or column i) of the global image mean histogram of the reference frame image participating in the correlation calculation, ȳ is the mean of the reference frame image's histogram segment participating in the calculation, and r_xy is the correlation coefficient.
When the sliding window searches for the maximum correlation coefficient, the reference-frame term sqrt( Σ_i (y_i − ȳ)² ) is a fixed value, so the maximum correlation coefficient can be found by calculating only:

Σ_i (x_i − x̄)(y_i − ȳ) / sqrt( Σ_i (x_i − x̄)² )
in this way, the amount of calculation can be further reduced. And the calculation amount and the iteration times of the global motion estimation are fixed, and the method is suitable for being realized by logic or other hardware processors such as Field Programmable Gate Arrays (FPGA).
Substep S1062: and if the matching degree of the current frame image and the reference frame image on the global image mean histogram of the row and column direction of the statistical length is greater than or equal to the matching threshold value, and the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is less than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients as the global motion vector of the row and column direction.
Specifically, if the matching degree of the current frame image and the reference frame image on the global image mean histogram in the row-column direction of the z length is greater than or equal to the matching threshold, and the displacement offset corresponding to the maximum correlation coefficient of the plurality of correlation coefficients is less than the maximum calculable displacement offset dz, the displacement offset corresponding to the maximum correlation coefficient of the plurality of correlation coefficients is used as the global motion vector in the row-column direction.
Wherein the method further comprises: and if the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
When the local image optimal matching method is not reliable and the histogram-based global image correlation method is used to calculate the global motion vector in the row-column direction, the matching degree between the equal-length global image mean histograms of the current frame image and the reference frame image in the row-column direction also needs to be judged, to determine whether the calculated global motion vector in the row-column direction is valid.
For example: when the camera device rotates, when the camera device moves so fast that the displacement offset exceeds the maximum calculable displacement offset, when the camera device observes different motions of the foreground and the background of the target, or when the target is blurred during motion, the histogram-based global image correlation method may produce a displacement offset calculation error in the global motion estimation, thereby causing an abnormal image display.
If the matching degree of the current frame image and the reference frame image on the global image mean histogram in the row-column direction with equal length is greater than or equal to the matching threshold, the two histograms can be considered to be matched, and the calculation result of the histogram-based global image correlation method is credible. When the finally determined displacement offset corresponding to the largest correlation coefficient of the plurality of correlation coefficients is equal to the maximum calculable displacement offset, it can be considered that the image motion may exceed the calculable maximum offset, and the calculation result at this time is also unreliable.
In order to avoid the matched region of the reference frame image and the current frame image being missing after the subsequent shift, the most middle matching calculation length is selected. That is, in sub-step S1061, determining the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image may specifically include: determining the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction. Specifically, the most middle matching calculation length t on the global image mean histogram of the reference frame image in the row-column direction is determined.
In other embodiments, the global motion vector in the row-column direction may be estimated by the minimum sum of absolute differences (SAD) method or the minimum sum of squared differences (SSD) method over the global image; that is, step S106 may further include: performing global motion estimation on the current frame image by using the reference frame image through the global SAD method or the global SSD method based on the global image mean histograms of the current frame image and the reference frame image in the row-column direction, so as to obtain the global motion vector in the row-column direction.
Specifically, during sliding, SAD or SSD operation is performed on two equal-length global image mean histograms of the sliding window, and the displacement offset corresponding to the minimum SAD/SSD value is the displacement offset corresponding to the optimal matching degree.
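A sketch of the SSD variant under the same windowing assumptions (NumPy arrays; names are illustrative):

```python
import numpy as np

def best_ssd_offset(ref_hist, cur_hist, dz):
    """Global SSD match over global image mean histograms: the offset whose
    intercepted current-frame window has the minimum sum of squared
    differences against the fixed most-middle reference window wins."""
    z = len(ref_hist)
    t = z - 2 * dz
    ref_win = ref_hist[dz:dz + t]                     # fixed reference window
    ssds = [((cur_hist[d:d + t] - ref_win) ** 2).sum()
            for d in range(2 * dz + 1)]               # SSD at each slide
    return int(np.argmin(ssds)) - dz                  # signed best offset
```

Replacing the squared difference with `np.abs(...)` gives the SAD variant; both pick the minimum rather than the maximum used by the correlation method.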
The details of determining the temporal filter coefficients based on the global motion estimation in step S103 are described in detail below.
In the embodiment of the present application, the time domain filter coefficient is determined by a first time domain filter coefficient and a second time domain filter coefficient. The first time domain filter coefficient is related to the matching degree corresponding to the global motion vector in the row-column direction (i.e., the matching degree between the corresponding regions of the current frame image and the reference frame image): the larger the matching degree, the larger the first time domain filter coefficient; the smaller the matching degree, the smaller the first time domain filter coefficient. The second time domain filter coefficient is related to the credibility of the global motion vector in the row-column direction obtained from the global motion estimation: the higher the credibility, the larger the second time domain filter coefficient; the lower the credibility, the smaller the second time domain filter coefficient.
If a local image optimal matching algorithm is adopted, in step S103, the determining a temporal filter coefficient based on the global motion estimation may include: substep S11, substep S12, and substep S13, as shown in fig. 9.
Substep S11: and obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the local image optimal matching algorithm, the corresponding relation between the preset credibility range and the preset second time domain filter coefficient.
Substep S12: and obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and the corresponding relation between the preset first time domain filter coefficient.
Substep S13: and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
It should be noted that sub-step S11 and sub-step S12 have no fixed order and can be performed simultaneously.
Specifically, the sub-step S11 may include: if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is smaller than the maximum calculable displacement offset, the second time domain filter coefficient is 1; and if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is equal to the maximum calculable displacement offset, the second time domain filter coefficient is 0.
If the maximum statistical times corresponding to the global motion vector in the row and column direction obtained by the local image optimal matching algorithm are smaller than the statistical time threshold, the global motion vector in the row and column direction obtained by the local image optimal matching algorithm is not credible, and other algorithms, such as a global image matching algorithm, can be adopted.
If a global image matching algorithm is adopted, in step S103, the determining a temporal filter coefficient based on the global motion estimation may include: substep S21, substep S22, and substep S23, as shown in fig. 10.
Substep S21: and obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the global image matching algorithm, the corresponding relation between the preset credibility range and the preset second time domain filter coefficient.
Substep S22: and obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and the corresponding relation between the preset first time domain filter coefficient.
Substep S23: and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
It should be noted that sub-step S21 and sub-step S22 have no fixed order and can be performed simultaneously.
Specifically, the sub-step S21 may include: if the matching degree corresponding to the global motion vector in the row-column direction obtained by the global image matching algorithm is greater than or equal to the matching threshold, and the displacement offset corresponding to the maximum correlation coefficient for that global motion vector is smaller than the maximum calculable displacement offset, the second time domain filter coefficient is obtained according to the matching degree corresponding to the global motion vector in the row-column direction obtained by the global image matching algorithm and the preset correspondence between matching degree ranges and preset second time domain filter coefficients.
In this embodiment, the credibility of the global motion vector in the row and column direction obtained by the global image matching algorithm may be divided into different credibility levels according to the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm, where the greater the matching degree is, the greater the second temporal filter coefficient (which is similar to the first temporal filter coefficient at this time) is, the smaller the matching degree is, and the smaller the second temporal filter coefficient is.
For example, when the sliding-window correlation calculation is performed using the histogram-based global image correlation method, a global SAD operation is also performed on the two equal-length histograms; when the SAD deviation of the histograms at the calculated displacement offset is less than a certain threshold, the two histograms are considered matched, and the calculation result of the histogram-based global image correlation method is credible.
Wherein the time domain filter coefficient is a product of the first time domain filter coefficient and the second time domain filter coefficient.
For example, let the first time domain filter coefficient be s1 and the matching degree be H; then s1 is given by:

s1 = 0, if H < lowthres;
s1 = ratio × (H − lowthres) / (highthres − lowthres), if lowthres ≤ H < highthres;
s1 = ratio, if H ≥ highthres;

where the low and high match thresholds lowthres and highthres are thresholds related to the image noise level, highthres > lowthres, and ratio is the maximum time domain filter coefficient, 0 < ratio < 1.
The second temporal filter coefficient is s2, s2 is determined according to the credibility degree of the global motion vector in the row and column direction obtained by different matching algorithms, and s2 is greater than or equal to 0 and less than or equal to 1.
The final time domain filter coefficient may be defined as the product of the two: s = s1 × s2.
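Combining s1 and s2, a sketch of the coefficient computation (the linear ramp between the two thresholds is an assumption consistent with the description above; names are illustrative):

```python
def temporal_coeff(H, s2, lowthres, highthres, ratio):
    """Final time domain filter coefficient s = s1 * s2.

    s1 ramps linearly from 0 to ratio as the matching degree H rises from
    lowthres to highthres (the exact ramp shape is an assumption); s2 is the
    credibility coefficient, 0 <= s2 <= 1."""
    if H < lowthres:
        s1 = 0.0
    elif H < highthres:
        s1 = ratio * (H - lowthres) / (highthres - lowthres)
    else:
        s1 = ratio
    return s1 * s2
```

A matching degree below lowthres disables temporal filtering entirely (s = 0), while an untrusted motion estimate (s2 = 0) does the same regardless of H.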
Details of shifting the reference frame image in step S103 are described in detail below.
In an embodiment, in step S103, the shifting the reference frame image by using the column-row global motion vector to obtain a shifted reference frame image may include: substep S31 and substep S32, as shown in fig. 11.
Substep S31: and carrying out global offset processing on the reference frame image according to the global motion vector in the row-column direction to obtain an offset image of the missing partial image. As shown in fig. 12.
Substep S32: and expanding the missing partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
And the missing partial images are expanded, so that the boundary deviation between the missing partial images and the normal images can be reduced, and the imaging effect can be ensured.
Specifically, the sub-step S32, before expanding the missing partial image, may further include: blurring the outer boundary of the offset image; in this case, the sub-step S32, expanding the missing partial image and combining the expanded partial image and the offset image into the offset reference frame image, may include: copying the blurred outer boundary image to the position of the missing partial image to obtain an expanded partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
The outer boundary of the offset image is blurred, and the blurred outer boundary image is copied to the position of the missing partial image. For example, the blurring process may refer to a 3 × 3 filter matrix, as follows:
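The patent's concrete matrix is not reproduced above; as an illustrative assumption, a normalized Gaussian-like 3 × 3 kernel is a common choice, and the boundary blurring can be sketched as:

```python
import numpy as np

# Illustrative 3x3 blur kernel (an assumption -- the patent's actual matrix
# is not shown here); it is normalized so that it sums to 1.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

def blur3x3(img):
    """Blur an image with the 3x3 kernel via explicit neighborhood sums;
    edge pixels are handled by clamping coordinates to the image."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    acc += KERNEL[di + 1, dj + 1] * img[ii, jj]
            out[i, j] = acc
    return out
```

Because the kernel is normalized, a uniform region passes through unchanged; only edges and texture near the boundary are softened before being copied into the missing strip.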
of course, when the global offset processing is performed on the reference frame image, the missing partial image may not be subjected to the boundary extension, and the filter coefficient of the missing partial image is defined as 0 at this time, and the temporal filtering is not performed.
Details of step S104 are described in detail below.
The time domain filtering may use a recursive IIR filter, or methods such as Kalman filtering and multi-frame average filtering. In one application, for a current frame image pixel V(p, q) and a reference frame image pixel W(p + dp, q + dq), the time domain filtering output Vo(p, q) is:
Vo(p,q) = (1 − s(p,q)) V(p,q) + s(p,q) W(p + dp, q + dq)
where s(p, q) is the time domain filter coefficient, and dp and dq are the global motion vectors (displacement offsets) in the row and column directions.
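The per-pixel IIR blend above can be sketched directly (array-valued s is allowed so the coefficient may vary per pixel, e.g. to be 0 over the missing partial image):

```python
import numpy as np

def temporal_filter(cur, ref_shifted, s):
    """Vo = (1 - s) * V + s * W applied per pixel, where W is the shifted
    reference frame; s may be a scalar or an array of per-pixel time
    domain filter coefficients."""
    cur = np.asarray(cur, dtype=float)
    ref_shifted = np.asarray(ref_shifted, dtype=float)
    return (1.0 - s) * cur + s * ref_shifted
```

With s = 0 the output is the unfiltered current frame; with s = 1 it is the shifted reference frame; intermediate values trade noise reduction against motion artifacts.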
Referring to fig. 13, fig. 13 is a schematic structural diagram of an embodiment of the temporal filtering apparatus based on global motion estimation according to the present invention, where the apparatus of the present embodiment is applied to a movable platform including an image capturing apparatus, it should be noted that the apparatus can perform the steps in the temporal filtering method based on global motion estimation, and details of relevant contents refer to the relevant contents of the temporal filtering method based on global motion estimation, which will not be described herein again.
The apparatus 100 comprises: the memory 1 and the processor 2 are connected by a bus.
The processor 2 may be a micro-control unit, a central processing unit, a digital signal processor, or the like.
The memory 1 may be a Flash chip, a read-only memory, a magnetic disk, an optical disk, a usb disk, or a removable hard disk.
The memory 1 is used for storing a computer program; the processor 2 is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring a current frame image shot by the camera device; based on the global image mean histograms of the current frame image and the reference frame image in the row-column direction, performing global motion estimation on the current frame image by using the reference frame image through a local image optimal matching algorithm to obtain a global motion vector in the row-column direction; shifting the reference frame image by using the global motion vector in the row-column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation; and filtering the current frame image according to the shifted reference frame image and the time domain filter coefficient to obtain the filtered current frame image.
Wherein the processor, when executing the computer program, implements the steps of: and determining a global image mean histogram of the current frame image and the reference frame image in the row-column direction, wherein each point of the global image mean histogram is the gray mean of pixels in each column or each row.
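For concreteness, the row-column mean "histograms" described above (one gray mean per row and one per column) could be computed as in this minimal Python sketch; the function name is hypothetical:

```python
import numpy as np

def mean_histograms(img):
    """Per-row and per-column gray means of a 2-D grayscale image.

    Each point of the column-direction curve is the mean of one column of
    pixels; each point of the row-direction curve is the mean of one row.
    """
    row_hist = img.mean(axis=1)  # length = image height
    col_hist = img.mean(axis=0)  # length = image width
    return row_hist, col_hist

img = np.arange(12, dtype=float).reshape(3, 4)
row_hist, col_hist = mean_histograms(img)
```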
Wherein the processor, when executing the computer program, implements the steps of: based on the global image mean histograms in the row and column directions of the current frame image and the reference frame image, determining a plurality of displacement offsets corresponding to a plurality of optimal matching degrees of a plurality of local image mean histograms in the row and column directions of the current frame image and the reference frame image through a local optimal matching algorithm; and taking the displacement offset corresponding to the maximum statistical times in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
Wherein the processor, when executing the computer program, implements the steps of: determining a matching calculation length according to the statistical length and the maximum calculable displacement offset of the global image mean histogram in the row and column direction; dividing the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image into a plurality of region segments, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in matching calculation so as to obtain a plurality of displacement offsets corresponding to multiple sliding; and obtaining a plurality of displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region section and the plurality of corresponding sections of the current frame image when the plurality of region sections of the reference frame image slide for a plurality of times.
Wherein the processor, when executing the computer program, implements the steps of: and dividing the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction into a plurality of region segments.
Wherein the processor, when executing the computer program, implements the steps of: obtaining a plurality of displacement offsets in the row-column direction according to the minimum sum of absolute differences (SAD) or the minimum sum of squared differences (SSD) of pixel values between each region segment and the plurality of corresponding segments of the current frame image when the plurality of region segments of the reference frame image slide for a plurality of times.
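The local optimal matching steps above (centered matching calculation length, division into region segments, left-to-right sliding, per-segment SAD, and voting for the offset with the maximum statistical count) can be sketched in one dimension as follows; the segment count, shift range, and names are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def local_best_match_vector(ref_hist, cur_hist, max_shift=8, n_segments=4):
    """1-D sketch of the local image optimal matching algorithm.

    The centered matching length of the reference histogram is divided into
    region segments; each segment is matched by SAD against the current
    histogram at every candidate offset, and the offset with the maximum
    statistical count among the per-segment winners is returned.
    """
    n = len(ref_hist)
    match_len = n - 2 * max_shift       # matching calculation length
    start = max_shift                    # centered on the histogram
    seg_len = match_len // n_segments
    votes = []
    for s in range(n_segments):
        a = start + s * seg_len
        ref_seg = ref_hist[a:a + seg_len]
        # slide the current histogram left to right over candidate offsets
        sads = [np.abs(ref_seg - cur_hist[a + d:a + d + seg_len]).sum()
                for d in range(-max_shift, max_shift + 1)]
        votes.append(int(np.argmin(sads)) - max_shift)
    offset, count = Counter(votes).most_common(1)[0]
    return offset, count

rng = np.random.default_rng(0)
ref = rng.random(64)          # reference-frame mean histogram (toy data)
cur = np.roll(ref, 3)         # current frame displaced by 3 samples
offset, count = local_best_match_vector(ref, cur)
```

Here all four segments vote for the same offset, so the statistical count equals the segment count; with local object motion the votes would scatter and only the dominant offset survives.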
Wherein the processor, when executing the computer program, implements the steps of: and if the maximum statistical frequency is greater than or equal to a statistical frequency threshold value, and the displacement offset corresponding to the maximum statistical frequency is smaller than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum statistical frequency in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
Wherein the processor, when executing the computer program, implements the steps of: if the maximum statistical frequency is smaller than the statistical frequency threshold, performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm based on the global image mean histogram of the current frame image and the reference frame image in the row and column direction to obtain a global motion vector in the row and column direction.
Wherein the processor, when executing the computer program, implements the steps of: performing global motion estimation on the current frame image by using the reference frame image through a global image correlation method based on the global image mean histogram of the row-column direction of the current frame image and the reference frame image to obtain a global motion vector of the row-column direction.
Wherein the processor, when executing the computer program, implements the steps of: determining the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in correlation calculation so as to obtain a plurality of correlation coefficients and a plurality of displacement offsets corresponding to multiple sliding; and if the matching degree of the current frame image and the reference frame image on the global image mean histogram of the row and column direction of the statistical length is greater than or equal to the matching threshold value, and the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is less than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients as the global motion vector of the row and column direction.
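A one-dimensional sketch of this correlation fallback, under the same illustrative assumptions (centered matching length, symmetric shift range, hypothetical threshold values):

```python
import numpy as np

def global_correlation_vector(ref_hist, cur_hist, max_shift=8, match_thresh=0.9):
    """1-D sketch of the global image correlation method.

    The centered matching length of the reference histogram is correlated
    with sliding windows of the current histogram; the offset of the peak
    correlation coefficient becomes the global motion vector, provided the
    peak is at least match_thresh and the offset stays strictly below the
    maximum calculable displacement offset.
    """
    n = len(ref_hist)
    match_len = n - 2 * max_shift
    a = max_shift
    ref_seg = ref_hist[a:a + match_len]
    coeffs = [np.corrcoef(ref_seg, cur_hist[a + d:a + d + match_len])[0, 1]
              for d in range(-max_shift, max_shift + 1)]
    best = int(np.argmax(coeffs))
    d_best = best - max_shift
    if coeffs[best] >= match_thresh and abs(d_best) < max_shift:
        return d_best
    return 0  # no reliable global motion found

rng = np.random.default_rng(1)
ref = rng.random(64)
cur = np.roll(ref, 2)         # current frame displaced by 2 samples
d = global_correlation_vector(ref, cur)
```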
Wherein the processor, when executing the computer program, implements the steps of: and determining the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction.
Wherein the processor, when executing the computer program, implements the steps of: performing global motion estimation on the current frame image by using the reference frame image through a global-image minimum sum of absolute differences (SAD) method or minimum sum of squared differences (SSD) method, based on the global image mean histogram of the row and column directions of the current frame image and the reference frame image, to obtain a global motion vector of the row and column directions.
Wherein the processor, when executing the computer program, implements the steps of: obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the local image optimal matching algorithm, a preset credibility range and a corresponding relation between preset second time domain filter coefficients; obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and the corresponding relation between the preset first time domain filter coefficients; and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
Wherein the processor, when executing the computer program, implements the steps of: if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is smaller than the maximum calculable displacement offset, the second time domain filter coefficient is 1; and if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is equal to the maximum calculable displacement offset, the second time domain filter coefficient is 0.
Wherein the processor, when executing the computer program, implements the steps of: obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the global image matching algorithm, a preset credibility range and a corresponding relation between preset second time domain filter coefficients; obtaining a first time domain filter coefficient according to the corresponding relation among the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and a preset first time domain filter coefficient; and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
Wherein the processor, when executing the computer program, implements the steps of: and if the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm is greater than or equal to a matching threshold value, and the displacement offset corresponding to the maximum correlation coefficient corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm is smaller than the maximum calculable displacement offset, according to the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm, presetting the corresponding relation between the matching degree range and a preset second time domain filter coefficient, and obtaining a second time domain filter coefficient.
Wherein the time domain filter coefficient is a product of the first time domain filter coefficient and the second time domain filter coefficient.
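A toy sketch of combining the two coefficients; the piecewise-linear mapping from matching degree to the first coefficient and the 0/1 reliability gate for the second are hypothetical, and only the final product rule comes from the text above:

```python
def combine_filter_coefficients(match_degree, reliable, match_thresh=0.9):
    """Final temporal filter coefficient = first coefficient * second coefficient.

    match_degree: matching degree of the global motion vector (e.g. a peak
    correlation coefficient in [0, 1]); mapped linearly onto [0, 1] above
    the threshold (hypothetical mapping).
    reliable: whether the estimated vector passed the reliability checks;
    gates the result to 0 when False (hypothetical 0/1 second coefficient).
    """
    s1 = min(max((match_degree - match_thresh) / (1.0 - match_thresh), 0.0), 1.0)
    s2 = 1.0 if reliable else 0.0
    return s1 * s2
```

An unreliable motion estimate thus disables temporal filtering entirely, which avoids ghosting when frames cannot be aligned.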
Wherein the processor, when executing the computer program, implements the steps of: if the displacement offset corresponding to the maximum statistical times is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
Wherein the processor, when executing the computer program, implements the steps of: and if the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
Wherein the processor, when executing the computer program, implements the steps of: performing global offset processing on the reference frame image according to the global motion vector in the row-column direction to obtain an offset image of the missing partial image; and expanding the missing partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
Wherein the processor, when executing the computer program, implements the steps of: blurring the outer boundary of the offset image; copying the blurred outer boundary image to the position of the missing partial image to obtain an expanded partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
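A simplified sketch of this shift-and-expand step; here the missing strip is filled by replicating the nearest boundary row/column of the shifted image (the blurring of the outer boundary described above is omitted for brevity), and all names are illustrative:

```python
import numpy as np

def shift_and_expand(ref, dp, dq):
    """Shift the reference frame by (dp rows, dq columns) and fill the
    uncovered strips by replicating the nearest boundary row/column of the
    shifted image (edge replication; the boundary blurring described in the
    text is omitted for brevity)."""
    h, w = ref.shape
    out = np.zeros_like(ref)
    # destination region covered by the shifted reference
    out[max(dp, 0):h + min(dp, 0), max(dq, 0):w + min(dq, 0)] = \
        ref[max(-dp, 0):h + min(-dp, 0), max(-dq, 0):w + min(-dq, 0)]
    # expand the missing strips
    if dp > 0:
        out[:dp, :] = out[dp, :]
    elif dp < 0:
        out[dp:, :] = out[dp - 1, :]
    if dq > 0:
        out[:, :dq] = out[:, dq:dq + 1]
    elif dq < 0:
        out[:, dq:] = out[:, dq - 1:dq]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
shifted = shift_and_expand(img, 1, 0)  # global motion: one row downward
```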
The present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to implement the temporal filtering method based on global motion estimation as described in any of the above. For a detailed description, reference is made to the relevant sections above, which are not repeated here.
The computer readable storage medium may be an internal storage unit of the above apparatus, such as a hard disk or a memory. The computer readable storage medium may also be an external storage device of the apparatus, such as a plug-in hard disk, a smart media card, a secure digital (SD) card, a flash memory card, or the like.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above description is only for the specific embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (43)
1. A temporal filtering method based on global motion estimation is applied to a movable platform comprising a camera device, and the method comprises the following steps:
acquiring a current frame image shot by the camera device;
based on the global image mean histogram of the row-column direction of the current frame image and the reference frame image, performing global motion estimation on the current frame image by using the reference frame image through a local image optimal matching algorithm to obtain a global motion vector of the row-column direction;
shifting the reference frame image by using the global motion vector in the row and column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation;
and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image.
2. The method of claim 1, wherein before performing global motion estimation on the current frame image by using the reference frame image through a local optimal matching algorithm based on the row-column direction global image mean histogram of the current frame image and the reference frame image to obtain a row-column direction global motion vector, the method comprises:
and determining a global image mean histogram of the current frame image and the reference frame image in the row-column direction, wherein each point of the global image mean histogram is the gray mean of pixels in each column or each row.
3. The method of claim 1, wherein the performing global motion estimation on the current frame image by using the reference frame image through a local optimal matching algorithm based on the row-column direction global image mean histogram of the current frame image and the reference frame image to obtain a row-column direction global motion vector comprises:
based on the global image mean histograms in the row and column directions of the current frame image and the reference frame image, determining a plurality of displacement offsets corresponding to a plurality of optimal matching degrees of a plurality of local image mean histograms in the row and column directions of the current frame image and the reference frame image through a local optimal matching algorithm;
and taking the displacement offset corresponding to the maximum statistical times in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
4. The method according to claim 3, wherein the determining, by a local optimal matching algorithm, a plurality of displacement offsets corresponding to optimal matching degrees of a plurality of local image mean histograms in row and column directions of the current frame image and the reference frame image based on the global image mean histograms in row and column directions of the current frame image and the reference frame image comprises:
determining a matching calculation length according to the statistical length and the maximum calculable displacement offset of the global image mean histogram in the row and column direction;
dividing the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image into a plurality of region segments, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in matching calculation so as to obtain a plurality of displacement offsets corresponding to multiple sliding;
and obtaining a plurality of displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region section and the plurality of corresponding sections of the current frame image when the plurality of region sections of the reference frame image slide for a plurality of times.
5. The method of claim 4, wherein the dividing the matching computation length on the histogram of the global image mean value in the row-column direction of the reference frame image into a plurality of region segments comprises:
and dividing the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction into a plurality of region segments.
6. The method according to claim 4, wherein obtaining a plurality of displacement offsets in a row-column direction according to the displacement offset corresponding to the optimal matching degree between each region segment and a plurality of corresponding segments of the current frame image when the plurality of region segments of the reference frame image slide for a plurality of times comprises:
and obtaining a plurality of displacement offsets in the row-column direction according to the minimum sum of absolute differences (SAD) or the minimum sum of squared differences (SSD) of pixel values between each region segment and the plurality of corresponding segments of the current frame image when the plurality of region segments of the reference frame image slide for a plurality of times.
7. The method according to claim 4, wherein said using the displacement offset corresponding to the maximum statistical number of times of the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction comprises:
and if the maximum statistical frequency is greater than or equal to a statistical frequency threshold value, and the displacement offset corresponding to the maximum statistical frequency is smaller than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum statistical frequency in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
8. The method of claim 7, wherein before shifting the reference frame image by the row-column direction global motion vector to obtain a shifted reference frame image and determining temporal filter coefficients based on the global motion estimation, further comprising:
if the maximum statistical frequency is smaller than the statistical frequency threshold, performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm based on the global image mean histogram of the current frame image and the reference frame image in the row and column direction to obtain a global motion vector in the row and column direction.
9. The method of claim 8, wherein the obtaining the global motion vector in the row-column direction by performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm based on the global image mean histogram in the row-column direction of the current frame image and the reference frame image comprises:
performing global motion estimation on the current frame image by using the reference frame image through a global image correlation method based on the global image mean histogram of the row-column direction of the current frame image and the reference frame image to obtain a global motion vector of the row-column direction.
10. The method of claim 9, wherein the obtaining the global motion vector in the row-column direction by performing global motion estimation on the current frame image by using the reference frame image through a global image correlation method based on the global image mean histogram in the row-column direction of the current frame image and the reference frame image comprises:
determining the matching calculation length on the global image mean histogram in the row-column direction of the reference frame image, and enabling the global image mean histogram in the row-column direction of the current frame image to slide from left to right on the statistical length of the corresponding position to intercept the matching calculation length to participate in correlation calculation so as to obtain a plurality of correlation coefficients and a plurality of displacement offsets corresponding to multiple sliding;
and if the matching degree of the current frame image and the reference frame image on the global image mean histogram of the row and column direction of the statistical length is greater than or equal to the matching threshold value, and the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is less than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients as the global motion vector of the row and column direction.
11. The method of claim 10, wherein determining the matching computation length on the histogram of the global image mean of the row and column directions of the reference frame image comprises:
and determining the most middle matching calculation length on the global image mean histogram of the reference frame image in the row-column direction.
12. The method of claim 8, wherein the obtaining the global motion vector in the row-column direction by performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm based on the global image mean histogram in the row-column direction of the current frame image and the reference frame image comprises:
performing global motion estimation on the current frame image by using the reference frame image through a global-image minimum sum of absolute differences (SAD) method or a minimum sum of squared differences (SSD) method based on the global image mean histogram of the row and column directions of the current frame image and the reference frame image to obtain a global motion vector of the row and column directions.
13. The method of claim 3, wherein determining temporal filter coefficients based on the global motion estimation comprises:
obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the local image optimal matching algorithm, a preset credibility range and a corresponding relation between preset second time domain filter coefficients;
obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and the corresponding relation between the preset first time domain filter coefficients;
and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
14. The method according to claim 13, wherein said obtaining a second temporal filter coefficient according to a correspondence between a confidence level of the global motion vector in the row and column direction obtained by the local image optimal matching algorithm, a preset confidence level range, and a preset second temporal filter coefficient comprises:
if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is smaller than the maximum calculable displacement offset, the second time domain filter coefficient is 1;
and if the maximum statistical times corresponding to the global motion vectors in the row and column directions obtained by the local image optimal matching algorithm are greater than or equal to a statistical time threshold value, and the displacement offset corresponding to the maximum statistical times is equal to the maximum calculable displacement offset, the second time domain filter coefficient is 0.
15. The method of claim 10, wherein determining temporal filter coefficients based on the global motion estimation comprises:
obtaining a second time domain filter coefficient according to the credibility of the global motion vector in the row and column direction obtained by the global image matching algorithm, a preset credibility range and a corresponding relation between preset second time domain filter coefficients;
obtaining a first time domain filter coefficient according to the corresponding relation among the matching degree corresponding to the global motion vector in the row and column direction, the range of the preset matching degree and a preset first time domain filter coefficient;
and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
16. The method according to claim 15, wherein said obtaining a second temporal filter coefficient according to a correspondence between a confidence level of the global motion vector in the row and column direction obtained by the global image matching algorithm, a preset confidence level range, and a preset second temporal filter coefficient comprises:
and if the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm is greater than or equal to a matching threshold value, and the displacement offset corresponding to the maximum correlation coefficient corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm is smaller than the maximum calculable displacement offset, according to the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm, presetting the corresponding relation between the matching degree range and a preset second time domain filter coefficient, and obtaining a second time domain filter coefficient.
17. The method according to claim 13 or 14, wherein the temporal filter coefficient is a product of the first temporal filter coefficient and the second temporal filter coefficient.
18. The method of claim 7, further comprising:
and if the displacement offset corresponding to the maximum statistical times is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
19. The method of claim 10, further comprising:
and if the displacement offset corresponding to the maximum correlation coefficient in the plurality of correlation coefficients is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
20. The method of claim 1, wherein shifting the reference frame picture with the row-column global motion vector to obtain a shifted reference frame picture comprises:
performing global offset processing on the reference frame image according to the global motion vector in the row-column direction to obtain an offset image of the missing partial image;
and expanding the missing partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
21. The method of claim 20, wherein prior to expanding the missing partial image, comprising:
blurring the outer boundary of the offset image;
the expanding the missing partial image and combining the expanded partial image and the offset image into the offset reference frame image includes:
copying the blurred outer boundary image to the position of the missing partial image to obtain an expanded partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
22. A temporal filtering device based on global motion estimation, applied to a movable platform including an image capturing device, the device comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring a current frame image shot by the camera device;
based on the global image mean histogram of the row-column direction of the current frame image and the reference frame image, performing global motion estimation on the current frame image by using the reference frame image through a local image optimal matching algorithm to obtain a global motion vector of the row-column direction;
shifting the reference frame image by using the global motion vector in the row and column direction to obtain a shifted reference frame image, and determining a time domain filter coefficient based on the global motion estimation;
and filtering the current frame image according to the shifted reference frame image and the time domain filtering coefficient to obtain the filtered current frame image.
23. The apparatus of claim 22, wherein the processor, when executing the computer program, performs the steps of:
and determining a global image mean histogram of the current frame image and the reference frame image in the row-column direction, wherein each point of the global image mean histogram is the gray mean of pixels in each column or each row.
24. The apparatus of claim 22, wherein the processor, when executing the computer program, performs the steps of:
based on the global image mean histograms in the row and column directions of the current frame image and the reference frame image, determining a plurality of displacement offsets corresponding to a plurality of optimal matching degrees of a plurality of local image mean histograms in the row and column directions of the current frame image and the reference frame image through a local optimal matching algorithm;
and taking the displacement offset corresponding to the maximum statistical times in the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
25. The apparatus of claim 24, wherein the processor, when executing the computer program, performs the steps of:
determining a matching calculation length according to the statistical length of the global image mean histogram in the row-column direction and the maximum calculable displacement offset;
dividing the matching calculation length on the global image mean histogram of the reference frame image in the row-column direction into a plurality of region segments, and sliding the global image mean histogram of the current frame image, within the statistical length at the corresponding position, from left to right to intercept matching calculation lengths that participate in the matching calculation, so as to obtain a plurality of displacement offsets corresponding to the multiple slides;
and obtaining the plurality of displacement offsets in the row-column direction according to the displacement offset corresponding to the optimal matching degree between each region segment of the reference frame image and the plurality of corresponding segments of the current frame image over the multiple slides.
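One plausible reading of claim 25, sketched below: split the reference profile into fixed-length segments, and for each segment slide a same-length window of the current profile over a bounded shift range, recording the shift with the best (lowest-SAD) match. The segment length, shift range, and SAD criterion here are illustrative parameter choices, not values fixed by the patent:

```python
import numpy as np

def segment_offsets(ref_profile, cur_profile, seg_len=16, max_shift=8):
    """For each region segment of the reference profile, find the
    shift of the current profile that minimizes SAD, per one reading
    of claim 25. Returns one best-match offset per segment."""
    n = len(ref_profile)
    offsets = []
    # keep max_shift of margin on both sides so every slide stays in range
    for start in range(max_shift, n - max_shift - seg_len + 1, seg_len):
        ref_seg = ref_profile[start:start + seg_len]
        best_shift, best_sad = 0, np.inf
        for shift in range(-max_shift, max_shift + 1):
            cur_seg = cur_profile[start + shift:start + shift + seg_len]
            sad = np.abs(ref_seg - cur_seg).sum()
            if sad < best_sad:
                best_sad, best_shift = sad, shift
        offsets.append(best_shift)
    return offsets
```

Feeding these per-segment offsets to the vote of claim 24 yields the global motion vector for that direction.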
26. The apparatus of claim 25, wherein the processor, when executing the computer program, performs the steps of:
and dividing the matching calculation length located in the middle of the global image mean histogram of the reference frame image in the row-column direction into a plurality of region segments.
27. The apparatus of claim 25, wherein the processor, when executing the computer program, performs the steps of:
and obtaining the plurality of displacement offsets in the row-column direction according to the minimum sum of absolute differences (SAD) or the minimum sum of squared differences (SSD) of pixel values between each region segment of the reference frame image and the plurality of corresponding segments of the current frame image over the multiple slides.
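The two match metrics named in claim 27 are standard. For two profile segments they can be written as (helper names are illustrative):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two profile segments."""
    return np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)).sum()

def ssd(a, b):
    """Sum of squared differences between two profile segments."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return (d * d).sum()
```

SSD penalizes large single-point mismatches more heavily than SAD; either can serve as the "optimal matching degree" criterion in the sliding search.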
28. The apparatus of claim 25, wherein the processor, when executing the computer program, performs the steps of:
and if the maximum statistical count is greater than or equal to a statistical count threshold, and the displacement offset corresponding to the maximum statistical count is smaller than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum statistical count among the plurality of displacement offsets in the row-column direction as the global motion vector in the row-column direction.
29. The apparatus of claim 28, wherein the processor, when executing the computer program, performs the steps of:
if the maximum statistical count is smaller than the statistical count threshold, performing global motion estimation on the current frame image by using the reference frame image through a global image matching algorithm, based on the global image mean histograms of the current frame image and the reference frame image in the row and column direction, to obtain the global motion vector in the row and column direction.
30. The apparatus of claim 29, wherein the processor, when executing the computer program, performs the steps of:
and performing global motion estimation on the current frame image by using the reference frame image through a global image correlation method, based on the global image mean histograms of the current frame image and the reference frame image in the row and column direction, to obtain the global motion vector in the row and column direction.
31. The apparatus of claim 30, wherein the processor, when executing the computer program, performs the steps of:
determining the matching calculation length on the global image mean histogram of the reference frame image in the row-column direction, and sliding the global image mean histogram of the current frame image, within the statistical length at the corresponding position, from left to right to intercept matching calculation lengths that participate in the correlation calculation, so as to obtain a plurality of correlation coefficients and a plurality of displacement offsets corresponding to the multiple slides;
and if the matching degree between the current frame image and the reference frame image on the global image mean histograms of the statistical length in the row-column direction is greater than or equal to a matching threshold, and the displacement offset corresponding to the maximum correlation coefficient among the plurality of correlation coefficients is smaller than the maximum calculable displacement offset, taking the displacement offset corresponding to the maximum correlation coefficient as the global motion vector in the row-column direction.
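The correlation fallback of claims 29-31 can be sketched as sliding the current profile against a centre section of the reference profile and keeping the shift with the highest correlation coefficient. The use of the Pearson coefficient and the window placement below are one plausible reading of the claims, not a confirmed detail of the patent:

```python
import numpy as np

def correlation_offset(ref_profile, cur_profile, max_shift=8):
    """Global matching by correlation: slide the current profile over
    +/- max_shift against the centre section of the reference profile
    and return the shift with the highest correlation coefficient,
    together with that coefficient."""
    n = len(ref_profile)
    ref_seg = ref_profile[max_shift:n - max_shift]  # centre section
    best_shift, best_r = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        cur_seg = cur_profile[max_shift + shift:n - max_shift + shift]
        r = np.corrcoef(ref_seg, cur_seg)[0, 1]
        if r > best_r:
            best_r, best_shift = r, shift
    return best_shift, best_r
```

The returned coefficient doubles as the "matching degree" that the claim compares against the matching threshold before accepting the offset.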
32. The apparatus of claim 31, wherein the processor, when executing the computer program, performs the steps of:
and determining the matching calculation length located in the middle of the global image mean histogram of the reference frame image in the row-column direction.
33. The apparatus of claim 29, wherein the processor, when executing the computer program, performs the steps of:
and performing global motion estimation on the current frame image by using the reference frame image through a minimum sum of absolute differences (SAD) method or a minimum sum of squared differences (SSD) method on the global image, based on the global image mean histograms of the current frame image and the reference frame image in the row and column direction, to obtain the global motion vector in the row and column direction.
34. The apparatus of claim 24, wherein the processor, when executing the computer program, performs the steps of:
obtaining a second time domain filter coefficient according to the reliability of the global motion vector in the row and column direction obtained by the local image optimal matching algorithm, a preset reliability range, and a preset correspondence with second time domain filter coefficients;
obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, a preset matching degree range, and a preset correspondence with first time domain filter coefficients;
and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
35. The apparatus of claim 34, wherein the processor, when executing the computer program, performs the steps of:
if the maximum statistical count corresponding to the global motion vector in the row and column direction obtained by the local image optimal matching algorithm is greater than or equal to a statistical count threshold, and the displacement offset corresponding to the maximum statistical count is smaller than the maximum calculable displacement offset, the second time domain filter coefficient is 1;
and if the maximum statistical count corresponding to the global motion vector in the row and column direction obtained by the local image optimal matching algorithm is greater than or equal to the statistical count threshold, and the displacement offset corresponding to the maximum statistical count is equal to the maximum calculable displacement offset, the second time domain filter coefficient is 0.
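The reliability gate of claim 35, and the product rule of claim 38, can be sketched as follows (function and parameter names are illustrative; the `None` fallback reflects the case of claim 29, where the vote is not decisive and the global matching algorithm is used instead):

```python
def second_coefficient(max_count, count_threshold, offset_at_max, max_offset):
    """Reliability gate per claim 35: 1 when the vote is decisive and
    the winning offset lies inside the calculable range, 0 when it
    sits exactly at the range limit."""
    if max_count >= count_threshold:
        if abs(offset_at_max) < max_offset:
            return 1
        if abs(offset_at_max) == max_offset:
            return 0
    return None  # vote not decisive: fall back to global matching (claim 29)

def temporal_coefficient(first, second):
    """Per claim 38, the temporal filter coefficient is the product of
    the first and second coefficients."""
    return first * second
```

Because the coefficients multiply, a zero second coefficient disables temporal filtering entirely whenever the motion estimate is untrustworthy, regardless of the matching degree.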
36. The apparatus of claim 31, wherein the processor, when executing the computer program, performs the steps of:
obtaining a second time domain filter coefficient according to the reliability of the global motion vector in the row and column direction obtained by the global image matching algorithm, a preset reliability range, and a preset correspondence with second time domain filter coefficients;
obtaining a first time domain filter coefficient according to the matching degree corresponding to the global motion vector in the row and column direction, a preset matching degree range, and a preset correspondence with first time domain filter coefficients;
and determining the time domain filter coefficient according to the first time domain filter coefficient and the second time domain filter coefficient.
37. The apparatus of claim 36, wherein the processor, when executing the computer program, performs the steps of:
and if the matching degree corresponding to the global motion vector in the row and column direction obtained by the global image matching algorithm is greater than or equal to a matching threshold, and the displacement offset corresponding to the maximum correlation coefficient for that global motion vector is smaller than the maximum calculable displacement offset, obtaining the second time domain filter coefficient according to that matching degree, the preset matching degree range, and the preset correspondence with second time domain filter coefficients.
38. The apparatus of claim 34 or 35, wherein the temporal filter coefficient is a product of the first temporal filter coefficient and the second temporal filter coefficient.
39. The apparatus of claim 28, wherein the processor, when executing the computer program, performs the steps of:
and if the displacement offset with the largest statistical count is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
40. The apparatus of claim 31, wherein the processor, when executing the computer program, performs the steps of:
and if the displacement offset corresponding to the maximum correlation coefficient among the plurality of correlation coefficients is equal to the maximum calculable displacement offset, determining that the global motion vector in the row-column direction is zero.
41. The apparatus of claim 22, wherein the processor, when executing the computer program, performs the steps of:
performing global offset processing on the reference frame image according to the global motion vector in the row-column direction to obtain an offset image with a missing partial region;
and generating an expanded partial image for the missing region, and combining the expanded partial image and the offset image into the offset reference frame image.
42. The apparatus according to claim 41, wherein the processor, when executing the computer program, performs the steps of:
blurring the outer boundary of the offset image;
and copying the blurred outer-boundary image to the position of the missing region to obtain the expanded partial image, and combining the expanded partial image and the offset image into the offset reference frame image.
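Claims 41-42 shift the reference frame and fill the strip uncovered by the shift with a blurred copy of the outer boundary. A one-dimensional (horizontal-only) sketch, assuming a simple box blur along the boundary column; the function name, blur kernel, and edge-replication scheme are illustrative assumptions:

```python
import numpy as np

def shift_with_border_fill(ref, dx, blur=3):
    """Shift the reference frame by dx columns and fill the uncovered
    strip with a box-blurred copy of the outer boundary column,
    loosely following claims 41-42."""
    h, w = ref.shape
    out = np.empty_like(ref)
    kernel = np.ones(blur) / blur
    if dx >= 0:
        out[:, dx:] = ref[:, :w - dx]
        # blur the outer (left) boundary column and replicate it into the gap
        edge = np.convolve(ref[:, 0].astype(np.float32), kernel, mode='same')
        out[:, :dx] = edge.astype(ref.dtype)[:, None]
    else:
        out[:, :w + dx] = ref[:, -dx:]
        # blur the outer (right) boundary column and replicate it into the gap
        edge = np.convolve(ref[:, -1].astype(np.float32), kernel, mode='same')
        out[:, w + dx:] = edge.astype(ref.dtype)[:, None]
    return out
```

Filling the gap with blurred boundary content avoids a hard black border that would otherwise leak into the filtered output when the shifted reference is blended with the current frame.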
43. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the global motion estimation based temporal filtering method according to any one of claims 1-21.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2020/088846 WO2021223127A1 (en) | 2020-05-06 | 2020-05-06 | Global motion estimation-based time-domain filtering method and device, and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN112805989A true CN112805989A (en) | 2021-05-14 |
Family
ID=75809274
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202080005224.5A Pending CN112805989A (en) | 2020-05-06 | 2020-05-06 | Time domain filtering method and device based on global motion estimation and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112805989A (en) |
| WO (1) | WO2021223127A1 (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114841866B (en) * | 2022-03-25 | 2024-09-13 | 武汉博宇光电系统有限责任公司 | Infrared image time domain filtering method based on displacement detection registration |
| CN114708168B (en) * | 2022-04-26 | 2025-08-26 | 维沃移动通信(深圳)有限公司 | Image processing method and electronic device |
| CN116503743B (en) * | 2023-06-28 | 2023-09-08 | 自然资源部第二海洋研究所 | Optimal matching method for geographic vector data and high-resolution remote sensing image |
| CN118505574B (en) * | 2023-09-13 | 2025-05-06 | 荣耀终端股份有限公司 | Image processing method and electronic equipment |
| CN118279819A (en) * | 2024-03-26 | 2024-07-02 | 广州炘美生物科技有限公司 | Regional monitoring data enhancement processing system |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070070250A1 (en) * | 2005-09-27 | 2007-03-29 | Samsung Electronics Co., Ltd. | Methods for adaptive noise reduction based on global motion estimation |
| CN104202504A (en) * | 2014-08-19 | 2014-12-10 | 昆明理工大学 | Processing method of real-time electronic image stabilization circuit system based on FPGA (Field Programmable Gate Array) |
| CN104735301A (en) * | 2015-04-01 | 2015-06-24 | 中国科学院自动化研究所 | Video time domain denoising device and method |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104717402B (en) * | 2015-04-01 | 2017-12-01 | 中国科学院自动化研究所 | A kind of Space-time domain combines noise estimating system |
| CN109743495B (en) * | 2018-11-28 | 2021-02-09 | 深圳市中科视讯智能系统技术有限公司 | Electronic stability augmentation method and device for video image |
2020
- 2020-05-06 CN CN202080005224.5A patent/CN112805989A/en active Pending
- 2020-05-06 WO PCT/CN2020/088846 patent/WO2021223127A1/en not_active Ceased
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115442522A (en) * | 2022-08-10 | 2022-12-06 | 深圳市贝嘉技术有限公司 | Microscope-based imaging adjustment method, device, equipment and storage medium |
| CN115442522B (en) * | 2022-08-10 | 2023-11-21 | 深圳市贝嘉技术有限公司 | Imaging adjustment method, device, equipment and storage medium based on microscope |
| CN120355581A (en) * | 2025-06-18 | 2025-07-22 | 马栏山音视频实验室 | Image filtering processing method, system, equipment and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021223127A1 (en) | 2021-11-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112805989A (en) | Time domain filtering method and device based on global motion estimation and storage medium | |
| US9589328B2 (en) | Globally dominant point spread function estimation | |
| CN111161172B (en) | Infrared image column direction stripe eliminating method, system and computer storage medium | |
| US8582915B2 (en) | Image enhancement for challenging lighting conditions | |
| CN114731408A (en) | System, device and method for video frame interpolation using structured neural network | |
| US11282216B2 (en) | Image noise reduction | |
| KR101811718B1 (en) | Method and apparatus for processing the image | |
| GB2536430B (en) | Image noise reduction | |
| WO2017100971A1 (en) | Deblurring method and device for out-of-focus blurred image | |
| CN113658050B (en) | Image denoising method, denoising device, mobile terminal and storage medium | |
| CN113409353A (en) | Motion foreground detection method and device, terminal equipment and storage medium | |
| CN113012061A (en) | Noise reduction processing method and device and electronic equipment | |
| US10674178B2 (en) | One-dimensional segmentation for coherent motion estimation | |
| US11457158B2 (en) | Location estimation device, location estimation method, and program recording medium | |
| CN103377472B (en) | For removing the method and system of attachment noise | |
| EP1955548B1 (en) | Motion estimation using motion blur information | |
| Mohan | Adaptive super-resolution image reconstruction with lorentzian error norm | |
| JP3959547B2 (en) | Image processing apparatus, image processing method, and information terminal apparatus | |
| Xu et al. | Interlaced scan CCD image motion deblur for space-variant motion blurs | |
| CN116437024B (en) | Video real-time noise reduction method and device based on motion estimation and noise estimation | |
| Peng et al. | Image restoration for interlaced scan CCD image with space-variant motion blurs | |
| JP4669949B2 (en) | Moving object region extraction method | |
| Li et al. | Combining weighted curvelet accumulation with motion vector duty cycle for nonuniform video deblurring | |
| Teranishi et al. | Blind Blur Image Restoration For High-resolution Images | |
| Jung et al. | Iterative PSF estimation and its application to shift invariant and variant blur reduction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| WD01 | Invention patent application deemed withdrawn after publication | | |
Application publication date: 2021-05-14