CN117237287A - Rapid belt deviation detection method based on visual identification - Google Patents
Rapid belt deviation detection method based on visual identification
- Publication number
- CN117237287A CN117237287A CN202311130387.8A CN202311130387A CN117237287A CN 117237287 A CN117237287 A CN 117237287A CN 202311130387 A CN202311130387 A CN 202311130387A CN 117237287 A CN117237287 A CN 117237287A
- Authority
- CN
- China
- Prior art keywords
- belt
- video image
- image
- camera
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Image Analysis (AREA)
Abstract
The application discloses a rapid belt deviation detection method based on visual recognition, comprising the following steps: acquiring video images in real time from an IP network camera installed on belt equipment; calibrating the coordinate system of the video image with a calibration algorithm; processing the video image to obtain a first video image; processing the first video image to obtain a second video image; extracting a belt area image from the second video image; performing edge detection on the belt area image to obtain a belt edge profile; converting pixel coordinate points of the belt edge contour into image physical coordinate points; and calculating the distance between the image physical coordinate points of the belt edge profile and the position coordinate points set when the belt is stationary, and judging whether the distance exceeds a preset value; if so, the belt has deviated. The application solves the problem of the complex power-supply and network-wiring layout of the traditional approach, in which a monitoring camera is fixed above the belt equipment to detect belt deviation.
Description
Technical Field
The application relates to the technical field of belt deviation detection, in particular to a rapid belt deviation detection method based on visual identification.
Background
Belt deviation often occurs during the operation of coal conveying gallery belts. At present, a common way to detect such deviation is to fix several monitoring cameras above the belt equipment to shoot from top to bottom: edge detection is first performed on the belt in the video image, the detected edge is then compared with a preset electronic limit to obtain a deviation value, and finally whether the belt has deviated is judged from that value. Fixing monitoring cameras above the belt equipment makes the choice of mounting position difficult and introduces many factors to consider, such as the complex layout of the cameras' power supply and network wiring, which reduces the efficiency of engineering deployment.
Disclosure of Invention
In view of these shortcomings, the application provides a rapid belt deviation detection method based on visual recognition, aiming to solve the problems of the traditional approach of fixing a monitoring camera above the belt equipment: the mounting position of the camera is difficult to choose, and many factors must be considered, such as the complex layout of the camera's power supply and network wiring, which reduces the efficiency of engineering deployment.
To achieve the purpose, the application adopts the following technical scheme:
a belt deviation quick detection method based on visual identification is characterized in that: the method comprises the following steps:
step S1: acquiring video images in real time according to an IP network camera installed on belt equipment;
step S2: carrying out coordinate system calibration on the video image by adopting a calibration algorithm;
step S3: performing deformity correction processing and horizontal correction processing on the video image to obtain a first video image;
step S4: performing binarization processing and filtering processing on the first video image to obtain a second video image;
step S5: extracting a belt area image in the second video image;
step S6: performing edge detection on the belt region image to obtain a belt edge profile;
step S7: converting pixel coordinate points of the belt edge contour into image physical coordinate points by using parameters calibrated by a coordinate system;
step S8: calculating the distance between the image physical coordinate point of the belt edge profile and the position coordinate point set when the belt is stationary, judging whether the distance is larger than a preset value, and if so, indicating that the belt is deviated; if not, the belt is not deviated.
Preferably, in step S2, the calibration algorithm specifically comprises the following sub-steps:
step S21: obtaining a focal length of a camera through camera calibration;
step S22: obtaining the actual distance between the target object and the camera and the actual size of the target object through measurement;
step S23: calculating the pixel size of the target object in the video image from the focal length of the camera, the actual distance between the target object and the camera, and the actual size of the target object, according to the following formula:
h = f × H / D
wherein h represents the pixel size of the target object in the video image; D represents the actual distance between the target object and the camera; f represents the focal length of the camera; H represents the actual size of the target object;
step S24: and carrying out ratio operation on the pixel size of the target object in the video image and the actual size of the target object to obtain a coordinate conversion ratio.
Preferably, in step S3, the distortion correction processing of the video image specifically includes the following sub-steps:
step S31: calibrating the video image through camera calibration to obtain internal parameters and external parameters of a camera;
step S32: calculating a distortion parameter matrix using internal parameters and external parameters of the camera;
step S33: and remapping and gray-scale reconstructing the video image by using the distortion parameter matrix to obtain a corrected image corresponding to the video image.
Preferably, in step S3, the video image is subjected to a horizontal correction process, specifically including the following sub-steps:
step S34: counting the directions of lines appearing in the corrected image corresponding to the video image;
step S35: taking the direction of the line with the largest occurrence number as the direction of the corrected image corresponding to the video image;
step S36: calculating a slope frequency domain array;
step S37: and performing image level correction based on the inverse transformation of the slope frequency domain array corresponding to the line with the largest occurrence number.
Preferably, in step S5, the following substeps are specifically included:
step S51: converting the second video image from the BGR color space to the HSV color space using the cv2.cvtColor() function in OpenCV;
step S52: defining the HSV ranges of red, and creating three masks using the cv2.inRange() function in OpenCV, corresponding respectively to the three HSV ranges of red;
step S53: merging the three masks into a unified mask, and performing a bitwise AND operation on the second video image and the unified mask using the cv2.bitwise_and() function in OpenCV, to extract the red region in the second video image;
step S54: performing dilation and erosion on the extracted second video image to obtain the belt area image.
Preferably, in step S6, the following substeps are specifically included:
step S61: obtaining belt edge contour points in the belt region image by using an edge detection algorithm;
step S62: calculating key points in the belt edge contour points by using a non-maximum suppression algorithm;
step S63: fitting the edge profile of the belt by using a least square method according to the key points;
step S64: and calculating the abscissa extreme points in the edge profile of the belt according to the edge profile of the belt.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the scheme, the IP network camera is directly installed in the middle area of the belt equipment, the problem that the power supply and network routing layout of the camera installed above the belt equipment are complex can be avoided, the camera does not need to re-optimize the software algorithm according to different use scenes, and the engineering deployment efficiency is improved.
Drawings
FIG. 1 is a flowchart of the steps of a method for rapidly detecting belt deviation based on visual recognition;
FIG. 2 is a schematic diagram of one embodiment of the present application;
FIG. 3 is a schematic representation of one embodiment of the present application;
FIG. 4 is a schematic diagram of one embodiment of the present application;
FIG. 5 is a schematic diagram of one embodiment of the present application;
fig. 6 is a schematic diagram of one embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present application and are not to be construed as limiting the present application.
A belt deviation rapid detection method based on visual recognition comprises the following steps:
step S1: acquiring video images in real time according to an IP network camera installed on belt equipment;
step S2: carrying out coordinate system calibration on the video image by adopting a calibration algorithm;
step S3: performing deformity correction processing and horizontal correction processing on the video image to obtain a first video image;
step S4: performing binarization processing and filtering processing on the first video image to obtain a second video image;
step S5: extracting a belt area image in the second video image;
step S6: performing edge detection on the belt region image to obtain a belt edge profile;
step S7: converting pixel coordinate points of the belt edge contour into image physical coordinate points by using parameters calibrated by a coordinate system;
step S8: calculating the distance between the image physical coordinate point of the belt edge profile and the position coordinate point set when the belt is stationary, judging whether the distance is larger than a preset value, and if so, indicating that the belt is deviated; if not, the belt is not deviated.
In the belt deviation rapid detection method based on visual recognition, as shown in fig. 1, the first step is to collect video images in real time from the IP network camera installed on the belt equipment. In this embodiment, the IP network camera is installed directly in the middle area of the belt equipment; this is convenient to install and avoids the complex power-supply and network-routing layout required when a camera is mounted above the belt equipment. The second step is to calibrate the coordinate system of the video image with a calibration algorithm. Coordinate system calibration determines the conversion relation between the image coordinate system and the real-world coordinate system; in this embodiment, calibrating the video image converts its pixel coordinate system into the image physical coordinate system. The third step is to perform distortion correction and horizontal correction on the video image to obtain the first video image. The camera lens distorts the video image, so distortion correction is required to restore it to a normal image. The distortion-corrected image may still be tilted, so horizontal correction is then applied to level it, which facilitates the subsequent extraction of features from the video image.
The fourth step is to perform binarization and filtering on the first video image to obtain the second video image. Binarizing the first video image yields a binary image, which makes feature extraction and analysis more convenient and enhances the contrast of target features in the image. For filtering the binary image, this embodiment uses Gaussian filtering, which removes high-frequency noise. The fifth step is to extract the belt area image from the second video image. A dedicated laser light source module is installed in the middle area of the belt equipment to outline the belt area and highlight the belt edges. When the module projects red laser light onto the belt, the belt scatters it and appears red; this embodiment therefore segments the red region of the second video image by color segmentation and extracts it, thereby extracting the belt area. The sixth step is to perform edge detection on the belt area image to obtain the belt edge profile; in this embodiment, an edge detection algorithm such as the Sobel or Prewitt operator extracts the belt edge profile from the belt area image. The seventh step is to convert the pixel coordinate points of the belt edge contour into image physical coordinate points using the calibrated coordinate-system parameters; converting the contour points' pixel coordinates into actual physical coordinates makes it possible to compute the distance to the position coordinates set when the belt is stationary.
The eighth step is to calculate the distance between the image physical coordinate points of the belt edge profile and the position coordinate points set when the belt is stationary, and to judge whether the distance exceeds a preset value: if so, the belt has deviated; if not, it has not. In this embodiment, the belt is considered to run along a straight track without deviation when set at the correct angle, and the position coordinate points of the belt at that angle are the points set when the belt is stationary. The preset value is 1 cm. The distance is calculated by subtracting the stationary position coordinate points from the image physical coordinate points of the belt edge profile; if the result is greater than 1 cm, the belt has deviated.
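As an illustration only, the step S8 check reduces to a per-point distance comparison. In the sketch below the 1 cm threshold comes from the embodiment, while the coordinate values and the function name are hypothetical:

```python
# Minimal sketch of the step S8 deviation check. The 1 cm threshold comes
# from the embodiment; the coordinates and the function name are hypothetical.
def is_deviating(edge_points_cm, baseline_points_cm, threshold_cm=1.0):
    """True if any edge point drifts more than threshold_cm from the baseline."""
    return any(abs(e - b) > threshold_cm
               for e, b in zip(edge_points_cm, baseline_points_cm))

baseline = [10.0, 10.0, 10.0]   # edge x-positions set when the belt is stationary
running = [10.3, 11.2, 10.1]    # edge x-positions measured while running, in cm
print(is_deviating(running, baseline))  # 11.2 - 10.0 = 1.2 cm > 1 cm, prints True
```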
In this scheme, the IP network camera is installed directly in the middle area of the belt equipment, which avoids the complex power-supply and network-routing layout of a camera installed above the belt equipment; the software algorithm also does not need to be re-optimized for different use scenarios, so engineering deployment efficiency is improved.
Preferably, in step S2, the calibration algorithm specifically comprises the following sub-steps:
step S21: obtaining a focal length of a camera through camera calibration;
step S22: obtaining the actual distance between the target object and the camera and the actual size of the target object through measurement;
step S23: calculating the pixel size of the target object in the video image from the focal length of the camera, the actual distance between the target object and the camera, and the actual size of the target object, according to the following formula:
h = f × H / D
wherein h represents the pixel size of the target object in the video image; D represents the actual distance between the target object and the camera; f represents the focal length of the camera; H represents the actual size of the target object;
step S24: and carrying out ratio operation on the pixel size of the target object in the video image and the actual size of the target object to obtain a coordinate conversion ratio.
In step S21, camera calibration refers to establishing the relationship between pixel positions in the camera image and scene point positions: the parameters of the camera model, including the camera's internal and external parameters, are solved from the correspondence between the coordinates of feature points in the image and their world coordinates, according to the camera imaging model. In this embodiment, the internal parameter of interest is the focal length of the camera, which is obtained through camera calibration. In step S22, the actual distance between the target object and the camera and the actual size of the target object are measured with a measuring tool such as a vernier caliper. In step S24, the coordinate conversion ratio facilitates converting the pixel coordinate system of the video image into the image physical coordinate system.
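Under the pinhole camera model implied by the variable definitions above (projected size h = f × H / D), steps S23 and S24 reduce to two one-line computations. The numeric values in this sketch are hypothetical, and the unit choice (millimetres) is an assumption:

```python
# Sketch of steps S23-S24 under the pinhole model; all numbers are
# hypothetical and the units (millimetres) are an assumption.
def pixel_size(f, H, D):
    """Projected size h of an object: h = f * H / D (pinhole model)."""
    return f * H / D

def conversion_ratio(h, H):
    """Step S24: ratio between projected size and actual size."""
    return h / H

f = 4.0      # focal length from camera calibration, mm
H = 500.0    # measured actual size of the target object, mm
D = 2000.0   # measured distance between target object and camera, mm
h = pixel_size(f, H, D)         # 4 * 500 / 2000 = 1.0
ratio = conversion_ratio(h, H)  # 1.0 / 500 = 0.002
```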
Preferably, in step S3, the distortion correction processing of the video image specifically includes the following sub-steps:
step S31: calibrating the video image through camera calibration to obtain internal parameters and external parameters of a camera;
step S32: calculating a distortion parameter matrix using internal parameters and external parameters of the camera;
step S33: and remapping and gray-scale reconstructing the video image by using the distortion parameter matrix to obtain a corrected image corresponding to the video image.
In this embodiment, the video image is remapped and gray-scale reconstructed using the distortion parameter matrix to obtain the corrected image, which restores the distorted video image to a normal image. Specifically, remapping converts pixels at one location in an image to a specified location in another image through a mapping relationship. Gray-scale reconstruction can use nearest-neighbor interpolation, bilinear interpolation, or cubic convolution interpolation: nearest-neighbor interpolation has low precision, cubic convolution interpolation has high precision but a large computational cost, and bilinear interpolation is a compromise between the two, balancing accuracy and computation.
Preferably, in step S3, the video image is subjected to a horizontal correction process, which specifically includes the following sub-steps:
step S34: counting the directions of lines appearing in the corrected image corresponding to the video image;
step S35: taking the direction of the line with the largest occurrence number as the direction of the corrected image corresponding to the video image;
step S36: calculating a slope frequency domain array;
step S37: and performing image level correction based on the inverse transformation of the slope frequency domain array corresponding to the line with the largest occurrence number.
In this embodiment, the slopes of all straight lines in the corrected image corresponding to the video image are found through the Hough transform; a slope gives the direction of a line. Taking the direction of the most frequent line as the direction of the image, image horizontal correction is performed based on the inverse transformation of the slope frequency-domain array corresponding to that line, as follows: count the most frequent slope and take it as the direction of the image; obtain the rotation angle of that slope; and correct the image to horizontal by the inverse transform of that rotation angle.
Preferably, in step S5, the method specifically comprises the following substeps:
step S51: converting the second video image from the BGR color space to the HSV color space using the cv2.cvtColor() function in OpenCV;
step S52: defining the HSV ranges of red, and creating three masks using the cv2.inRange() function in OpenCV, corresponding respectively to the three HSV ranges of red;
step S53: merging the three masks into a unified mask, and performing a bitwise AND operation on the second video image and the unified mask using the cv2.bitwise_and() function in OpenCV, to extract the red region in the second video image;
step S54: performing dilation and erosion on the extracted second video image to obtain the belt area image.
In this embodiment, cv2.cvtColor(), cv2.inRange(), and cv2.bitwise_and() are functions of the OpenCV image processing library. The HSV range of red covers the hue, saturation, and value (brightness) ranges of red. In this scheme, the belt area is marked out by the red laser: since the laser turns the belt red, the red area can be extracted. Specifically, the red region in the second video image is segmented by color segmentation; during segmentation, three masks are created with the cv2.inRange() function, and each mask is bitwise ANDed with the second video image, giving three result diagrams, as shown in figs. 2, 3 and 4. The three masks are then merged into one mask, which is bitwise ANDed with the second video image to obtain a result image; finally, dilation and erosion are applied to the result image to remove noise points and obtain the final segmented belt area image, as shown in fig. 5.
Preferably, in step S6, the method specifically comprises the following substeps:
step S61: obtaining belt edge contour points in the belt region image by using an edge detection algorithm;
step S62: calculating key points in the belt edge contour points by using a non-maximum suppression algorithm;
step S63: fitting the edge profile of the belt by using a least square method according to the key points;
step S64: and calculating the abscissa extreme points in the edge profile of the belt according to the edge profile of the belt.
Specifically, as shown in fig. 6, in step S61 the edge detection algorithm used in this embodiment is the Sobel operator, which is simple to implement in the spatial domain, gives a good edge detection effect, and is relatively insensitive to noise. In step S62, a non-maximum suppression algorithm effectively removes redundant belt edge contour points, improving the accuracy of the detection result. In step S63, the belt edge profile is fitted by the least squares method, which is computationally simple: under the best-fit line, substituting the independent variables of the known samples into the fitted line minimizes the sum of squared errors between the fitted values and the actual values. In step S64, calculating the abscissa extreme points of the belt edge profile determines the position where the belt deviation is largest.
Furthermore, functional units in various embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations of the above embodiments may be made by those skilled in the art within the scope of the application.
Claims (6)
1. A belt deviation quick detection method based on visual identification is characterized in that: the method comprises the following steps:
step S1: acquiring video images in real time according to an IP network camera installed on belt equipment;
step S2: carrying out coordinate system calibration on the video image by adopting a calibration algorithm;
step S3: performing deformity correction processing and horizontal correction processing on the video image to obtain a first video image;
step S4: performing binarization processing and filtering processing on the first video image to obtain a second video image;
step S5: extracting a belt area image in the second video image;
step S6: performing edge detection on the belt region image to obtain a belt edge profile;
step S7: converting pixel coordinate points of the belt edge contour into image physical coordinate points by using parameters calibrated by a coordinate system;
step S8: calculating the distance between the image physical coordinate point of the belt edge profile and the position coordinate point set when the belt is stationary, judging whether the distance is larger than a preset value, and if so, indicating that the belt is deviated; if not, the belt is not deviated.
2. The rapid detection method for belt deviation based on visual recognition according to claim 1, wherein the rapid detection method comprises the following steps: in step S2, the calibration algorithm specifically includes the following sub-steps:
step S21: obtaining a focal length of a camera through camera calibration;
step S22: obtaining the actual distance between the target object and the camera and the actual size of the target object through measurement;
step S23: calculating the pixel size of the target object in the video image from the focal length of the camera, the actual distance between the target object and the camera, and the actual size of the target object, according to the following formula:
h = f × H / D
wherein h represents the pixel size of the target object in the video image; D represents the actual distance between the target object and the camera; f represents the focal length of the camera; H represents the actual size of the target object;
step S24: and carrying out ratio operation on the pixel size of the target object in the video image and the actual size of the target object to obtain a coordinate conversion ratio.
3. The rapid detection method for belt deviation based on visual recognition according to claim 1, wherein the rapid detection method comprises the following steps: in step S3, the distortion correction processing is performed on the video image, and specifically includes the following sub-steps:
step S31: calibrating the video image through camera calibration to obtain internal parameters and external parameters of a camera;
step S32: calculating a distortion parameter matrix using internal parameters and external parameters of the camera;
step S33: and remapping and gray-scale reconstructing the video image by using the distortion parameter matrix to obtain a corrected image corresponding to the video image.
4. A method for rapidly detecting belt deviation based on visual recognition according to claim 3, wherein: in step S3, the video image is subjected to a horizontal correction process, which specifically includes the following sub-steps:
step S34: counting the directions of lines appearing in the corrected image corresponding to the video image;
step S35: taking the direction of the line with the largest occurrence number as the direction of the corrected image corresponding to the video image;
step S36: calculating a slope frequency domain array;
step S37: and performing image level correction based on the inverse transformation of the slope frequency domain array corresponding to the line with the largest occurrence number.
5. The rapid detection method for belt deviation based on visual recognition according to claim 1, wherein the rapid detection method comprises the following steps: in step S5, the method specifically includes the following substeps:
step S51: converting the second video image from the BGR color space to the HSV color space using the cv2.cvtColor() function in OpenCV;
step S52: defining the HSV ranges of red, and creating three masks using the cv2.inRange() function in OpenCV, corresponding respectively to the three HSV ranges of red;
step S53: merging the three masks into a unified mask, and performing a bitwise AND operation on the second video image and the unified mask using the cv2.bitwise_and() function in OpenCV, to extract the red region in the second video image;
step S54: performing dilation and erosion on the extracted second video image to obtain the belt area image.
6. The rapid detection method for belt deviation based on visual recognition according to claim 1, wherein the rapid detection method comprises the following steps: in step S6, the method specifically includes the following sub-steps:
step S61: obtaining belt edge contour points in the belt region image by using an edge detection algorithm;
step S62: calculating key points in the belt edge contour points by using a non-maximum suppression algorithm;
step S63: fitting the edge profile of the belt by using a least square method according to the key points;
step S64: and calculating the abscissa extreme points in the edge profile of the belt according to the edge profile of the belt.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311130387.8A CN117237287A (en) | 2023-09-01 | 2023-09-01 | Rapid belt deviation detection method based on visual identification |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311130387.8A CN117237287A (en) | 2023-09-01 | 2023-09-01 | Rapid belt deviation detection method based on visual identification |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117237287A true CN117237287A (en) | 2023-12-15 |
Family
ID=89093960
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311130387.8A Pending CN117237287A (en) | 2023-09-01 | 2023-09-01 | Rapid belt deviation detection method based on visual identification |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117237287A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118609047A (en) * | 2024-05-31 | 2024-09-06 | 石家庄铁道大学 | Obstacle recognition method based on machine vision |
| CN119370519A (en) * | 2025-01-02 | 2025-01-28 | 北京科技大学 | Intelligent visual perception multivariable adaptive belt deviation control method and system |
| CN119991752A (en) * | 2025-04-17 | 2025-05-13 | 中交天津港湾工程研究院有限公司 | A wave breaking object model test method based on target detection technology |
| CN118609047B (en) * | 2024-05-31 | 2025-10-17 | 石家庄铁道大学 | Obstacle recognition method based on machine vision |
Similar Documents
| Publication | Title |
|---|---|
| CN117237287A (en) | Rapid belt deviation detection method based on visual identification | |
| US8184848B2 (en) | Liquid level detection method | |
| CN107301637B (en) | Surface defect detection method for nearly rectangular planar industrial products | |
| CN109543665B (en) | Image positioning method and device | |
| CN105160652A (en) | Handset casing testing apparatus and method based on computer vision | |
| CN119648679B (en) | Circuit board welding fault identification method and system based on machine vision | |
| CN111932504A (en) | Sub-pixel positioning method and device based on edge contour information | |
| CN111462246B (en) | Equipment calibration method of structured light measurement system | |
| CN117392127B (en) | Method and device for detecting display panel frame and electronic equipment | |
| CN110595397A (en) | Grate cooler working condition monitoring method based on image recognition | |
| CN109671084B (en) | A method for measuring workpiece shape | |
| JP3327068B2 (en) | Road surface measurement device | |
| CN119086589A (en) | Bridge welding quality detection method based on gray-relative depth fitting vision technology | |
| CN119850619A (en) | Method and device for detecting surface defects of LED special-shaped screen based on image analysis | |
| CN119540205A (en) | Method and system for inspecting residual film on IC substrate | |
| CN114791265A (en) | Workpiece dimension measuring method and system based on machine vision | |
| CN113920065A (en) | Imaging quality evaluation method for visual inspection system in industrial field | |
| CN116309760B (en) | Cereal image alignment method and cereal detection equipment | |
| US10958899B2 (en) | Evaluation of dynamic ranges of imaging devices | |
| CN113436214B (en) | Brinell hardness indentation circle measuring method and system and computer readable storage medium | |
| GB2470741A (en) | Liquid level detection method | |
| CN210773933U (en) | Product appearance on-line measuring device | |
| CN120070418B (en) | Structural facility deformation monitoring method and system based on machine vision | |
| CN120259639B (en) | Vision-based rapid identification method for target object in track area | |
| CN119022837B (en) | A method and device for detecting the flatness of engineering plastics |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||