
CN116228980A - Image reconstruction method, device and equipment - Google Patents

Image reconstruction method, device and equipment

Info

Publication number
CN116228980A
Authority
CN
China
Prior art keywords: pixel, value, pixel point, coding, image
Prior art date
Legal status
Pending
Application number
CN202310196231.3A
Other languages
Chinese (zh)
Inventor
胡杨 (Hu Yang)
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Co Ltd filed Critical Hangzhou Hikrobot Co Ltd
Priority to CN202310196231.3A
Publication of CN116228980A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides an image reconstruction method, device and equipment. The method comprises the following steps: acquiring n frames of first images and n frames of second images; generating a first coding diagram based on the n frames of first images, and generating a second coding diagram based on the n frames of second images, wherein, for each pixel point in the first coding diagram, the coding value of the pixel point in the first coding diagram is determined based on the pixel value of the pixel point in each frame of first image, and, for each pixel point in the second coding diagram, the coding value of the pixel point in the second coding diagram is determined based on the pixel value of the pixel point in each frame of second image; determining a key point pair corresponding to each pixel point in the first coding diagram based on the first coding diagram and the second coding diagram; and generating a three-dimensional reconstructed image of the measured object based on the key point pairs. With this technical scheme, an accurate and reliable three-dimensional reconstructed image can be obtained, and the reconstruction accuracy is higher.

Description

Image reconstruction method, device and equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image reconstruction method, apparatus, and device.
Background
A three-dimensional imaging system may be composed of a laser and a camera, where the laser projects line structured light onto the surface of the measured object (i.e., the object to be measured), and the camera captures an image of the measured object, thereby obtaining an image carrying the line structured light, i.e., a line structured light image. After the line structured light image is obtained, the center line of the light bar in the line structured light image can be extracted, and the center line is converted according to pre-calibrated sensor parameters to obtain the spatial coordinates (i.e., the three-dimensional coordinates) of the measured object at the current position. Based on the spatial coordinates of the measured object at the current position, three-dimensional reconstruction of the measured object can be realized.
To realize three-dimensional reconstruction of the measured object, line structured light images of different positions on the measured object need to be acquired; that is, the laser projects line structured light onto different positions of the measured object, each position corresponds to one line structured light image, and the camera acquires the line structured light image of only one position at a time. When three-dimensional reconstruction is completed based on multiple line structured light images acquired in this way, problems such as a poor three-dimensional reconstruction effect arise.
Disclosure of Invention
The application provides an image reconstruction method, which is applied to a three-dimensional imaging system, wherein the three-dimensional imaging system comprises a first camera, a second camera and a speckle projector, and the method comprises the following steps:
acquiring n frames of first images and n frames of second images, wherein n is a positive integer greater than 1, and wherein, each time the speckle projector projects speckles onto the measured object, a first image of the measured object is acquired through the first camera and a second image of the measured object is acquired through the second camera;
generating a first coding diagram based on the n frames of first images, and generating a second coding diagram based on the n frames of second images, wherein, for each pixel point in the first coding diagram, the coding value of the pixel point in the first coding diagram is determined based on the pixel value of the pixel point in each frame of first image, and, for each pixel point in the second coding diagram, the coding value of the pixel point in the second coding diagram is determined based on the pixel value of the pixel point in each frame of second image;
determining a key point pair corresponding to each pixel point in the first coding diagram based on the first coding diagram and the second coding diagram, wherein each key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram, and the first pixel point and the second pixel point correspond to the same position point on the measured object;
and generating a three-dimensional reconstructed image of the measured object based on the key point pairs.
The application further provides an image reconstruction device applied to a three-dimensional imaging system, wherein the three-dimensional imaging system comprises a first camera, a second camera and a speckle projector, and the device comprises:
an acquisition module, configured to acquire n frames of first images and n frames of second images, wherein n is a positive integer greater than 1, and each time the speckle projector projects speckles onto the measured object, a first image of the measured object is acquired through the first camera and a second image of the measured object is acquired through the second camera;
a generation module, configured to generate a first coding diagram based on the n frames of first images and a second coding diagram based on the n frames of second images, wherein, for each pixel point in the first coding diagram, the coding value of the pixel point in the first coding diagram is determined based on the pixel value of the pixel point in each frame of first image, and, for each pixel point in the second coding diagram, the coding value of the pixel point in the second coding diagram is determined based on the pixel value of the pixel point in each frame of second image;
a matching module, configured to determine, based on the first coding diagram and the second coding diagram, a key point pair corresponding to each pixel point in the first coding diagram, wherein each key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram, the first pixel point and the second pixel point corresponding to the same position point on the measured object, and to generate a three-dimensional reconstructed image of the measured object based on the key point pairs.
The application provides an electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute the machine executable instructions to implement the image reconstruction method of the above example.
According to the above technical scheme, in the embodiments of the application, the speckle projector projects speckles onto the measured object, the first camera acquires a first image of the measured object, and the second camera acquires a second image of the measured object. When the speckle projector has projected speckles onto the measured object n times, n frames of first images and n frames of second images are obtained; a first coding diagram is generated based on the n frames of first images, a second coding diagram is generated based on the n frames of second images, and a three-dimensional reconstructed image of the measured object can be generated based on the two coding diagrams, thereby realizing three-dimensional reconstruction with a good reconstruction effect and yielding an accurate and reliable three-dimensional reconstructed image. For example, because the coding values corresponding to the same position point on the measured object in the first coding diagram and the second coding diagram are highly similar, accurate and reliable key point pairs can be found from the two coding diagrams; when the three-dimensional reconstructed image is generated based on these key point pairs, an accurate and reliable three-dimensional reconstructed image is obtained, with higher reconstruction accuracy and better robustness.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings according to these drawings.
FIG. 1 is a flow chart of an image reconstruction method in one embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional imaging system in one embodiment of the present application;
FIG. 3 is a schematic diagram of an image acquisition process in one embodiment of the present application;
FIG. 4 is a flow chart of an image reconstruction method in one embodiment of the present application;
FIG. 5 is a diagram illustrating the determination of the encoded values of pixels in one embodiment of the present application;
FIG. 6 is a schematic structural view of an image reconstruction device in one embodiment of the present application;
fig. 7 is a hardware configuration diagram of an electronic device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
An embodiment of the application provides an image reconstruction method, which may be applied to a three-dimensional imaging system. The three-dimensional imaging system may include a first camera, a second camera and a speckle projector. For example, the first camera, the second camera and the speckle projector may be disposed in the same device, i.e., the three-dimensional imaging system is composed of one device; alternatively, the first camera, the second camera and the speckle projector may be disposed in different devices, i.e., the three-dimensional imaging system is composed of a plurality of devices, which is not limited herein.
Referring to fig. 1, a flow chart of the image reconstruction method is shown, and the method may include:
Step 101, acquiring n frames of first images and n frames of second images, wherein n may be a positive integer greater than 1; each time the speckle projector projects speckles onto the measured object, a first image of the measured object is acquired through the first camera, and a second image of the measured object is acquired through the second camera.
Step 102, generating a first coding diagram based on the n frames of first images, and generating a second coding diagram based on the n frames of second images. When generating the first coding diagram, for each pixel point in the first coding diagram, the coding value of the pixel point in the first coding diagram is determined based on the pixel value of the pixel point in each frame of first image; when generating the second coding diagram, for each pixel point in the second coding diagram, the coding value of the pixel point in the second coding diagram is determined based on the pixel value of the pixel point in each frame of second image.
For example, based on the n frames of the first image, a first coding strategy may be used to generate a first coding graph, where the first coding strategy is used to indicate a coding value generation manner of each pixel point in the first coding graph; based on the n frames of second images, a second coding strategy can be adopted to generate a second coding image, and the second coding strategy is used for indicating a coding value generation mode of each pixel point in the second coding image; the code value generation mode of the pixel point indicated by the first code strategy and the code value generation mode of the pixel point indicated by the second code strategy are the same code value generation mode, that is, the first code strategy and the second code strategy can be the same.
In one possible implementation manner, for each pixel point in the first coding diagram, determining, based on a pixel value corresponding to the pixel point in the first image of each frame, a corresponding coding value of the pixel point in the first coding diagram may include, but is not limited to: determining an intermediate value based on n pixel values corresponding to the pixel point in the n frames of first images; determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value; and determining a corresponding coding value of the pixel point in the first coding diagram based on the n reference values. Wherein the median is the average of n pixel values; the reference value is a comparison result value of the pixel value and the intermediate value, or the reference value is a difference value of the pixel value and the intermediate value; if the pixel value is larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a second value.
For each pixel in the second coding diagram, determining a coding value corresponding to the pixel in the second coding diagram based on a pixel value corresponding to the pixel in the second image of each frame may include, but is not limited to: determining an intermediate value based on n pixel values corresponding to the pixel point in the n frames of second images; determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value; and determining the corresponding coding value of the pixel point in the second coding diagram based on the n reference values. Wherein the intermediate value may be an average of n pixel values; the reference value may be a comparison result value of the pixel value and the intermediate value, or the reference value may be a difference value of the pixel value and the intermediate value; if the pixel value is larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a second value.
In another possible implementation manner, for each pixel point in the first coding diagram, determining, based on a pixel value corresponding to the pixel point in the first image of each frame, a corresponding coding value of the pixel point in the first coding diagram may include, but is not limited to: determining adjacent pixel points corresponding to the pixel points from the neighborhood window of the pixel points; and determining a corresponding coding value of the pixel point in the first coding diagram based on the corresponding pixel value of the pixel point in the first image of each frame and the corresponding pixel value of the adjacent pixel point in the first image of each frame. For example, for each frame of the first image, determining a reference value corresponding to the pixel point in the frame of the first image based on a pixel value corresponding to the pixel point in the frame of the first image and a pixel value corresponding to an adjacent pixel point in the frame of the first image; determining a corresponding coding value of the pixel point in a first coding diagram based on the reference values respectively corresponding to the pixel point in the n frames of first images; the reference value may be a comparison result value of a pixel value corresponding to the pixel point and a pixel value corresponding to an adjacent pixel point, or the reference value may be a difference value between the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point; if the pixel value corresponding to the adjacent pixel point is greater than the pixel value corresponding to the pixel point, the comparison result value may be a first value, and if the pixel value corresponding to the adjacent pixel point is not greater than the pixel value corresponding to the pixel point, the comparison result value may be a second value.
For each pixel point in the second coding diagram, determining the coding value corresponding to the pixel point in the second coding diagram based on the corresponding pixel value of the pixel point in the second image of each frame may include: determining adjacent pixel points corresponding to the pixel points from the neighborhood window of the pixel points; and determining the corresponding coding value of the pixel point in the second coding diagram based on the corresponding pixel value of the pixel point in the second image of each frame and the corresponding pixel value of the adjacent pixel point in the second image of each frame. For each frame of second image, determining a corresponding reference value of the pixel point in the frame of second image based on the corresponding pixel value of the pixel point in the frame of second image and the corresponding pixel value of the adjacent pixel point in the frame of second image; determining a corresponding coding value of the pixel point in a second coding diagram based on the reference values respectively corresponding to the pixel point in the n frames of second images; the reference value is a comparison result value of a pixel value corresponding to the pixel point and a pixel value corresponding to an adjacent pixel point, or is a difference value of the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point; if the pixel value corresponding to the adjacent pixel point is larger than the pixel value corresponding to the pixel point, the comparison result value is a first value, and if the pixel value corresponding to the adjacent pixel point is not larger than the pixel value corresponding to the pixel point, the comparison result value is a second value.
Step 103, determining a key point pair corresponding to each pixel point in the first coding diagram based on the first coding diagram and the second coding diagram; for each key point pair, the key point pair may include a first pixel point in the first code image and a second pixel point in the second code image, where the first pixel point and the second pixel point are corresponding to a same position point on the measured object.
Illustratively, determining a keypoint pair corresponding to each pixel point in the first code map based on the first code map and the second code map may include, but is not limited to: for a first pixel point in a first coding diagram, the first pixel point is each pixel point in the first coding diagram (namely, each pixel point in the first coding diagram is sequentially used as a first pixel point), and a plurality of candidate pixel points corresponding to the first pixel point are determined from a second coding diagram; for each candidate pixel point, determining the similarity between the first pixel point and the candidate pixel point based on the corresponding coding value of the first pixel point in the first coding diagram and the corresponding coding value of the candidate pixel point in the second coding diagram; selecting one candidate pixel point matched with the first pixel point from a plurality of candidate pixel points as a second pixel point based on the similarity between the first pixel point and each candidate pixel point; and generating a key point pair based on the first pixel point and the second pixel point.
Illustratively, generating a first encoded map based on n frames of a first image and generating a second encoded map based on n frames of a second image may include, but is not limited to: binocular correction is carried out on the n-frame first image and the n-frame second image, so that corrected n-frame first image and corrected n-frame second image are obtained; the binocular correction is used for enabling the same position point on the measured object to have the same pixel height in the corrected first image and the corrected second image; a first code map is generated based on the corrected n-frame first image, and a second code map is generated based on the corrected n-frame second image. For a first pixel point in a first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram may include, but is not limited to: for a first pixel point in a first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram based on the pixel height of the first pixel point in the first coding diagram; the pixel height of each candidate pixel point in the second coding diagram is the same as the pixel height of the first pixel point in the first coding diagram.
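Illustratively, the row-constrained matching described above can be sketched in code. This is a minimal sketch rather than the embodiment itself: it assumes binary coding values (e.g., 32-bit) whose similarity is the number of agreeing bits (inverse Hamming distance), assumes the images have been binocularly corrected so that candidate pixel points lie at the same pixel height, and the function names, disparity search range and search direction are illustrative assumptions.

```python
import numpy as np

def match_key_points(code_map_1, code_map_2, num_bits=32, max_disparity=128):
    """Sketch: for each first pixel point of the first coding diagram, pick the
    most similar candidate pixel point on the same row of the second coding
    diagram, giving one key point pair per pixel."""
    h, w = code_map_1.shape
    key_point_pairs = []
    for y in range(h):                     # same pixel height after correction
        for x in range(w):
            c1 = int(code_map_1[y, x])
            best_x, best_sim = -1, -1
            # candidate pixel points: same row, within the disparity range
            for xc in range(max(0, x - max_disparity), x + 1):
                c2 = int(code_map_2[y, xc])
                sim = num_bits - bin(c1 ^ c2).count("1")  # agreeing bits
                if sim > best_sim:
                    best_sim, best_x = sim, xc
            if best_x >= 0:
                key_point_pairs.append(((x, y), (best_x, y)))
    return key_point_pairs
```

In practice a minimum-similarity threshold would typically be applied as well, so that pixels without a sufficiently similar candidate are discarded rather than paired with a weak best match.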
Step 104, generating a three-dimensional reconstructed image of the measured object based on the key point pairs.
For example, the three-dimensional reconstructed image corresponding to the object to be measured is generated based on all the key point pairs, or the three-dimensional reconstructed image corresponding to the object to be measured is generated based on part of the key point pairs, which is not limited.
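Illustratively, once the key point pairs are known, the three-dimensional reconstructed image can be computed by standard rectified-stereo triangulation. The text does not spell this step out, so the following is only a sketch under an assumed pinhole model: focal_px, baseline_m, cx and cy are assumed calibration parameters of the corrected camera pair, and the first camera is assumed to be the left one.

```python
import numpy as np

def reconstruct_3d(key_point_pairs, focal_px, baseline_m, cx, cy):
    """Sketch (assumed model): depth Z = f * B / d, where d is the horizontal
    disparity between the first and second pixel points of a key point pair."""
    points = []
    for (x1, y1), (x2, _y2) in key_point_pairs:
        d = x1 - x2                      # disparity of the key point pair
        if d <= 0:
            continue                     # skip degenerate matches
        z = focal_px * baseline_m / d
        x = (x1 - cx) * z / focal_px     # back-project to camera coordinates
        y = (y1 - cy) * z / focal_px
        points.append((x, y, z))
    return np.asarray(points)            # point cloud of the measured object
```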
According to the above technical scheme, in the embodiments of the application, the speckle projector projects speckles onto the measured object, the first camera acquires a first image of the measured object, and the second camera acquires a second image of the measured object. When the speckle projector has projected speckles onto the measured object n times, n frames of first images and n frames of second images are obtained; a first coding diagram is generated based on the n frames of first images, a second coding diagram is generated based on the n frames of second images, and a three-dimensional reconstructed image of the measured object can be generated based on the two coding diagrams, thereby realizing three-dimensional reconstruction with a good reconstruction effect and yielding an accurate and reliable three-dimensional reconstructed image. For example, because the coding values corresponding to the same position point on the measured object in the first coding diagram and the second coding diagram are highly similar, accurate and reliable key point pairs can be found from the two coding diagrams; when the three-dimensional reconstructed image is generated based on these key point pairs, an accurate and reliable three-dimensional reconstructed image is obtained, with higher reconstruction accuracy and better robustness.
The image reconstruction method according to the embodiment of the present application is described below with reference to a specific application scenario.
In the related art, a planar projection structured light method, a binocular speckle method, a TOF (Time of Flight) method, a single line laser profile scanning method, and the like may be employed to acquire a three-dimensional reconstructed image. The planar projection structured light method may adopt DLP (Digital Light Processing) or LCD (Liquid Crystal Display) projection technology with an LED (Light Emitting Diode) light source as the projection light source; the projector is bulky and its energy is dispersed, so under large-field, long-distance conditions the volume and power consumption are large, which is not conducive to three-dimensional positioning applications. The binocular speckle method combines binocular parallax with laser speckle binocular matching; its detection accuracy is low and its edge contours are poor, which is not conducive to contour scanning and three-dimensional positioning applications. The TOF method is limited by camera resolution, with detection accuracy at the centimeter level, which does not satisfy automatic high-precision positioning applications. The single line laser profile scanning method scans the depth information of the object with a single line laser, but the scanning speed is slow and the stability is poor, which does not meet positioning requirements.
Taking the single line laser profile scanning method as an example, a laser may project line structured light onto the surface of the measured object, and a camera photographs the measured object to obtain an image carrying the line structured light, i.e., a line structured light image. After the line structured light image is obtained, the spatial coordinates (i.e., the three-dimensional coordinates) of the measured object at the current position can be obtained based on the line structured light image, so that three-dimensional reconstruction of the measured object can be realized. However, to realize the three-dimensional reconstruction, line structured light images of different positions on the measured object must be acquired, i.e., the laser projects line structured light onto different positions of the measured object, each position corresponding to one line structured light image. Since the camera acquires the line structured light image of only one position at a time, the camera must acquire many line structured light images to complete the three-dimensional reconstruction; the reconstruction therefore takes a long time, the scanning speed is slow, the stability is poor, and the positioning requirements of three-dimensional reconstruction are not met.
In view of the above findings, an embodiment of the application provides an image reconstruction method, which is a binocular matching method based on speckle projection (the speckle projector may also be referred to as a speckle sensor). The speckle projector projects speckles onto the measured object, a first image of the measured object is acquired through a first camera, and a second image of the measured object is acquired through a second camera. When the speckle projector has projected speckles onto the measured object n times, n frames of first images and n frames of second images are obtained, and an accurate and reliable three-dimensional reconstructed image can be obtained based on them, with higher reconstruction accuracy and better robustness, so that depth map data and point cloud data with higher accuracy can be obtained efficiently.
The image reconstruction method based on speckle projector projection can be applied to a three-dimensional imaging system. The type of the three-dimensional imaging system is not limited; it may be any system with a three-dimensional imaging function, for example, any system in the field of machine vision or industrial automation.
Referring to FIG. 2, which is a schematic diagram of a three-dimensional imaging system, the three-dimensional imaging system may adopt a binocular multi-speckle sensor architecture and may include, but is not limited to: a first camera, a second camera, a speckle projector and a processor. The first camera may be the left camera, i.e., the camera located on the left side of the three-dimensional imaging system, also referred to as the left imaging unit. The second camera may be the right camera, i.e., the camera located on the right side of the three-dimensional imaging system, also referred to as the right imaging unit. Alternatively, the first camera may be the right camera and the second camera the left camera, which is not limited herein.
The speckle projector, which may also be referred to as a multi-speckle projection device, is configured to project speckles onto the measured object; each time the speckle projector projects speckles onto the measured object, a first image of the measured object is acquired through the first camera and a second image of the measured object is acquired through the second camera.
The processor may be, for example, a CPU or the like, and the processor is configured to acquire a first image from the first camera, acquire a second image from the second camera, and complete image reconstruction of the object to be measured based on the first image and the second image, that is, obtain a three-dimensional reconstructed image of the object to be measured based on the first image and the second image.
In one possible embodiment, referring to fig. 3, which is a schematic diagram of the image acquisition process, the process may include: after the speckle projector is started, it projects speckles onto the measured object (for example, based on coding pattern K1), a first image L1 of the measured object is acquired through the first camera, and a second image R1 of the measured object is acquired through the second camera; the acquisition moments of the first image L1 and the second image R1 are the same.
After the first image L1 and the second image R1 are acquired, the speckle projector switches the coding pattern, projects speckles onto the measured object based on the switched coding pattern K2, a first image L2 of the measured object is acquired through the first camera, and a second image R2 of the measured object is acquired through the second camera.
After the first image L2 and the second image R2 are acquired, the speckle projector switches the coding pattern again, projects speckles onto the measured object based on the switched coding pattern K3, a first image L3 of the measured object is acquired through the first camera, and a second image R3 of the measured object is acquired through the second camera.
This continues until the speckle projector projects speckles onto the measured object based on the switched coding pattern Kn, a first image Ln of the measured object is acquired through the first camera, and a second image Rn of the measured object is acquired through the second camera; the speckle projector can then be turned off to complete the image acquisition process.
In summary, n frames of first images and n frames of second images can be obtained, namely the first images L1, L2, …, Ln and the second images R1, R2, …, Rn.
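Illustratively, the acquisition sequence of fig. 3 can be sketched as the following loop. The projector and camera interfaces (power_on, project_pattern, trigger, read_frame, power_off) are hypothetical placeholders, not an actual device API; what matters is the ordering: switch to coding pattern Kk, then acquire Lk and Rk at the same moment.

```python
def acquire_sequence(projector, first_camera, second_camera, n):
    """Sketch of the image acquisition process for n coding patterns."""
    first_images, second_images = [], []
    projector.power_on()
    for k in range(1, n + 1):
        projector.project_pattern(k)                      # coding pattern Kk
        first_camera.trigger()                            # same acquisition moment
        second_camera.trigger()
        first_images.append(first_camera.read_frame())    # first image Lk
        second_images.append(second_camera.read_frame())  # second image Rk
    projector.power_off()
    return first_images, second_images
```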
For example, the speckle projector may be a single-light-source speckle projector, i.e., it projects speckles onto the measured object through a single light source; the single light source may project a plurality of spots onto the measured object, and the pattern formed by the plurality of spots on the measured object is the coding pattern. Different coding patterns can be generated by changing the projection position of the single light source. For example, when the projection position of the single light source is position A1, the speckle projector projects speckles onto the measured object based on coding pattern K1; after the projection position of the single light source is moved to position A2, the speckle projector projects speckles onto the measured object based on coding pattern K2; and so on, until the projection position of the single light source is moved to position An and the speckle projector projects speckles onto the measured object based on coding pattern Kn.
The speckle projector may also be a multi-light-source speckle projector, i.e., it projects speckles onto the measured object through multiple light sources; each light source may project a plurality of spots onto the measured object, the combination of the spots projected by the multiple light sources constitutes the speckles projected by the projector, and the pattern they form on the measured object is the coding pattern. Different coding patterns, such as coding patterns K1, K2, …, Kn, can be generated by changing the projection positions of some or all of the light sources, which is not limited herein.
In summary, n coding patterns may be projected onto the measured object; the n coding patterns may be projected by a single light source whose position is changed, or projected by multiple light sources individually or in combination.
For example, when the speckle projector projects speckles onto the measured object based on coding pattern K1, a first image L1 of the measured object is acquired through the first camera and a second image R1 is acquired through the second camera. To make the acquisition moments of the first image L1 and the second image R1 identical, the processor may send a projection command to the speckle projector so that it projects speckles based on coding pattern K1, and simultaneously send an acquisition command to both the first camera and the second camera so that the first camera acquires the first image L1 and the second camera acquires the second image R1.
The first image L1 includes a plurality of spots projected by the speckle projector onto the object to be measured based on the encoding pattern K1, and the second image R1 also includes a plurality of spots projected by the speckle projector onto the object to be measured based on the encoding pattern K1, that is, the first image L1 and the second image R1 include the same plurality of spots.
The speckle projector projects a plurality of speckles on the object to be measured, wherein the speckles can be random speckles, namely random discrete speckles, pseudo-random speckles, namely pseudo-random discrete speckles, or regular speckles, namely discrete speckles at regular positions, without limitation.
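Illustratively, a pseudo-random speckle pattern of the kind described above can be sketched as follows. The spot size and optics of a real speckle projector are not modelled here, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def speckle_pattern(height, width, num_spots, seed=0):
    """Sketch: a binary image with discrete spots at pseudo-random positions.
    A fixed seed gives a repeatable pseudo-random pattern; drawing positions
    from a regular grid instead would give regular speckles."""
    rng = np.random.default_rng(seed)
    pattern = np.zeros((height, width), dtype=np.uint8)
    ys = rng.integers(0, height, num_spots)
    xs = rng.integers(0, width, num_spots)
    pattern[ys, xs] = 255                 # one-pixel spots for simplicity
    return pattern
```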
Illustratively, after the first image L1 and the second image R1 are acquired, the processor sends a projection command to the speckle projector to cause the speckle projector to project a speckle to the object under test based on the encoding pattern K2, and the processor sends an acquisition command to the first camera and the second camera simultaneously to cause the first camera to acquire the first image L2 of the object under test and the second camera to acquire the second image R2 of the object under test, so that the acquisition moments of the first image L2 and the second image R2 are the same, and so on.
In the above embodiment, n may be a positive integer greater than 1, the value of n is empirically configured, and the value of n is not limited as long as n is not less than 2, for example, n may be 3, 4, 5, 6, 7, 8, 9, etc.
Based on the above application scenario, an embodiment of the present application provides an image reconstruction method, which may be applied to a three-dimensional imaging system. Referring to fig. 4, which is a schematic flow chart of the method, the method may include:
Step 401, acquiring n frames of first images and n frames of second images, where n may be a positive integer greater than 1.
For example, the n frames of first images are the first images L1, L2, …, Ln, and the n frames of second images are the second images R1, R2, …, Rn.
Step 402, generating a first code map based on the n frames of the first image.
For example, when generating the first coding diagram, for each pixel point in the first coding diagram, the coding value of the pixel point in the first coding diagram is determined based on the pixel value of the pixel point in each frame of first image. For example, for a pixel point (x, y) in the first coding diagram, its coding value may be determined based on the pixel values of the pixel point (x, y) in the first images L1, L2, …, Ln. After the coding value of each pixel point is obtained, the coding values of all pixel points can be combined to obtain the first coding diagram, i.e., the first coding diagram can include the coding value of each pixel point.
In one possible embodiment, the first code map may be generated based on the n-frame first image in the following manner, which is, of course, merely a few examples and is not limited thereto.
Mode 1: for each pixel point in the first coding diagram, determining a coding value corresponding to the pixel point in the first coding diagram based on a pixel value corresponding to the pixel point in the first image of each frame, and generating the first coding diagram based on the coding value corresponding to each pixel point, that is, after obtaining the coding value corresponding to each pixel point in the first coding diagram, combining the coding values corresponding to all the pixel points to obtain the first coding diagram.
The code value corresponding to each pixel point is obtained in the same manner, taking the code value corresponding to the pixel point (x, y) as an example, the code value corresponding to the pixel point (x, y) may be obtained by the following steps:
step S11, determining an intermediate value based on n pixel values corresponding to the pixel points (x, y) in the first image of the n frames. For example, the intermediate value may be an average value of n pixel values, or the intermediate value may be a minimum value of n pixel values, or the intermediate value may be a maximum value of n pixel values, or the intermediate value may be a median of n pixel values, which is not limited. For convenience of description, in the following embodiments, an example will be described in which the intermediate value is an average value of n pixel values. The pixel values in this embodiment may be gray values, but may be other types of values, and are not limited thereto.
For example, the pixel value of the pixel point (x, y) in the first image L1 may be denoted as pixel value B1, its pixel value in the first image L2 as pixel value B2, …, and its pixel value in the first image Ln as pixel value Bn; on this basis, the average of the pixel values B1, B2, …, Bn may be taken as the intermediate value.
Step S12, n reference values corresponding to the n pixel values are determined based on the n pixel values and the intermediate value.
Illustratively, based on the pixel value B1 and the intermediate value, a reference value C1 corresponding to the pixel value B1 may be determined. For example, a comparison result value of the pixel value B1 and the intermediate value may be used as the reference value C1. If the pixel value B1 is greater than the intermediate value, the comparison result value may be a first value (e.g., 1), that is, the reference value C1 may be the first value; if the pixel value B1 is not greater than the intermediate value, the comparison result value may be a second value (e.g. 0), i.e. the reference value C1 may be the second value. For another example, a difference between the pixel value B1 and the intermediate value may be used as the reference value C1. For another example, an average value of the pixel value B1 and the intermediate value may be used as the reference value C1. For another example, a sum value of the pixel value B1 and the intermediate value may be taken as the reference value C1. Of course, the above is only a few examples, and is not limited thereto, as long as the reference value C1 corresponding to the pixel value B1 can be obtained.
Similarly, a reference value C2 corresponding to the pixel value B2, …, and a reference value Cn corresponding to the pixel value Bn can be obtained.
Step S13, determining the coding value of the pixel point (x, y) in the first coding diagram based on the n reference values corresponding to the pixel point (x, y), i.e., based on the reference values C1, C2, …, Cn; for example, the coding value may be composed of the reference values C1, C2, …, Cn.
For example, assuming that the reference value is the comparison result value of the pixel value and the intermediate value, and that the value of n is 8, the coding value of the pixel point (x, y) may be an 8-bit binary value in which the last bit represents the reference value C1, the second-to-last bit represents the reference value C2, and so on up to the first bit, which represents the reference value C8; alternatively, the last bit may represent the reference value C8, and so on, with the first bit representing the reference value C1.
In summary, the coding value of the pixel point (x, y) can be determined from the n pixel values of the pixel point (x, y) in the n frames of first images. For example, assuming that the reference value is the comparison result value of the pixel value and the intermediate value, and that the value of n is 8, the 8-bit binary coding value at the pixel point (u, v) in the first coding diagram can be expressed bit by bit as:

$$
c(u,v,m)=\begin{cases}1, & I_m(u,v)>\bar{I}(u,v)\\ 0, & I_m(u,v)\le \bar{I}(u,v)\end{cases}
\qquad
\bar{I}(u,v)=\frac{1}{n}\sum_{p=1}^{n} I_p(u,v)
$$

In the above formula, $m$ denotes the coding bit, i.e., which bit of the coding value of the pixel point $(u,v)$ is being computed (e.g., $1 \le m \le 8$ for an 8-bit coding value), and $c(u,v,m)$ denotes bit $m$ of the binary coding value of the pixel point $(u,v)$. $\bar{I}(u,v)$ denotes the average of the n pixel values of the pixel point $(u,v)$ in the n frames of first images, i.e., the intermediate value above. $I_p(u,v)$ denotes the pixel value of the pixel point $(u,v)$ in the p-th first image, where p ranges from 1 to n: $I_1(u,v)$ is the pixel value of $(u,v)$ in the first image L1, $I_2(u,v)$ its pixel value in the first image L2, and so on. If $I_m(u,v)>\bar{I}(u,v)$, the pixel value is greater than the intermediate value, and the comparison result value, i.e., the reference value, is the first value 1; otherwise the pixel value is not greater than the intermediate value, and the comparison result value is the second value 0.
The number of bits of the coding result varies with the encoding method adopted; candidate methods include, but are not limited to, gray value differences, gray value mapping, gray value ordering, and the like, as well as combinations of the coding results of these methods.
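Illustratively, mode 1 and the formula above can be sketched as follows. This is a minimal sketch that takes the per-pixel average as the intermediate value and packs the n comparison bits into one integer coding value per pixel; it assumes n ≤ 32, and the names are illustrative.

```python
import numpy as np

def temporal_code_map(frames):
    """Sketch of mode 1: frames is an (n, H, W) stack of the n first images;
    bit m of each coding value is 1 where frame m exceeds the per-pixel mean."""
    frames = np.asarray(frames, dtype=np.float64)
    mean = frames.mean(axis=0)                  # intermediate value per pixel
    bits = (frames > mean).astype(np.uint32)    # n reference values per pixel
    code = np.zeros(frames.shape[1:], dtype=np.uint32)
    for m in range(frames.shape[0]):
        code |= bits[m] << np.uint32(m)         # pack bit m into the code value
    return code                                 # e.g. 8-bit values when n == 8
```

The same function applied to the n frames of second images yields the second coding diagram, since both coding diagrams use the same coding value generation manner.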
Mode 2: for each pixel point in the first coding diagram, determining the adjacent pixel point corresponding to the pixel point from the neighborhood window of the pixel point, and determining the corresponding coding value of the pixel point in the first coding diagram based on the corresponding pixel value of the pixel point in the first image of each frame and the corresponding pixel value of the adjacent pixel point in the first image of each frame. Then, a first code map may be generated based on the code values corresponding to each pixel point, that is, the code values corresponding to all the pixel points may be combined to obtain the first code map.
For example, for a pixel point (x, y) in the first coding diagram, the coding value of the pixel point (x, y) in the first coding diagram may be determined based on the pixel values of the pixel point (x, y) in the first images L1, L2, …, Ln and the pixel values of its adjacent pixel points in the first images L1, L2, …, Ln. After the coding value of each pixel point in the first coding diagram is obtained, the coding values of all pixel points can be combined to obtain the first coding diagram, i.e., the first coding diagram can include the coding value of each pixel point.
In summary, referring to fig. 5, for the method for determining the encoding value of each pixel in the first encoding graph, the encoding value corresponding to the pixel may be determined according to the pixel value of the pixel in different first images and the neighboring pixel value (i.e., the pixel value of the neighboring pixel in different first images).
The code value corresponding to each pixel point is obtained in the same manner, taking the code value corresponding to the pixel point (x, y) as an example, the code value corresponding to the pixel point (x, y) may be obtained by the following steps:
step S21, determining adjacent pixel points corresponding to the pixel points (x, y) from the neighborhood windows of the pixel points (x, y), wherein the neighborhood windows of the pixel points (x, y) are windows taking the pixel points (x, y) as the center.
For example, the neighborhood window of the pixel point (x, y) may be an m×m cross window or an m×m rectangular window; the type of the neighborhood window is not limited. For a cross window with m×m being 3×3, the pixel point (x, y) corresponds to 4 adjacent pixel points, namely (x-1, y), (x+1, y), (x, y-1) and (x, y+1). For a rectangular window with m×m being 3×3, the pixel point (x, y) corresponds to 8 adjacent pixel points, namely (x-1, y-1), (x-1, y), (x-1, y+1), (x, y-1), (x, y+1), (x+1, y-1), (x+1, y) and (x+1, y+1).
Of course, the above is merely an example, and after the neighborhood window of the pixel (x, y) is determined, the pixel in the neighborhood window may be regarded as the adjacent pixel corresponding to the pixel (x, y).
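Illustratively, the adjacent pixel points for both window types can be enumerated with a small helper such as the following (an illustrative sketch; offsets are relative to the centre pixel point (x, y)):

```python
def neighbor_offsets(m, window="cross"):
    """Sketch: offsets of the adjacent pixel points inside an m x m
    neighborhood window centred on the pixel point (x, y)."""
    r = m // 2
    if window == "cross":
        # e.g. m = 3 gives the 4 neighbours (x-1,y), (x+1,y), (x,y-1), (x,y+1)
        return [(dx, 0) for dx in range(-r, r + 1) if dx != 0] + \
               [(0, dy) for dy in range(-r, r + 1) if dy != 0]
    # rectangular window: every pixel of the m x m block except the centre
    return [(dx, dy) for dy in range(-r, r + 1)
                     for dx in range(-r, r + 1) if (dx, dy) != (0, 0)]
```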
Step S22, for each frame of the first image, determining a reference value corresponding to the pixel point (x, y) in the first image based on the pixel value corresponding to the pixel point (x, y) in the first image and the pixel value corresponding to the adjacent pixel point in the first image, so as to obtain the reference value corresponding to the pixel point (x, y) in the first image of n frames.
For the first image L1, for example, a reference value corresponding to the pixel point (x, y) in the first image L1 may be determined based on a pixel value corresponding to the pixel point (x, y) in the first image L1 and a pixel value corresponding to each adjacent pixel point in the first image L1. For example, a comparison result value between a pixel value corresponding to a pixel point (x, y) and a pixel value corresponding to an adjacent pixel point may be used as the reference value. If the pixel value corresponding to the adjacent pixel point is greater than the pixel value corresponding to the pixel point (x, y), the comparison result value may be a first value (e.g. 1), that is, the reference value may be the first value; if the pixel value corresponding to the adjacent pixel point is not greater than the pixel value corresponding to the pixel point (x, y), the comparison result value may be a second value (e.g., 0), that is, the reference value may be the second value. Taking the example that the pixel point (x, y) corresponds to 4 adjacent pixel points, comparison result values corresponding to the pixel point (x, y) and the 4 adjacent pixel points respectively can be obtained, so that the reference value corresponding to the pixel point (x, y) in the first image L1 can include 4 comparison result values.
For another example, a difference between a pixel value corresponding to the pixel point (x, y) and a pixel value corresponding to an adjacent pixel point may be used as a reference value, that is, a difference between the pixel point (x, y) and 4 adjacent pixel points may be obtained, and the reference value corresponding to the pixel point (x, y) in the first image L1 may include 4 differences.
For another example, an average value of pixel values corresponding to the pixel points (x, y) and pixel values corresponding to adjacent pixel points may be used as the reference value, that is, an average value of pixel points (x, y) and 4 adjacent pixel points may be obtained, and the reference value corresponding to the pixel points (x, y) in the first image L1 may include 4 average values.
Of course, the above is only a few examples, and the above is not limited thereto, as long as the reference value corresponding to the pixel point (x, y) in the first image L1 can be obtained, and the reference value corresponding to the pixel point (x, y) in the first image L2 can be obtained in the same way.
Step S23, based on the reference values corresponding to the pixel points (x, y) in the n frames of the first images, the corresponding coding values of the pixel points (x, y) in the first coding diagram are determined. For example, the code value corresponding to the pixel point (x, y) in the first code map is determined based on the reference value corresponding to the pixel point (x, y) in the first image L1, the reference value corresponding to the pixel point (x, y) in the first image L2, and the reference value corresponding to the pixel point (x, y) in the first image Ln.
For example, assuming that the reference value is the comparison result value and the value of n is 8, the code value corresponding to the pixel point (x, y) may be a 32-bit binary code value, where each 4 bits represents one reference value, and there are 8 reference values in total. Each reference value is a binary value of 4 bits, the binary value of each bit representing a comparison result value between a pixel point (x, y) and 1 adjacent pixel point, the 4 adjacent pixel points corresponding to 4 bits.
In summary, it can be seen that the code value corresponding to the pixel point (x, y) can be determined by n pixel values corresponding to the pixel point (x, y) in the first image of n frames, and the determining process is not limited.
The number of bits of the coding result varies with the encoding method adopted; candidate methods include, but are not limited to, gray value differences, gray value mapping, gray value ordering, and the like, as well as combinations of the coding results of these methods.
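Illustratively, mode 2 can be sketched as follows, reusing neighbor_offsets above. This is a minimal sketch under stated assumptions: a 3×3 window (so border pixels are simply skipped), comparison result values as the reference values, and one illustrative bit ordering out of the several the text allows (n × k bits in total, e.g. 32 bits for n = 8 frames and k = 4 cross-window neighbours).

```python
import numpy as np

def spatial_code_map(frames, offsets):
    """Sketch of mode 2: for each pixel point and each of the n frames, set one
    bit per adjacent pixel point (1 iff the neighbour is brighter), and
    concatenate all n * len(offsets) bits into the coding value."""
    frames = np.asarray(frames)
    n, h, w = frames.shape
    code = np.zeros((h, w), dtype=np.uint64)    # assumes n * len(offsets) <= 64
    for y in range(1, h - 1):                   # skip the 1-pixel border
        for x in range(1, w - 1):
            value, bit = 0, 0
            for f in range(n):
                for dx, dy in offsets:
                    if frames[f, y + dy, x + dx] > frames[f, y, x]:
                        value |= 1 << bit       # first value 1: neighbour brighter
                    bit += 1
            code[y, x] = value
    return code
```

For example, spatial_code_map(frames, neighbor_offsets(3)) yields the 32-bit coding values described above when n is 8.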
Step 403, generating a second coding diagram based on the n frames of the second image.
For example, when the second code image is generated, for each pixel point in the second code image, a code value corresponding to the pixel point in the second code image is determined based on a corresponding pixel value of the pixel point in the second image of each frame. For example, for a pixel (x, y) in the second encoding graph, a corresponding encoding value of the pixel (x, y) in the second encoding graph may be determined based on pixel values corresponding to the pixel (x, y) in the second image R1, the second image R2, the second image Rn. After obtaining the code value corresponding to each pixel point in the second code map, the code values corresponding to all the pixel points can be combined to obtain the second code map, that is, the second code map can include the code value corresponding to each pixel point.
In one possible embodiment, the second code map may be generated based on the n frames of second images in the following manners; these are, of course, merely a few examples and are not limiting.
Mode 1: for each pixel point in the second coding diagram, determining a coding value corresponding to the pixel point in the second coding diagram based on the pixel value corresponding to the pixel point in each frame of second image, and generating the second coding diagram based on the coding value corresponding to each pixel point, for example, combining the coding values corresponding to all the pixel points to obtain the second coding diagram. The code value corresponding to each pixel point is obtained in the same manner; taking the code value corresponding to the pixel point (x, y) as an example, the code value may be obtained in the following manner: determining an intermediate value based on the n pixel values corresponding to the pixel point (x, y) in the n frames of second images; determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value; and determining the corresponding coding value of the pixel point (x, y) in the second coding diagram based on the n reference values corresponding to the pixel point (x, y).
Illustratively, the intermediate value may be an average of n pixel values; the reference value may be a comparison result value of the pixel value and the intermediate value, or the reference value may be a difference value of the pixel value and the intermediate value.
If the pixel value is larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a second value.
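As a minimal sketch of Mode 1 under the stated assumptions (the intermediate value is the per-pixel average of the n pixel values, and the reference value is the comparison result value), the following Python code builds an n-bit code value for every pixel point at once; the vectorized NumPy layout is an illustrative choice, not prescribed by the patent.

import numpy as np

def temporal_mean_code(frames):
    # frames: list of n 2-D arrays (the n frames of second images).
    # Returns an integer code map of shape (H, W) with one n-bit code
    # value per pixel point.
    stack = np.stack(frames).astype(np.int32)   # shape (n, H, W)
    mean = stack.mean(axis=0)                   # per-pixel intermediate value
    bits = (stack > mean).astype(np.uint32)     # comparison result values
    code = np.zeros(stack.shape[1:], dtype=np.uint32)
    for frame_bits in bits:                     # pack n bits in frame order
        code = (code << 1) | frame_bits
    return code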
Mode 2: for each pixel point in the second coding diagram, determining the adjacent pixel point corresponding to the pixel point from the neighborhood window of the pixel point, and determining the corresponding coding value of the pixel point in the second coding diagram based on the corresponding pixel value of the pixel point in each frame of the second image and the corresponding pixel value of the adjacent pixel point in each frame of the second image. Then, a second code pattern may be generated based on the code values corresponding to each pixel point, that is, the code values corresponding to all the pixel points may be combined to obtain the second code pattern.
The code value corresponding to each pixel point is obtained in the same manner, taking the code value corresponding to the pixel point (x, y) as an example, then the code value corresponding to the pixel point (x, y) may be obtained in the following manner: and determining adjacent pixel points corresponding to the pixel points (x, y) from the neighborhood windows of the pixel points (x, y), wherein the neighborhood windows of the pixel points (x, y) are windows taking the pixel points (x, y) as the center. And determining a corresponding reference value of the pixel point (x, y) in the second image based on the pixel value of the pixel point (x, y) in the second image and the pixel value of the adjacent pixel point in the second image, namely obtaining the corresponding reference value of the pixel point (x, y) in the second image of n frames. And determining the corresponding coding value of the pixel point (x, y) in the second coding diagram based on the corresponding reference value of the pixel point (x, y) in the n frames of the second image.
The reference value is a comparison result value of a pixel value corresponding to the pixel point (x, y) and a pixel value corresponding to an adjacent pixel point, or the reference value is a difference value between a pixel value corresponding to the pixel point (x, y) and a pixel value corresponding to an adjacent pixel point. If the pixel value corresponding to the adjacent pixel point is greater than the pixel value corresponding to the pixel point (x, y), the comparison result value may be a first value, and if the pixel value corresponding to the adjacent pixel point is not greater than the pixel value corresponding to the pixel point (x, y), the comparison result value may be a second value.
In a possible implementation manner, based on the n frames of first images, a first coding strategy can be used to generate the first coding diagram, where the first coding strategy is used to indicate a coding value generation mode of each pixel point in the first coding diagram. Based on the n frames of second images, a second coding strategy can be used to generate the second coding diagram, where the second coding strategy is used to indicate a coding value generation mode of each pixel point in the second coding diagram. The coding value generation mode indicated by the first coding strategy and the coding value generation mode indicated by the second coding strategy are the same coding value generation mode; that is, the first coding strategy and the second coding strategy may be the same, i.e., the two may adopt the same set of coding strategies.
Step 404, determining a key point pair corresponding to each pixel point in the first code image based on the first code image and the second code image (assuming that M pixel points exist in the first code image, M key point pairs corresponding to M pixel points may be determined); for each key point pair, the key point pair may include a first pixel point in the first code image and a second pixel point in the second code image, where the first pixel point and the second pixel point may be pixel points corresponding to a same position point on the measured object.
Illustratively, prior to steps 402 and 403, binocular correction may be performed on the n-frame first image and the n-frame second image, resulting in a corrected n-frame first image and a corrected n-frame second image, such that, in step 402, a first encoded map is generated based on the corrected n-frame first image, and in step 403, a second encoded map is generated based on the corrected n-frame second image.
For example, the first image L1 and the second image R1 (i.e., the first image and the second image at the same acquisition time) may be subjected to binocular correction to obtain a corrected first image L1 and a corrected second image R1; the first image L2 and the second image R2 may be subjected to binocular correction to obtain a corrected first image L2 and a corrected second image R2; and so on, until the first image Ln and the second image Rn are subjected to binocular correction to obtain a corrected first image Ln and a corrected second image Rn. In this way, the corrected n frames of first images and the corrected n frames of second images are obtained.
Illustratively, binocular correction is used to make the same point on the measured object have the same pixel height in the corrected first image and the corrected second image. For the same position point on the measured object, the first image and the second image are corrected to the same pixel height through binocular correction, so that during matching, the search is performed directly within one row, which is more convenient. Matching corresponding points in a two-dimensional space is very time-consuming; to reduce the matching search range, the epipolar constraint can be used to reduce the matching of corresponding points from a two-dimensional search to a one-dimensional search. Binocular correction performs row correspondence on the first image and the second image to obtain the corrected first image and the corrected second image, so that the epipolar lines of the corrected first image and the corrected second image lie exactly on the same horizontal line, and any point in the corrected first image and its corresponding point in the corrected second image have the same row number; only a one-dimensional search within that row is required. This one-dimensional search is subsequently carried out between the first code map and the second code map.
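The patent does not fix a binocular correction implementation; one common realization is OpenCV's stereo rectification, sketched below in Python under the assumption that the calibration parameters (intrinsic matrices K1/K2, distortion coefficients D1/D2, and the rotation R and translation T between the two cameras) are already known.

import cv2

def rectify_pair(img_l, img_r, K1, D1, K2, D2, R, T):
    # Row-align a first/second image pair so that the same position
    # point on the measured object has the same pixel height in both
    # corrected images (horizontal epipolar lines).
    size = (img_l.shape[1], img_l.shape[0])    # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)
    return rect_l, rect_r, P1, P2              # P1/P2 reusable for triangulation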
In step 402 and step 403, since the first code pattern is generated based on the corrected n-frame first image and the second code pattern is generated based on the corrected n-frame second image, the first code pattern and the second code pattern are also the binocular corrected first code pattern and the binocular corrected second code pattern, and the key point pair corresponding to each pixel point in the first code pattern is determined based on the binocular corrected first code pattern and the binocular corrected second code pattern.
In one possible implementation manner, based on the first code map and the second code map, the following steps may be adopted to determine a keypoint pair corresponding to each pixel point in the first code map:
step S31, sequentially selecting each pixel point from the first coding diagram as a first pixel point.
For example, for each pixel in the first coding graph, the pixel may be used as a first pixel, and the subsequent step is executed for the first pixel, so as to obtain a key point pair corresponding to the first pixel.
Step S32, for each first pixel point in the first code map (hereinafter, a first pixel point will be described as an example), determines a plurality of candidate pixel points corresponding to the first pixel point from the second code map.
For example, based on the pixel height of the first pixel point in the first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from the second coding diagram; the pixel height of each candidate pixel point in the second coding diagram is the same as the pixel height of the first pixel point in the first coding diagram. For example, assuming that the pixel height of the first pixel point in the first coding diagram is h1, all or part of the pixel points with the pixel height of h1 in the second coding diagram are used as a plurality of candidate pixel points corresponding to the first pixel point.
Step S33, for each candidate pixel point, determining the similarity between the first pixel point and the candidate pixel point based on the coding value corresponding to the first pixel point in the first coding diagram and the coding value corresponding to the candidate pixel point in the second coding diagram, so as to obtain the similarity between the first pixel point and each candidate pixel point.
For example, the code value corresponding to the first pixel point in the first code map may be an 8-bit binary code value (or a 32-bit binary code value), and the code value corresponding to the candidate pixel point in the second code map may likewise be an 8-bit binary code value (or a 32-bit binary code value). The similarity between the first pixel point and the candidate pixel point may be calculated based on the code value corresponding to the first pixel point in the first code map and the code value corresponding to the candidate pixel point in the second code map; that is, the similarity between the two code values is calculated, and the calculation mode of the similarity is not limited. For example, the distance similarity between two code values may be calculated from the distance between the two code values, the cosine similarity between the two code values may be calculated using a cosine algorithm, or the similarity between the two code values may be calculated using a Pearson correlation coefficient algorithm.
In summary, for each candidate pixel, the similarity between the first pixel and the candidate pixel may be determined, so as to obtain the similarity between the first pixel and each candidate pixel.
In summary, the first code map and the second code map may be compared by a similarity measurement applied to the code values, so as to determine the correspondence between pixel points. The measurement method may depend on the calculation manner of the code values. For example, when binary coding is used (that is, when the comparison result value of pixel values is used as the reference value), the Hamming distance may be used to measure the similarity of two code values; in step S33, based on the code value corresponding to the first pixel point in the first code map and the code value corresponding to the candidate pixel point in the second code map, the Hamming distance between the two code values may be calculated and used as the similarity between the first pixel point and the candidate pixel point, where a smaller Hamming distance indicates a higher similarity.
For another example, when pixel value differences are used (that is, when the difference between pixel values is used as the reference value), then in step S33, based on the code value corresponding to the first pixel point in the first code map and the code value corresponding to the candidate pixel point in the second code map, the difference between the two code values may be calculated as the similarity between the first pixel point and the candidate pixel point, where a smaller difference indicates a higher similarity.
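For binary code values, the Hamming distance can be computed with a bitwise XOR followed by a population count; a minimal Python sketch:

def hamming_distance(code_a, code_b):
    # Number of differing bits between two binary code values;
    # a smaller distance means a higher similarity.
    return bin(code_a ^ code_b).count("1")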
Step S34, selecting one candidate pixel point matched with the first pixel point from a plurality of candidate pixel points as a second pixel point based on the similarity between the first pixel point and each candidate pixel point.
For example, based on the similarity between the first pixel point and each candidate pixel point, the candidate pixel point corresponding to the maximum similarity may be used as the second pixel point, that is, the candidate pixel point corresponding to the maximum similarity is the candidate pixel point matched with the first pixel point, and this candidate pixel point may be used as the second pixel point.
In step S35, a key point pair is generated based on the first pixel point and the second pixel point, that is, the key point pair may include the first pixel point in the first encoding graph and the second pixel point in the second encoding graph, and the first pixel point and the second pixel point may be the pixel points corresponding to the same position point on the measured object.
Obviously, for each pixel point in the first coding diagram, a key point pair corresponding to the pixel point can be obtained, where the key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram. Thus, step 404 is completed: a key point pair corresponding to each pixel point in the first code map is determined based on the first code map and the second code map.
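Putting steps S31 to S35 together, the following is a minimal Python sketch of the one-dimensional same-row search, assuming binary code values compared by Hamming distance; the brute-force scan over the whole row is for clarity only, and a practical implementation would typically restrict the search to a disparity range.

def match_row_keypoints(code_map_l, code_map_r, row):
    # For every first pixel point in one row of the first code map, pick
    # the same-row candidate in the second code map whose code value has
    # the smallest Hamming distance (i.e., the highest similarity).
    pairs = []
    for x_l, code_l in enumerate(code_map_l[row]):
        best_x, best_d = None, None
        for x_r, code_r in enumerate(code_map_r[row]):      # 1-D search
            d = bin(int(code_l) ^ int(code_r)).count("1")
            if best_d is None or d < best_d:
                best_x, best_d = x_r, d
        pairs.append(((x_l, row), (best_x, row)))           # one key point pair
    return pairs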
In another possible implementation manner, based on the first code map and the second code map, the following steps may be adopted to determine a keypoint pair corresponding to each pixel point in the second code map:
and S41, sequentially selecting each pixel point from the second coding diagram as a second pixel point.
Step S42, for each second pixel point in the second code map (a second pixel point will be described later as an example), determines a plurality of candidate pixel points corresponding to the second pixel point from the first code map.
For example, based on the pixel height of the second pixel point in the second coding diagram, determining a plurality of candidate pixel points corresponding to the second pixel point from the first coding diagram; the pixel height of each candidate pixel point in the first coding diagram is the same as the pixel height of the second pixel point in the second coding diagram.
Step S43, for each candidate pixel point, determining the similarity between the second pixel point and the candidate pixel point based on the corresponding coding value of the second pixel point in the second coding diagram and the corresponding coding value of the candidate pixel point in the first coding diagram, and obtaining the similarity between the second pixel point and each candidate pixel point.
Step S44, selecting one candidate pixel point matched with the second pixel point from a plurality of candidate pixel points as a first pixel point based on the similarity between the second pixel point and each candidate pixel point.
In step S45, a key point pair is generated based on the second pixel point and the first pixel point, that is, the key point pair may include the second pixel point in the second encoding graph and the first pixel point in the first encoding graph, and the second pixel point and the first pixel point may be the pixel points corresponding to the same position point on the measured object.
Obviously, for each pixel point in the second coding diagram, a key point pair corresponding to the pixel point can be obtained, where the key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram. Thus, step 404 is completed: a key point pair corresponding to each pixel point in the second code map is determined based on the first code map and the second code map; that is, each pixel point corresponds to one key point pair.
For example, step S41 to step S45 may refer to step S31 to step S35, and are not described herein.
And 405, generating a three-dimensional reconstruction image corresponding to the measured object based on the key point pairs.
For example, the three-dimensional reconstructed image corresponding to the object to be measured is generated based on all the key point pairs, or the three-dimensional reconstructed image corresponding to the object to be measured is generated based on part of the key point pairs, which is not limited.
For example, for each key point pair, the key point pair includes a first pixel point in the first code map and a second pixel point in the second code map, where the first pixel point and the second pixel point correspond to the same position point on the measured object. On this basis, a three-dimensional point corresponding to the key point pair may be determined by triangulation; of course, the three-dimensional point corresponding to the key point pair may also be determined in other manners, and the determination mode of the three-dimensional point is not limited. After the three-dimensional points corresponding to all the key point pairs are obtained, a three-dimensional reconstruction image corresponding to the measured object can be generated based on the three-dimensional points corresponding to all the key point pairs.
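As a sketch of the triangulation step, the following uses OpenCV's triangulatePoints with the rectified projection matrices P1 and P2 (for example, those returned by the rectification sketch above); this is one possible realization, since the determination mode of the three-dimensional points is left open.

import cv2
import numpy as np

def triangulate_pairs(pairs, P1, P2):
    # Recover one 3-D point per key point pair from the 3x4 rectified
    # projection matrices P1 (first camera) and P2 (second camera).
    pts_l = np.float64([[p[0][0], p[0][1]] for p in pairs]).T   # 2 x N
    pts_r = np.float64([[p[1][0], p[1][1]] for p in pairs]).T   # 2 x N
    pts4d = cv2.triangulatePoints(P1, P2, pts_l, pts_r)         # 4 x N homogeneous
    return (pts4d[:3] / pts4d[3]).T                             # N x 3 points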
According to the technical scheme, in the embodiment of the application, when the speckle projector projects speckle onto the measured object n times, n frames of first images and n frames of second images can be obtained. The first code map is generated based on the n frames of first images, and the second code map is generated based on the n frames of second images; a three-dimensional reconstruction image corresponding to the measured object can then be generated based on the first code map and the second code map. That is, three-dimensional reconstruction is realized with a good reconstruction effect, an accurate and reliable three-dimensional reconstruction image can be obtained, and multiple frames of images can be used to jointly construct the same code values for matching. Because the code values corresponding to the same position point on the measured object are highly similar in the first code map and the second code map, accurate and reliable key point pairs can be found from the first code map and the second code map; when the three-dimensional reconstruction image is generated based on these key point pairs, an accurate and reliable three-dimensional reconstruction image can be obtained, with higher reconstruction precision and better robustness. Under the condition of achieving the same reconstruction effect, the calculation efficiency of this method is higher, and the method is flexible.
Based on the same application concept as the above method, an embodiment of the present application provides an image reconstruction device, which is applied to a three-dimensional imaging system, where the three-dimensional imaging system includes a first camera, a second camera, and a speckle projector, as shown in fig. 6, which is a schematic structural diagram of the device, and the device may include:
an acquisition module 61, configured to acquire n frames of first images and n frames of second images, where n is a positive integer greater than 1; when the speckle projector projects a speckle to an object to be detected each time, a first image of the object to be detected is acquired through a first camera, and a second image of the object to be detected is acquired through a second camera;
a generating module 62, configured to generate a first code image based on the n frames of first images, and generate a second code image based on the n frames of second images; for each pixel point in the first coding diagram, determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in a first image of each frame; for each pixel point in the second coding diagram, determining a corresponding coding value of the pixel point in the second coding diagram based on a corresponding pixel value of the pixel point in a second image of each frame;
A matching module 63, configured to determine a key point pair corresponding to each pixel point in the first code map based on the first code map and the second code map; and aiming at each key point pair, the key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram, wherein the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object, and a three-dimensional reconstruction image corresponding to the measured object is generated based on the key point pair.
Illustratively, the generating module 62 generates a first code map based on the n frame first images, and generates a second code map based on the n frame second images, which is specifically configured to: generating a first coding diagram by adopting a first coding strategy based on the n frames of first images, wherein the first coding strategy is used for indicating a coding value generation mode of each pixel point in the first coding diagram; generating a second coding diagram by adopting a second coding strategy based on the n frames of second images, wherein the second coding strategy is used for indicating a coding value generation mode of each pixel point in the second coding diagram; the coding value generation mode of the pixel points indicated by the first coding strategy and the coding value generation mode of the pixel points indicated by the second coding strategy are the same coding value generation mode.
Illustratively, the generating module 62 is specifically configured to, when determining, based on the pixel value corresponding to the pixel point in the first image of each frame, the corresponding code value of the pixel point in the first code image: determining an intermediate value based on n pixel values corresponding to the pixel points in the n frames of first images; determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value; determining corresponding coding values of the pixel points in the first coding diagram based on the n reference values; wherein the intermediate value is an average value of the n pixel values; the reference value is a comparison result value of the pixel value and the intermediate value, or the reference value is a difference value of the pixel value and the intermediate value; if the pixel value is larger than the intermediate value, the comparison result value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value is a second value.
Illustratively, the generating module 62 is specifically configured to, when determining, based on the pixel value corresponding to the pixel point in the first image of each frame, the corresponding code value of the pixel point in the first code image: determining adjacent pixel points corresponding to the pixel points from a neighborhood window corresponding to the pixel points; and determining a corresponding coding value of the pixel point in the first coding diagram based on the corresponding pixel value of the pixel point in the first image of each frame and the corresponding pixel value of the adjacent pixel point in the first image of each frame.
Illustratively, the generating module 62 is specifically configured to, when determining the corresponding encoding value of the pixel point in the first encoding graph based on the pixel value corresponding to the pixel point in the first image of each frame and the pixel value corresponding to the adjacent pixel point in the first image of each frame: for each frame of first image, determining a corresponding reference value of the pixel point in the frame of first image based on the corresponding pixel value of the pixel point in the frame of first image and the corresponding pixel value of the adjacent pixel point in the frame of first image; determining the corresponding coding value of the pixel point in the first coding diagram based on the reference values respectively corresponding to the pixel point in the n frames of first images; the reference value is a comparison result value of the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point, or is a difference value between the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point; if the pixel value corresponding to the adjacent pixel point is greater than the pixel value corresponding to the pixel point, the comparison result value is a first value, and if the pixel value corresponding to the adjacent pixel point is not greater than the pixel value corresponding to the pixel point, the comparison result value is a second value.
Illustratively, the matching module 63 determines, based on the first code map and the second code map, a key point pair corresponding to each pixel point in the first code map specifically for: for a first pixel point in a first coding diagram, the first pixel point is each pixel point in the first coding diagram, and a plurality of candidate pixel points corresponding to the first pixel point are determined from a second coding diagram; for each candidate pixel point, determining the similarity between the first pixel point and the candidate pixel point based on the corresponding coding value of the first pixel point in the first coding diagram and the corresponding coding value of the candidate pixel point in the second coding diagram; selecting one candidate pixel point matched with the first pixel point from a plurality of candidate pixel points as a second pixel point based on the similarity between the first pixel point and each candidate pixel point; and generating a key point pair based on the first pixel point and the second pixel point.
Illustratively, the generating module 62 generates a first code map based on the n frame first images, and generates a second code map based on the n frame second images, which is specifically configured to: binocular correction is carried out on the n-frame first image and the n-frame second image, so that a corrected n-frame first image and a corrected n-frame second image are obtained; the binocular correction is used for enabling the same position point on the measured object to have the same pixel height in the corrected first image and the corrected second image; generating the first code map based on the corrected n-frame first image, and generating the second code map based on the corrected n-frame second image; the matching module 63 is specifically configured to, for a first pixel point in the first code map, determine a plurality of candidate pixel points corresponding to the first pixel point from the second code map: for a first pixel point in a first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram based on the pixel height of the first pixel point in the first coding diagram; the pixel height of each candidate pixel point in the second coding diagram is the same as the pixel height of the first pixel point in the first coding diagram.
Based on the same application concept as the above method, an electronic device is provided in the embodiments of the present application, where the electronic device may be applied to a three-dimensional imaging system, and the three-dimensional imaging system may include, in addition to the electronic device, a first camera, a second camera, a speckle projector, and the like. Referring to fig. 7, the electronic device may include a processor 71 and a machine-readable storage medium 72, the machine-readable storage medium 72 storing machine-executable instructions that are executable by the processor 71; wherein the processor 71 is configured to execute machine executable instructions to implement the image reconstruction methods disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where a number of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the image reconstruction method disclosed in the above example of the present application can be implemented.
Wherein the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, or the like. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer entity or by an article of manufacture having some functionality. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. An image reconstruction method for use in a three-dimensional imaging system including a first camera, a second camera, and a speckle projector, the method comprising:
acquiring n frames of first images and n frames of second images, wherein n is a positive integer greater than 1; when the speckle projector projects a speckle to an object to be detected each time, a first image of the object to be detected is acquired through a first camera, and a second image of the object to be detected is acquired through a second camera;
Generating a first code image based on the n frames of first images, and generating a second code image based on the n frames of second images; for each pixel point in the first coding diagram, determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in a first image of each frame; for each pixel point in the second coding diagram, determining a corresponding coding value of the pixel point in the second coding diagram based on a corresponding pixel value of the pixel point in a second image of each frame;
determining a key point pair corresponding to each pixel point in the first coding diagram based on the first coding diagram and the second coding diagram; for each key point pair, the key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram, wherein the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object;
and generating a three-dimensional reconstruction image corresponding to the measured object based on the key point pairs.
2. The method of claim 1, wherein generating a first encoded map based on the n frames of first images and generating a second encoded map based on the n frames of second images comprises:
Generating a first coding diagram by adopting a first coding strategy based on the n frames of first images, wherein the first coding strategy is used for indicating a coding value generation mode of each pixel point in the first coding diagram;
generating a second coding diagram by adopting a second coding strategy based on the n frames of second images, wherein the second coding strategy is used for indicating a coding value generation mode of each pixel point in the second coding diagram;
the coding value generation mode of the pixel points indicated by the first coding strategy and the coding value generation mode of the pixel points indicated by the second coding strategy are the same coding value generation mode.
3. The method of claim 1, wherein determining the corresponding code value for the pixel in the first code map based on the corresponding pixel value for the pixel in the first image of each frame comprises:
determining an intermediate value based on n pixel values corresponding to the pixel points in the n frames of first images;
determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value;
determining corresponding coding values of the pixel points in the first coding diagram based on the n reference values;
wherein the intermediate value is an average value of the n pixel values; the reference value is a comparison result value of the pixel value and the intermediate value, or the reference value is a difference value of the pixel value and the intermediate value;
If the pixel value is larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value of the pixel value and the intermediate value is a second value.
4. The method of claim 1, wherein determining the corresponding code value for the pixel in the first code map based on the corresponding pixel value for the pixel in the first image of each frame comprises:
determining adjacent pixel points corresponding to the pixel points from the neighborhood window of the pixel points;
and determining a corresponding coding value of the pixel point in the first coding diagram based on the corresponding pixel value of the pixel point in the first image of each frame and the corresponding pixel value of the adjacent pixel point in the first image of each frame.
5. The method of claim 4, wherein the determining the corresponding encoded value of the pixel in the first encoded map based on the corresponding pixel value of the pixel in the first image per frame and the corresponding pixel value of the neighboring pixel in the first image per frame comprises:
for each frame of first image, determining a corresponding reference value of the pixel point in the frame of first image based on the corresponding pixel value of the pixel point in the frame of first image and the corresponding pixel value of the adjacent pixel point in the frame of first image; determining corresponding coding values of the pixel points in the first coding diagram based on the reference values respectively corresponding to the pixel points in the n frames of first images;
The reference value is a comparison result value of the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point, or is a difference value between the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point; if the pixel value corresponding to the adjacent pixel point is greater than the pixel value corresponding to the pixel point, the comparison result value is a first value, and if the pixel value corresponding to the adjacent pixel point is not greater than the pixel value corresponding to the pixel point, the comparison result value is a second value.
6. The method of claim 1, wherein the determining, based on the first encoding map and the second encoding map, a keypoint pair corresponding to each pixel point in the first encoding map comprises:
for a first pixel point in the first coding diagram, wherein the first pixel point is each pixel point in the first coding diagram, and a plurality of candidate pixel points corresponding to the first pixel point are determined from the second coding diagram; for each candidate pixel point, determining the similarity between the first pixel point and the candidate pixel point based on the corresponding coding value of the first pixel point in the first coding diagram and the corresponding coding value of the candidate pixel point in the second coding diagram;
Selecting one candidate pixel point matched with the first pixel point from the plurality of candidate pixel points as a second pixel point based on the similarity between the first pixel point and each candidate pixel point;
and generating a key point pair based on the first pixel point and the second pixel point.
7. The method of claim 6, wherein,
the generating a first code image based on the n frames of first images, and generating a second code image based on the n frames of second images, includes: binocular correction is carried out on the n-frame first image and the n-frame second image, so that a corrected n-frame first image and a corrected n-frame second image are obtained; the binocular correction is used for enabling the same position point on the measured object to have the same pixel height in the corrected first image and the corrected second image; generating the first code map based on the corrected n-frame first image, and generating the second code map based on the corrected n-frame second image;
the determining, for a first pixel point in the first coding diagram, a plurality of candidate pixel points corresponding to the first pixel point from the second coding diagram includes: for a first pixel point in a first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram based on the pixel height of the first pixel point in the first coding diagram; the pixel height of each candidate pixel point in the second coding diagram is the same as the pixel height of the first pixel point in the first coding diagram.
8. An image reconstruction apparatus for use in a three-dimensional imaging system including a first camera, a second camera, and a speckle projector, the apparatus comprising:
the acquisition module is used for acquiring n frames of first images and n frames of second images, wherein n is a positive integer greater than 1; when the speckle projector projects a speckle to an object to be detected each time, a first image of the object to be detected is acquired through a first camera, and a second image of the object to be detected is acquired through a second camera;
the generation module is used for generating a first coding diagram based on the n frames of first images and generating a second coding diagram based on the n frames of second images; for each pixel point in the first coding diagram, determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in a first image of each frame; for each pixel point in the second coding diagram, determining a corresponding coding value of the pixel point in the second coding diagram based on a corresponding pixel value of the pixel point in a second image of each frame;
the matching module is used for determining a key point pair corresponding to each pixel point in the first coding diagram based on the first coding diagram and the second coding diagram; and aiming at each key point pair, the key point pair comprises a first pixel point in the first coding diagram and a second pixel point in the second coding diagram, wherein the first pixel point and the second pixel point are pixel points corresponding to the same position point on the measured object, and a three-dimensional reconstruction image corresponding to the measured object is generated based on the key point pair.
9. The apparatus of claim 8, wherein,
the generation module generates a first coding diagram based on the n frames of first images, and is specifically used for generating a second coding diagram based on the n frames of second images: generating a first coding diagram by adopting a first coding strategy based on the n frames of first images, wherein the first coding strategy is used for indicating a coding value generation mode of each pixel point in the first coding diagram; generating a second coding diagram by adopting a second coding strategy based on the n frames of second images, wherein the second coding strategy is used for indicating a coding value generation mode of each pixel point in the second coding diagram; the coding value generation mode of the pixel points indicated by the first coding strategy and the coding value generation mode of the pixel points indicated by the second coding strategy are the same coding value generation mode;
the generating module is specifically configured to, when determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in the first image of each frame: determining an intermediate value based on n pixel values corresponding to the pixel points in the n frames of first images; determining n reference values corresponding to the n pixel values based on the n pixel values and the intermediate value; determining corresponding coding values of the pixel points in the first coding diagram based on the n reference values; wherein the intermediate value is an average value of the n pixel values; the reference value is a comparison result value of the pixel value and the intermediate value, or the reference value is a difference value of the pixel value and the intermediate value; if the pixel value is larger than the intermediate value, the comparison result value is a first value, and if the pixel value is not larger than the intermediate value, the comparison result value is a second value;
The generating module is specifically configured to, when determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in the first image of each frame: determining adjacent pixel points corresponding to the pixel points from a neighborhood window corresponding to the pixel points; determining a corresponding coding value of the pixel point in the first coding diagram based on a corresponding pixel value of the pixel point in each frame of the first image and a corresponding pixel value of the adjacent pixel point in each frame of the first image;
the generating module is specifically configured to, when determining the corresponding coding value of the pixel point in the first coding diagram based on the corresponding pixel value of the pixel point in the first image of each frame and the corresponding pixel value of the adjacent pixel point in the first image of each frame: for each frame of first image, determining a corresponding reference value of the pixel point in the frame of first image based on the corresponding pixel value of the pixel point in the frame of first image and the corresponding pixel value of the adjacent pixel point in the frame of first image; determining corresponding coding values of the pixel points in the first coding diagram based on the reference values respectively corresponding to the pixel points in the n frames of first images; the reference value is a comparison result value of the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point, or is a difference value between the pixel value corresponding to the pixel point and the pixel value corresponding to the adjacent pixel point; if the pixel value corresponding to the adjacent pixel point is larger than the pixel value corresponding to the pixel point, the comparison result value is a first value, and if the pixel value corresponding to the adjacent pixel point is not larger than the pixel value corresponding to the pixel point, the comparison result value is a second value;
The matching module determines, based on the first code map and the second code map, a key point pair corresponding to each pixel point in the first code map, where the key point pair is specifically configured to: for a first pixel point in a first coding diagram, wherein the first pixel point is each pixel point in the first coding diagram, and a plurality of candidate pixel points corresponding to the first pixel point are determined from a second coding diagram; for each candidate pixel point, determining the similarity between the first pixel point and the candidate pixel point based on the corresponding coding value of the first pixel point in the first coding diagram and the corresponding coding value of the candidate pixel point in the second coding diagram; selecting one candidate pixel point matched with the first pixel point from a plurality of candidate pixel points as a second pixel point based on the similarity between the first pixel point and each candidate pixel point; generating a key point pair based on the first pixel point and the second pixel point;
the generation module generates a first coding diagram based on the n frames of first images, and is specifically used for generating a second coding diagram based on the n frames of second images: binocular correction is carried out on the n-frame first image and the n-frame second image, so that a corrected n-frame first image and a corrected n-frame second image are obtained; the binocular correction is used for enabling the same position point on the measured object to have the same pixel height in the corrected first image and the corrected second image; generating the first code map based on the corrected n-frame first image, and generating the second code map based on the corrected n-frame second image; the matching module is specifically configured to, for a first pixel point in a first coding diagram, determine a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram: for a first pixel point in a first coding diagram, determining a plurality of candidate pixel points corresponding to the first pixel point from a second coding diagram based on the pixel height of the first pixel point in the first coding diagram; the pixel height of each candidate pixel point in the second coding diagram is the same as the pixel height of the first pixel point in the first coding diagram.
10. An electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute machine executable instructions to implement the method of any of claims 1-7.