Disclosure of Invention
It is an object of the present invention to provide a device for achieving near-eye display of three-dimensional images that offers low manufacturing cost, a simple and convenient design, compactness, and other advantages.
An apparatus for implementing near-eye display of a three-dimensional image according to one aspect of the present invention includes:
a light field reproduction unit configured to reconstruct light field information of a target object to reproduce a virtual scene, the light field reproduction unit comprising:
a spatial light modulator, and
a phase plate arranged in the light-emitting direction of the spatial light modulator and having a diffraction structure configured to project images of different perspectives in the virtual scene to respective corresponding viewing positions; and
a virtual-real fusion unit configured to output a three-dimensional image fusing the virtual scene and the real scene together.
Preferably, the above apparatus further includes a projection unit configured to transfer the virtual scene output by the light field reproduction unit to the virtual-real fusion unit.
Preferably, in the above device, the spatial light modulator comprises a plurality of volume pixels, each volume pixel comprising a plurality of sub-pixels and each sub-pixel corresponding to a different viewing angle, and the diffractive structure comprises a plurality of nanostructure elements, each nanostructure element being configured to project the light beams from those sub-pixels of the plurality of volume pixels that correspond to the same viewing angle to the same viewing position associated with that sub-pixel.
Preferably, in the above device, the spatial light modulator comprises a plurality of volume pixels, each volume pixel comprising a plurality of sub-pixels and each sub-pixel corresponding to a different viewing angle, and the diffractive structure comprises a plurality of nanostructure elements, each nanostructure element being configured to project the light beams from those sub-pixels of the plurality of volume pixels that correspond to the same viewing angle to a set of viewing positions associated with that sub-pixel.
Preferably, in the above device, the set of viewing positions is a plurality of viewing positions distributed in a horizontal direction and/or a vertical direction.
Preferably, in the above device, the spatial light modulator is one of a DLP display screen, an LCOS display screen, or a liquid crystal display screen.
Preferably, in the above device, the virtual-real fusion unit includes a waveguide, a first nano grating disposed inside the waveguide, and a second nano grating, wherein the first nano grating diffracts incoming light, the waveguide totally reflects light diffracted by the first nano grating, and the second nano grating diffracts totally reflected light to guide the light from the waveguide to the visible region.
Preferably, in the above device, the virtual-real fusion unit includes three layers of transparent light field lenses stacked together, each layer of transparent light field lenses includes a waveguide, a first nano-grating disposed inside the waveguide, and a second nano-grating, wherein the first nano-grating diffracts incoming light, the waveguide totally reflects light diffracted by the first nano-grating, and the second nano-grating diffracts totally reflected light to guide light from the waveguide to the visible region, wherein the first nano-grating of each layer of transparent light field lenses has different orientation angles and/or periods, and the second nano-grating of each layer of transparent light field lenses has different orientation angles and/or periods.
Preferably, in the above device, the virtual-real fusion unit includes a prism, a waveguide, and a nano-grating disposed inside the waveguide, wherein the prism refracts incident light into the waveguide, the waveguide totally reflects the refracted light, and the nano-grating diffracts the totally reflected light to guide the light from the waveguide to the viewing area.
Preferably, in the above device, the virtual-real fusion unit includes a prism, a waveguide, and a pair of mirrors disposed inside the waveguide, wherein the prism refracts the incident light into the waveguide, the waveguide totally reflects the refracted light, and the mirrors reflect the totally reflected light to guide the light from the waveguide to the viewing area.
Preferably, in the above device, the virtual-real fusion unit includes a first waveguide lens, a second waveguide lens, a first set of nano gratings, a second set of nano gratings, and a third set of nano gratings, wherein the first set of nano gratings is located between the first and second waveguide lenses, the second set of nano gratings is located on the surface of the second waveguide lens facing away from the first waveguide lens, the third set of nano gratings is located above the second set of nano gratings, and the first to third sets of nano gratings have different orientation angles and/or periods.
Detailed Description
Preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of an apparatus for implementing a near-eye display of a three-dimensional image in accordance with one embodiment of the invention.
The apparatus 100 for implementing near-eye display of three-dimensional images shown in fig. 1 includes a light field reproduction unit 110, a projection unit 120, and a virtual-real fusion unit 130. In the present embodiment, the light field reproduction unit 110 is configured to reconstruct light field information of a target object to reproduce a virtual scene. The projection unit 120 is optically coupled between the light field reproduction unit 110 and the virtual-real fusion unit 130 and is configured to transfer the virtual scene output by the light field reproduction unit 110 to the virtual-real fusion unit 130, for example by reflection, refraction, or diffraction. The virtual-real fusion unit 130 is configured to output a three-dimensional image that fuses the virtual scene with the real scene.
It is noted that the projection unit 120 is an optional component. Alternatively, the virtual scene reconstructed by the light field reproduction unit 110 may be coupled directly to the virtual-real fusion unit 130 by a suitable design.
In this embodiment, the light field reproduction unit 110 includes a spatial light modulator and a phase plate to achieve reconstruction of the light field.
The spatial light modulator is used for amplitude modulation, i.e., for loading the mixed multi-view image information. The spatial light modulator may include, for example, a display panel, a driving circuit, a control system, control software, and the like. Spatial light modulators can implement either monochrome or color displays, depending on the requirements of the particular application field. Preferably, the spatial light modulator may employ one of a DLP display screen, an LCOS display screen, and a liquid crystal display screen. The spatial light modulator may comprise a plurality of volume pixels or amplitude-modulation pixels, each volume pixel comprising a plurality of sub-pixels, and each sub-pixel corresponding to a different viewing angle.
The phase plate has a diffraction grating structure that is likewise organized into a plurality of volume pixels. Each volume pixel of the phase plate comprises a plurality of nanostructure elements, each nanostructure element being aligned in matching relation with a viewing-angle image pixel of the spatial light modulator; that is to say, the light beams from those sub-pixels of the spatial light modulator's volume pixels that correspond to the same viewing angle are projected to a set of viewing positions associated with that sub-pixel.
Fig. 2 is a schematic diagram of a phase plate that may be used in the apparatus of fig. 1.
Without loss of generality, the case shown in fig. 2 takes 5 viewing angles as an example and only three volume pixels of the phase plate are shown; the present embodiment evidently applies to other numbers of viewing angles as well. As shown in fig. 2, phase plate 212 includes volume pixels 212A-212C. Each volume pixel of phase plate 212 takes the form of a pixel cell containing 5 nanograting regions, or sub-pixels, with different periods and/or orientation angles. When light rays from the corresponding sub-pixels of the spatial light modulator arrive, these regions deflect them to the different viewing positions or viewing angles 1-5, so that light beams belonging to the same viewing angle are projected to multiple viewing positions and a three-dimensional display with separated viewing angles is achieved. Likewise, through the design of the phase plate, a plurality of separate or continuous viewpoints distributed as a dot matrix, a linear array, or an area array can be realized, so as to achieve an optimal viewing zone.
The period and orientation angle of each grating region may be determined according to the following grating equations:
$$\tan\varphi_1=\frac{\sin\varphi}{\cos\varphi-n\sin\theta\,(\Lambda/\lambda)}\qquad(1)$$

$$\sin^2\theta_1=\left(\frac{\lambda}{\Lambda}\right)^2+(n\sin\theta)^2-2\,n\sin\theta\cos\varphi\,\frac{\lambda}{\Lambda}\qquad(2)$$
where θ₁ and φ₁ respectively denote the diffraction angle (the angle between the diffracted light and the negative Z axis) and the azimuth angle (the angle between the diffracted light and the positive Y axis) of the diffracted light, θ and λ respectively denote the incidence angle (the angle between the incident light and the negative Z axis) and the wavelength of the light source, Λ and φ respectively denote the period and the orientation angle (the angle between the groove direction and the positive X axis) of the nano diffraction grating, and n denotes the refractive index of the medium in which the light wave propagates.
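As a numerical illustration (not part of the original disclosure), the following Python sketch transcribes equations (1) and (2) directly; the wavelength, period, and angles in the example are assumed values, and the atan2 form of equation (1) is an assumption used here to keep the correct quadrant.

```python
import math

def diffraction_direction(lam, period, phi_deg, theta_deg, n):
    """Evaluate grating equations (1) and (2) for one nanograting region.
    lam and period share the same length unit; angles are in degrees and
    follow the axis conventions defined in the text above."""
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    r = lam / period                                  # lambda / Lambda
    # Equation (2): sin^2(theta1)
    s2 = (r ** 2 + (n * math.sin(theta)) ** 2
          - 2 * n * math.sin(theta) * math.cos(phi) * r)
    if not 0.0 <= s2 <= 1.0:
        raise ValueError("no propagating diffracted order for these parameters")
    theta1 = math.degrees(math.asin(math.sqrt(s2)))
    # Equation (1), evaluated with atan2 (an assumption made here so the
    # azimuth keeps its quadrant; the bare tangent form loses the sign).
    phi1 = math.degrees(math.atan2(r * math.sin(phi),
                                   r * math.cos(phi) - n * math.sin(theta)))
    return theta1, phi1

# Illustrative numbers only: 532 nm light at 45 deg incidence in a medium of
# index 1.5, hitting a 400 nm period grating oriented at 30 deg.
print(diffraction_direction(0.532, 0.400, 30.0, 45.0, 1.5))
```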
Fig. 3 is a schematic structural diagram of the phase plate shown in fig. 2 in an X-Y plane and an X-Z plane.
After the wavelength, incidence angle, diffraction angle, and diffraction azimuth angle of the incident light are determined, the required grating period and orientation angle can be calculated using the above formulas. The period and orientation angle of a nanostructure unit determine its angular and spectral modulation of the light field, so the regulation and conversion of the light field can be realized by designing the variation patterns of the orientation angles and periods of the nanostructure units. The diffraction grating structure shown can be fabricated directly as a functional thin-film layer on, for example, a glass substrate.
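This calculation can be sketched in Python as follows (an illustrative inversion, not from the disclosure). It treats the in-plane wave-vector components as 2-D vectors, with the incident transverse component along +Y and the azimuth measured from +Y, a picture that reproduces equations (1) and (2); the sign conventions and the example numbers are assumptions.

```python
import math

def design_grating(lam, theta_deg, n, theta1_deg, phi1_deg):
    """Find the period and orientation angle of a nanograting region that
    deflects light of wavelength lam, incident at theta in a medium of
    index n, toward the direction (theta1, phi1). Sketch only: inverts
    equations (1)-(2) via the in-plane wave-vector picture."""
    theta = math.radians(theta_deg)
    theta1 = math.radians(theta1_deg)
    phi1 = math.radians(phi1_deg)
    # In-plane components of the diffracted direction (units of 2*pi/lam).
    tx = math.sin(theta1) * math.sin(phi1)
    ty = math.sin(theta1) * math.cos(phi1)
    # The grating vector supplies the difference from the incident in-plane
    # component (0, n*sin(theta)); its magnitude equals lam/period.
    gx, gy = tx, ty + n * math.sin(theta)
    g = math.hypot(gx, gy)
    period = lam / g
    phi = math.degrees(math.atan2(gx, gy))            # orientation angle
    return period, phi

# Illustrative check: steer 532 nm light at normal incidence (n = 1.5)
# to theta1 = 10 deg, phi1 = 0 deg -> period ~3.06 um, orientation 0 deg.
print(design_grating(0.532, 0.0, 1.5, 10.0, 0.0))
```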
As described above, each nanograting region can be regarded as a pixel unit or sub-pixel. The orientation angle of a grating region determines its angular modulation of the light field, and its period determines its spectral filtering characteristic. The modulation and transformation of the light field can be achieved by continuously varying the period (spatial frequency) and orientation from sub-pixel to sub-pixel. Therefore, once a plurality of grating regions with the required orientation angles and periods have been fabricated on one screen surface, enough viewpoints can be obtained, and 3D display under multiple viewing angles can be realized through amplitude control.
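As a toy illustration of such a layout (assumed numbers, reusing the design_grating() sketch above), the five viewing-angle sub-pixels of one volume pixel might be assigned periods and orientations as follows:

```python
# Hypothetical viewpoint directions (theta1, phi1) in degrees for the five
# sub-pixels of one volume pixel; design_grating() is defined in the
# preceding sketch.
viewpoints = [(10.0, -30.0), (10.0, -15.0), (10.0, 0.0),
              (10.0, 15.0), (10.0, 30.0)]
for k, (t1, p1) in enumerate(viewpoints, start=1):
    period, orient = design_grating(0.532, 0.0, 1.5, t1, p1)
    print(f"viewpoint {k}: period = {period:.3f} um, orientation = {orient:.1f} deg")
```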
Fig. 4 shows the nanostructure distribution of a directional functional thin film implementing single-viewpoint convergence. The nanostructure shown forms an off-axis Fresnel structure that converges the image at viewpoint 1. In the structure shown in fig. 4, n×m sub-pixels constitute off-axis Fresnel structures with n×m different focal points. It should be noted that the shape of the sub-pixels in fig. 4 is not limited to a rectangle; it may also be a circle, a hexagon, or the like.
Fig. 5a and 5b are schematic diagrams of realizing single-view and multi-view display, respectively, with a light field reproduction unit according to another embodiment of the present invention.
The light field reproduction unit 510 shown in fig. 5a and 5b comprises a spatial light modulator (e.g. a liquid crystal display) 511 and a phase plate 512 (e.g. having the structure of a phase plate as shown in fig. 2) with grating areas of different orientations and/or periods arranged on the surface, wherein the phase plate 512 focuses light rays passing through the spatial light modulator 511 to one or several viewpoints.
Fig. 6 is a schematic structural view of an apparatus for implementing near-eye display of a three-dimensional image according to another embodiment of the present invention.
The apparatus 600 shown in fig. 6 comprises a light field reproduction unit 610, a projection unit 620, and a virtual-real fusion unit 630.
The light field reproduction unit 610 includes a liquid crystal display 611 and a phase plate 612 positioned on the light emitting surface of the liquid crystal display. The phase plate 612 may take the configuration described above with respect to fig. 2. The light field information of the object reproduced by the light field reproduction unit 610 is coupled to the virtual-real fusion unit 630, for example, by diffraction, refraction, or reflection of the projection unit 620.
In this embodiment, the virtual-real fusion unit 630 includes a waveguide 631, a first nanograting 632a, and a second nanograting 632b. Referring to fig. 6, the first nanograting 632a is disposed inside the waveguide 631 near the position where light enters the waveguide, and it diffracts the incident light. The light diffracted by the first nanograting 632a is totally reflected inside the waveguide 631. After undergoing multiple total reflections, the light reaches the second nanograting 632b and is then directed by its diffraction to the viewing area outside the waveguide, thereby outputting a three-dimensional image fusing the virtual scene with the real scene.
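The guiding step relies on total internal reflection, and the condition can be checked with the following sketch (illustrative values, assuming an air cladding; not taken from the embodiment):

```python
import math

def guided_by_tir(theta_deg, n_waveguide):
    """True if a ray travelling at theta_deg to the surface normal inside a
    waveguide of index n_waveguide (air cladding assumed) exceeds the
    critical angle asin(1/n) and is therefore totally reflected."""
    critical = math.degrees(math.asin(1.0 / n_waveguide))
    return theta_deg > critical

# For glass of index 1.5 the critical angle is about 41.8 deg, so a ray
# diffracted to 50 deg by the in-coupling grating stays trapped:
print(guided_by_tir(50.0, 1.5))   # True
```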
Fig. 7 is a schematic structural view of an apparatus for implementing near-eye display of a three-dimensional image according to another embodiment of the present invention.
The apparatus 700 shown in fig. 7 includes a light field reproduction unit 710, a projection unit 720, and a virtual-real fusion unit 730.
The light field reproduction unit 710 includes a liquid crystal display 711 and a phase plate 712 positioned on the light-emitting surface of the liquid crystal display. The phase plate 712 may take the configuration described above with respect to fig. 2. The light field information of the object reproduced by the light field reproduction unit 710 is coupled to the virtual-real fusion unit 730, for example, by diffraction, refraction, or reflection of the projection unit 720.
The main difference between this embodiment and the embodiment shown in fig. 6 is the structure of the virtual-real fusion unit. Specifically, the virtual-real fusion unit 730 of the present embodiment includes a prism 731, a waveguide 732, and a nanograting 733 located inside the waveguide. Referring to fig. 7, the projection unit 720 projects the virtual scene from the light field reproduction unit 710 onto the prism 731, and the light enters the waveguide 732 after being refracted by the prism 731. The refracted light rays are totally reflected inside the waveguide 732. After undergoing multiple total reflections, the light reaches the nanograting 733 and is then directed by its diffraction to the viewing area outside the waveguide, thereby outputting a three-dimensional image fusing the virtual scene and the real scene together.
Fig. 8 is a schematic structural view of an apparatus for implementing near-eye display of a three-dimensional image according to another embodiment of the present invention.
The apparatus 800 shown in fig. 8 comprises a light field reproduction unit 810, a projection unit 820, and a virtual-real fusion unit 830.
The light field reproduction unit 810 includes a liquid crystal display 811 and a phase plate 812 positioned on the light exit surface of the liquid crystal display. The phase plate 812 may take the configuration described above with respect to fig. 2. The light field information of the object reproduced by the light field reproduction unit 810 is coupled to the virtual-real fusion unit 830, for example, by diffraction, refraction or reflection action of the projection unit 820.
The main difference between this embodiment and the embodiment shown in fig. 6 is the structure of the virtual-real fusion unit. Specifically, the virtual-real fusion unit 830 of the present embodiment includes a prism 831, a waveguide 832, and a pair of mirrors 833a and 833b located inside the waveguide. Referring to fig. 8, the projection unit 820 projects the virtual scene from the light field reproduction unit 810 onto the prism 831, and the light is refracted by the prism 831 into the waveguide 832. The refracted rays undergo total reflection within the waveguide 832. After multiple total reflections, the light reaches the mirror 833a; a part of the light is reflected by the mirror 833a and guided to the viewing area outside the waveguide, while the rest passes through the mirror 833a, reaches the mirror 833b, and is reflected by it to the viewing area outside the waveguide, thereby outputting a three-dimensional image in which the virtual scene and the real scene are fused together.
Fig. 9 is a top view of a virtual-real fusion unit according to another embodiment of the present invention.
The virtual-real fusion unit 930 shown in fig. 9 takes the form of a diffractive optical element whose surface comprises a plurality of functional areas 931-933, each containing a pixel-type nano diffraction grating. In this embodiment, the nano diffraction gratings may be fabricated using photolithography, nanoimprinting, or holographic interference techniques. As shown in fig. 9, the first functional region 931, which carries a circular grating, is used for coupling light in. It should be noted that the shape of the first functional region is not limited to a circle. The light coupled in through the first functional region 931 is directed into the second functional region 932, a wedge-shaped diffraction grating region that expands the image projected onto the virtual-real fusion unit 930 in the X-axis direction. After being expanded in the X-axis direction by the second functional region 932, the image is guided to the third functional region 933, which expands the projected image in the Y-axis direction. Thus, the human eye can observe a virtual image with an enlarged viewing angle through the virtual-real fusion unit 930.
Fig. 10 is a schematic structural view of an apparatus for implementing near-eye display of a three-dimensional image according to another embodiment of the present invention.
The apparatus 1000 shown in fig. 10 includes a light field reproduction unit 1010, a projection unit 1020, and a virtual-real fusion unit 1030.
The light field reproduction unit 1010 includes a liquid crystal display 1011 and a phase plate 1012 positioned on the light exit surface of the liquid crystal display. The phase plate 1012 may take the configuration described above with respect to fig. 2. The light field information of the object reproduced by the light field reproduction unit 1010 is coupled to the virtual-real fusion unit 1030, for example by diffraction, refraction, or reflection at the projection unit 1020. The virtual-real fusion unit 1030 includes a plurality of functional areas 1031-1033, which are similar to the functional areas 931-933 shown in fig. 9 and will not be described again here.
Fig. 11a and 11b are schematic diagrams for respectively implementing longitudinal and transverse viewing area expansion using the phase plate as described above. Without loss of generality, only four views are taken as examples herein.
Referring to fig. 11a, a volume pixel of phase plate 1112 (a rectangle divided into 4 areas in the figure) is shown, the volume pixel containing 4 sub-pixels (identified by numerals 1-4). Each sub-pixel corresponds to one sub-pixel of the spatial light modulator and comprises three nanograting pixel units stacked or arranged in a distributed manner. The nanograting pixel units within a sub-pixel have different orientations and/or periods, so that for each viewpoint a plurality of vertically and continuously arranged windows 1a-1c carrying the same viewing-angle information can be formed in the vertical direction, thereby achieving the effect of expanding the longitudinal viewing angle.
Referring to fig. 11b, a plurality of volume pixels of the phase plate 1112 (rectangles divided into 4 regions in the figure) are likewise shown. Taking the leftmost volume pixel as an example, it contains 4 sub-pixels (identified by numerals 1-4). Each sub-pixel corresponds to one sub-pixel of the spatial light modulator and comprises three nanograting pixel units stacked or arranged in a distributed manner. The nanograting pixel units within a sub-pixel have different orientations and/or periods, so that for each viewpoint a plurality of horizontally and continuously arranged windows carrying the same viewing-angle information can be formed along the horizontal direction, thereby achieving the effect of expanding the transverse viewing angle.
In the above example, the expanded viewing-angle information formed in the vertical or horizontal direction for a given viewpoint is identical to the light field information covered by the original viewpoint, so the viewing angle can be increased without reducing the resolution of the image.
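As a rough numerical sketch of this window-replication idea (invented numbers; at normal incidence equation (2) reduces to period = wavelength / sin θ₁), the three grating units of one sub-pixel could target the same azimuth at three slightly different elevations:

```python
import math

# Three grating units aimed at slightly different elevation angles replicate
# one viewpoint into vertically adjacent windows 1a-1c. Wavelength 532 nm;
# all values are illustrative assumptions.
for window, theta1_deg in zip("abc", (8.0, 10.0, 12.0)):
    period = 0.532 / math.sin(math.radians(theta1_deg))
    print(f"window 1{window}: elevation {theta1_deg:.0f} deg, "
          f"period = {period:.3f} um")
```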
Preferably, in order to enable human eyes to observe virtual images at a large field angle, the pixel structure of the phase plate can also adopt spatially multiplexed stacking or binary optical elements to display three-dimensional images over a wide field of view. The image reproduction information after viewing-angle expansion is coupled into the virtual-real fusion unit through the projection unit. In practical applications, by matching the exit pupil of the projection unit with the entrance pupil of the virtual-real fusion unit, an observer can observe a three-dimensional scene with an enlarged field angle through the virtual-real fusion unit without moving.
Fig. 12 is a schematic diagram of a virtual-real fusion unit according to another embodiment of the present invention.
The virtual-real fusion unit 1230 shown in fig. 12 includes three subunits 1231-1233 stacked together, each of which may take the configuration of the virtual-real fusion unit shown in fig. 6. By giving the nanogratings of each subunit different orientation angles and/or periods, different images can be generated at different distances from the virtual-real fusion unit, thereby obtaining an image with a sense of depth of field.
Fig. 13 is a schematic diagram of an apparatus for implementing near-eye display of three-dimensional images according to another embodiment of the present invention.
The near-eye display device 1300 shown in fig. 13 can realize color display. To this end, in device 1300, as shown in fig. 13, the light field reproduction unit 1310 includes a spatial light modulator, a phase plate, and a filter between the spatial light modulator and the phase plate. Without loss of generality, each volume pixel of the phase plate (a rectangle divided into four regions in the figure) contains four differently oriented sub-pixels (identified by numerals 1-4 in the figure), each sub-pixel corresponding to one of four viewing angles. In this embodiment, the sub-pixels of the spatial light modulator are matched with the pixels of the filter and the sub-pixels of the phase plate. Furthermore, each sub-pixel of the phase plate is composed of R, G, and B sub-pixels, or pixel grating units (shown with different hatching). For each sub-pixel, by designing each pixel grating unit with the proper orientation angle and period, the light passing through the R, G, and B sub-pixels can be converged at the same viewpoint. The convergence viewpoint thus contains the color information of the image. The image is reconstructed by the light field reproduction unit 1310, coupled to the virtual-real fusion unit 1330 by the projection unit 1320, and finally presented as a color virtual image through the virtual-real fusion unit 1330. In an actual three-dimensional display, the input of the spatial light modulator can be controlled by a computer, so that video playback of color images can be realized.
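To give a feel for the numbers, a minimal sketch of the per-color period choice follows. At normal incidence equation (2) reduces to period = wavelength / sin θ₁, so steering the R, G, and B grating units to the same viewpoint simply means scaling the period with the wavelength; the wavelengths and the 10-degree viewpoint are assumptions for illustration.

```python
import math

# Choose the period of each R/G/B pixel grating unit so that all three
# colors converge on the same viewpoint (theta1 = 10 deg, normal incidence).
theta1 = math.radians(10.0)
for name, lam_um in (("R", 0.633), ("G", 0.532), ("B", 0.450)):
    print(f"{name} ({lam_um*1000:.0f} nm): "
          f"period = {lam_um / math.sin(theta1):.3f} um")
```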
Fig. 14 is a schematic diagram of an apparatus for implementing near-eye display of three-dimensional images in accordance with another embodiment of the invention.
The near-eye display device 1400 shown in fig. 14 can realize color display. In the apparatus 1400 shown in fig. 14, the virtual-real fusion unit 1430 adopts a double-layer waveguide lens structure. Specifically, the virtual-real fusion unit 1430 includes a first waveguide lens 1431, a second waveguide lens 1432, a first set of nano-gratings 1433a, 1433b, a second set of nano-gratings 1434a, 1434b, and a third set of nano-gratings 1435a, 1435b. Referring to fig. 14, the first set of nano-gratings 1433a, 1433b is located between the first and second waveguide lenses, the second set of nano-gratings 1434a, 1434b is located on the surface of the second waveguide lens 1432 remote from the first waveguide lens 1431, and the third set of nano-gratings 1435a, 1435b is located on top of the second set of nano-gratings 1434a, 1434b.
In this embodiment, each set of nano-gratings diffracts only light of a specific color (R, G, or B). The projection unit 1420 couples light rays from the light field reproduction unit 1410 into the first waveguide lens 1431; owing to the wavelength selectivity of the grating, light of a specific color (e.g. green) is totally reflected within the first waveguide lens 1431, while light of the other colors (e.g. blue and red) is not guided in the first waveguide lens 1431 because its diffraction angle does not satisfy the total reflection condition. Similarly, the second set of nano-gratings 1434a, 1434b and the third set of nano-gratings 1435a, 1435b, which diffract only blue and red light respectively, cause the blue and red light to be totally reflected within the second waveguide lens 1432. Thus, image information of different colors is transmitted through the waveguide lenses without crosstalk, and the virtual-real fusion unit 1430 projects a color converging light field in front of the human eye.
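The total-internal-reflection part of this selectivity can be illustrated with a toy calculation: for a fixed in-coupling grating, each wavelength is diffracted to a different internal angle, and only angles beyond the critical angle stay trapped. The period, index, and wavelengths below are assumptions, the diffracted angle from equation (2) is taken here as the propagation angle inside the waveguide for simplicity, and the real embodiment additionally relies on each grating set diffracting efficiently only in its design band.

```python
import math

n_glass, period_um = 1.5, 0.70                      # illustrative values
critical = math.degrees(math.asin(1.0 / n_glass))   # ~41.8 deg for n = 1.5
for name, lam_um in (("R", 0.633), ("G", 0.532), ("B", 0.450)):
    s = lam_um / period_um      # sin(theta1) at normal incidence, eq. (2)
    if s >= 1.0:
        print(f"{name}: evanescent, never enters the waveguide")
    else:
        theta1 = math.degrees(math.asin(s))
        state = "guided by TIR" if theta1 > critical else "escapes"
        print(f"{name}: diffracted to {theta1:.1f} deg, {state}")
```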
Compared with the prior art, the device for realizing near-eye display of three-dimensional images according to the present invention has a number of advantages. For example, the nanograting-based near-eye display device of the present invention can automatically generate stereoscopic images, is simple to operate, is compact, can operate under incoherent light sources without special illumination, and provides continuous parallax and viewpoints to the observer.
The foregoing has described the principles and preferred embodiments of the present invention. However, the invention should not be construed as being limited to the particular embodiments discussed. The preferred embodiments described above should be regarded as illustrative rather than restrictive, and it should be appreciated that variations may be made in those embodiments by those skilled in the art without departing from the scope of the present invention as defined by the following claims.