CN221784279U - A host system for joint acquisition of multi-depth cameras for gait motion data - Google Patents
A host system for joint acquisition of multi-depth cameras for gait motion data
- Publication number: CN221784279U
- Application number: CN202323019433.5U
- Authority: CN (China)
- Prior art keywords: laminate structure, data, depth, host, camera
- Prior art date: 2023-11-08
- Legal status: Active
Landscapes
- Closed-Circuit Television Systems (AREA)
Abstract
The utility model discloses a host system for the joint acquisition of gait motion data with multiple depth cameras. The system comprises a gait motion acquisition host and a camera array, wherein: the camera array comprises a plurality of depth cameras, and each depth camera collects motion data of a target in its corresponding area at a preset viewing angle; the gait motion acquisition host is provided with a plurality of data acquisition control mainboards, a network switch and a keyboard-video-mouse (KVM) controller, each data acquisition control mainboard is connected to and controls a corresponding depth camera, the data acquisition control mainboards are interconnected through the network switch, and the KVM controller is configured to connect a display and a single keyboard and mouse so as to control all the data acquisition control mainboards. The provided system can control multiple depth cameras from a single host, improving the efficiency of joint data acquisition and the flexibility of deployment.
Description
Technical Field
The utility model relates to the technical field of information acquisition, and in particular to a host system for the joint acquisition of gait motion data with multiple depth cameras.
Background
Unlike color cameras, depth cameras are a special type of camera capable of capturing three-dimensional information, typically represented as a depth image in which the value of each pixel corresponds to the distance from a point in the scene to the camera. Depth cameras are widely used in fields such as motion data acquisition and target tracking.
In the prior art, data acquisition with depth cameras is generally performed independently, with one acquisition host per camera. This approach is costly to deploy, inefficient for data acquisition, and makes synchronization between the data streams relatively complex. Existing joint acquisition schemes, such as the latest Azure Kinect, rely on a hardware interface cable (a 3.5 mm sync cable) for synchronization control. However, this approach offers only a single, inflexible control method and cannot control data synchronization accurately.
Analysis shows that, limited by data processing capacity, one host currently supports data acquisition from only one camera, and no host device for the joint acquisition of multiple depth cameras exists.
Disclosure of Invention
The utility model aims to overcome the defects of the prior art and provides a host system for the joint acquisition of gait motion data with multiple depth cameras. The system comprises a gait motion acquisition host and a camera array, wherein:
the camera array comprises a plurality of depth cameras, and each depth camera collects motion data of a target in its corresponding area at a preset viewing angle;
the gait motion acquisition host is provided with a plurality of data acquisition control mainboards, a network switch and a keyboard-video-mouse (KVM) controller; each data acquisition control mainboard is connected to and controls a corresponding depth camera, the data acquisition control mainboards are interconnected through the network switch, and the KVM controller is configured to connect a display and a single keyboard and mouse so as to control all the data acquisition control mainboards.
Compared with the prior art, the utility model has the advantage that multiple acquisition control mainboards and network equipment are integrated in a newly designed host structure, so that a single host can control multiple depth cameras for joint data acquisition; network-based data synchronization and data fusion are realized, and data processing efficiency and data synchronization accuracy are improved.
Other features of the present utility model and its advantages will become apparent from the following detailed description of exemplary embodiments of the utility model, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the utility model and together with the description, serve to explain the principles of the utility model.
FIG. 1 is a longitudinal cross-sectional view of a gait motion acquisition host according to an embodiment of the utility model;
FIG. 2 is a schematic side view of a gait motion acquisition host according to an embodiment of the utility model;
FIG. 3 is a view of a side chassis cover of a gait motion acquisition host according to an embodiment of the utility model;
FIG. 4 is a top plan view of a gait motion acquisition host according to an embodiment of the utility model;
FIG. 5 is a schematic side view of a long straight deployment scheme according to an embodiment of the utility model;
FIG. 6 is a schematic top view of a long straight and lateral deployment scheme according to an embodiment of the utility model;
FIG. 7 is a schematic top view of a rectangular four-corner deployment scheme according to an embodiment of the utility model;
FIG. 8 is a diagram illustrating the design dimensions of a gait motion acquisition host according to an embodiment of the utility model.
Detailed Description
Various exemplary embodiments of the present utility model will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present utility model unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the utility model, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In general, the provided host system for the joint acquisition of gait motion data with multiple depth cameras comprises a gait motion acquisition host and a camera array, wherein the camera array comprises a plurality of depth cameras. The gait motion acquisition host comprises a plurality of data acquisition control mainboards, each of which controls a corresponding depth camera; all mainboards are connected to the network ports of a network switch for data communication, and a single display, keyboard and mouse control all data acquisition control mainboards through a keyboard-video-mouse (KVM) controller. With this design, the motion data acquisition host can control multiple depth cameras simultaneously so that they form an array for joint data acquisition.
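To make the idea of network-coordinated joint acquisition concrete, the following is a minimal sketch (not the utility model's actual software) of a capture service that could run on each data acquisition control mainboard: it waits for a start trigger broadcast over the switch, then streams timestamped depth frames to a collector board. The trigger port, collector address and the capture_depth_frame() placeholder are illustrative assumptions; a real node would call the camera vendor's SDK.

```python
# Sketch of one capture node (one per data acquisition control mainboard).
# All addresses, ports and the frame format are assumptions for illustration.
import socket
import time

TRIGGER_PORT = 9999                        # assumed UDP port for the start trigger
COLLECTOR_ADDR = ("192.168.1.100", 10000)  # assumed address of the collector/fusion board

def capture_depth_frame():
    """Placeholder for the vendor-SDK call that returns one depth frame as raw bytes."""
    return b"\x00" * (640 * 576 * 2)       # e.g. 640x576 pixels, 16-bit depth

def wait_for_trigger():
    """Block until a START message is broadcast on the switch's subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", TRIGGER_PORT))
    while True:
        msg, _ = sock.recvfrom(64)
        if msg == b"START":
            return

def capture_loop(camera_id, num_frames=300):
    """After the common trigger, send timestamped frames to the collector."""
    wait_for_trigger()
    out = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    out.connect(COLLECTOR_ADDR)
    for i in range(num_frames):
        frame = capture_depth_frame()
        ts = time.time_ns()                # local clock; assumes boards are clock-synced
        header = f"{camera_id},{i},{ts},{len(frame)}\n".encode()
        out.sendall(header + frame)
    out.close()

if __name__ == "__main__":
    capture_loop("cam01")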
In one embodiment, multiple acquisition control mainboards, network switch equipment and the like are integrated by designing and optimizing the host chassis structure, and matching acquisition and control software is developed, forming a joint data acquisition device dedicated to multiple depth cameras. In addition, based on this self-built gait motion acquisition host, combinations of several camera arrays are designed, providing schemes for interference-free and effective data collection.
Referring to the longitudinal sectional view of the gait motion acquisition host shown in Fig. 1, the main chassis of the host is provided, from top to bottom, with a first laminate structure 10, a second laminate structure 20, a third laminate structure 30, and so on. Each laminate structure may comprise one or more sub-layers; for example, the first laminate structure is provided with four sub-layers. The first laminate structure 10 may house the plurality of data acquisition control mainboards. The second laminate structure 20 may house a network switch, a power adapter device and the like for data communication between the mainboards and for power supply; for example, the second laminate structure 20 comprises two sub-layers, from top to bottom a first sub-layer housing the network switch and a second sub-layer housing the power adapter. The third laminate structure 30 comprises one sub-layer, in which the KVM controller may be located.
In one embodiment, the customized host may include a plurality of data acquisition control mainboards, for example 6 to 8 mainboards as required, connected to each other through the network ports of the network switch for data communication. A single display, keyboard and mouse control all acquisition control mainboards through the KVM controller.
Fig. 2 is a schematic side view of the host, where Fig. 2(a) is a perspective side view and Fig. 2(b) is a schematic left side view. In this embodiment, three high-power silent fans 40 are placed on one side of the main chassis for active heat dissipation. The other side provides a video output interface (such as HDMI), USB interfaces (such as USB9 to USB12) and the like, so that a display, mouse, keyboard and so on can be conveniently connected. In addition, considering the large volume of gait data, eight network interfaces (such as COM-OUT1 to COM-OUT8) are specifically reserved on this side, and flexible link aggregation is supported to increase the network data transmission speed.
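The benefit of link aggregation can be gauged with a rough estimate. The sketch below assumes a 640x576, 16-bit depth stream at 30 frames per second per camera and eight aggregated gigabit ports; these figures are illustrative assumptions, not values taken from the host's specification.

```python
# Back-of-the-envelope throughput check for link aggregation.
# All figures below are assumptions, not values from the host's specification.
width, height = 640, 576          # assumed depth resolution (pixels)
bytes_per_pixel = 2               # 16-bit depth
fps = 30
num_cameras = 8

per_camera_bps = width * height * bytes_per_pixel * fps * 8   # bits per second
total_bps = per_camera_bps * num_cameras

single_link_bps = 1e9             # one gigabit port
aggregated_bps = 8 * single_link_bps

print(f"per camera : {per_camera_bps / 1e6:.0f} Mbit/s")      # ~177 Mbit/s
print(f"all cameras: {total_bps / 1e9:.2f} Gbit/s")           # ~1.42 Gbit/s
print(f"fits a single gigabit link: {total_bps < single_link_bps}")
print(f"fits 8-port aggregation   : {total_bps < aggregated_bps}")
```

Under these assumptions, eight raw depth streams already exceed a single gigabit port, which is why reserving multiple aggregable network interfaces on the chassis is useful.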
Fig. 3 is a schematic diagram of a side chassis cover of the host. The two side covers can be made of various materials, such as plastic or metal. Since the host may communicate with external devices wirelessly, the two side covers are preferably made of plastic to minimize the shielding of wireless signals and improve wireless communication quality, and three rectangular hole arrays are provided to facilitate heat dissipation. Fig. 2 illustrates one of the rectangular hole arrays 50.
Fig. 4 is a top plan view of the host. In this embodiment, power keys (e.g., PC1 to PC8), USB data interfaces (e.g., USB1 to USB8), and control keys (corresponding to PC1 to PC8 below) are provided on top of the main chassis. Each power key corresponds to one mainboard and is used to power it on or off. The control keys switch the KVM to the corresponding mainboard; each control key has two indicator lamps, one showing whether the corresponding mainboard is currently powered on (ONLINE) and one showing which mainboard is currently being controlled (SELECTED). The Reset key controls the reset operation of the host.
With the designed host, camera arrays with different heights above the ground, different viewing angles and different spacings can be deployed according to the gait motion data acquisition scene, realizing different deployment modes with the host. For clarity, depending on factors such as the relative positions and relative distances of the cameras in each array, these are referred to below as the long straight deployment scheme, the long straight and lateral deployment scheme, and so on.
1. Long straight deployment scheme
In one embodiment, the camera array comprises 6 to 8 depth cameras arranged on the same horizontal plane, suspended at a predetermined height above the ground, and the relative spacing, deployment position and inclination relative to the horizontal of each depth camera along the monitored path are preset according to the characteristics of the area to be monitored. Specifically, this deployment scheme can be defined as a plurality of cameras lying on the same horizontal plane and arranged at intervals, so that they form different viewing angles with the ground. The scheme is suitable for both outdoor and indoor environments; for example, outdoors, several cameras can be hung at a certain height above the ground to collect gait data from pedestrians on a road. Fig. 5 is a schematic side view of a long straight deployment scheme for an indoor environment, in which 6 to 8 depth cameras form a camera array deployed under a ceiling. Each depth camera has a set separation distance and viewing direction. For example, each depth camera is inclined at an angle of about 30 degrees relative to the horizontal, and a more accurate angle can be calculated from test data at the system software level. Each depth camera may be suspended below the ceiling by a bracket passing through the suspended ceiling. In the figure, a colored block for each camera marks its direction and viewing angle, and the matching color on the ground indicates that camera's effective skeleton-extraction range.
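As a rough illustration of how the tilt angle determines the strip of floor each camera covers, the sketch below computes the near and far ground distances for a downward-tilted camera. The 50-degree vertical field of view is an assumed value (it is not specified here), and, as noted above, the exact tilt would be refined from test data at the software level.

```python
import math

def ground_footprint(height_m, tilt_deg, vfov_deg):
    """Near/far distance (along the floor) covered by a downward-tilted camera.

    height_m : camera height above the floor
    tilt_deg : depression of the optical axis below the horizontal
    vfov_deg : vertical field of view of the depth camera (assumed value)
    Returns (near_m, far_m); far_m is math.inf if the shallowest ray never reaches the floor.
    """
    lower = math.radians(tilt_deg + vfov_deg / 2)   # steepest ray
    upper = math.radians(tilt_deg - vfov_deg / 2)   # shallowest ray
    near = height_m / math.tan(lower)
    far = height_m / math.tan(upper) if upper > 0 else math.inf
    return near, far

# Example: camera hung at 2.6 m, tilted about 30 degrees, assumed 50-degree vertical FOV.
near, far = ground_footprint(2.6, 30.0, 50.0)
print(f"floor coverage from {near:.1f} m to {far:.1f} m in front of the camera")
```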
2. Long straight and lateral deployment scheme
The long straight and lateral deployment scheme can be defined as follows: on the basis of the long straight deployment scheme, two depth cameras for lateral shooting are further arranged on one side of the monitoring path (or target path), and the interval between the two depth cameras is preset according to the target active area. Laterally shooting cameras are added on one or both sides of the target path, and the scheme is likewise suitable for indoor and outdoor environments. Fig. 6 is a schematic top view of a long straight and lateral deployment scheme for an indoor environment. On the basis of the long straight deployment scheme, two depth cameras for lateral shooting are deployed about 2 meters to the other side of the walkway, at the 3.6-meter positions toward the front and rear ends, to record the lateral walking posture of the observed subject.
3. Rectangular four-corner deployment scheme
In one embodiment, the camera array comprises 4 to 8 depth cameras arranged at the four corners of a rectangular area at a predetermined height above the ground; the lens of each depth camera faces the center of the rectangular area, and the distance between depth cameras arranged at adjacent corners is configured to be in the range of 3 meters to 4.4 meters.
Fig. 7 is a schematic top view of the rectangular four-corner deployment scheme. In this embodiment, 4 to 8 depth cameras are placed at the four corners of an indoor ceiling with their lenses facing the center X of the room. In Fig. 7, the ceiling height is 2.6 meters and each depth camera, marked with a rectangle of a different color, is tilted at an angle of about 30° to the horizontal. The optimal acquisition area for the test subject is a circular area (white) centered on X with a radius of 0.7 m; the annular area from its edge out to a radius of 1.7 meters is a sub-optimal acquisition area (light gray); the dark gray area is a dead zone. It should be appreciated that this type of deployment scheme is applicable to any environment in which the monitored area is roughly rectangular, and is not limited to indoor environments.
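The zone boundaries in Fig. 7 reduce to a simple radial check. The sketch below classifies a floor point against the 0.7 m and 1.7 m radii given for this embodiment; the coordinate convention and function name are illustrative.

```python
import math

def acquisition_zone(x, y, cx, cy):
    """Classify a floor point against the zones of the four-corner scheme.

    (cx, cy) is the room centre X; radii follow this embodiment
    (0.7 m optimal, 1.7 m sub-optimal, dead zone beyond).
    """
    r = math.hypot(x - cx, y - cy)
    if r <= 0.7:
        return "optimal"
    if r <= 1.7:
        return "sub-optimal"
    return "dead zone"

# Example: a subject standing 1.2 m from the room centre.
print(acquisition_zone(1.2, 0.0, 0.0, 0.0))   # -> "sub-optimal"
```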
It should be noted that the gait motion acquisition host provided by the utility model can be designed in different sizes according to actual needs, or be configured with different numbers of mainboards, USB interfaces or network interfaces, and so on. The mainboards and the network switch used can be of special design or commercial products. For clarity, Fig. 8 further shows schematic views of the designed host from different orientations, taking the normal placement state as an example: Fig. 8(a) is a right side view, Fig. 8(b) is a front view, Fig. 8(c) is a left side view, and Fig. 8(d) is a top plan view of the host, with dimensions given in centimeters.
It will be appreciated that those skilled in the art can make appropriate changes and modifications to the embodiments described above without departing from the scope and spirit of the utility model. For example, a different number of data acquisition control mainboards, USB interfaces, or depth cameras may be provided, or the spacing or distances between the depth cameras may be changed, so long as the depth camera array can monitor the target's active area.
In summary, the utility model integrates multiple acquisition control mainboards and network equipment in a newly designed device structure, achieves the goal of controlling joint data acquisition from multiple depth cameras with one host, realizes network-based data synchronization and data fusion, simplifies the data synchronization and fusion process, and improves data processing efficiency and the flexibility of deploying the depth camera array.
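As one possible illustration of network-based data synchronization, the sketch below groups frames from several cameras by nearest timestamp around a reference camera's timeline. The 5 ms tolerance and the matching rule are assumptions for illustration and are not prescribed by the utility model; they presume the mainboards' clocks are already aligned over the network (e.g. via NTP).

```python
# Sketch of timestamp-based frame matching across cameras; the tolerance and
# grouping rule are illustrative assumptions, not the patented method.
from collections import defaultdict

TOLERANCE_NS = 5_000_000   # assumed 5 ms window for frames to count as simultaneous

def synchronize(frames, reference_cam):
    """Group frames from several cameras into synchronized sets.

    frames        : iterable of (camera_id, timestamp_ns, payload)
    reference_cam : camera whose frames define the common timeline
    Returns a list of dicts {camera_id: (timestamp_ns, payload)}, one per time step.
    """
    by_cam = defaultdict(list)
    for cam, ts, payload in frames:
        by_cam[cam].append((ts, payload))
    for cam in by_cam:
        by_cam[cam].sort()

    synced = []
    for ref_ts, ref_payload in by_cam[reference_cam]:
        group = {reference_cam: (ref_ts, ref_payload)}
        for cam, items in by_cam.items():
            if cam == reference_cam:
                continue
            best = min(items, key=lambda item: abs(item[0] - ref_ts))
            if abs(best[0] - ref_ts) <= TOLERANCE_NS:
                group[cam] = best
        synced.append(group)
    return synced

# Example with two cameras and a ~33 ms frame interval:
demo = [("cam01", 0, b"a"), ("cam02", 2_000_000, b"b"),
        ("cam01", 33_000_000, b"c"), ("cam02", 34_500_000, b"d")]
print(len(synchronize(demo, "cam01")))   # -> 2 synchronized groups
```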
The present utility model may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present utility model.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or an in-groove raised structure having instructions stored thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present utility model may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, Python and the like, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present utility model are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present utility model are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the utility model. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present utility model. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the utility model has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the utility model is defined by the appended claims.
Claims (10)
1. A host system for the joint acquisition of gait motion data with multiple depth cameras, comprising a gait motion acquisition host and a camera array, wherein:
the camera array comprises a plurality of depth cameras, and each depth camera collects motion data of a target in its corresponding area at a preset viewing angle;
the gait motion acquisition host is provided with a plurality of data acquisition control mainboards, a network switch and a keyboard-video-mouse (KVM) controller; each data acquisition control mainboard is connected to and controls a corresponding depth camera, the data acquisition control mainboards are interconnected through the network switch, and the KVM controller is configured to connect a display and a single keyboard and mouse so as to control all the data acquisition control mainboards.
2. The system of claim 1, wherein the mainframe box of the gait motion acquisition host is provided with a first laminate structure, a second laminate structure and a third laminate structure in sequence from top to bottom, each laminate structure comprising one or more sub-layers; the first laminate structure houses the plurality of data acquisition control mainboards, the second laminate structure houses the network switch and a power adapter device, and the third laminate structure houses the keyboard-video-mouse controller.
3. The system of claim 2, wherein one side of the mainframe box is provided with a plurality of mute fans, and the other side is provided with a video output interface, a USB interface and a plurality of network interfaces.
4. The system of claim 2, wherein the two side covers of the main housing are provided with three rectangular arrays of holes.
5. The system of claim 2, wherein a power key, a USB data interface, and a control key are provided above the mainframe box.
6. The system of claim 2, wherein the first laminate structure comprises three or four sublayers, the second laminate structure comprises two sublayers, and the third laminate structure comprises one sublayer.
7. The system of claim 1, wherein the camera array comprises 6 to 8 depth cameras arranged on the same horizontal plane, suspended at a predetermined height above the ground, and the relative spacing, deployment position and inclination relative to the horizontal of each depth camera along the monitored path are preset according to the characteristics of the area to be monitored.
8. The system according to claim 7, wherein two depth cameras for lateral photographing are further provided at one side of the monitoring path, and the interval between the two depth cameras is preset according to the target active area.
9. The system of claim 1, wherein the camera array comprises 4 to 8 depth cameras arranged at the four corners of a rectangular area at a predetermined height above the ground; the lens of each depth camera faces the center of the rectangular area, and the distance between depth cameras arranged at adjacent corners is configured to be in the range of 3 meters to 4.4 meters.
10. The system of claim 2, wherein the two side covers of the main housing are made of plastic.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202323019433.5U CN221784279U (en) | 2023-11-08 | 2023-11-08 | A host system for joint acquisition of multi-depth cameras for gait motion data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN221784279U (en) | 2024-09-27 |
Family
ID=92837812
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202323019433.5U (active, granted as CN221784279U) | A host system for joint acquisition of multi-depth cameras for gait motion data | 2023-11-08 | 2023-11-08 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN221784279U (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | GR01 | Patent grant | |