IES85733Y1 - Method and system for detecting the movement of objects - Google Patents
Method and system for detecting the movement of objects
- Publication number
- IES85733Y1 IE2010/0715A IE20100715A
- Authority
- IE
- Ireland
- Prior art keywords
- camera
- zone
- objects
- people
- ground plane
- Prior art date
Abstract
ABSTRACT The invention provides a system and method for detecting the movement of 3D objects, in particular people, in a defined zone using a camera, comprising the steps of calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane; capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; and segmenting the 3D pixel data as volume features for the subsequent purpose of object identification and tracking through multiple captured frames. The invention overcomes the issues highlighted with traditional multi-camera systems in that no matching of points between cameras is required. This negates the effects of occlusion and the requirement of texture in the image. Also, planar surfaces with no texture can now be measured with certainty. This is particularly important due to the variety of surfaces present under a people counter, i.e. carpet, tiles etc.
Description
Title Method and System for Detecting the Movement of Objects Field of the Invention The present invention relates to a method and system for detecting the movement of objects using 3D images generated by a time of flight multi-array camera. In particular the invention relates to a system and method for the movement and identification of objects for automated people counting, queue detection, monitoring people entering and leaving a secure area and security surveillance. Background to the Invention There are many sensors in use for automatic object detection and classification. Optical sensors detect objects by interrupting a single point beam of visible, Ultra-Violet or Infra-Red light. More advanced single line of sight 'time of flight' lasers can be used to detect the presence of objects due to a change in distance measured. Mechanical sensors detect objects through their weight. Thermal sensors segment the presence of objects due to their differing heat characteristics from the ambient surroundings. Electromagnetic sensors can detect metals due to the presence of induced currents.
These sensors are usually installed in particular applications whereby the scene or area of interest is severely constrained. In a general sense these sensors are limited in their practical uses as they can only detect objects moving through a narrow constrained space.
Sensors such as microwave or ultrasonic signals detect the presence of objects due to change in reflected signal power. These sensors have their uses such as presence/absence detection, but are limited in their spatial resolution and clarity. These sensors are also prone to outside Radio Frequency interference.
Monocular camera vision based approaches have been developed based on the colour and geometric shape patterns of images acquired within their fields of view. Some monocular camera systems also focus on depth cues with the use of external structured light and texture analysis. The images produced by a camera system are by default 2D data. It is only by interpreting these images that 3D data can be inferred. However, a problem with these systems is that they suffer from 'prior knowledge' installations and thus in real world situations the inferred 3D data can easily be misrepresented.
Two or more camera systems can be combined to produce depth cues from multiple images of the same objects from varying perspectives by matching the corresponding points. This produces a disparity map which can be normalized to produce a range map.
Any objects that fall within this 3D space can be detected.
Although using two (stereo) or more cameras goes a long way to measuring 3D image space data directly rather than inferring the data, there are a number of fundamental problems with this method. It is for these reasons that this method needs to be installed in a constrained scene for it to be fully utilized and thus cannot be installed in the 'general' sense as a true 3D people counter sensor. This method of using multiple cameras to reconstruct a scene depends very much on matching corresponding points or pixels for each camera. This presents a number of difficulties; the points must be viewable and seen by each camera. If the point is occluded a 3D match cannot be made.
This has the effect of causing holes or areas of ambiguity within an image. The scene must contain enough texture to create valid matching points. If there is no texture, for example a planar surface like a tiled floor, no 3D data can be measured. If the scene is oversubscribed with texture, such as a scene of grass or highly repeating spatial patterns such as those on common clothes, the 3D data will be ambiguous due to the occurrence of mismatched corresponding points.
A paper entitled 'Improved Image Segmentation Using Photonic Mixer Devices', Walhoff et al, Proceedings of the Intl Conference on Image Processing 2007, pages 53-56, discloses a method of improving image segmentation and extending regular computer vision algorithms, and an image sensor that provides additional depth information. Another paper entitled 'TOF imaging in smart room environments towards improved people tracking', Guomundsson et al, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 2008, pages 1 to 6, discloses a time-of-flight camera in a smart room environment and how it can be used to improve results in segmenting people in a room from a background. US Patent Publication number US 2006/0239558, Rafii et al, discloses a method and system that analyses data acquired by image systems to more rapidly identify objects of interest in the acquired data. However, none of these documents highlight and track the movement of people effectively, and they suffer from the above mentioned problems, including the requirement to match points between cameras, occlusion effects and the requirement of texture in the image. Also, planar surfaces with no texture can be difficult to measure.
There is therefore a need to provide a camera based system and method for detecting 3D moving objects to overcome the above mentioned problems.
Summary of the Invention According to the present invention there is provided, as set out in the appended claims, a method for detecting the movement of 3D objects in a defined zone using a camera comprising the steps of: calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane and/or position of the camera; capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; characterised by the steps of: segmenting the 3D pixel data as volume features for the purpose of 3D object identification and tracking through multiple captured frames; and subsequent image filtering to eliminate background patterns from the 3D objects by calculating that the background patterns are not in the defined zone.
Advantages of the present invention are superior performance in highlighting and tracking the presence of people in the field of view. The advantages of the various embodiments of the invention with respect to heretofore known people counting and motion detection systems include superior shadow discrimination, background ground suppression and the ability to operate indoors and outdoors without ambient light interference.
Ideally the 3D pixel data is an actual measurement of distance of the object from the camera, to provide a full 3D map comprising spatial coordinates.
The invention uses a direct means of detecting objects in 3D space as opposed to indirect and inferred means. The invention has superior performance with respect to intensity based camera systems in that each 3D pixel is an actual measurement of distance of the object from the sensor. Thus a full 3D map including x, y spatial coordinates can be obtained. The invention overcomes the issues highlighted with traditional multi-camera systems in that no matching of points between cameras is required. This negates the effects of occlusion and the requirement of texture in the image. Also, planar surfaces with no texture can now be measured with certainty.
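Since each 3D pixel is a direct range measurement, recovering full spatial coordinates needs only a back-projection through the camera model. The sketch below uses a standard pinhole back-projection; the intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions, as the patent does not specify the camera optics.

```python
def range_to_xyz(depth, u, v, cx, cy, fx, fy):
    """Convert one range pixel at image position (u, v) with measured
    depth `depth` (metres) into x, y, z camera-frame coordinates.
    cx, cy: principal point; fx, fy: focal lengths in pixels.
    All intrinsics here are assumed example values, not patent data."""
    z = depth
    x = (u - cx) * z / fx  # lateral offset grows with range
    y = (v - cy) * z / fy
    return x, y, z
```

A pixel at the principal point has zero lateral offset, so it maps to a point directly along the optical axis at the measured range.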
In one embodiment said step of capturing further comprises the step of eliminating shadows from said defined zone.
In one embodiment the invention comprises the step of eliminating background patterns from the 3D objects.
In one embodiment the invention comprises the step of subsequent image filtering to eliminate background patterns from the 3D objects.
In one embodiment the invention comprises the step of image filtering to connect features in 3D space.
In one embodiment the invention comprises the further step of performing subsequent image analysis to filter range data to clusters.
In one embodiment the invention comprises the further step of filtering interest points into relative 3D planes.
In one embodiment the invention comprises the step of connecting 3D features through multiple frames.
In one embodiment the invention comprises the steps of extracting, thresholding and acquiring apex areas from said 3D features. In one embodiment the invention comprises the further step of eliminating ambient lighting effects.
Ideally said filtering step eliminates features outside of preselected 3D zones.
In one embodiment the invention comprises selecting 3D spaces of interest; and eliminating features outside said selected 3D spaces of interest.
In one embodiment the invention comprises performing different run-time algorithms on each of a plurality of said 3D spaces of interest.
Ideally a single detected object represents a person.
In one embodiment the invention comprises the step of processing features detected as objects to be counted as people. Ideally people are tracked and counted through multiple frames. Ideally people can be identified and tracked in zero lux ambient light conditions.
In another embodiment of the present invention there is provided a 3D camera vision apparatus for people identification, said apparatus comprising: a single multi-array time of flight camera; a processor for normalizing the data into 3D range maps; a processor for determining the presence of people from the 3D range maps; a trajectory processor for receiving frames from the 3D processor; and a processor for people detection, people counting, rate of flow of people, directional people counting and people queue analysis.
In one embodiment said objects having a close proximity to said ground plane relative to a threshold are filtered out as ground plane noise.
In one embodiment said trajectory processor determines an object's trajectory by tracking said people objects in multiple frames.
In one embodiment said time of flight image acquisition device comprises a monocular camera configured for acquiring a plurality of images.
According to a further embodiment of the invention there is provided a system for detecting the movement of 3D objects in a defined zone using a camera comprising: means for calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane; means for capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; and means for segmenting the 3D pixel data as volume features for the subsequent purpose of object identification and tracking through multiple captured frames.
There is also provided a computer program comprising program instructions for causing a computer program to carry out the above method which may be embodied on a record medium, carrier signal or read-only memory.
Brief Description of the Drawings The invention will be more clearly understood from the following description of an embodiment thereof, given by way of example only, with reference to the accompanying drawings, in which:- Fig. 1 illustrates the field of view of the camera, i.e. the surveillance area, according to one aspect of the invention; Fig. 2 illustrates objects in the surveillance area of Fig. 1; objects can be in the centre of the image or at the extreme corner of the X and Y axes within the surveillance area; Fig. 3 illustrates obtained images from the time of flight sensor showing 2D luminance values representing the area under surveillance and 3D data representing the height values relative to the camera; Fig. 4 illustrates obtaining three dimensional height points of the ground plane relative to the camera orientation and position; Fig. 5 illustrates segmented objects relative to the camera, according to one aspect of the invention; Fig. 6 illustrates the perspective distortion inherent in all point source 3D and 2D imaging systems; Fig. 7 illustrates another perspective distortion inherent in all point source 3D and 2D imaging systems; Fig. 8 illustrates 3D point clouds of connected and unconnected objects in a surveillance area; Fig. 9 illustrates a further view of the 3D point clouds of connected and unconnected objects shown in Fig. 8; Fig. 10 illustrates the decision point or virtual beam on which a cross over point is defined, according to one aspect of the invention; Fig. 11 illustrates the decision point or virtual beam on which a cross over point is defined, according to another aspect of the invention; Fig. 12 illustrates a virtual beam with the two zones A and B; Figs. 13 and 14 illustrate an object in Zone B at frame number t+1 and at frame number t+2 in Zone A of Fig. 12; and Fig. 15 illustrates a flow chart illustrating operation of the invention.
Detailed Description of the Drawings A multi-array time of flight camera based vision system is calibrated to provide heights above a planar surface for any point in the field of view, as shown in Figure 1, indicated generally by the reference numeral 1. Figure 1 illustrates the field of view of the camera, i.e. the surveillance area, by the shaded area 2 which represents a ground plane, i.e. the floor. A camera 3, for example a time of flight camera, can be placed directly overhead or at an angle relative to the ground plane 2. The time of flight camera 3 captures both 2D luminance values and 3D range data. The 3D range data is colour coded to represent the differences in height within the image scene from the sensor.
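The shutter-timed round trip underlying time-of-flight ranging reduces to one relation: the measured range is half the round-trip path of the light pulse. A minimal sketch (the patent leaves the timing electronics unspecified):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def roundtrip_time_to_distance(t_seconds):
    """A light pulse travels to the scene and back, so the one-way
    distance to the surface is half the round-trip path length."""
    return SPEED_OF_LIGHT * t_seconds / 2.0
```

For example, a 20 nanosecond round trip corresponds to a surface roughly 3 metres from the sensor, which is the scale of an overhead people-counter installation.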
Fig. 2 illustrates a number of objects 4 within the scene above the ground plane 2.
Objects 4 can be in the centre of the image or to the extreme corner of the X and Y axis within the surveillance area. When an object 4, for example a person, enters the camera's field of view, it generates interest points called “features,” the heights of which are measured relative to the camera 3. These points are clustered in 3D space to provide volumes of interest. These volumes of interest clusters are transformed into single and multiple people objects that are identified and tracked in multiple frames to provide “trajectories.” These people objects and associated “trajectories” are then used for automated people counting, queue detection and security surveillance, the operation of which is discussed in more detail below. Embodiments of the present invention can use a factory calibrated time of flight camera system that provides 3D coordinates of points in the field of view. A short pulse of light is sent to the scene to provide depth information. A very fast shutter is used in front of the semiconductor imager to time the roundtrip distance to the different portions of the scene.
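The clustering of interest points into "volumes of interest" can be sketched as a distance-threshold grouping in 3D; this breadth-first chaining is an illustrative stand-in, since the patent does not name a specific clustering algorithm, and `max_gap` is an assumed tuning parameter.

```python
from collections import deque

def cluster_points(points, max_gap):
    """Group 3D interest points into clusters: two points share a
    cluster if a chain of points connects them, each link shorter
    than max_gap. Returns a list of clusters (lists of points)."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # gather all still-unvisited neighbours within max_gap
            near = [j for j in unvisited
                    if dist2(points[i], points[j]) <= max_gap ** 2]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append([points[k] for k in cluster])
    return clusters
```

Two head-height points 10 cm apart fall into one cluster, while a point several metres away forms its own, matching the idea that each person yields one compact volume of interest.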
Fig. 3 illustrates the obtained image from the time of flight sensor showing 2D luminance values representing the area under surveillance and 3D data representing the height values relative to the sensor. It will be noted the object in the centre 4a has minimal perspective distortion while the object 4b at the extreme X and Y coordinates suffers from the most extreme perspective distortion. This is caused by image warping at the sensor due to the sensor being a single fixed point in space.
Fig. 4 illustrates obtaining the 3D height points of the ground plane relative to the camera's orientation and position. In one method, Method A, these 3D planar cloud points can be filtered out to provide only above-ground 3D points. This eliminates the effects of various flooring materials. Fig. 5 illustrates segmented objects relative to the camera. In another method, Method B, these objects can be segmented by increasing the distance value in 3D from the camera's 3D generated point cloud. In this case only objects within a certain distance range of the camera are filtered and shown. At installation time the plane of the ground is calibrated relative to the camera. Only those points that have some height relative to the ground plane are of interest. Therefore, unwanted or unknown objects and highlights can be filtered out due to their lack of height relative to the ground plane. The points of interest are then clustered directly in 3D space.
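Method A's ground-plane filtering can be sketched as a signed-height test against the calibrated plane. The plane representation (a point on the plane plus a unit normal) and the height threshold are assumptions chosen for illustration; the patent only states that points lacking height above the plane are discarded.

```python
def above_ground(points, plane_point, plane_normal, min_height):
    """Keep only 3D points whose signed height above the calibrated
    ground plane exceeds min_height; floor-level returns (mats, tiles,
    highlights) are discarded as ground-plane noise.
    plane_normal is assumed to be unit length."""
    px, py, pz = plane_point
    nx, ny, nz = plane_normal
    kept = []
    for x, y, z in points:
        # signed distance from the plane along its normal
        height = (x - px) * nx + (y - py) * ny + (z - pz) * nz
        if height > min_height:
            kept.append((x, y, z))
    return kept
```

With a 5 cm threshold, a point 1 cm above the floor is rejected while a head point 1.7 m up survives, so carpet texture and mats never reach the clustering stage.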
Fig. 6 illustrates the perspective distortion inherent in all point source 3D and 2D imaging systems, illustrated by the reference numeral 10. Within a unique software routine, the invention normalises against this distortion and obtains a person's scale by a user drawing a box 11 around a detected object 4. Within software, the position and size of this box are known within the image, and the appropriate weighting factors to normalise the perspective distortion are applied across both the X and Y axes, as shown in the X and Y diagrams.
In another embodiment to Fig. 6, Fig. 7 illustrates the perspective distortion inherent in all point source 3D and 2D imaging systems. Within software, the invention normalises against this distortion to obtain a person's scale by a user drawing a line 12 across an object 4. Within software, the position and width of this line are known within the image, and the appropriate weighting factors to normalise the perspective distortion are applied across both the X and Y axes, as shown in the X and Y diagrams.
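One simple model behind such weighting factors is that an object's apparent pixel size scales inversely with its range from a point-source camera, so the calibration box or line drawn at a known range predicts the size the same object should have elsewhere. This inverse-range model is an assumption standing in for the patent's unspecified weighting factors.

```python
def expected_pixel_size(ref_pixels, ref_range, target_range):
    """Predict the apparent size (in pixels) of an object of fixed
    physical size at target_range, given that the same object spans
    ref_pixels when observed at ref_range. Assumes a simple
    inverse-range (pinhole) scaling model."""
    return ref_pixels * ref_range / target_range
```

A calibration line 100 pixels wide across a person at 2 m predicts a 50 pixel width for the same person at 4 m, which is the kind of normalisation needed before clusters at different image positions can be compared.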
Fig. 8 illustrates 3D point clouds of connected and unconnected blob objects 21. A box 22 of similar dimensions to that acquired at calibration time is superimposed in software to a best fit model enclosing the blobs. These blobs are subsequently categorised as individual objects or people. The height and scale of each object can be easily extracted using the process of the present invention. Similar to Fig. 8, Fig. 9 illustrates 3D point clouds of connected and unconnected objects. These objects may share the same height profile or may have different height apexes. Through software a count is made highlighting the number of objects or people that are present within the surveillance area. Each separate cluster can be considered an object and is tracked from frame to frame. Therefore, at each frame selected information is available including: the number of objects; their positions in 3D space (centroid); and the instantaneous motion vector (magnitude and direction). Using this raw data, people can be identified, counted and tracked extremely accurately.
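The per-frame cluster statistics named above, the centroid and the instantaneous motion vector, are straightforward to compute; a minimal sketch:

```python
def centroid(points):
    """Mean position of a cluster's 3D points."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

def motion_vector(prev_centroid, curr_centroid):
    """Instantaneous motion between consecutive frames: the
    displacement components and their magnitude."""
    d = tuple(c - p for p, c in zip(prev_centroid, curr_centroid))
    magnitude = sum(x * x for x in d) ** 0.5
    return d, magnitude
```

The magnitude gives the speed per frame and the components give the direction, which is exactly the raw data the beam-crossing logic below in the description consumes.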
Fig. 10 illustrates a decision point or virtual beam 30 on which a cross over point is defined to define a boundary between two zones. If an object is tracked through a number of frames and crosses this point an appropriate count is incremented and a direction flag is assigned. The beam 30 can be moved on the screen through software and can be shortened or lengthened as desired by simple control instructions. Therefore, and advantageously, the defined zone can be adapted easily by simple program instructions without the need for physically re-positioning the camera. In a further embodiment to Fig. 10, Fig. 11 illustrates the decision point or virtual beam 40 on which a cross over point is defined. If an object is tracked through a number of frames and crosses this point an appropriate count is incremented and a direction flag is assigned. As in Fig. 10, this boundary line can be translated, rotated and a number of curves added to mimic real world scenarios and requirements.
In operation, Fig. 12 illustrates the virtual beam with two zones labelled, Zone A and Zone B. These zones exist on either side of a virtual beam 50. The system has a hysteresis in that an object must pass from Zone A to Zone B or vice versa to be counted as a valid single count. Fig. 13 illustrates an object 51 in Zone B at frame number t+1. Fig. 14 illustrates the object in Zone A at frame number t+2. This object 51 is tracked via a velocity vector from frame t+1 to frame t+2. Since the object 51 has crossed the boundary line a single count is activated. The present invention provides an easy to use people counter and detection camera system based on the time of flight camera principle to populate a semiconductor imager. People within its field of view are identified, counted and tracked through its area of interest.
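The hysteresis rule, where a count fires only when a tracked object settles in the opposite zone from the one it last occupied, can be sketched as a small state machine over per-frame zone labels. The 'A'/'B' labels follow Fig. 12; skipping frames where the object straddles the beam is an added assumption about how jitter is handled.

```python
def count_crossings(zone_sequence):
    """Count valid beam crossings from a per-frame sequence of zone
    labels ('A' or 'B'). A crossing is counted only when the object's
    settled zone changes, so jitter on the beam itself cannot
    double-count; any other label is treated as 'on the beam'."""
    count, last_zone = 0, None
    for zone in zone_sequence:
        if zone not in ("A", "B"):
            continue  # object straddling the beam: no state change
        if last_zone is not None and zone != last_zone:
            count += 1
        last_zone = zone
    return count
```

An object seen in B, then A, produces one count; one that stays in A produces none; a genuine there-and-back trip produces two counts, one per direction.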
Fig. 15 illustrates a flow chart illustrating operation of the invention implemented in software, with reference to the above description, illustrated generally by the reference numeral 60. Advantages of the present invention are superior performance in highlighting and tracking the presence of people in the field of view. The advantages of the various embodiments of the invention with respect to heretofore known people counting and motion detection systems include superior shadow discrimination, multiple people identification, background ground suppression and the ability to operate indoors and outdoors without ambient light interference.
It will be appreciated that the invention uses a direct means of detecting objects in 3D space as opposed to indirect and inferred means. The invention has superior performance with respect to intensity based camera systems in that each 3D pixel is an actual measurement of distance of the object from the sensor. Thus a full 3D map including x, y spatial coordinates can be obtained.
The invention overcomes the issues highlighted with traditional multi-camera systems in that no matching of points between cameras is required. This negates the effects of occlusion and the requirement of texture in the image. Also, planar surfaces with no texture can now be measured with certainty. This is particularly important due to the variety of surfaces present under a people counter, i.e. carpet, tiles etc. Furthermore, the appearance of these surfaces changes with the passage of time. Problems caused by shadows, movement of mats etc. can now be ignored. Highlights in the prior art are thus eliminated in the various embodiments of the present invention because detection of an object's motion in the invention is based on physical coordinates rather than on the appearance of the object.
The present invention also features easy installation and set up without requiring initial training procedures. The invention upon initial set up, self calibrates to acquire the ground plane. This involves only a one-time installation setup and requires no further training of any sort. Another advantage of the system is that stationary or slow moving objects do not become invisible as they would to a motion detection system.
It will be appreciated the present invention also provides a very easy to use graphic user interface for calibrating the scale of the system to the ground height.
It will be further appreciated that the system can be placed directly above an area of interest, i.e. placement is at a normal to the planar surface of the ground or can be placed at an angle to the scene of interest.
The present invention also features a flexible masking capability. The masking capability allows a user during set up to graphically specify zones to be masked out in either 2D or in 3D. This feature can be used, for example, to account for either non- custom doorways or stationary background scenery in the field of view.
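The masking capability can be sketched as rejecting pixels that fall inside user-drawn zones; axis-aligned rectangles are an illustrative simplification of the graphical 2D/3D masks described above.

```python
def apply_masks(pixels, masks):
    """Drop any (u, v) pixel that falls inside one of the user-drawn
    rectangular mask zones, given as (umin, vmin, umax, vmax) tuples,
    e.g. to exclude a doorway or stationary background scenery."""
    def masked(u, v):
        return any(umin <= u <= umax and vmin <= v <= vmax
                   for umin, vmin, umax, vmax in masks)
    return [(u, v) for u, v in pixels if not masked(u, v)]
```

Masked regions never produce interest points, so a swinging door inside a mask cannot generate spurious clusters.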
In one embodiment of the invention the system can be applied to areas which require access control to a secure area or clean room environment. The accuracy of the method in counting objects overcomes the problem of tailgating, which exists with prior art systems when two or more people are classified as one person and counted within a specified time period. Therefore the system highlights the presence of two or more counted people as separate objects with extreme accuracy within the sensor's field of view or surveillance area, overcoming the problem of tailgating.
In another application of the invention, the system and method of the present invention can be used to calculate the human occupancy level of a room, a building or an enclosed zone by counting the number of people entering and leaving an entrance or entrances.
This information can be used to effectively control the atmospheric environment, for example air conditioning systems and other control systems.
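Occupancy from directional entrance counts reduces to a running sum of the per-entrance in/out events; flooring the count at zero to absorb missed detections is an added assumption, not a feature claimed in the patent.

```python
def occupancy(events, initial=0):
    """Running occupancy of a zone from a chronological stream of
    directional counts: 'in' increments, anything else ('out')
    decrements. The count is floored at zero so a missed entry
    detection cannot drive the estimate negative."""
    count = initial
    for direction in events:
        count += 1 if direction == "in" else -1
        count = max(count, 0)
    return count
```

Feeding this figure to an air conditioning controller or fire-regulation panel is then a matter of polling the current value per zone.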
In another application the invention can be used in a building to conform with fire regulations. The system and method of the invention can be used to calculate the number of people in a selected zone or zones of a building. The people count data can be fed back to a central control panel either located in the front entrance of the building or at a remote location. The central control panel can display on a visual display unit, for example a digital display, the number of people in each zone or the entire building.
Therefore, the invention provides an effective system to accurately count the number of people in a building at any time, by either giving an overall count for an entire building or on a zone by zone basis.
The present invention also provides for elimination of excessive blind spots. A non- stationary background like the motion of a door opening can be easily masked off.
Accordingly, the present invention is easier to use, install and more robust than heretofore known people counting, people queue detection and people tracking systems.
The embodiments in the invention described with reference to the drawings comprise a computer apparatus and/or processes performed in a computer apparatus. However, the invention also extends to computer programs, particularly computer programs stored on or in a carrier adapted to bring the invention into practice. The program may be in the form of source code, object code, or a code intermediate source and object code, such as in partially compiled form or in any other form suitable for use in the implementation of the method according to the invention. The carrier may comprise a storage medium such as ROM, e.g. CD ROM, or magnetic recording medium, e.g. a floppy disk or hard disk.
The carrier may be an electrical or optical signal which may be transmitted via an electrical or an optical cable or by radio or other means.
In the specification the terms "comprise, comprises, comprised and comprising" or any variation thereof and the terms "include, includes, included and including" or any variation thereof are considered to be totally interchangeable and they should all be afforded the widest possible interpretation and vice versa.
The invention is not limited to the embodiments hereinbefore described but may be varied in both construction and detail.
Claims (5)
Claims 1. A method for detecting the movement of 3D objects in a defined zone using a camera comprising the steps of: calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane and/or position of the camera; capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; characterised by the steps of: segmenting the 3D pixel data as volume features for the purpose of 3D object identification and tracking through multiple captured frames; and subsequent image filtering to eliminate background patterns from the 3D objects by calculating that the background patterns are not in the defined zone.
2. The method of claim 1 wherein said 3D pixel data is an actual measurement of distance of the object from the camera, to provide a full 3D map comprising spatial coordinates.
3. The method of claims 1 or 2 wherein said step of capturing further comprises the step of eliminating shadows from said defined zone.
4. A system for detecting the movement of 3D objects in a defined zone using a camera comprising: means for calibrating the camera to define said zone with respect to a ground plane, such that the zone is set at a desired height relative to the ground plane and/or position of the camera; means for capturing a multi-array of 3D range pixel data from a single camera in one single frame in said defined zone; characterised by: means for segmenting the 3D pixel data as volume features for the purpose of 3D object identification and tracking through multiple captured frames; and means for subsequent image filtering to eliminate background patterns from the 3D objects by calculating that the background patterns are not in the defined zone.
5. A method for detecting the movement of 3D objects in a defined zone using a camera as substantially hereinbefore described with reference to the accompanying description and/or drawings.
Publications (2)
| Publication Number | Publication Date |
|---|---|
| IE20100715U1 IE20100715U1 (en) | 2011-03-30 |
| IES85733Y1 true IES85733Y1 (en) | 2011-03-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2011054971A2 (en) | Method and system for detecting the movement of objects | |
| US7397929B2 (en) | Method and apparatus for monitoring a passageway using 3D images | |
| US9646212B2 (en) | Methods, devices and systems for detecting objects in a video | |
| Diraco et al. | An active vision system for fall detection and posture recognition in elderly healthcare | |
| US7929017B2 (en) | Method and apparatus for stereo, multi-camera tracking and RF and video track fusion | |
| US7400744B2 (en) | Stereo door sensor | |
| Diraco et al. | People occupancy detection and profiling with 3D depth sensors for building energy management | |
| KR101608889B1 (en) | Monitoring system and method for queue | |
| US8452050B2 (en) | System and method for counting people near external windowed doors | |
| CN111753609A (en) | Target identification method and device and camera | |
| US9129181B1 (en) | Object detection, location, and/or tracking with camera and lighting system | |
| KR20160035121A (en) | Method and Apparatus for Counting Entity by Using Location Information Extracted from Depth Image | |
| Snidaro et al. | Automatic camera selection and fusion for outdoor surveillance under changing weather conditions | |
| US11734834B2 (en) | Systems and methods for detecting movement of at least one non-line-of-sight object | |
| Sun et al. | People tracking in an environment with multiple depth cameras: A skeleton-based pairwise trajectory matching scheme | |
| WO2018087545A1 (en) | Object location technique | |
| CN106898014A (en) | A kind of intrusion detection method based on depth camera | |
| Zhang et al. | Fast crowd density estimation in surveillance videos without training | |
| Hadi et al. | Fusion of thermal and depth images for occlusion handling for human detection from mobile robot | |
| Hung et al. | Real-time counting people in crowded areas by using local empirical templates and density ratios | |
| IES85733Y1 (en) | Method and system for detecting the movement of objects | |
| IE20100715U1 (en) | Method and system for detecting the movement of objects | |
| IES20100715A2 (en) | Method and system for detecting the movement of objects | |
| Jędrasiak et al. | The comparison of capabilities of low light camera, thermal imaging camera and depth map camera for night time surveillance applications | |
| Miljanovic et al. | Detection of windows in facades using image processing algorithms |