
CN111355885B - Tracking camera system and method - Google Patents

Tracking camera system and method

Info

Publication number
CN111355885B
Authority
CN
China
Prior art keywords
time
user
computer system
data packet
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911345241.9A
Other languages
Chinese (zh)
Other versions
CN111355885A (en)
Inventor
苏诗韵
黄耀明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
K11 Group Ltd
Original Assignee
K11 Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from HK18116547.4A (external priority: HK1263241A1)
Application filed by K11 Group Ltd
Publication of CN111355885A
Application granted
Publication of CN111355885B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/0008General problems related to the reading of electronic memory record carriers, independent of its reading method, e.g. power transfer

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract


The invention relates to a tracking camera system comprising at least one identification device, at least one positioning device, at least one shooting device, and a computer system, enabling the system to accurately output the video or photo of the user associated with the identification device.


Description

Tracking camera system and method
Technical Field
The present invention relates to an imaging system.
Background
In a large playground or theme park, children play happily while parents constantly hold up a phone or camera to record them. This has become the norm and seriously detracts from the play experience: parents are distracted, children receive less attention, and safety hazards may even arise. Moreover, handheld shots of moving subjects are often poor — the front of the child cannot be captured, the angle is too oblique, the subject is too small in the frame, or the image is blurred. It would be desirable to provide a method and system that changes this situation and offers a better play experience.
Disclosure of Invention
Accordingly, an embodiment of the present invention provides a tracking camera method, comprising: using a positioning device to receive a signal from an identification device carried by a user, packaging the received signal strength, the receiving time and the unique identification code of the identification device into a data packet, and sending the data packet to a computer system to form a data packet sequence; using the computer system to mark data packets whose signal strength is above a predetermined threshold, wherein the receiving time of the first marked data packet is a first time and the receiving time of the last marked data packet is a second time; after the computer system determines that a marked data packet exists in the data packet sequence sent by the positioning device, selecting that positioning device as a central device, designating a shooting device that captures images within a predetermined distance, and associating the shooting device with the unique identification code of the identification device; and outputting, by the computer system, the images of the shooting device between the first time and the second time.
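The packet-marking step described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Packet` structure and function names are assumptions:

```python
from dataclasses import dataclass

# Hypothetical structures for illustration only; the patent does not
# specify data formats. Packets are assumed sorted by receiving time.
@dataclass
class Packet:
    device_id: str    # unique identification code of the identification device
    strength: float   # received signal strength
    recv_time: float  # receiving time, in seconds

def mark_packets(packets, threshold):
    """Mark packets whose signal strength exceeds the predetermined
    threshold, and return the first and second times: the receiving
    times of the first and last marked packets ((None, None) if no
    packet qualifies)."""
    marked = [p for p in packets if p.strength > threshold]
    if not marked:
        return None, None
    return marked[0].recv_time, marked[-1].recv_time
```

The first and second times then delimit the window of footage the computer system outputs for that user.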
Drawings
A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, in which like components are given the same reference numerals throughout the several views. In some cases, a sub-label follows a reference numeral and a hyphen to denote one of many similar components. A reference numeral used without its sub-label refers to the feature generally rather than to any particular instance.
FIG. 1 is a schematic diagram of one embodiment of the present invention.
Fig. 2 is a diagram illustrating the packet sequence marking result according to the embodiment of fig. 1.
Detailed Description
The inventors provide a novel tracking photography system and method that can accurately identify a user's activity at a certain location at a certain time based on an identification device carried by the user, and select the corresponding camera and its captured images based on the user's activity time and location; the captured images can further be screened or clipped based on the user's facial information, yielding images in which the user is the main subject. As used herein, the user being the main subject of an image may be understood as, but is not limited to, the user occupying a larger share of the frame, being positioned closer to the center, being shown mostly from the front or side, and so on. Images referred to herein should be understood to include continuous or discontinuous motion pictures as well as still pictures; the cameras described here can capture continuous or discontinuous motion pictures or discrete still pictures.
Embodiments are described in more detail below with reference to the following examples, which are provided herein by way of illustration only and are not intended to be limiting.
In one embodiment of the invention, a tracking camera method is provided, which comprises: using a positioning device to receive a signal from an identification device carried by a user, packaging the received signal strength, the receiving time and the unique identification code of the identification device into a data packet, and sending the data packet to a computer system to form a data packet sequence; using the computer system to mark data packets whose signal strength is above a predetermined threshold, wherein the receiving time of the first marked data packet is a first time and the receiving time of the last marked data packet is a second time; after the computer system determines that the data packet sequence sent by the positioning device contains a marked data packet, selecting that positioning device as a central device, designating a shooting device capable of continuously capturing images within a predetermined distance, and associating the shooting device with the unique identification code of the identification device; and outputting, by the computer system, the images of the shooting device between the first time and the second time.
In one embodiment, each unique identification code is further associated with facial information of a user; the computer system outputs the images of the camera between the first time and the second time, compares them with the facial information of the user, and, after identifying a matched user, screens or clips the images containing the matched user for output.
In one embodiment, the location device may be any device that determines the location of the user, such as, but not limited to, one or more of a long range RFID reader, a wireless transceiver that receives wireless signals, a wireless transceiver that receives Bluetooth signals; the corresponding identification devices are a long-distance RFID label, a wireless mobile device and a Bluetooth device respectively.
In one embodiment, when more than one locating device receives the marked data packet, the locating device with the strongest average received signal in the first predetermined time is selected as the center device.
In one embodiment, when more than one camera is designated, the computer system outputs the footage of all cameras between the first time and the second time, compares it with the user's facial information, and selects for output the clip containing the greatest face time and/or center-position time of the matched user. In one embodiment, the face time is the time during which the user's face is frontal or within a 45° yaw angle range; the center-position time is the time during which the user is located within the central 25%, 50%, or 70% of the in-lens picture.
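A minimal sketch of the clip-selection rule described above. Each per-frame observation is assumed to carry `frontal` (face within the 45° yaw range) and `center_offset` (the user's distance from the frame center as a fraction of the frame); these names are illustrative and not from the patent:

```python
# Hypothetical scoring sketch: upstream face analysis is assumed to have
# produced one observation dict per frame of a clip.
def clip_score(frames, center_fraction=0.5):
    """Count frames of face time plus frames of center-position time
    (user inside the central `center_fraction` of the picture)."""
    face_time = sum(1 for f in frames if f["frontal"])
    center_time = sum(1 for f in frames
                      if f["center_offset"] <= center_fraction / 2)
    return face_time + center_time

def select_best_clip(clips):
    """Return the clip whose frames accumulate the highest combined
    face time and center-position time for the matched user."""
    return max(clips, key=lambda c: clip_score(c["frames"]))
```

A clip dominated by frontal, centered views of the matched user therefore wins over a longer clip that shows the user off-center or from behind.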
In one embodiment, the identification device is in the form of a smart watch, a bracelet, a badge, or a pendant to be carried by the user.
In one embodiment, if the time difference between the first time and the second time is less than a second predetermined time, the footage for that period is not output.
In one embodiment, the shooting device comprises two 180° lenses mounted back to back and can output 360° images; when screening or clipping the images of a matched user, the computer system selects for output the viewing angle that places the matched user at the center of the image.
In another aspect of the present invention, a tracking camera system is provided, comprising: at least one identification device carried by a user, each identification device having a unique identification code; at least one positioning device that continuously receives signals from the identification device, packages the received signal strength, the receiving time and the unique identification code of the identification device into data packets, and sends the resulting data packet sequence to a computer system; the computer system, which marks data packets whose signal strength is above a predetermined threshold, the receiving time of the first marked data packet being a first time and that of the last marked data packet being a second time; and at least one shooting device capable of continuously capturing images. After the computer system determines that a marked data packet exists in the data packet sequence sent by the positioning device, that positioning device is selected as a central device, and the shooting device within a predetermined distance is associated with the unique identification code of the identification device; the computer system outputs the images of the shooting device between the first time and the second time.
In one embodiment, the computer system further stores facial information of the user associated with each unique identification code; the computer system outputs the images of the shooting device between the first time and the second time, compares them with the facial information, and, after identifying a matched user, screens or clips the images containing the matched user for output.
In one embodiment, the positioning device is one or more of a long-distance RFID reader, a wireless transceiver for receiving wireless signals, and a wireless transceiver for receiving Bluetooth signals; the corresponding identification devices are a long-distance RFID label, a wireless mobile device and a Bluetooth device respectively.
In one embodiment, when more than one locating device receives the marked data packet, the locating device with the strongest average received signal in the first predetermined time is selected as the center device.
In one embodiment, when more than one camera is designated, the computer system outputs the footage of all cameras between the first time and the second time, compares it with the user's facial information, and selects for output the clip containing the greatest face time and/or center-position time of the matched user. In one embodiment, the face time is the time during which the user's face is frontal or within a 45° yaw angle range; the center-position time is the time during which the user is located within the central 25%, 50%, or 70% of the in-lens picture.
In one embodiment, the identification device is in the form of a smart watch, a bracelet, a badge, or a pendant to be carried by the user.
In one embodiment, if the time difference between the first time and the second time is less than a second predetermined time, the footage for that period is not output.
In one embodiment, the shooting device comprises two 180° lenses mounted back to back and can output 360° images; when screening or clipping the images of a matched user, the computer system selects for output the viewing angle that places the matched user at the center of the image.
In one embodiment, a tracking camera method is provided, which comprises: using a first positioning device to receive a signal from an identification device carried by a user and/or facial information of the user, the first positioning device being associated with a designated camera; using the computer system to mark a first time and a second time of the signal from the first positioning device; after a predetermined time has passed following the second time of the first positioning device, the designated camera enters a shooting state; using a second positioning device to receive a signal from the identification device carried by the user and/or facial information of the user, and using the computer system to mark a first time and a second time of the signal from the second positioning device; and outputting, by the computer system, the images of the camera between the first time and the second time of the second positioning device.
In one embodiment, the tracking shooting method comprises a plurality of continuously shooting cameras in at least one designated area, each being a tracking portrait camera equipped with a multi-angle pan-tilt head; the cameras are mounted in a fixed, semi-fixed or non-fixed manner to follow users in the shooting area and upload the captured images to the computer system, which screens or clips the images according to the users' facial information. In one embodiment, the system further comprises positioning devices located at the entrances of the designated areas for detecting the identification device associated with a user, confirming the first time and second time during which the user is in an area, and associating the user with the cameras of that area; when the computer system outputs images, only the images uploaded by the cameras of that designated area between the first time and the second time are processed.
Example 1
In one embodiment, the user carries the identification device. It can be a bracelet worn on the wrist, a badge pinned to the clothes, or a pendant worn around the neck or hung on a bag. One or more of a long-range RFID tag, a wireless mobile device and a Bluetooth device can be built into the identification device. Correspondingly, the positioning device is one or more of a long-range RFID reader, a wireless transceiver receiving wireless signals, and a wireless transceiver receiving Bluetooth signals.
In one embodiment, the positioning device may take the form of a decorative post placed near the center of the playground, mounted on the ground or ceiling, or installed at the entrance of the playground or of an individual ride. The positioning device may be built on common RFID, wireless or Bluetooth positioning technology. In one embodiment, the positioning technology is an indoor positioning technology suited to indoor playgrounds.
Example 2
A tracking camera system 100 according to an embodiment of the invention is understood with reference to FIG. 1. The user 103 picks up the identification device 105, and facial information obtained by taking a photo is associated with the unique identification code of the identification device 105; the positioning devices 101a, 101b, and 101c in the playground simultaneously detect the signal of the identification device 105 worn by the user. The positioning devices 101a, 101b, and 101c package the signal strength of the detected signal, the receiving time, and the unique identification code of the identification device into data packets and send them to a computer system (not shown), forming data packet sequences. The detection of the identification device by a positioning device may be continuous or performed at intervals, for example every 5, 10 or 30 seconds. The computer system marks packets with signal strength above a predetermined threshold as T and packets below the threshold as F, and generates the table 200 shown in FIG. 2 based on the packet reception time 201 of each positioning device 202. In the series of data packets, the first marked packet has a first receiving time of 10:00:30 and the last marked packet has a second receiving time of 10:12:30. More than one positioning device 202 has packets marked T, i.e. signals above the predetermined strength; the computer system therefore selects the positioning device with the strongest average received signal within a first predetermined time as the central device. In this embodiment, the average received signal of positioning device 101a is the strongest, so it is selected as the central device. The first predetermined time may be 15 seconds to 2 minutes, preferably 30 seconds.
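The center-device selection in this example can be sketched as follows. This is an illustrative sketch only: marked packets are assumed to arrive as `(device_id, strength, recv_time)` tuples, an interface the patent does not define:

```python
from collections import defaultdict

def select_center_device(marked_packets, window_start, window_len=30.0):
    """Among positioning devices with marked (T) packets, pick the one
    with the strongest average received signal inside the first
    predetermined time window (30 s in the example)."""
    totals = defaultdict(lambda: [0.0, 0])  # device_id -> [sum, count]
    for dev_id, strength, t in marked_packets:
        if window_start <= t < window_start + window_len:
            totals[dev_id][0] += strength
            totals[dev_id][1] += 1
    if not totals:
        return None  # no device received a marked packet in the window
    return max(totals, key=lambda d: totals[d][0] / totals[d][1])
```

Averaging over a short window rather than taking a single strongest packet makes the selection robust to momentary signal spikes at a more distant positioning device.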
Cameras within a predetermined distance from the central device are associated with the unique identification code of the identification device. The computer system outputs the images of these cameras between the first time and the second time. In one embodiment, if the time difference between the first time and the second time is less than a second predetermined time, such as 1 minute, 30 seconds, 15 seconds, 5 seconds, or 2 seconds, the images are not output.
In one embodiment, only the camera 102b closest to the central device is designated. The computer system compares the images captured by camera 102b between the first time and the second time with the facial information of the user and, after identifying a matched user, screens or clips the images containing the matched user for output.
In one embodiment, all cameras 102a, 102b, and 102c within 2–15 m of the central device are designated. The computer system outputs the footage of all cameras between the first time and the second time, compares it with the user's facial information, and selects for output the clip containing the greatest face time of the matched user. Thus, although camera 102a is not the closest, if the user's face in its footage is frontal, or within the 45° yaw angle range, for the longest time, the footage of camera 102a is selected for output.
In another embodiment, the clip with the greatest center-position time may also be selected, the center-position time being the time during which the user is located within the central 25%, 50%, or 70% of the in-lens picture. In one embodiment, the shooting device comprises two 180° lenses mounted back to back and can output 360° images; when screening or clipping the images of a matched user, the computer system selects for output the viewing angle that places the matched user at the center of the image.
In one embodiment, the positioning in the tracking camera method can also use a positioning device based on face recognition. Each particular face has a corresponding unique identification code. The times at which the positioning device detects the face, together with the unique identification code corresponding to the face, are packaged into data packets and sent to the computer system to form a data packet sequence. In each sequence, the receiving time of the first detected packet is the first time and that of the last detected packet is the second time; the positioning device is selected as the central device, a camera capable of continuously capturing images within a predetermined distance is designated, and the camera is associated with the unique identification code of the face. The computer system outputs the images of the camera between the first time and the second time. In one embodiment, when the sharpness of the face or its share of the image is below a predetermined value, face recognition is regarded as unsuccessful and the computer system does not output the related images.
In one embodiment, one or several cameras may be explicitly designated for the central device instead of being selected by distance. For example, but not limited to, a camera at the exit of a large slide may be designated for a positioning device installed at the slide's entrance, or a camera shooting a specific location may be designated for a positioning device installed at the entrance of a roller coaster.
In one embodiment, the invention is practiced using a combination of multiple positioning devices. A first positioning device is provided at the entrance of an amusement ride having a single entrance, and it is equipped with both face recognition equipment and identification device detection equipment. The times at which the positioning device detects the face and/or the identification device, together with the corresponding unique identification code, are packaged into data packets and sent to the computer system to form a data packet sequence. This positioning device is selected as the central device and has a specific camera designated for it. In the packet sequence of the central device, the receiving time of the first detected packet is a first time and that of the last detected packet is a second time. If the time difference between the first time and the second time is less than a third predetermined time, the group of data packets is discarded; if it is greater, timing continues, and after a fourth predetermined time following the second time, the designated specific camera enters the shooting state.
One or more second positioning devices, likewise equipped with face recognition equipment and identification device detection equipment, are arranged near the designated specific camera. A second positioning device associates the times of the detected face and/or identification device with the corresponding unique identification code, and the photos or images captured by the specific camera are sent to the computer system; the receiving time of the first detected packet is the first time and that of the last detected packet is the second time. The computer system outputs the dynamic or static images, such as videos or photos, of the specific camera between the first time and the second time. In one embodiment, the designated specific camera is located at the exit or at a specific location of the attraction. In one embodiment, the third predetermined time is less than 5 seconds, less than 20 seconds or less than 1 minute, so as to avoid misjudgment when a staff member or an unrelated person walks nearby. In one embodiment, the fourth predetermined time corresponds to the time a user naturally takes to travel from the attraction entrance into the capture range of the designated specific camera.
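The entrance-timing rule described in these embodiments can be sketched as a small helper; the function and parameter names are assumptions, not the patent's terminology:

```python
# Illustrative sketch of the entrance-timing rule at the first
# positioning device. Times are in seconds.
def camera_start_time(first_time, second_time,
                      third_predetermined, fourth_predetermined):
    """Discard detection groups shorter than the third predetermined
    time (e.g. a passer-by walking past the entrance); otherwise return
    the moment, the fourth predetermined time after the second time, at
    which the designated camera should enter the shooting state."""
    if second_time - first_time < third_predetermined:
        return None  # packet group discarded as a misdetection
    return second_time + fourth_predetermined
```

The fourth predetermined time models the user's natural travel from the ride entrance to the camera's capture range, so the camera only starts once the user can plausibly be in frame.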
In one embodiment, the area includes a plurality of cameras, such as vision-guided follow robots, that can actively track people within their shooting range and shoot continuously. A camera may follow a particular user in a fixed, semi-fixed or non-fixed manner and upload the captured images to the computer system, which screens or clips them according to the user's facial information. When a person possessing that facial information requests output, the computer system outputs the screened or clipped images. The screening or clipping method is as described in the previous examples.
In one embodiment, the venue is a large playground comprising a plurality of zones, and the positioning device is a detector located at the entrance of each zone that reads the RFID tag to confirm that a registered user is allowed to enter the zone. In one embodiment, the detector may also take a photo to record facial information for a user who has not yet provided any. The time at which the positioning device detects the user entering the zone is the first time, and the time at which the user leaves the zone is the second time. In each zone there is more than one camera, designated by the detector at the entrance. In one embodiment, each camera is a tracking portrait camera with a multi-angle pan-tilt head; it shoots continuously, actively tracks people within its shooting range, and uploads the captured videos or photos to the computer system. When the user moves to another attraction zone, the above process is repeated. The computer system screens or clips the images according to the user's facial information and outputs the result when a person possessing the RFID or facial information requests it. In one embodiment, the tracking portrait camera is a smart camera assistant consisting of a 360° pan-tilt head with a smart camera mounted on it. When screening or clipping images according to the user's facial information, the computer system processes only the images uploaded between the first time and the second time by the cameras of the zone containing the user, thereby providing more efficient and accurate output.
The methods provided by the example embodiments in this specification are by way of example only, and the examples of one method are not intended to limit the examples of another method. The apparatus/methods discussed in one figure may be added to or exchanged with the apparatus/methods in other figures. Moreover, specific numeric data values (e.g., specific numbers, quantities, categories, etc.) or other specific information are used only to discuss the example embodiments and are not used to limit the example embodiments to such specific information.

Claims (19)

1. A tracking camera shooting method, comprising:
receiving signals from an identification device carried by a user using a plurality of positioning devices;
packing the signal strength, the receiving time and the unique identification code of the identification device received by the positioning devices into a data packet, and sending the data packet to a computer system to form a data packet sequence;
marking data packets with signal intensity higher than a preset threshold value by using a computer system, wherein the receiving time of the first marked data packet is a first time, and the receiving time of the last marked data packet is a second time;
after the computer system determines that a marked data packet exists in the data packet sequence sent by a positioning device, selecting that positioning device as a central device, designating a shooting device that captures images within a predetermined distance, and associating the shooting device with the unique identification code of the identification device; when more than one positioning device receives a marked data packet, selecting the positioning device with the strongest average received signal within a first predetermined time as the central device;
the computer system outputs the images of the shooting device in the first time range and the second time range.
2. The method of claim 1, wherein each unique identification code is further associated with facial information of the user, the computer system outputs images of the camera at a first time and a second time range, analyzes the images against the facial information of the user, and filters or clips the images containing the matching user after identifying the matching user.
3. The method of claim 1, wherein the location device is one or more of a long range RFID reader, a wireless transceiver to receive wireless signals, a wireless transceiver to receive bluetooth signals; the corresponding identification devices are a long-distance RFID label, a wireless mobile device and a Bluetooth device respectively.
4. The method of claim 1, wherein when more than one camera is designated, the computer system outputs the footage of all cameras between the first time and the second time, analyzes it against the user's facial information, and selects for output the clip containing the greatest face time and/or center-position time of the matching user.
5. The method of claim 4, wherein the facial time is a time when the user's face is frontal or within a 45° yaw angle range.
6. The method of claim 4, wherein the center position time is a time when the user is within 25%, 50%, or 70% of the center of the in-shot view.
7. The method of claim 1, wherein the identification device is in the form of a smart watch, a bracelet, a badge, or a pendant to be carried by the user.
8. The method of claim 1, wherein if a time difference between the first time and the second time is less than a second predetermined time, the footage for that period is not output.
9. The method of claim 1, wherein the shooting device comprises two 180° lenses mounted back to back and can output 360° images; when the images of a matched user are screened or clipped for output, the computer system selects the viewing angle that places the matched user at the center of the image.
10. A tracking camera system, comprising:
at least one identification device carried by a user, each identification device having a unique identification code;
a plurality of positioning devices that continuously receive signals from the identification devices, pack the received signal strength, the reception time, and the unique identification code of the identification device into data packets, form a data packet sequence, and send it to a computer system;
a computer system that marks data packets whose signal strength is above a predetermined threshold, the first marked data packet having a first reception time and the last marked data packet having a second reception time;
a plurality of cameras that capture images;
wherein, after the computer system determines that a marked data packet exists in the data packet sequence sent by a positioning device, that positioning device is selected as the central device, and the cameras within a predetermined distance are associated with the unique identification code of the identification device; when more than one positioning device receives marked data packets, the positioning device with the strongest average received signal within a first predetermined time is selected as the central device;
wherein the computer system outputs the footage of the cameras within the range from the first time to the second time.
11. The tracking camera system of claim 10, wherein the computer system further has pre-stored facial information of the user associated with each unique identification code; the computer system outputs the footage of the cameras within the range from the first time to the second time, analyzes it against the facial information, and, upon identifying a matched user, screens or clips the footage containing the matched user.
12. The tracking camera system of claim 10, wherein the positioning device is one or more of a long-range RFID reader, a wireless transceiver for receiving wireless signals, and a wireless transceiver for receiving Bluetooth signals; the corresponding identification devices are, respectively, a long-range RFID tag, a wireless mobile device, and a Bluetooth device.
13. The tracking camera system of claim 10, wherein when more than one camera is designated, the computer system outputs the footage of all cameras within the range from the first time to the second time, analyzes it against the user's facial information, and selects for output the clip containing the longest average facial time and/or average center-position time for the matched user.
14. The tracking camera system of claim 13, wherein the facial time is the time during which the user's face is frontal or within a 45° deflection angle range.
15. The tracking camera system of claim 13, wherein the center-position time is the time during which the user is within the central 25%, 50%, or 70% of the camera's field of view.
16. The tracking camera system of claim 10, wherein the identification device is in the form of a smart watch, a bracelet, a badge, or a pendant to be carried by the user.
17. The tracking camera system of claim 10, wherein if the time difference between the first time and the second time is less than a second predetermined time, the footage for that interval is not output.
18. The tracking camera system of claim 10, wherein the camera comprises two 180° lenses mounted back-to-back and capable of outputting a 360° image; when the footage is screened or edited to match the user for output, the computer system selects for output the viewing angle that places the matched user at the center of the image.
19. A tracking camera method, comprising the following steps:
receiving, with a first positioning device, a signal from an identification device carried by a user and/or facial information of the user, the first positioning device being associated with a designated camera;
marking, with the computer system, a first time and a second time of the signal from the first positioning device;
after a predetermined time has elapsed following the second time of the first positioning device, placing the designated camera into a shooting state, one or more second positioning devices, each equipped with both a facial recognition device and an identification device detector, being arranged near the designated camera;
receiving, with a second positioning device, a signal from the identification device carried by the user and/or facial information of the user, the second positioning device associating the detection times of the facial recognition device and/or identification device detector with the unique identification code corresponding to the face and with the pictures or footage captured by the designated camera, and sending them to the computer system;
outputting, by the computer system, the footage of the camera within the range from the first time to the second time of the second positioning device.
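To make the packet-marking and central-device selection of claim 10 concrete, here is a minimal Python sketch of that logic. All names (`Packet`, `RSSI_THRESHOLD`, `select_central_device`), the threshold value, and the window length are illustrative assumptions, not part of the claimed invention.

```python
from dataclasses import dataclass

# Assumed value for the claim's "predetermined threshold" (dBm).
RSSI_THRESHOLD = -60.0

@dataclass
class Packet:
    device_id: str   # positioning device that received the signal
    tag_id: str      # unique identification code of the identification device
    rssi: float      # received signal strength (dBm)
    t: float         # reception time (seconds)

def mark_time_window(packets):
    """Return (first_time, second_time) from the first and last marked packets,
    i.e. packets whose signal strength exceeds the threshold."""
    marked = [p for p in packets if p.rssi > RSSI_THRESHOLD]
    if not marked:
        return None
    return marked[0].t, marked[-1].t

def select_central_device(packets, first_window=5.0):
    """Among positioning devices with marked packets, pick the one with the
    strongest average received signal within the first predetermined time."""
    marked = [p for p in packets if p.rssi > RSSI_THRESHOLD]
    if not marked:
        return None
    t0 = min(p.t for p in marked)
    early = [p for p in marked if p.t <= t0 + first_window]
    by_device: dict[str, list[float]] = {}
    for p in early:
        by_device.setdefault(p.device_id, []).append(p.rssi)
    # Strongest average RSSI (least negative) wins.
    return max(by_device, key=lambda d: sum(by_device[d]) / len(by_device[d]))
```

The camera footage between the returned first and second times would then be associated with the tag's unique identification code, as the claim describes.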
CN201911345241.9A 2018-12-24 2019-12-24 Tracking camera system and method Active CN111355885B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
HK18116547.4 2018-12-24
HK18116547.4A HK1263241A1 (en) 2018-12-24 Tracking camera system and method

Publications (2)

Publication Number Publication Date
CN111355885A (en) 2020-06-30
CN111355885B (en) 2021-10-29

Family

ID=71193976

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911345241.9A Active CN111355885B (en) 2018-12-24 2019-12-24 Tracking camera system and method
CN201922343783.4U Active CN212067726U (en) 2018-12-24 2019-12-24 smart slide

Country Status (1)

Country Link
CN (2) CN111355885B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899827A (en) * 2015-12-17 2017-06-27 杭州海康威视数字技术股份有限公司 Image data acquiring, inquiry, video frequency monitoring method, equipment and system
CN206524912U (en) * 2017-02-23 2017-09-26 北京智物达科技有限公司 Recreation ground high precision wireless alignment system
CN207817749U (en) * 2018-05-14 2018-09-04 星视麒(北京)科技有限公司 A kind of system for making video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8773266B2 (en) * 2007-11-16 2014-07-08 Intermec Ip Corp. RFID tag reader station with image capabilities

Also Published As

Publication number Publication date
CN212067726U (en) 2020-12-04
CN111355885A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
US8737688B2 (en) Targeted content acquisition using image analysis
US8878949B2 (en) Camera based interaction and instruction
KR101044887B1 (en) Flight data measurement method of golf ball using captured image of high speed CD camera
CN110495163A (en) Camera system and bearing calibration
WO2012066910A1 (en) Guidance system, detection device, and position assessment device
US10083351B2 (en) Control system and control method
CN108293103A (en) Enliven spokesman's position detection
CN109886995B (en) A multi-target tracking method in complex environment
US12163766B2 (en) Control and monitoring devices and system for shooting range
WO2017135310A1 (en) Passing number count device, passing number count method, program, and storage medium
WO2018078607A1 (en) A method and apparatus for detection of light-modulated signals in a video stream
WO2018222932A1 (en) Video recording by tracking wearable devices
CN105227947B (en) Dirt detecting method and device
CN111355885B (en) Tracking camera system and method
CN107426552A (en) Anti-glare method and device, projector equipment
CN103716531A (en) Photographing system, photographing method, light emitting apparatus and photographing apparatus
HK40026312B (en) Tracking camera system and method
HK40026312A (en) Tracking camera system and method
CN110351579B (en) Intelligent video editing method
KR101441285B1 (en) Multi-body Detection Method based on a NCCAH(Normalized Cross-Correlation of Average Histogram) And Electronic Device supporting the same
CN112073677B (en) Method, device and system for detecting articles
US10499030B2 (en) Video providing system, video providing method, and video providing program
KR102480784B1 (en) Network camera and monitoring system using therof
KR20130106660A (en) A monitoring system using image analysis
JP6905965B2 (en) System and method to identify a person by the mounting position of the terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40026312
Country of ref document: HK
GR01 Patent grant