Summary of the invention
Technical problem
However, the technology disclosed in Patent Document 1 has the following problems.
(1) Because a device for storing the characteristic information of a subject is required, the structure of the camera becomes complicated. As a result, the production cost of the camera increases.
(2) Because the user needs to register the characteristic information of the subject in advance, the operating procedure of the camera becomes complicated. As a result, the burden on the user increases.
In addition, the technology disclosed in Patent Document 2 has the following problems.
(3) Because a server for storing the face images of subjects is required, costs for introducing and operating the server arise. As a result, the introduction and running costs of the system increase.
(4) Because image data needs to be sent and received, communication costs arise. As a result, the running cost of the system increases.
An object of the present invention is to provide an image processing device, a camera, an image processing method, and a program that solve the above-mentioned problems.
Technical solution
According to an exemplary aspect of the present invention, an image processing device comprises:
an object detection unit for detecting a subject image from an image;
a subject identifying information storage unit for storing an identification ID of the subject image, a detection flag indicating whether the subject image has been detected, and a first position, the first position being the position in the image at which the subject image was last detected; and
an identification ID assignment unit for assigning the identification ID to the subject image, wherein
if the first position lies within a predefined distance from the position at which the subject image is detected by the object detection unit, the identification ID assignment unit assigns the identification ID corresponding to the first position to the subject image and sets the detection flag to a value indicating that the subject image has been detected, and
when the detection flag indicates that the subject image has been detected, a frame is superimposed on the subject image and displayed.
According to another exemplary aspect of the present invention, a camera comprises:
an imaging unit for photographing a subject;
the above-described image processing device; and
a display unit for displaying the image captured by the imaging unit and the information about the subject generated by the image processing device.
According to another exemplary aspect of the present invention, an image processing method comprises:
storing an identification ID of a subject image, a detection flag indicating whether the subject image has been detected, and a first position, the first position being the position in the image at which the subject image was last detected;
detecting the subject image from an image;
if the first position lies within a predefined distance from the position at which the subject image is detected, assigning the identification ID corresponding to the first position to the subject image and setting the detection flag to a value indicating that the subject image has been detected; and
when the detection flag indicates that the subject image has been detected, superimposing a frame on the subject image and displaying it.
According to another exemplary aspect of the present invention, an image processing program causes a computer to execute image processing comprising:
storing an identification ID of a subject image and a first position, the first position being the position in the image at which the subject image was last detected;
detecting the subject image from the image; and
if the first position lies within a predefined distance from the position at which the subject image is detected, assigning the identification ID corresponding to the first position to the subject image.
Beneficial effects
The image processing device, camera, image processing method, and program according to the present invention make it possible to detect and track a specific subject with a simple structure and a simple operation.
Embodiments
Next, exemplary embodiments of the present invention will be described with reference to the drawings.
(First exemplary embodiment)
Fig. 1 is a block diagram showing the functional structure of an image processing unit 10 that processes image data of a subject in a camera according to the first exemplary embodiment of the present invention. The image processing unit 10 comprises an object detection unit 101, an identification ID (Identification) assignment unit 102, and a subject identifying information table 103. The subject identifying information table 103 is also referred to as a subject identifying information storage unit. The object detection unit 101 detects the image of a subject from input image data 21. The subject identifying information table 103 stores the identification ID and position coordinates of each subject. The identification ID assignment unit 102 searches the subject identifying information table 103 for position coordinates that lie within a predefined distance from the position coordinates of the detected subject image. If such position coordinates exist in the subject identifying information table 103, the identification ID assignment unit 102 assigns the identification ID corresponding to those position coordinates to the subject image. These components may be implemented by hardware or by software. Output image data 22 includes information about the subject, such as the identification ID.
In this way, by comparing the detected position coordinates of a subject with the position coordinates stored in the subject identifying information table 103, the image processing unit 10 always assigns the same identification ID to the same subject. By doing so, the image processing unit 10 makes it possible to detect and track a specific subject without storing the characteristic information of the subject.
Next, the overall structure of the camera according to the first exemplary embodiment of the present invention, which includes the image processing unit 10, is described with reference to Fig. 2. Fig. 2 is a block diagram showing the main functional structure of the camera 1 according to the first exemplary embodiment. An imaging unit 11 comprises a lens, a focusing device, a CCD (charge coupled device), and the like, photographs a subject, and generates image data as digital data. A monitor screen 12 is a display device such as a liquid crystal display, and displays the image of the subject to be photographed, information about the subject, and information about the operation of the camera. A key input unit 13 is an input device used by the user to configure the camera 1 or to give instructions, and comprises, for example, cursor keys, numeric keys, and an enter key. A control processing unit 14 comprises a system clock generator, a CPU, and the like, and performs overall control of the device. The image processing unit 10 processes the image data input from the imaging unit 11 and outputs it for display on the monitor screen 12. The object detection unit 101, identification ID assignment unit 102, and subject identifying information table 103, which are the components of the image processing unit 10, have already been described, so their descriptions are omitted.
Here, the information stored in the subject identifying information table 103 is described with reference to Fig. 3. The subject identifying information table 103 stores, as information used in subject image recognition processing described later, an identification ID, the position coordinates at which the subject was last detected, a detection flag indicating whether the subject has been detected, and a tracking target flag. This information is referred to as "subject identifying information". The position coordinates represent the region occupied by a face image in the image data, and are expressed, for example, as {(Xm, Ym), (Xn, Yn)}, that is, the coordinates of one vertex of a rectangular region and the coordinates of the vertex diagonally opposite it. The subject image recognition processing is described below.
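For illustration only, and not as part of the disclosed structure, the following Python sketch shows one way a record of the subject identifying information table of Fig. 3 might be represented; the field names are assumptions, since the description specifies only the identification ID, the last-detected position coordinates, the detection flag, and the tracking target flag.

```python
from dataclasses import dataclass
from typing import List, Tuple

# One record of the subject identifying information table 103 (Fig. 3).
@dataclass
class SubjectIdentifyingInfo:
    identification_id: int                                   # identification ID assigned to the subject image
    position: Tuple[Tuple[int, int], Tuple[int, int]]        # {(Xm, Ym), (Xn, Yn)}: opposite corners of the region
    detected: bool                                           # detection flag ("Y" when detected in the current cycle)
    tracking_target: bool                                    # tracking target flag ("Y" when selected by the user)

# The table itself can then simply be a list of such records.
subject_identifying_table: List[SubjectIdentifyingInfo] = []
```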
The camera according to the first exemplary embodiment of the present invention detects the image of a specific part of a subject from the input image data, and searches the subject identifying information table using the position coordinates of the detected image as a key. The camera then determines that the subject identifying information whose position coordinates are closest to the detected image, and within a predefined distance of it, is the subject identifying information corresponding to the detected image. This series of processes is referred to as "subject image recognition processing". The subject image recognition processing operation in the first exemplary embodiment is described below with reference to Fig. 4. Although the subject image recognition processing applies to the image of a specific part of a subject, in the following description of the first exemplary embodiment this specific part is assumed to be a person's face.
Fig. 4 is a flow chart showing the subject image recognition processing in the first exemplary embodiment. When a fixed time has passed since the last subject image recognition processing, the control processing unit 14 instructs the image processing unit 10 to start subject image recognition processing. Upon receiving this instruction, the object detection unit 101 first detects the face images of the persons who are subjects from the image data obtained from the imaging unit 11 (step S111). This detection processing is performed, for example, by searching the image data for regions in which the elements that make up a face (hair, eyes, nose, mouth, etc.) appear with the same positional relationship and colors as an actual face. When a face image is detected, the object detection unit 101 outputs the position coordinates of the detected face image in the image data. If the image data contains a plurality of face images, the object detection unit 101 detects all of them.
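As an illustration of the detection step S111 only, the following sketch uses OpenCV's Haar-cascade face detector as a stand-in for the element-based search described above; the use of OpenCV and the function name are assumptions, not part of the disclosure. The output mirrors the position coordinates {(Xm, Ym), (Xn, Yn)} produced by the object detection unit 101.

```python
import cv2  # OpenCV, used here only as a stand-in for step S111

def detect_face_images(image_bgr):
    """Illustrative sketch: return the bounding rectangles of all detected faces
    as {(Xm, Ym), (Xn, Yn)} pairs, mirroring the output of the detection step."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [((x, y), (x + w, y + h)) for (x, y, w, h) in faces]
```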
When the face image detection processing is complete, the identification ID assignment unit 102 performs the following processing. First, the identification ID assignment unit 102 clears the detection flags of all subject identifying information in the subject identifying information table 103 (step S112). Next, the identification ID assignment unit 102 extracts one not-yet-processed face image from among the face images detected by the object detection unit 101 (step S113). The identification ID assignment unit 102 then searches the subject identifying information table 103 for the subject identifying information whose position coordinates are closest to the extracted face image and within a predefined distance of it (step S114). This processing is performed, for example, by calculating the distance between the center coordinates of the extracted face image and the center coordinates of each piece of subject identifying information; the center coordinates of the region represented by the position coordinates {(Xm, Ym), (Xn, Yn)} can be calculated by the arithmetic expression {(Xm+Xn)/2, (Ym+Yn)/2}. As a result of the search, the identification ID assignment unit 102 determines whether corresponding subject identifying information exists (step S115). If it exists, the identification ID assignment unit 102 updates the position coordinates of that subject identifying information with the position coordinates of the extracted face image and sets "Y" in the detection flag (step S116). If no corresponding subject identifying information exists, the identification ID assignment unit 102 assigns a new identification ID, and this identification ID, the position coordinates of the extracted face image, and detection flag = "Y" are additionally registered in the subject identifying information table 103 as new subject identifying information (step S117).
The image processing unit 10 then determines whether processing has been completed for all face images detected in step S111 (step S118). If not, the processing returns to step S113 and the same processing is repeated for the face images that have not yet been processed. If so, the subject image recognition processing ends.
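The matching logic of steps S112-S118 can be sketched as follows, building on the record type sketched after the description of Fig. 3; the distance threshold, helper names, and ID counter are assumptions, not values taken from the disclosure.

```python
import math

PREDEFINED_DISTANCE = 50.0  # assumed threshold in pixels; the description leaves it unspecified
_next_id = 1                # assumed counter for issuing new identification IDs

def center(rect):
    # Center of the region {(Xm, Ym), (Xn, Yn)}, per the expression {(Xm+Xn)/2, (Ym+Yn)/2}.
    (xm, ym), (xn, yn) = rect
    return ((xm + xn) / 2.0, (ym + yn) / 2.0)

def recognize_subject_images(table, detected_rects):
    """Steps S112-S118: link detected face images to existing records or register new ones."""
    global _next_id
    for info in table:                      # S112: clear all detection flags
        info.detected = False
    for rect in detected_rects:             # S113/S118: process every detected face image
        cx, cy = center(rect)
        best, best_dist = None, PREDEFINED_DISTANCE
        for info in table:                  # S114: closest record within the predefined distance
            ox, oy = center(info.position)
            d = math.hypot(cx - ox, cy - oy)
            if d <= best_dist:
                best, best_dist = info, d
        if best is not None:                # S115/S116: update the matched record
            best.position = rect
            best.detected = True
        else:                               # S117: register a new record with a new identification ID
            table.append(SubjectIdentifyingInfo(_next_id, rect, True, False))
            _next_id += 1
```

With this sketch, calling recognize_subject_images once per processing cycle reproduces the behavior illustrated in Figs. 5 to 9: a face image that stays within the predefined distance of its last position keeps its identification ID, and a face image that cannot be matched receives a new one.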
Thus, in the subject image recognition processing, the image processing unit 10 determines that a subject whose detected position coordinates are close to the position coordinates detected last time is the same subject. By performing such processing, the camera in the first exemplary embodiment continuously assigns the same identification ID to each of a plurality of subjects without registering the characteristic information of the subjects. A specific subject can therefore be selected from among the plurality of detected subjects and tracked.
The interval at which the subject image recognition processing is executed need not be the same as the interval (frame rate) at which the imaging unit 11 generates image data. However, the longer the processing interval becomes, the larger the displacement of a subject between executions will be. This increases the likelihood that the above-described determination, "a subject whose detected position coordinates are close to the position coordinates detected last time is the same subject", will be wrong. It is therefore desirable that the interval at which the subject image recognition processing is executed be short enough to make such misjudgment unlikely.
Next, the subject image recognition processing operation in the first exemplary embodiment is described in chronological order with reference to Figs. 5 to 9.
When the subject image recognition processing ends, the image processing unit 10 outputs information about the subjects, such as their identification IDs. This information is displayed on the monitor screen 12 together with the image of the subjects. Fig. 5 shows the state of the camera 1 immediately after the subject image recognition processing has finished. Fig. 5A shows the image displayed on the monitor screen 12, and Fig. 5B shows the contents of the subject identifying information table 103. The same applies to Figs. 6 to 9.
Three persons 1201-1203 are displayed on the monitor screen 12 of Fig. 5A. Frames 1211-1213, which indicate the positions at which these persons' face images were detected, are superimposed on the corresponding face images. In addition, codes 1221-1223, which represent the identification IDs ("1" to "3") assigned to the face images, are displayed near the corresponding face images. In the following description, the face image to which identification ID "1" has been assigned is referred to as face image "1", and so on.
Frame 1213 is displayed with a thickness or color different from that of the other frames so that it can be distinguished from them. This indicates that face image "3", indicated by frame 1213, has been set as the tracking target. This setting can be made by the user performing an operation to select face image "3" while face image "3" is detected, for example by pressing the corresponding numeric key of the key input unit 13 (key "3"), or by moving a cursor (not shown) onto face image "3" with the cursor keys.
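For illustration only, a hypothetical helper for the selection operation described above; the function name, the assumption that only one record is the tracking target at a time, and the use of the identification ID as the selection key are not taken from the disclosure.

```python
def set_tracking_target(table, selected_id):
    # Hypothetical helper: mark the record whose identification ID the user
    # selected (e.g. by pressing the corresponding numeric key of the key
    # input unit 13) as the tracking target, and clear the flag elsewhere,
    # assuming a single tracking target at a time.
    for info in table:
        info.tracking_target = (info.identification_id == selected_id)
```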
The subject identifying information table 103 of Fig. 5 B has been preserved and three subject identifying informations that face image is relevant.Each record has been set up the position coordinates that identification id " 1 "-" 3 " and face image are detected: { (X11, Y11), (X12, Y12) }, { (X21, Y21), (X22, Y22) } and { (X31, Y31), (X32, Y32) }." Y " is set to the detection sign that all records that they are detected are shown.In addition only, for the record of identification id " 3 ", " Y " arranged to tracing object sign.This shows face image " 3 " and is set to tracing object.
Fig. 6 shows the state of the camera 1 after the subject image recognition processing has been executed and finished immediately after the moment shown in Fig. 5.
Referring to Fig. 6A, two persons 1204 and 1205 are newly displayed, and their face images are newly detected. New identification IDs "4" and "5" are assigned to these face images, and frames 1214 and 1215 and codes 1224 and 1225 are displayed. When a plurality of face images are newly detected, the identification IDs can be numbered, for example, in order of face image size or in order of position on the display (for example, by distance from the upper-left corner).
On the other hand, person 1202 has moved out of the frame, so face image "2" is not detected. The same identification IDs as last time, "1" and "3", are assigned to the face images of persons 1201 and 1203, respectively. This is because, in the subject image recognition processing, the face images of persons 1201 and 1203 are linked to the corresponding subject identifying information since they lie a short distance from the position coordinates detected last time.
Referring to Fig. 6B, records for two pieces of subject identifying information have been newly added. Identification IDs "4" and "5" and the position coordinates at which the face images were detected, {(X41, Y41), (X42, Y42)} and {(X51, Y51), (X52, Y52)}, are set in these two records. On the other hand, in the subject identifying information record with identification ID "2", which corresponds to the face image that was not detected, the detection flag is cleared. The values {(X21, Y21), (X22, Y22)} from when face image "2" was last detected remain set as its position coordinates.
Fig. 7 shows the state of the camera 1 after the subject image recognition processing has been executed and finished immediately after the moment shown in Fig. 6.
Referring to Fig. 7A, person 1203 is looking backward, so face image "3" is not detected. In addition, person 1204 has moved to a position that blocks person 1205, so face image "5" is not detected either. Because the face images corresponding to identification IDs "2", "3", and "5" are not detected, the frames and codes indicating face images "2", "3", and "5" are not displayed.
Referring to Fig. 7B, in the subject identifying information records with identification IDs "2", "3", and "5", which correspond to the face images that were not detected, the detection flags are cleared. The values from when face images "2", "3", and "5" were last detected, {(X21, Y21), (X22, Y22)}, {(X31, Y31), (X32, Y32)}, and {(X51, Y51), (X52, Y52)}, remain set as their position coordinates. On the other hand, new position coordinates {(X41', Y41'), (X42', Y42')}, representing the destination to which the face image has moved, are set in the record with identification ID "4". Here, {(X41', Y41'), (X42', Y42')} lie within a distance short enough to be judged as belonging to the same subject as the face image that was located at {(X41, Y41), (X42, Y42)} last time.
At this moment, face image "3", which has been set as the tracking target, is not detected, so there is no tracking-target face image at this point. However, in the subject identifying information table 103, "Y" remains set in the tracking target flag of the record with identification ID "3".
Fig. 8 shows the state of the camera 1 after the subject image recognition processing has been executed and finished immediately after the moment shown in Fig. 7.
Referring to Fig. 8A, person 1202 has moved back into the frame. In addition, person 1204 has moved again, and person 1205, who was blocked, has reappeared. The face images of persons 1202 and 1205, which were not detected last time, are therefore detected. The same identification IDs as when they were previously detected, "2" and "5", are assigned to the face images of persons 1202 and 1205, respectively. This is because, in the subject image recognition processing, the face images of persons 1202 and 1205 are linked to the corresponding subject identifying information since they lie a short distance from the position coordinates detected last time. On the other hand, because person 1203 is still facing backward, face image "3" is not detected.
Referring to Fig. 8B, "Y" is again set in the detection flags of the subject identifying information records with identification IDs "2" and "5", which correspond to the face images that have been detected again. In addition, new position coordinates representing the destinations to which the face images have moved, {(X21', Y21'), (X22', Y22')} and {(X41'', Y41''), (X42'', Y42'')}, are set in the records with identification IDs "2" and "4".
At this moment, face image "3", which has been set as the tracking target, is still not detected, so there is still no tracking-target face image.
Fig. 9 shows the state of the camera 1 after the subject image recognition processing has been executed and finished immediately after the moment shown in Fig. 8.
Referring to Fig. 9A, person 1203 is facing forward again, so the face image of person 1203, which was not detected last time, is detected. The same identification ID "3" as when it was previously detected is assigned to the face image of person 1203. This is because, in the subject image recognition processing, the face image of person 1203 is linked to the corresponding subject identifying information since it lies a short distance from the position coordinates detected last time. In addition, just as before face image "3" ceased to be detected, frame 1213, which indicates that it is set as the tracking target and is displayed with a thickness or color different from the other frames, is displayed. This is because, even while face image "3" was not detected, "Y" remained set in the tracking target flag of the record with identification ID "3" in the subject identifying information table 103.
Referring to Fig. 9B, "Y" is again set in the detection flag of the subject identifying information record with identification ID "3", which corresponds to the face image that has been detected again. "Y" also remains set in its tracking target flag.
As described above, the camera according to the first exemplary embodiment does not need to register the characteristic information of subjects, and continues to assign the same identification ID to each of the face images of a plurality of detected subjects. The reason is that the identification ID assignment unit 102 links a detected face image to the corresponding subject identifying information by comparing the position coordinates of the detected face image with the position coordinates in the subject identifying information table 103.
Furthermore, even when the face image of a subject temporarily ceases to be detected and is then detected again, the camera according to the first exemplary embodiment can continue to assign the same identification ID to each of the face images of the plurality of detected subjects. The reason is that, even for a subject that is not detected, the subject identifying information table 103 retains the position coordinates at which the subject was last detected and the identification ID of the subject.
Furthermore, even when the face image of a subject that has been set as the tracking target temporarily ceases to be detected and is then detected again, the camera according to the first exemplary embodiment can automatically resume tracking. The reason is that, even for a subject that is not detected, the subject identifying information table 103 retains the tracking target flag of the subject and the identification ID of the subject.
Furthermore, compared with the camera disclosed in Patent Document 1, the camera according to the first exemplary embodiment can realize data storage and comparison processing with a simpler structure. The reason is that, compared with characteristic information about the image of a subject, position coordinates require less data capacity and their comparison processing is also simpler.
(Second exemplary embodiment)
Next, a second exemplary embodiment of the present invention will be described.
In the first exemplary embodiment of the present invention, once subject identifying information has been registered, it is never deleted. The second exemplary embodiment of the present invention differs from the first in that, when the image of a subject has not been detected for a long time, the corresponding subject identifying information is deleted.
Fig. 10 is a block diagram showing the main functional structure of the camera 1 according to the second exemplary embodiment. In the block diagram of Fig. 10, components assigned the same reference codes as in the block diagram of Fig. 2 are the same as the components described in Fig. 2. The camera 1 of Fig. 10 includes a subject identifying information deletion unit 104 as a new component.
Fig. 11 is a diagram showing the information stored in the subject identifying information table 103 in the second exemplary embodiment. As a new item, the subject identifying information table 103 includes an erasure time. The other items are the same as in Fig. 3.
Next, the subject image recognition processing operation in the second exemplary embodiment is described with reference to Fig. 12.
Fig. 12 is a flow chart showing the subject image recognition processing in the second exemplary embodiment. In the flow chart of Fig. 12, steps assigned the same reference codes as in the flow chart of Fig. 4 are the same as the steps described in Fig. 4, so their descriptions are omitted. In Fig. 12, steps S116 and S117 of Fig. 4 are replaced by steps S216 and S217, respectively, and step S219 is newly added.
In step S216, the identification ID assignment unit 102 updates the erasure time with the value of the current time, in addition to updating the position coordinates and the detection flag. In step S217, the identification ID assignment unit 102 sets the erasure time to the value of the current time, in addition to setting the identification ID, the position coordinates, and the detection flag. The identification ID assignment unit 102 can obtain the value of the current time from the built-in clock (not shown) of the camera 1.
Then, when it is determined in step S118 that processing has been completed for all face images, the subject identifying information deletion unit 104 extracts all subject identifying information for which a predefined time has elapsed since the erasure time, and deletes it (step S219). This extraction processing is performed, for example, by comparing the current time with the stored erasure time and judging whether the difference is greater than a predefined value. After that, the subject image recognition processing ends.
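A minimal sketch of step S219, assuming the record type from the first exemplary embodiment is extended with an erasure_time field holding the timestamp refreshed in steps S216 and S217; the field name and the retention value are assumptions.

```python
import time

PREDEFINED_RETENTION_SECONDS = 10.0  # assumed value; the description only requires "a predefined time"

def delete_stale_records(table, now=None):
    """Step S219: delete records whose erasure time is more than the predefined time in the past."""
    now = time.time() if now is None else now
    # Keep only the records whose erasure time was refreshed (steps S216/S217) recently enough.
    table[:] = [info for info in table
                if now - info.erasure_time <= PREDEFINED_RETENTION_SECONDS]
```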
As described above, the camera according to the second exemplary embodiment deletes subject identifying information that has not been detected for a predefined time. As a result, it is possible to prevent subject identifying information that has not been detected for a long time from accumulating, which would exhaust the free space of the subject identifying information table 103 and make it impossible to assign identification IDs to new face images.
(Third exemplary embodiment)
Next, a third exemplary embodiment of the present invention will be described.
In the first exemplary embodiment of the present invention, the subject is a person's face. The third exemplary embodiment of the present invention differs from the first in that the subject is the image of a person wearing a uniform.
Since the functional structure of the camera according to the third exemplary embodiment of the present invention is the same as in Figs. 1 and 2, its description is omitted. Since the information stored in the subject identifying information table 103 is also the same as in Fig. 3, its description is likewise omitted.
Next, the subject image recognition processing operation in the third exemplary embodiment is described with reference to Fig. 13.
Fig. 13 is a flow chart showing the subject image recognition processing in the third exemplary embodiment. In the flow chart of Fig. 13, steps assigned the same reference codes as in the flow chart of Fig. 4 are the same as the steps described in Fig. 4, so their descriptions are omitted. In Fig. 13, step S111 of Fig. 4 is replaced by step S311.
When the subject image recognition processing starts, the object detection unit 101 first detects the image of a person wearing a uniform from the image data captured by the imaging unit 11. For example, the object detection unit 101 detects the image of a person whose upper body wears a yellow short-sleeved shirt and whose lower body wears green trousers (step S311). This detection processing is performed, for example, by searching the image data for regions in which a part with a yellow color and a short-sleeved-shirt shape and a part with a green color and a trouser shape exist adjacent to each other. When the image of such clothing is detected, the object detection unit 101 outputs the position coordinates of the detected clothing image in the image data. When the image data contains a plurality of such clothing images, the object detection unit 101 detects all of them. Except that the subject is a clothing image rather than a face image, the subsequent processing is the same as in Fig. 4, so its description is omitted.
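For illustration only, a rough sketch of step S311 under the assumption that the uniform is recognized by finding a predominantly yellow region with a predominantly green region directly below it; the HSV color ranges, the area threshold, and the region heuristic are assumptions, and OpenCV 4.x is assumed for the findContours return signature.

```python
import cv2
import numpy as np

# Assumed HSV ranges for "yellow shirt" and "green trousers"; real thresholds
# would depend on lighting and the actual uniform colors.
YELLOW_LO, YELLOW_HI = (20, 80, 80), (35, 255, 255)
GREEN_LO,  GREEN_HI  = (40, 80, 80), (85, 255, 255)

def detect_uniform_images(image_bgr, min_area=500):
    """Rough sketch of step S311: find yellow regions with a green region directly below."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    yellow = cv2.inRange(hsv, np.array(YELLOW_LO), np.array(YELLOW_HI))
    green = cv2.inRange(hsv, np.array(GREEN_LO), np.array(GREEN_HI))
    results = []
    contours, _ = cv2.findContours(yellow, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue
        # Check whether the band just below the yellow region is mostly green.
        below = green[y + h:y + 2 * h, x:x + w]
        if below.size > 0 and below.mean() > 127:
            results.append(((x, y), (x + w, y + 2 * h)))  # {(Xm, Ym), (Xn, Yn)} of the whole uniform region
    return results
```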
Next, the state of the camera 1 immediately after the subject image recognition processing in the third exemplary embodiment has finished is described with reference to Fig. 14. Fig. 14A shows the image on the monitor screen 12, and Fig. 14B shows the contents of the subject identifying information table 103. In the following description, it is assumed that the object detection unit 101 detects the image of a person whose upper body wears a yellow short-sleeved shirt and whose lower body wears green trousers.
Four persons 1206-1209 are displayed on the monitor screen 12 of Fig. 14A. Among them, persons 1206 and 1207 wear a yellow short-sleeved shirt on the upper body and green trousers on the lower body. On the other hand, persons 1208 and 1209 wear different clothing (for example, a blue short-sleeved shirt and white trousers).
The object detection unit 101 detects the clothing images of persons 1206 and 1207. Frames 1216 and 1217, which indicate the positions at which these clothing images were detected, and codes 1226 and 1227, which represent the identification IDs ("6", "7") assigned to these clothing images, are displayed on the monitor screen 12.
The subject identifying information table 103 of Fig. 14B holds subject identifying information for the two detected clothing images.
As described above, in the camera according to the third exemplary embodiment, the functions of the camera of the first exemplary embodiment are applied to the image of a person's uniform rather than to a face image. As a result, when shooting a sports game, for example, only persons wearing the uniform of a particular team can be detected and tracked.
The functions of the camera of the first exemplary embodiment can also be applied to a particular animal such as a cat or dog, a particular vehicle such as a car or aircraft, or any other subject whose shape or color has particular characteristics.
Although the exemplary embodiments of the present invention have been described above using, as an example, a camera that assigns identification IDs to subjects, the image processing device according to the present invention can also be applied to a camera function included in a cell phone, a PDA (personal digital assistant), or the like. The image processing device according to the present invention can also be applied to a camera having a moving-image shooting function, and to an image analysis device that, for example, analyzes recorded images and has no imaging means.
Although the present invention has been described with reference to its exemplary embodiments, the present invention is not limited to these embodiments. Those skilled in the art will understand that changes in form and detail may be made to the embodiments without departing from the spirit and scope of the present invention.
This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2008-112813, filed on April 23, 2008, the disclosure of which is incorporated herein in its entirety by reference.