
WO2007036842A2 - Method and apparatus for capturing metadata for a content item - Google Patents


Info

Publication number
WO2007036842A2
Authority
WO
WIPO (PCT)
Prior art keywords
content item
person
unique identifier
metadata
identifying
Prior art date
2005-09-30
Application number
PCT/IB2006/053385
Other languages
French (fr)
Other versions
WO2007036842A3 (en)
Inventor
Mark H. Verberkt
Bartel M. Van De Sluis
Koen H. J. Vrielink
Wilhelmus F. J. Fontijn
Albert M. A. Rijckaert
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2005-09-30
Filing date
2006-09-20
Publication date
2007-04-05
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2007036842A2
Publication of WO2007036842A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/52 Network services specially adapted for the location of the user terminal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 Network data management
    • H04W 8/005 Discovery of network devices, e.g. terminals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2101/00 Still video cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0008 Connection or combination of a still picture apparatus with another apparatus
    • H04N 2201/0034 Details of the connection, e.g. connector, interface
    • H04N 2201/0037 Topological details of the connection
    • H04N 2201/0039 Connection via a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0008 Connection or combination of a still picture apparatus with another apparatus
    • H04N 2201/0034 Details of the connection, e.g. connector, interface
    • H04N 2201/0048 Type of connection
    • H04N 2201/0055 By radio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/0077 Types of the still picture apparatus
    • H04N 2201/0084 Digital still camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3204 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium
    • H04N 2201/3205 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to a user, sender, addressee, machine or electronic recording medium of identification information, e.g. name or ID code
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3225 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
    • H04N 2201/3226 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of identification information or the like, e.g. ID code, index, title, part of an image, reduced-size image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3274 Storage or retrieval of prestored additional information
    • H04N 2201/3276 Storage or retrieval of prestored additional information of a customised additional information profile, e.g. a profile specific to a user ID
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 Network data management
    • H04W 8/26 Network addressing or numbering for mobility support
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00 Network topologies
    • H04W 84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A method for identifying at least one person within the vicinity of the location of capture of a content item, e.g. an image, regardless of whether or not that person is captured by the content item. This is achieved by collating unique identifiers, such as Bluetooth identifiers, for identifying those persons within the vicinity of the location when the image is captured, steps 205-209. The unique identifiers (or information derived therefrom) are included in the metadata stored with the content item, step 211, for later retrieval and/or browsing.

Description

Method and apparatus for capturing metadata for a content item
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for capturing metadata for a content item, e.g. a photograph, a video segment or an audio segment.
BACKGROUND OF THE INVENTION
In the last couple of years, digital photography has become increasingly popular. As there are no development costs involved with digital photographs, users generally take many more pictures than with conventional cameras.
A key problem that arises when users take very large numbers of pictures is organising, retrieving and browsing those pictures. This may become particularly problematic for devices with a very large storage capacity, like future e-hub products, PCs etc. As a result, metadata or annotations are required to assist users in browsing images in their increasingly large collections. Given the large number of images, manually adding metadata or annotations is not a realistic option. Consequently, as much metadata as possible should be generated automatically. Examples of such metadata are the time, date and location that are captured when the picture is taken. Furthermore, metadata may include the names of the persons captured by the image. These may be identified by facial recognition techniques.
One known system for automatic generation of metadata or annotation for photographs is disclosed by US2004/0126038. The image capture device communicates with external devices. The external devices, such as smart tags etc., are associated with an object which is captured by the image. These tags send information to the image capture device to identify the object and from this information the metadata is generated. Identification of a person captured by the image can be made by use of facial recognition software. This software is complex and expensive and the processing required slows down the automatic generation of the metadata. Furthermore, as the image capture device is a portable device, the power consumption of running such software is, invariably, unacceptable. Further, the performance of facial recognition is typically very sensitive to the lighting conditions on the image and the position of the face in the image. This can lead to inaccurate results.
SUMMARY OF THE INVENTION
Therefore, it is desirable to provide a system for capturing metadata for a content item which is not reliant on analysis of the content item after it has been captured. The content item may be a still picture, a video segment or an audio segment, for example.
This is achieved, according to a first aspect of the present invention, by a method for identifying at least one person within a vicinity of a location of capture of a content item, the method comprising the steps of: receiving a unique identifier, the unique identifier not being part of the content item; and identifying said at least one person from said received unique identifier.
According to another aspect of the present invention, it is achieved by an apparatus for identifying at least one person within a vicinity of a location of capture of content, the apparatus comprising: a receiver for receiving a unique identifier, the unique identifier not being part of the content item; and processing means for identifying said at least one person from said received unique identifier.
In this way any person, regardless of whether or not they were captured by the content item, can be identified. The method and apparatus of the present invention do not rely on analysis of the content item after it has been captured, since identifiers are received at the time the content item is captured to identify the persons present. Furthermore, no facial recognition software is required.
After being received, the unique identifier or information derived from the unique identifier may be embedded in the content item or may be stored separately from the content item. In a first embodiment, only the unique identifier is stored in the metadata; in a second embodiment, the "friendly name" of the corresponding person is also stored in the metadata. In the first embodiment, the browser needs to have the translation table between unique identifier and "friendly name". This may require an additional Bluetooth receiver and/or additional manual effort to create the mapping table. However, the unique identifier should preferably be in the metadata as well, as not all people may be known.
This is also achieved, according to another aspect of the present invention, by a method for enabling browsing of a plurality of content items, each of said plurality of content items having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of said each content item when it was captured, the method comprising the steps of: selecting from said plurality of content items at least one content item, the person identified by the unique identifier of the metadata associated with the selected content item matching a selected person; and enabling browsing of said at least one content item.
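To make the two storage variants described above concrete, a minimal sketch of the associated metadata is given below; the field names, values and the Python representation are illustrative assumptions and not part of the invention.

    # Illustrative sketch only: field names and structure are assumed for clarity.
    # First embodiment: only the unique identifiers detected nearby are stored.
    metadata_first_embodiment = {
        "capture_time": "2006-09-20T14:03:12",
        "nearby_identifiers": ["00:11:22:33:44:55", "66:77:88:99:AA:BB"],
    }

    # Second embodiment: the "friendly name" of each known person is stored as
    # well; the raw identifier is kept since not all people may be known.
    metadata_second_embodiment = {
        "capture_time": "2006-09-20T14:03:12",
        "nearby_persons": [
            {"identifier": "00:11:22:33:44:55", "friendly_name": "Alice"},
            {"identifier": "66:77:88:99:AA:BB", "friendly_name": None},  # person unknown
        ],
    }
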
According to yet another aspect of the present invention, it is achieved by an apparatus enabling browsing of a plurality of content items, each of said plurality of content items having metadata associated therewith, said associated metadata comprising a unique identifier identifying a person who was present within a vicinity of a location of said each content item when it was captured, the apparatus comprising: a selector for selecting from said plurality of content items at least one content item, the person identified by the unique identifier of the metadata associated with the selected content item matching a selected person; and means for enabling browsing of said at least one content item.
The user can then easily browse stored content items, e.g. images, and be presented with a personalised view of those content items for which a particular person was present when the content item was captured, regardless of whether or not the person was captured by the content item.
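A minimal sketch of such a personalised selection is shown below, assuming the illustrative metadata layout sketched earlier; the function name and data layout are assumptions, not a prescribed interface.

    def select_items_for_person(content_items, person_identifier):
        """Return the content items whose metadata lists the given unique
        identifier as having been in the vicinity at capture time."""
        return [
            item for item in content_items
            if person_identifier in item["metadata"].get("nearby_identifiers", [])
        ]

    # Usage sketch: all images captured while a given phone was nearby,
    # regardless of whether its owner actually appears in the images.
    collection = [
        {"file": "img_001.jpg",
         "metadata": {"nearby_identifiers": ["00:11:22:33:44:55"]}},
        {"file": "img_002.jpg",
         "metadata": {"nearby_identifiers": ["66:77:88:99:AA:BB"]}},
    ]
    print(select_items_for_person(collection, "00:11:22:33:44:55"))
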
This is also achieved according to a further aspect of the present invention by a method for exchanging content items, at least a first communication terminal having access to at least one content item, said content item having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of the content item when it was captured, the method comprising the steps of: said first communication terminal enabling the identified person to access the at least one content item.
The method may further comprise the step of a second communication terminal sending a request to said first communication terminal for at least one content item in which a specified person was present within the vicinity of the location of the content item when it was captured.
According to yet another aspect of the present invention, it is achieved by an apparatus for exchanging content items, said apparatus having access to at least one content item, said content item having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of the content item when it was captured, the apparatus comprising: means for enabling the identified person to access the at least one content item. Content items, e.g. digital images, can, therefore, be easily exchanged or shared among those persons present when the image was captured.
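As a rough sketch of such an exchange, assuming the same illustrative metadata layout as above (the message format and function names are invented for illustration, not defined by the invention):

    def build_request(person_identifier):
        """Request a second terminal could send, asking for content items in
        which the specified person was present at capture time."""
        return {"type": "content_request", "person_id": person_identifier}

    def serve_request(local_items, request):
        """First-terminal side: answer the request with the matching items.
        In a real terminal the items would then be transmitted to the requester."""
        wanted = request["person_id"]
        return [item for item in local_items
                if wanted in item["metadata"].get("nearby_identifiers", [])]
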
In a preferred embodiment, the communication terminals comprise mobile telephones which may have integral image capture devices (cameras). Many mobile telephones include Bluetooth transceivers. The Bluetooth network technology covers a limited range, typically 10 - 100 m, dependent on the power class in which the devices operate. In the preferred embodiments, this network is used to collect a list of the Bluetooth devices in the vicinity at a certain point in time. The Bluetooth module may be integrated in a digital image capture device (camera). At the moment of capturing an image, the device stores the list of Bluetooth devices it detects in its vicinity within the metadata to associate with the image being captured. Alternatively, another device that is typically used in conjunction with the digital image capture device has a Bluetooth module that records a list of Bluetooth devices (e.g. mobile telephones) that it detects in its vicinity, as a function of time. This information is later correlated with the images taken by an image capture device by using the time that the images were captured.
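The time-based correlation of the alternative just described could look roughly like the following sketch; the log format, the 60-second freshness threshold and the function name are assumptions made for illustration.

    from bisect import bisect_right

    def devices_near_time(scan_log, capture_time, max_gap_seconds=60):
        """Return the device list from the most recent Bluetooth scan taken no
        more than max_gap_seconds before the image capture time.

        scan_log is a list of (timestamp_in_seconds, [device_identifiers])
        tuples sorted by timestamp."""
        times = [t for t, _ in scan_log]
        i = bisect_right(times, capture_time) - 1
        if i >= 0 and capture_time - times[i] <= max_gap_seconds:
            return scan_log[i][1]
        return []

    # Usage sketch: an image captured at t=1130 s is annotated with the
    # devices seen in the scan made at t=1100 s.
    log = [(1000, ["00:11:22:33:44:55"]),
           (1100, ["00:11:22:33:44:55", "66:77:88:99:AA:BB"])]
    print(devices_near_time(log, 1130))
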
BRIEF DESCRIPTION OF DRAWINGS
For a more complete understanding of the present invention, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Figure 1 is a simple schematic of apparatus according to an embodiment of the present invention;
Figure 2 is a flow chart of the method steps according to an embodiment of the present invention; and Figure 3 is a flow chart of the method steps according to another embodiment of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
With reference to figure 1, the apparatus according to an embodiment of the present invention will be described in more detail.
The apparatus comprises a transceiver 101 which receives incoming transmissions containing a unique identifier of a device within the range of the apparatus. The unique identifier is the Bluetooth identification of the device within the short communication range of the apparatus. The output of the transceiver 101 is connected to a processor 103. The apparatus further comprises an integral image capture device 105 which is connected to the processor 103 and an image store 107. The processor 103 is also connected to the image store 107. The processor 103 may also access a local memory store 109. Operation of the apparatus will now be described with reference to figure 2.
As the user activates the image capture device 105 to capture an image, the transceiver 101 of the apparatus transmits a request via short-range wireless communication, such as Bluetooth, to other Bluetooth devices within its range (vicinity), step 203. Each Bluetooth device responds to the request and transmits its unique identifier to the transceiver 101 of the apparatus. The transceiver 101 thus receives a list of unique identifiers within its vicinity, step 205. Upon receipt of the list of unique identifiers, the processor 103 looks up the unique identifiers in its local memory store 109, which includes a table of mappings between unique identifiers and the name of the person associated with each unique identifier. It can be appreciated that this step may be carried out at a later stage, for example after capturing the image, just before storing the person information. In an alternative embodiment the mappings are stored in a remote database, for example a database accessed via the Internet. For those unique identifiers included in the look-up table, the processor 103 identifies the persons who are in the vicinity of the device, step 207. At that point, the image is captured by the image capture device 105, step 209, and the digital image is stored in the image store 107 with associated metadata or annotation which includes a list of those unique identifiers which were in the vicinity of the device when the image was captured and a list of the persons who were in the vicinity of the device when the image was captured. The list of unique identifiers which were in the vicinity of the device when the image was captured may contain identifiers for which no mapping to the name of the person is known (person unknown), step 211. If the mapping table is in a remote device, the retrieval of names associated with identifiers may have to be postponed.
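The capture-time flow of steps 203-211 can be summarised by the following sketch; the camera, scanner and store objects and all names are placeholders, since the description does not prescribe any particular Bluetooth or storage API.

    import time

    def capture_with_metadata(camera, scan_nearby_identifiers, id_to_name, image_store):
        """Sketch of steps 203-211: scan for nearby unique identifiers, map the
        known ones to person names, capture the image and store it together
        with both lists. scan_nearby_identifiers stands in for the Bluetooth
        inquiry, which is hardware-specific."""
        identifiers = scan_nearby_identifiers()                             # steps 203-205
        persons = [id_to_name[i] for i in identifiers if i in id_to_name]   # step 207
        image = camera.capture()                                            # step 209
        image_store.save(image, metadata={                                  # step 211
            "capture_time": time.time(),
            "nearby_identifiers": identifiers,   # kept even when the person is unknown
            "nearby_persons": persons,
        })
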
An alternative embodiment is shown in figure 3. The steps are similar to those described in respect of figure 2; these have the same reference numerals and will not be described in detail here. The method of this embodiment comprises an additional step, step 301, in which the transceiver 101 continuously scans the network to determine which devices are present. When the shutter release of the image capture device 105 is pressed, the devices present at that moment are stored and the persons in the vicinity of the location when the image is captured can be identified. The table of mappings can be extended to include a history log file, which indicates that a person is identified by a unique identifier (Bluetooth device) between certain dates. This can, therefore, take into account any change of ownership of a device and correctly identify the person present.
The table of mappings may be built by the user and stored in local memory 109. This can easily be supported by the device discovery mechanism that is built into the Bluetooth network technology. If the device name of a Bluetooth device (as entered by the owner of that device) is applicable, that name can be used in the mappings. The mappings could be made available via other services such as Internet services, where users can register their Bluetooth devices and link them to their own identification.
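A possible shape for such a mapping table with a history log is sketched below; the entries, names and date bounds are invented for illustration only.

    from datetime import date

    # Each entry binds a unique identifier to a person for a bounded period, so
    # that a change of ownership of a device is resolved correctly.
    MAPPINGS = [
        {"identifier": "00:11:22:33:44:55", "name": "Alice",
         "valid_from": date(2004, 1, 1), "valid_to": date(2005, 12, 31)},
        {"identifier": "00:11:22:33:44:55", "name": "Bob",
         "valid_from": date(2006, 1, 1), "valid_to": None},  # current owner
    ]

    def resolve_person(identifier, capture_date):
        """Return the name mapped to the identifier on the capture date, or
        None if no mapping applies (person unknown)."""
        for entry in MAPPINGS:
            if entry["identifier"] != identifier:
                continue
            if entry["valid_from"] <= capture_date and (
                    entry["valid_to"] is None or capture_date <= entry["valid_to"]):
                return entry["name"]
        return None

    print(resolve_person("00:11:22:33:44:55", date(2006, 9, 20)))  # -> Bob
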
According to the preferred embodiment of the present invention, metadata can be generated including information about which persons were present on the occasion a certain image was captured. This increases the recognition rate, as conventional face recognition software is not required. It can be used to create a personalised view of the image collection of a family, for example "show me all pictures taken when I was there", regardless of whether or not I was captured by the images. Further, it can be used to facilitate sharing of images between people that were present on the same occasion. For example, images in which I was present can be exchanged and sent to another person who was also present on the same occasion. In this situation, it may very well be possible that the other user is not known to the sender. The Bluetooth device identifier can then be used to find these people and their pictures.
Although a preferred embodiment of the present invention has been illustrated in the accompanying drawings and described in the foregoing description, it will be understood that the invention is not limited to the embodiment disclosed but is capable of numerous modifications without departing from the scope of the invention as set out in the following claims.

Claims

CLAIMS:
1. A method for identifying at least one person within a vicinity of a location of capture of a content item, the method comprising the steps of: receiving a unique identifier, the unique identifier not being part of the content item; and identifying said at least one person from said received unique identifier.
2. A method according to claim 1, wherein said step of identifying comprises identifying said at least one person from a unique identifier received at or around the time of capture of the content item.
3. A method according to any one of the preceding claims, wherein the method further comprises the steps of: storing a table of mappings of a plurality of unique identifiers and a name associated with the person having that unique identifier; and referencing said table of mappings to identify said at least one person, by name, from said received unique identifier.
4. A method according to claim 3, wherein the table further includes a history log of unique identifiers and associated persons and the time period for which each person is associated with a particular unique identifier.
5. A method for enabling browsing of a plurality of content items, each of said plurality of content items having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of said each content item when it was captured, the method comprising the steps of: selecting from said plurality of content items at least one content item, the person identified by the unique identifier of the metadata associated with the selected content item matching a selected person; and enabling browsing of said at least one content item.
6. A method for exchanging content items, at least a first communication terminal having access to at least one content item, said content item having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of the content item when it was captured, the method comprising the steps of: said first communication terminal enabling the identified person to access the at least one content item.
7. A method according to claim 6, wherein the method further comprises the step of: a second communication terminal sending a request to said first communication terminal for at least one content item in which a specified person was present within a vicinity of a location of the content item when it was captured.
8. A method according to claim 7, wherein the method further comprises the step of: said first communication terminal detecting a unique identifier of said second communication terminal when said at least one content item is captured.
9. A method according to any one of claims 6 to 8, wherein said metadata is embedded in a file containing its associated captured content item.
10. A method according to any one of the preceding claims, wherein said unique identifier is received via short-range wireless communication.
11. A computer program product comprising a plurality of program code portions for carrying out the method according to any one of claims 1 to 10.
12. Apparatus for identifying at least one person within a vicinity of a location of capture of content, the apparatus comprising: a receiver for receiving a unique identifier, the unique identifier not being part of the content item; and processing means for identifying said at least one person from said received unique identifier.
13. Apparatus according to claim 12, wherein said processing means is operative to identify said at least one person from a unique identifier received at or around the time of capture of the content item.
14. Apparatus according to claim 12, wherein the apparatus further comprises: means for accessing a table of mappings of a plurality of unique identifiers and a name associated with a person having that unique identifier; and means for referencing said table of mappings to identify said at least one person, by name, from said received unique identifier.
15. Apparatus according to claim 14, wherein the apparatus further comprises a memory for storing said table of mappings.
16. Apparatus according to claim 14, wherein said table of mappings is accessed via an Internet service.
17. Apparatus for enabling browsing of a plurality of content items, each of said plurality of content items having metadata associated therewith, said associated metadata comprising a unique identifier identifying a person who was present within a vicinity of a location of said each content item when it was captured, the apparatus comprising: a selector for selecting from said plurality of content items at least one content item, the person identified by the unique identifier of the metadata associated with the selected content item matching a selected person; and means for enabling browsing of said at least one content item.
18. Apparatus for exchanging content items, said apparatus having access to at least one content item, said content item having metadata associated therewith, said associated metadata comprising a unique identifier for identifying a person who was present within a vicinity of a location of the content item when it was captured, the apparatus comprising: means for enabling the identified person to access the at least one content item.
PCT/IB2006/053385 2005-09-30 2006-09-20 Method and apparatus for capturing metadata for a content item WO2007036842A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05109124.7 2005-09-30
EP05109124 2005-09-30

Publications (2)

Publication Number Publication Date
WO2007036842A2 (en) 2007-04-05
WO2007036842A3 WO2007036842A3 (en) 2007-10-11

Family

ID=37900144

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/053385 WO2007036842A2 (en) 2005-09-30 2006-09-20 Method and apparatus for capturing metadata for a content item

Country Status (1)

Country Link
WO (1) WO2007036842A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009068089A1 (en) * 2007-11-28 2009-06-04 Nokia Corporation Wireless device detection
WO2011121479A1 (en) * 2010-03-31 2011-10-06 Nokia Corporation Method and apparatus for object identification within a media file using device identification
WO2013041758A1 (en) * 2011-09-23 2013-03-28 Nokia Corporation Method and apparatus for providing embedding of local identifiers
CN103620594A (en) * 2011-06-21 2014-03-05 瑞典爱立信有限公司 Caching support for visual search and augmented reality in mobile networks
US10468066B2 (en) 2015-09-23 2019-11-05 Nokia Technologies Oy Video content selection
US10681335B2 (en) 2015-09-23 2020-06-09 Nokia Technologies Oy Video recording method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020101519A1 (en) * 2001-01-29 2002-08-01 Myers Jeffrey S. Automatic generation of information identifying an object in a photographic image
KR20050094041A (en) * 2003-01-21 2005-09-26 코닌클리케 필립스 일렉트로닉스 엔.브이. Adding metadata to pictures
US7373109B2 (en) * 2003-11-04 2008-05-13 Nokia Corporation System and method for registering attendance of entities associated with content creation

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009068089A1 (en) * 2007-11-28 2009-06-04 Nokia Corporation Wireless device detection
CN101878632A (en) * 2007-11-28 2010-11-03 诺基亚公司 Wireless Device Detection
KR101110969B1 (en) * 2007-11-28 2012-02-20 노키아 코포레이션 Wireless device detection
CN101878632B (en) * 2007-11-28 2013-05-22 诺基亚公司 Wireless Device Detection
US8942638B2 (en) 2007-11-28 2015-01-27 Nokia Corporation Wireless device detection
WO2011121479A1 (en) * 2010-03-31 2011-10-06 Nokia Corporation Method and apparatus for object identification within a media file using device identification
CN103620594A (en) * 2011-06-21 2014-03-05 瑞典爱立信有限公司 Caching support for visual search and augmented reality in mobile networks
US9489773B2 (en) 2011-06-21 2016-11-08 Telefonaktiebolaget Lm Ericsson (Publ) Caching support for visual search and augmented reality in mobile networks
WO2013041758A1 (en) * 2011-09-23 2013-03-28 Nokia Corporation Method and apparatus for providing embedding of local identifiers
US9313539B2 (en) 2011-09-23 2016-04-12 Nokia Technologies Oy Method and apparatus for providing embedding of local identifiers
US10468066B2 (en) 2015-09-23 2019-11-05 Nokia Technologies Oy Video content selection
US10681335B2 (en) 2015-09-23 2020-06-09 Nokia Technologies Oy Video recording method and apparatus

Also Published As

Publication number Publication date
WO2007036842A3 (en) 2007-10-11

Similar Documents

Publication Publication Date Title
US20050192808A1 (en) Use of speech recognition for identification and classification of images in a camera-equipped mobile handset
US9075808B2 (en) Digital photograph content information service
US7734654B2 (en) Method and system for linking digital pictures to electronic documents
USRE43689E1 (en) System and method for registering attendance of entities associated with content creation
US7831141B2 (en) Mobile device with integrated photograph management system
JP3944160B2 (en) Imaging apparatus, information processing apparatus, control method thereof, and program
US8462231B2 (en) Digital camera with real-time picture identification functionality
WO2009138135A1 (en) Automatic tagging of photos in mobile devices
US20110043643A1 (en) Method for transmitting image and image pickup apparatus applying the same
US20090204641A1 (en) Techniques to associate media information with related information
US20070124249A1 (en) Methods and devices for image and digital rights management
KR100788605B1 (en) Content service device and method
KR20060026924A (en) Tagging method and system for digital data
WO2007036842A2 (en) Method and apparatus for capturing metadata for a content item
US9973649B2 (en) Photographing apparatus, photographing system, photographing method, and recording medium recording photographing control program
US20040239765A1 (en) Photographed image transmitting apparatus
KR20080072903A (en) Method and apparatus for capturing and outputting images and auxiliary data
US20050225643A1 (en) Context enhanced pictures
EP2211529B1 (en) Method for sharing file between control point and media server in a DLNA system, and system thereof
JP2004038840A (en) Memo image management device, memo image management system, and memo image management method
KR100798917B1 (en) Digital photo content processing system and method and apparatus for transmitting / receiving digital photo content in the system
US10412455B2 (en) Image management device
US20140104442A1 (en) Image information processing system
US8819167B2 (en) Apparatus and method for requesting and transferring contents
CN115309312B (en) Content display method and electronic device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06821119

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 06821119

Country of ref document: EP

Kind code of ref document: A2