
US20180189976A1 - Analysis unit and system for assessment of hair condition - Google Patents


Info

Publication number
US20180189976A1
Authority
US
United States
Prior art keywords
hair
map
follicular
image
videodermoscopy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/418,549
Inventor
Michal Kasprzak
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of US20180189976A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/1072 Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441 Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/446 Scalp evaluation or scalp disorder diagnosis, e.g. dandruff
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/44 Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/448 Hair evaluation, e.g. for hair disorder diagnosis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7271 Specific aspects of physiological measurement analysis
    • A61B 5/7282 Event detection, e.g. detecting unique waveforms indicative of a medical condition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45D HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D 44/00 Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D 2044/007 Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B 2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0062 Arrangements for scanning
    • A61B 5/0066 Optical coherence imaging
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal

Definitions

  • Hair condition, in particular hair disorders, has traditionally been assessed by clinical inspection and a number of invasive methods, including a pull test, a trichogram obtained by extracting approximately one hundred hairs for microscopic inspection of their roots, and pathomorphology, which uses a biopsy of extracted skin tissue for microscopic inspection.
  • a dermoscope or videodermoscope, traditionally used for skin lesion observations, may also be used for diagnosing hair disorders. This method, usually referred to as trichoscopy, has gained some popularity due to its non-invasiveness.
  • a number of studies have been carried out to provide guidelines for disease diagnosis based on visual, qualitative inspection of videodermoscopy images by a trained dermatologist.
  • Visual trichoscopy has generally focused on setting an initial diagnosis based on certain characteristic features observed in dermoscopy images of the scalp, such as broken hairs, yellow dots, black dots, tulip hairs, arborizing vessels, etc.
  • trichoscopy may further be used to refer to a technique used in the assessment of hair condition, examination of symptoms of hair disorder, diagnosis of hair disorders, and monitoring hair treatment efficiency.
  • Trichoscopy uses a microscopic camera, a so-called videodermoscope, to register high-resolution images of hair and scalp or other skin. Such images may further be referred to as videodermoscopy images.
  • the videodermoscopy images are subjected to manual or computer-assisted analysis in an attempt to identify all hair shafts and measure hair diameters.
  • a statistical analysis of images registered before and after treatment allows the response to treatment to be assessed in terms of, for example, hair number or hair density, hair thickness and hair volume.
  • multiple micro-tattoo markings are used to help identify the same skin location and field of view, with the aim of positioning the videodermoscope at the same position after treatment as before.
  • Known methods suffer from various limitations. For example, it may be difficult or even impossible to draw any conclusions if the overall hair density change is statistically insignificant. Also, currently used methods cannot ensure that the pre- and post-treatment images represent exactly the same skin area. Further, with known methods, the precision of the analysis depends strongly on positioning the camera at exactly the same spot on the skin with exactly the same field of view.
  • a first aspect of the invention provides an analysis unit for assessment of hair condition, the analysis unit comprising a map processor, the map processor being arranged to at least obtain a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and analyse at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
  • Assessment of hair condition may comprise supporting examination of symptoms of hair disorder, supporting examination of symptoms of skin disorder, supporting diagnosis of hair disorder, supporting diagnosis of skin disorder, supporting examination of treatment, supporting examination of a change in hair condition, or supporting examination and/or evaluation of treatment efficiency.
  • the analysis result suitable for assessment of hair condition may relate to, consist of or comprise a parameter known and used in trichoscopy, such as hair density.
  • the analysis result suitable for assessment of hair condition may relate to, consist of or comprise any other analysis result for assessment of hair condition, such as, for example, one of the analysis results described with reference to embodiments below, for example being indicative of AGA (androgenetic alopecia).
  • the analysis unit may be arranged to support diagnosis of hair disorder.
  • the analysis unit may additionally or alternatively be arranged to support examination and/or evaluation of treatment efficiency.
  • the analysis unit may be used in trichoscopy to assess hair condition. Using the first follicular map representing the first plurality of hair root positions as an alternative to using a corresponding first videodermoscopy image, or in addition to using the corresponding first videodermoscopy image, may provide an analysis result that is better suited for assessment of hair condition than known methods provide.
  • the analysis result may, for example, comprise a known type of analysis result, such as hair density, that is determined more accurately than with at least some known methods.
  • the analysis result may, additionally or alternatively, comprise a new type of result that is better suited than known types, for example a result indicative of a degree of AGA. Examples are described below with reference to various embodiments.
  • the analysis unit may be arranged to analyse a plurality of follicular maps, the plurality of follicular maps comprising the first follicular map, to determine an analysis result suitable for assessment of hair condition.
  • the analysis unit may be arranged to analyse at least the first follicular map and a corresponding first videodermoscopy image, to determine an analysis result suitable for assessment of hair condition.
  • the various embodiments described below may be used autonomously or in combination with one or more other embodiments.
  • the embodiments described may overcome, reduce or alleviate various limitations of known trichoscopy techniques.
  • the specific limitation or limitations that are overcome, reduced or alleviated by a specific embodiment may be different for the different embodiments and any combinations thereof.
  • the analysis unit further comprises an image processor, the image processor being arranged to perform an image processing algorithm on a first videodermoscopy image to generate the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image, and the map processor being arranged to obtain the first follicular map from the image processor.
  • the image processor is arranged to, as part of obtaining the first follicular map, cooperate with a map modification unit, the map modification unit being arranged to present the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and allow the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
  • the map processor is arranged to, as part of analysing at least the first follicular map to determine the analysis result, perform a statistical analysis of hair root distances between hair root positions of the first plurality of hair root positions.
  • the map processor is arranged to, as part of performing the statistical analysis of hair root distances between hair root positions, determine a hair root distance distribution, and determine at least a first and a second relative contribution to the hair root distance distribution of at least a first and a second distribution component function.
  • the relative contribution of the first distribution component function is an indication of a degree of a hair disorder of a first type.
  • the relative contribution of the first distribution component function is an indication of a degree of AGA.
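As an illustration of the statistical analysis above, the sketch below computes, for each hair root position, the distance to its nearest neighbour, and fits a two-component Gaussian mixture to the resulting distance distribution with a minimal EM loop; the mixture weights then play the role of the relative contributions of the two distribution component functions. This is a hypothetical plain-Python sketch: the patent does not prescribe nearest-neighbour distances, Gaussian components, or EM as the fitting method.

```python
import math

def nn_distances(points):
    """Nearest-neighbour distance for each hair root position (x, y)."""
    out = []
    for i, (xi, yi) in enumerate(points):
        best = math.inf
        for j, (xj, yj) in enumerate(points):
            if i != j:
                best = min(best, math.hypot(xi - xj, yi - yj))
        out.append(best)
    return out

def fit_two_components(xs, iters=200):
    """Tiny EM fit of a two-component 1-D Gaussian mixture to the distance
    distribution; weights[k] is the relative contribution of component k
    (e.g. normal spacing vs. the closer spacing of miniaturized follicles)."""
    n = len(xs)
    w, mu = [0.5, 0.5], [min(xs), max(xs)]
    mean = sum(xs) / n
    var = [sum((x - mean) ** 2 for x in xs) / n + 1e-9] * 2
    for _ in range(iters):
        # E-step: responsibility of each component for each distance
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = (p[0] + p[1]) or 1e-300
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp) + 1e-12
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-9
    return w, mu, var
```

The weights `w` returned for a measured distance distribution would correspond to the first and second relative contributions described above.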
  • the map processor is further arranged to at least obtain a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and determine a common skin area from the first follicular map and the second follicular map.
  • the map processor may further be arranged to use the common skin area in analysing at least the first follicular map to determine the analysis result suitable for assessment of hair condition.
  • the map processor may be arranged to use the common skin area in analysing at least the first and the second follicular map to determine the analysis result suitable for assessment of hair condition.
  • comparing the common skin area of a first follicular map obtained from a first videodermoscopy image recorded at a first moment in time with the common skin area of a second follicular map obtained from a second videodermoscopy image recorded at a second, later, moment in time may provide an analysis result that is better suited for assessment of hair condition than known methods provide, such as for assessment of a change in hair condition between the first moment and the second moment, for example as a result of a treatment.
  • the analysis result may comprise a change in hair density and/or a change in the number of hairs and/or identification of hairs that have appeared or disappeared, which may be obtained more accurately using the common skin area than from a mere comparison of the first and second videodermoscopy images or the corresponding hair densities determined therefrom.
  • Other examples are described below with reference to various embodiments.
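As a minimal sketch of the common-skin-area idea, assuming the two fields of view are equal in size and related by a pure translation (the patent leaves the form of the alignment open), the overlap rectangle and the hair roots inside it can be computed as follows:

```python
def common_rect(size, shift):
    """Overlap rectangle of two equal-size fields of view, in first-map
    coordinates, given the (dx, dy) shift of the second map relative to
    the first.  Returns None when the fields do not overlap."""
    w, h = size
    dx, dy = shift
    x0, y0 = max(0.0, dx), max(0.0, dy)
    x1, y1 = min(w, w + dx), min(h, h + dy)
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)

def roots_in_rect(follicular_map, rect):
    """Hair root positions falling inside the common skin area."""
    x0, y0, x1, y1 = rect
    return [(x, y) for (x, y) in follicular_map
            if x0 <= x <= x1 and y0 <= y <= y1]
```

Restricting both follicular maps to `roots_in_rect(..., common_rect(...))` before computing densities or counts keeps the comparison to skin that is visible in both images.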
  • the image processor is further arranged to perform an image processing algorithm on a second videodermoscopy image to generate the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image, and the map processor is arranged to obtain the second follicular map from the image processor.
  • the map processor comprises a matching unit, the matching unit being arranged to at least relate hair root positions in the second follicular map to hair root positions of the first follicular map in the common skin area to determine a plurality of related hair root positions, each related hair root position of a hair root in the second follicular map being related to a hair root position in the first follicular map of the same hair root.
  • a related hair root position of a hair root in the second follicular map may hereby be related to a hair root position in the first follicular map of the same hair root representing presumably the same hair follicle in the second and first videodermoscopy images.
  • the map processor may further be arranged to assess a change in condition of individual hairs between the first and second videodermoscopy images to determine the analysis result suitable for assessment of hair condition.
  • the matching unit is arranged to initialize a transformation function, and to iteratively adapt the transformation function, the iterative adaptation comprising applying the transformation function to the first plurality of hair root positions of the first follicular map to obtain a first plurality of transformed hair root positions, relating the first plurality of transformed hair root positions to the second plurality of hair root positions of the second follicular map, determining relative distances between transformed hair root positions of the first plurality of transformed hair root positions and the related hair root positions of the second plurality of hair root positions to obtain a correspondence metric, and adapting the transformation function to minimize the correspondence metric.
  • the matching unit is arranged to, as part of iteratively adapting the transformation function, further use at least one parameter of hair associated with the transformed hair root positions and hair associated with the related hair root positions to obtain the correspondence metric, the at least one parameter comprising at least one parameter from a group consisting of hair shaft diameter, hair length, hair growth, hair color.
  • the matching unit is arranged to, as part of initializing the transformation function, detect positions of a first plurality of reference symbols on the skin in the first videodermoscopy image, detect positions of a second plurality of reference symbols on the skin in the second videodermoscopy image, and initialize the transformation function to reflect a transformation from the positions of a first plurality of reference symbols to the positions of a second plurality of reference symbols.
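The iterative adaptation described above resembles point-set registration. The sketch below is a deliberately simplified, translation-only variant (the patent does not fix the form of the transformation function): each transformed root of the first map is related to its nearest root in the second map, the mean residual serves as the correspondence metric, and the translation is adapted to reduce it.

```python
import math

def icp_translation(pts1, pts2, iters=50):
    """Estimate the translation mapping the first follicular map onto the
    second by iterated nearest-neighbour matching (translation-only)."""
    tx, ty = 0.0, 0.0  # initialize the transformation (identity)
    for _ in range(iters):
        # relate each transformed root of map 1 to its nearest root in map 2
        pairs = []
        for (x, y) in pts1:
            xt, yt = x + tx, y + ty
            nx, ny = min(pts2, key=lambda p: math.hypot(p[0] - xt, p[1] - yt))
            pairs.append(((xt, yt), (nx, ny)))
        # correspondence metric: mean residual between related positions
        dx = sum(b[0] - a[0] for a, b in pairs) / len(pairs)
        dy = sum(b[1] - a[1] for a, b in pairs) / len(pairs)
        # adapt the transformation to reduce the metric
        tx, ty = tx + dx, ty + dy
        if math.hypot(dx, dy) < 1e-9:
            break
    return tx, ty
```

In practice the transformation could also include rotation and scaling, and the correspondence metric could fold in hair parameters such as shaft diameter or colour, as the embodiments above describe; initializing from detected reference symbols would serve the same purpose as the identity initialization here.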
  • the map processor is further arranged to at least analyse differences between at least the common skin area in the first follicular map and the common skin area in the second follicular map to determine the analysis result suitable for assessment of hair condition.
  • the map processor is further arranged to, in determining the analysis result, identify the appearance of new hair shafts in the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • the map processor is further arranged to, in determining the analysis result, identify the disappearance of hair from the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • the map processor is further arranged to at least analyse differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image to determine the analysis result suitable for assessment of hair condition.
  • the map processor is further arranged to, in analysing differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image, determine differences in at least one parameter of a group of parameters consisting of average hair diameter, hair diameter distribution, average hair length, hair length distribution, hair colors, hair color distribution, and/or at least one hair density.
  • a second aspect of the invention provides a system for assessment of hair condition, the system comprising an upload unit, an analysis unit according to any one of the preceding claims, and a presentation unit, the upload unit being arranged to receive one or more videodermoscopy images, the one or more videodermoscopy images comprising at least the first videodermoscopy image and to upload the one or more videodermoscopy images to the analysis unit, the analysis unit being arranged to receive the one or more videodermoscopy images from the upload unit and to obtain a videodermoscopic analysis result from the one or more videodermoscopy images, the videodermoscopic analysis result comprising the analysis result suitable for assessment of hair condition and/or an examination result derived from the analysis result, and the presentation unit being arranged to receive the videodermoscopic analysis result from the analysis unit and to present at least part of the analysis result to a user.
  • the system may further comprise a result check unit, the result check unit being arranged to receive the videodermoscopic analysis result from the analysis unit, to review and modify the videodermoscopic analysis result, and to provide the videodermoscopic analysis result as modified to the presentation unit, to allow the presentation unit to present at least part of the videodermoscopic analysis result as modified to the user.
  • the upload unit is connected to the analysis unit via a communication network.
  • the presentation unit is connected to the analysis unit via a communication network.
  • the system may further comprise a user terminal, the user terminal comprising the upload unit and the presentation unit, the user terminal being connected to the analysis unit via a communication network.
  • a third aspect of the invention provides a method for assessment of hair condition, the method comprising obtaining a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and analysing at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
  • the method further comprises performing an image processing algorithm on a first videodermoscopy image to obtain the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image.
  • the method further comprises obtaining a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and determining a common skin area from the first follicular map and the second follicular map.
  • the method further comprises performing an image processing algorithm on a second videodermoscopy image to obtain the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image.
  • the method comprises uploading one or more videodermoscopy images to an analysis unit via a communication network, for letting the analysis unit perform the method according to any one of the embodiments above, and receiving the videodermoscopic analysis result from the analysis unit via the communication network.
  • the method further comprises receiving one or more videodermoscopy images by an upload unit, uploading the one or more videodermoscopy images from the upload unit to an analysis unit via a communication network, for letting the analysis unit perform the method according to an embodiment, and presenting at least part of the videodermoscopic analysis result to a user.
  • a fourth aspect of the invention provides a computer program product comprising a computer program comprising instructions arranged to, when executed by a computer, execute at least part of the method of any one of the embodiments.
  • FIG. 1 shows an analysis unit for assessment of hair condition according to an embodiment
  • FIG. 2 shows an analysis unit for assessment of hair condition according to a further embodiment
  • FIG. 3a and FIG. 3b schematically show distributions measured in follicular maps of a first and a second person, respectively
  • FIG. 4 shows an analysis unit for assessment of hair condition according to another embodiment
  • FIG. 5 schematically shows a first and second follicular map and the common skin area
  • FIG. 6 shows an analysis unit for assessment of hair condition according to again another embodiment
  • FIG. 7 shows an analysis unit for assessment of hair condition according to again another embodiment
  • FIG. 8 schematically shows a system SYS for assessment of hair condition
  • FIG. 9 - FIG. 11 schematically show methods for assessment of hair condition according to embodiments.
  • FIG. 12 shows a computer readable medium comprising a computer program product.
  • FIG. 1 shows an analysis unit ANA for assessment of hair condition according to an embodiment.
  • the analysis unit ANA comprises a map processor MPP.
  • the map processor is arranged to at least obtain a first follicular map FM 1 representing a first plurality of hair root positions in a first videodermoscopy image.
  • the map processor is further arranged to analyse at least the first follicular map FM 1 to determine an analysis result ANR 1 suitable for assessment of hair condition.
  • the map processor MPP may be arranged to obtain the first follicular map FM 1 from a storage.
  • the map processor MPP may be arranged to obtain the first follicular map FM 1 from receiving the first follicular map FM 1 over a communication channel, such as from a communication network.
  • the map processor may obtain the first follicular map from a storage, such as from a patient database wherein the first follicular map is stored.
  • the map processor may alternatively obtain the first follicular map from an image processor that is arranged to generate the first follicular map from a first videodermoscopy image.
  • Examination of hair condition may relate to diagnosis of hair disorders. Examination of hair condition may additionally or alternatively relate to identification and/or measurement of an advancement of hair disorder, measurement of a result of a treatment of a hair disorder, measurement of an effect and/or effectiveness of a medical treatment, or measurement of an effect and/or effectiveness of a cosmetic treatment.
  • the analysis result may, e.g., comprise an average hair root density, an average distance between hair roots, statistical parameters representing a statistics of distances between hair roots, or another parameter derivable from hair root positions.
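For instance, two of the analysis results mentioned above, the average hair root density and the average distance between hair roots, follow directly from a follicular map given as a list of (x, y) positions. The sketch below assumes a rectangular field of view of known size in millimetres (an assumption for illustration; the patent does not fix the field geometry):

```python
import math

def density_per_cm2(follicular_map, field_w_mm, field_h_mm):
    """Average hair root density (roots per square centimetre) for a
    follicular map imaged over a field_w_mm x field_h_mm field of view."""
    area_cm2 = (field_w_mm / 10.0) * (field_h_mm / 10.0)
    return len(follicular_map) / area_cm2

def mean_root_distance(follicular_map):
    """Average pairwise distance between hair root positions."""
    pts, total, count = follicular_map, 0.0, 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            total += math.hypot(pts[i][0] - pts[j][0],
                                pts[i][1] - pts[j][1])
            count += 1
    return total / count
```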
  • FIG. 2 shows a further embodiment of an analysis unit ANA′ for assessment of hair condition.
  • the analysis unit ANA′ comprises an image processor IMP and a map processor MPP.
  • the image processor IMP is arranged to perform an image processing algorithm on a first videodermoscopy image IM 1 to generate a first follicular map FM 1 representing a first plurality of hair root positions in the first videodermoscopy image IM 1 .
  • the map processor MPP is arranged to obtain the first follicular map from the image processor IMP.
  • the map processor is further arranged to analyse at least the first follicular map FM 1 to determine an analysis result ANR 1 suitable for assessment of hair condition.
  • the map processor may be connected to the image processor and arranged to obtain the first follicular map directly from the image processor.
  • the map processor may be connected to the image processor via one or more intermediate devices or channels and the map processor is arranged to obtain the first follicular map from the image processor via the one or more intermediate devices or channels.
  • the map processor is connected to a storage unit, the image processor is connected to the storage unit, the image processor is arranged to store the first follicular map in the storage unit, and the map processor is arranged to obtain the first follicular map from the image processor by retrieving it from the storage unit. The retrieval from the storage unit may occur substantially immediately after the first follicular map was stored in the storage unit by the image processor.
  • the retrieval from the storage unit may alternatively occur at a much later moment in time than when the first follicular map was stored in the storage unit by the image processor, to allow a later analysis of the first follicular map, for example when a second follicular map has become available after a period of time, so that changes in the follicular map over time can be compared to support the examination of symptoms of hair disease.
  • the image processing algorithm performed on the first videodermoscopy image IM 1 to generate the first follicular map FM 1 representing the first plurality of hair root positions in the first videodermoscopy image IM 1 may comprise any combination of suitable pattern recognition algorithms and qualification algorithms, such as binarization, adaptive thresholding, noise detection, blob detection, blob recombination, line tracking, hair crossing recombination, end detection, watershed division, and tip-follicle qualification.
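By way of illustration, two of the listed steps, binarization by thresholding and blob detection, might be sketched as follows on a grayscale image given as a list of pixel rows. This is a simplified stand-in, not the algorithm the patent specifies, which it leaves open:

```python
def binarize(img, threshold):
    """Dark pixels (candidate hair) -> 1, lighter background -> 0."""
    return [[1 if px < threshold else 0 for px in row] for row in img]

def blobs(mask):
    """4-connected blob detection on a binary mask; returns one set of
    (x, y) pixels per blob (candidate hair shafts / follicle openings)."""
    h, w = len(mask), len(mask[0])
    seen, found = set(), []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or (x, y) in seen:
                continue
            stack, blob = [(x, y)], set()
            while stack:
                cx, cy = stack.pop()
                if (cx, cy) in seen:
                    continue
                if not (0 <= cx < w and 0 <= cy < h) or not mask[cy][cx]:
                    continue
                seen.add((cx, cy))
                blob.add((cx, cy))
                stack += [(cx + 1, cy), (cx - 1, cy),
                          (cx, cy + 1), (cx, cy - 1)]
            found.append(blob)
    return found
```

The later pipeline stages named above (blob recombination, line tracking, tip-follicle qualification, and so on) would then decide which blob endpoints correspond to follicle positions.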
  • the image processing algorithm may be supplemented by a manual correction by operators, for, e.g., removal of mistakes, addition of non-detected hair, removal of falsely detected hair, addition or removal of hair follicles.
  • the spatial coordinates of the hair follicles identified in the field of view of the first videodermoscopy image may be referred to as the first follicular map FM 1 .
  • the first follicular map FM 1 thus represents the first plurality of hair root positions in the first videodermoscopy image IM 1 .
  • the first follicular map FM 1 may be stored and/or presented as a list of spatial coordinates, such as (x, y) coordinates in the first videodermoscopy image IM 1 , as a graphical representation, or in any other suitable form.
  • the first follicular map FM 1 may, e.g., be presented on screen together with the first videodermoscopy image, such as side-by-side with the first videodermoscopy image or as an overlay on the first videodermoscopy image.
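Since a follicular map is essentially a list of coordinates, it can be stored in any serializable form; the JSON layout below is a hypothetical example, not a format prescribed by the patent:

```python
import json

# Hypothetical follicular map: (x, y) hair root positions in image pixels.
follicular_map = [(12.5, 40.1), (15.0, 42.7), (20.2, 39.9)]

record = {"image": "IM1", "roots": follicular_map}
text = json.dumps(record)  # store alongside the first videodermoscopy image
restored = [tuple(p) for p in json.loads(text)["roots"]]
```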
  • FIG. 2 thus shows an embodiment of an analysis unit ANA′ for assessment of hair condition, the analysis unit comprising an image processor IMP arranged to at least perform an image processing algorithm on a first videodermoscopy image IM 1 to obtain a first follicular map FM 1 representing a first plurality of hair root positions in the first videodermoscopy image IM 1 , and a map processor MPP arranged to analyse at least the first follicular map FM 1 to determine an analysis result ANR 1 suitable for assessment of hair condition.
  • an image processor IMP arranged to at least perform an image processing algorithm on a first videodermoscopy image IM 1 to obtain a first follicular map FM 1 representing a first plurality of hair root positions in the first videodermoscopy image IM 1
  • a map processor MPP arranged to analyse at least the first follicular map FM 1 to determine an analysis result ANR 1 suitable for assessment of hair condition.
  • the analysis unit ANA′ may comprise a map modification unit MOD.
  • the image processor IMP is arranged to, as part of obtaining the first follicular map FM 1 , cooperate with the map modification unit MOD.
  • the map modification unit MOD is arranged to present the first follicular map FM 1 as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and allow the human assistant to review the first follicular map FM 1 and to modify the first follicular map FM 1 such as to, at least, add and/or remove one or more hair root positions from the first follicular map FM 1 .
  • the first follicular map as reviewed and modified is thereafter used for analyzing at least the first follicular map to determine the analysis result suitable for assessment of hair condition. Using review by human assistants may improve the quality of the follicular map significantly.
  • the map modification unit MOD is further arranged to present the first follicular map FM 1 as obtained from the performing of the image processing algorithm on the first videodermoscopy image IM 1 to a plurality of human assistants, and to allow each of the human assistants to review the first follicular map and to propose modifications to the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
  • the map modification unit MOD may be arranged to compare the proposals from the plurality of human assistants for removing one or more hair root positions from the first follicular map FM 1 , and to decide from the comparison which hair root position of the proposed one or more hair root positions from the first follicular map to delete.
  • the map modification unit MOD may be arranged to use majority voting in deciding which of the proposed one or more hair root positions is to be deleted. Using majority voting over a plurality of reviews by human assistants may improve the quality of the follicular map even further.
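The majority-voting decision described above can be sketched as follows, assuming each assistant's proposal is the set of (x, y) positions that assistant wants removed. The strict-majority rule is an assumption for this sketch, since the description does not fix the exact voting rule:

```python
from collections import Counter

def majority_vote_removals(proposals):
    """Decide which hair root positions to delete from a follicular map.

    `proposals` holds, per human assistant, the set of (x, y) positions
    that the assistant proposed to remove.  A position is deleted only
    if a strict majority of assistants proposed it.  (Illustrative
    sketch; the patent only states that majority voting may be used.)
    """
    votes = Counter(pos for proposal in proposals for pos in proposal)
    threshold = len(proposals) / 2
    return {pos for pos, n in votes.items() if n > threshold}

proposals = [
    {(10, 20), (30, 40)},   # assistant 1
    {(10, 20)},             # assistant 2
    {(50, 60)},             # assistant 3
]
removed = majority_vote_removals(proposals)   # only (10, 20) has a majority
```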
  • the map processor MPP may be arranged to, as part of analysing at least the first follicular map to determine the analysis result, perform a statistical analysis of hair root distances between hair roots positions of the first plurality of hair root positions.
  • the map processor MPP may be arranged to, as part of performing the statistical analysis of hair root distances between hair root positions, determine a hair root distance distribution, and determine at least a first and a second relative contribution to the hair root distance distribution of at least a first and a second distribution component function.
  • the first and second relative contributions may be obtained from a two-component fit to the distribution, with the first contribution reflecting the dominant component for a specific hair disorder and the second contribution reflecting the dominant component for healthy hair.
  • more contributions may be used, reflecting the respective dominant components for other specific hair disorders.
  • the relative contribution of the first distribution component function may be an indication of a degree of a hair disorder of a first type.
  • if the relative contribution lies within a first indicator range, such as larger than 35%, this may be an indication of androgenetic alopecia (AGA).
  • FIG. 3 a shows the distribution of distances to all other hair follicles in the follicular map of a first, healthy, person.
  • FIG. 3 b shows the distribution of distances to all other hair follicles in the follicular map of a second person who has AGA in an advanced stage.
  • the resulting distribution indicated as points P(r) is fitted in the low distance range (in this example, distances r < 600 μm) by a model consisting of a sum of two components: a first component labeled AGA that is a linear distribution, and a second component labeled non-AGA that is peaked at low values and has a varying width.
  • the first component represents a distribution that is characteristic for AGA.
  • the second component represents a distribution that is characteristic for healthy individuals.
  • the relative contribution of the first component provides a measure to assess AGA advancement.
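The two-component decomposition can be sketched as below. For simplicity the peak shape of the non-AGA component is held fixed so that the two relative contributions can be found by linear least squares; the description mentions a varying peak width, which would require a nonlinear fit. The synthetic data and all names are assumptions:

```python
import numpy as np

# Sketch of the two-component decomposition of the hair root distance
# distribution P(r): a linear component (characteristic for AGA) plus a
# component peaked at low distances (characteristic for healthy hair).
# Peak position and width are fixed here for simplicity; the patent
# mentions a varying width, which would require a nonlinear fit.

r = np.linspace(0, 600, 61)                           # distances in micrometres
linear = r / r.max()                                  # AGA component shape
peak = np.exp(-((r - 150.0) ** 2) / (2 * 60.0 ** 2))  # non-AGA component shape

# Synthetic "measured" distribution: 40% AGA-like, 60% non-AGA-like.
P = 0.4 * linear + 0.6 * peak

# Linear least-squares fit of the two relative contributions.
A = np.column_stack([linear, peak])
(c_aga, c_healthy), *_ = np.linalg.lstsq(A, P, rcond=None)
frac_aga = c_aga / (c_aga + c_healthy)

# A first indicator range, e.g. frac_aga > 0.35, may then flag AGA.
```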
  • FIG. 4 shows an analysis unit ANA′′ for assessment of hair condition according to another embodiment.
  • the analysis unit ANA′′ comprises a map processor MPP′.
  • the map processor MPP′ is arranged to at least obtain a first follicular map FM 1 representing a first plurality of hair root positions in a first videodermoscopy image and a second follicular map FM 2 representing a second plurality of hair root positions in a second videodermoscopy image.
  • the map processor MPP′ is further arranged to analyse at least the first follicular map FM 1 and the second follicular map FM 2 to determine an analysis result ANR 2 suitable for assessment of hair condition.
  • the map processor MPP′ may, similarly as described with respect to the map processors shown in FIG. 1 and FIG. 2 , obtain the first and second follicular maps FM 1 , FM 2 by retrieving them from a storage, receiving them from a communication network, or receiving them from the image processor IMP′.
  • the image processor IMP′ may be further arranged to perform an image processing algorithm on a second videodermoscopy image to generate the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image, and the map processor MPP′ may be arranged to obtain the second follicular map from the image processor.
  • the analysis unit ANA′′ may comprise a map modification unit MOD′.
  • the image processor IMP′ is arranged to, as part of obtaining the first follicular map FM 1 as well as part of obtaining the second follicular map FM 2 , cooperate with the map modification unit MOD′.
  • the map modification unit MOD′ is arranged to present the first and second follicular maps FM 1 , FM 2 as obtained from the performing of the image processing algorithm on the first and second videodermoscopy image to a human assistant, and allow the human assistant to review the first and second follicular map FM 1 , FM 2 and to modify the first and second follicular map FM 1 , FM 2 such as to, at least, add and/or remove one or more hair root positions from the first and/or second follicular map FM 1 , FM 2 .
  • the first and second follicular map as reviewed and modified is thereafter used for analyzing at least the first and second follicular map to determine the analysis result suitable for assessment of hair condition.
  • the map processor MPP′ is further arranged to determine a common skin area from the first follicular map FM 1 and the second follicular map FM 2 .
  • FIG. 5 schematically shows a first and second follicular map FM 1 , FM 2 and the common skin area.
  • the common skin area thus corresponds to a part OV 1 of the first follicular map FM 1 and to a part OV 2 of the second follicular map FM 2 , where both parts represent the same skin area.
  • the map processor MPP′ may be arranged to, as part of determining the common skin area from the first follicular map FM 1 and the second follicular map FM 2 , determine a transformation function TF 12 which relates hair root positions in the first follicular map FM 1 to hair root positions of the same hair in the second follicular map FM 2 .
  • the common skin area OV 1 of the first follicular map FM 1 may thus be related to the common skin area OV 2 of the second follicular map FM 2 by transformation function TF 12 as schematically illustrated in FIG. 5 .
  • the first follicular map FM 1 relates to a first videodermoscopy image registered at a first moment in time, such as before a treatment
  • the second follicular map FM 2 relates to a second videodermoscopy image registered at a second later moment in time, such as after the treatment
  • analyzing differences between the common skin area of the first follicular map FM 1 and the common skin area of the second follicular map FM 2 and/or analyzing differences between the common skin area of the first videodermoscopy image IM 1 and the common skin area of the second videodermoscopy image IM 2 may allow a largely improved precision compared to known techniques.
  • using the follicular maps to identify the common skin area, i.e., corresponding skin areas in both follicular maps, largely improves the precision of the analysis.
  • prior art techniques relied strongly on exactly the same positioning of the camera on the skin and the same field of view
  • the use of the matching follicular maps makes the analysis largely independent of size, shape and distortion of the area used for the videodermoscopic analysis.
  • the size, shape and distortion of the area of the skin registered on a videodermoscopy image may vary significantly when two images are registered at different moments in time and/or at different locations. For example, if the skin is stretched or displaced by pressing the videodermoscopic lens, the actual measurement area may differ by up to 30%, which results in an inaccurate analysis with known techniques.
  • identifying which of the hair root positions in the first follicular map FM 1 (and hence which hair in the first videodermoscopy image IM 1 ) corresponds to which of the hair root positions in the second follicular map FM 2 (and hence which hair in the second videodermoscopy image IM 2 ) allows an accurate determination of which hair has appeared and which hair has disappeared, based on tracking individual hairs rather than mere statistics over the overlap area.
  • for example, rather than merely determining that the number of hairs has increased from 100 to 105 for a specific subject after a certain period of time, it may be determined that 5 hairs were lost and 10 new hairs appeared.
  • Such knowledge may be of relevance when assessing certain kinds of hair disorder. For determination of therapeutic effects of new substances in clinical trials, this technique and the corresponding precision improvement may allow a reduction of the number of test patient samples necessary to obtain a conclusive result.
  • the map processor MPP′ may comprise a matching unit MAT.
  • the matching unit MAT may be arranged to at least relate hair root positions in the second follicular map to hair root positions of the first follicular map in at least the common skin area to determine a plurality of related hair root positions.
  • Each related hair root position of a hair root in the second follicular map may thus be related to a hair root position in the first follicular map of the same hair root. This may be performed as part of determining the common skin area, or after the common skin area has been determined.
  • the method relates hair root positions of the second plurality of hair root positions in the second videodermoscopy image to hair root positions of the first plurality of hair root positions in the first videodermoscopy image in at least the common skin area to determine related hair root positions.
  • the map processor MPP′ may be arranged to, in determining the common skin area from at least analyzing the first plurality of positions of hair roots and the second plurality of positions of hair roots, find corresponding positions of hair roots by minimizing their relative distance in one or more iterations.
  • the matching unit MAT is arranged to, as part of relating hair root positions and/or while determining a common skin area from the first follicular map FM 1 and the second follicular map FM 2 , initialize a transformation function TF 12 and iteratively adapt the transformation function TF 12 .
  • the iterative adaptation may comprise transforming hair root positions of one follicular map with the transformation function TF 12 , relating the transformed hair root positions to hair root positions of the other follicular map, obtaining a correspondence metric for the related hair root positions, and adapting the transformation function TF 12 to improve the correspondence metric.
  • the matching unit MAT is arranged to, as part of iteratively adapting the transformation function, further use at least one parameter of hair associated with the transformed hair root positions and hair associated with the related hair root positions to obtain the correspondence metric, the at least one parameter comprising at least one parameter from a group consisting of hair shaft diameter, hair length, hair growth, and hair color.
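The iterative adaptation of the transformation function resembles an iterative-closest-point scheme, which might be sketched as follows under strong simplifying assumptions (translation-only transformation, mean nearest-neighbour offset as correspondence metric; this is not the patent's exact algorithm):

```python
import numpy as np

def match_maps(fm1, fm2, iterations=20):
    """Iteratively adapt a transformation TF12 mapping hair root
    positions of the first follicular map onto the second, in the
    spirit of an iterative-closest-point scheme.

    Minimal sketch under simplifying assumptions: the transformation is
    a pure 2-D translation and the correspondence metric is the mean
    offset between each transformed FM1 position and its nearest FM2
    position.  Hair parameters such as shaft diameter, length or colour
    could additionally be folded into the metric.
    """
    fm1 = np.asarray(fm1, dtype=float)
    fm2 = np.asarray(fm2, dtype=float)
    t = np.zeros(2)                      # initial transformation: no shift
    for _ in range(iterations):
        moved = fm1 + t                  # transform FM1 positions
        # distance matrix between transformed FM1 and all FM2 positions
        d = np.linalg.norm(moved[:, None, :] - fm2[None, :, :], axis=2)
        nearest = fm2[d.argmin(axis=1)]  # relate each position to its nearest
        t += (nearest - moved).mean(axis=0)   # adapt TF12
    return t

# Two synthetic maps of the same 25 hair roots, shifted between images.
fm1_pts = [(x, y) for x in range(0, 100, 20) for y in range(0, 100, 20)]
fm2_pts = [(x + 5, y - 3) for (x, y) in fm1_pts]
t12 = match_maps(fm1_pts, fm2_pts)       # recovers the (5, -3) shift
```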
  • the matching unit MAT is arranged to, as part of initializing the transformation function TF 12 , detect positions of a first plurality of reference symbols REF 1 on the skin in the first videodermoscopy image IM 1 , detect positions of a second plurality of reference symbols REF 2 on the skin in the second videodermoscopy image IM 2 , and initialize the transformation function TF 12 to reflect a transformation from the positions of the first plurality of reference symbols REF 1 to the positions of the second plurality of reference symbols REF 2 .
  • the first and second plurality of reference symbols REF 1 , REF 2 may be a plurality of micro-tattoos on the skin, for example 2, 3, 4, 6, 9, 16 or any suitable number of micro-tattoos.
  • the micro-tattoos may, as in known methods, be used to roughly position the videodermoscope at roughly corresponding positions on the skin to register suitable videodermoscopy images at subsequent moments in time.
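Initialising the transformation function from the reference symbol positions might, for example, be done by a least-squares affine fit, sketched below under the assumption of at least three non-collinear micro-tattoos (all names are illustrative):

```python
import numpy as np

def init_transform(ref1, ref2):
    """Initialise TF12 from corresponding reference symbol positions.

    Minimal sketch assuming an affine model and at least three
    non-collinear reference symbols (e.g. micro-tattoos); the 2x3
    matrix M satisfying [x2, y2] = M @ [x1, y1, 1] is found by linear
    least squares.
    """
    ref1 = np.asarray(ref1, dtype=float)
    ref2 = np.asarray(ref2, dtype=float)
    A = np.hstack([ref1, np.ones((len(ref1), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, ref2, rcond=None)
    return M.T                                       # shape (2, 3)

ref1 = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]    # micro-tattoos in IM1
ref2 = [(10.0, 5.0), (110.0, 5.0), (10.0, 105.0)]  # same tattoos in IM2
M = init_transform(ref1, ref2)   # here: a pure translation by (10, 5)
```

The resulting matrix can then serve as the starting point for the iterative adaptation described above.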
  • the map processor MPP′ may be further arranged to at least analyse differences between at least the common skin area OV 1 in the first follicular map FM 1 and the common skin area OV 2 in the second follicular map FM 2 to determine the analysis result suitable for assessment of hair condition.
  • the map processor MPP′ may thus analyze differences between hair root positions, number of hair roots and hair root density.
  • the first follicular map FM 1 may, e.g., be associated with a first videodermoscopy image IM 1 registered before the start of a treatment
  • the second follicular map FM 2 may be associated with a second videodermoscopy image IM 2 registered after a certain duration of the treatment.
  • Analysing the differences may then give an analysis result suitable for supporting the examination of hair condition, in particular whether symptoms have changed as a result of the treatment.
  • the map processor MPP′ may be arranged to, in determining the analysis result, identify an appearing of new hair roots in the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • the map processor MPP′ may provide the appearing of new hair roots as an indication of new growth as part of the analysis result.
  • the map processor MPP′ may be arranged to, in determining the analysis result, identify a disappearing of hair roots from the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • the map processor MPP′ may provide the disappearing of hair roots as an indication of hair loss as part of the analysis result.
  • the map processor MPP′ may be arranged to, in determining the analysis result, determine a difference in total number of hair roots in the common skin area OV 2 in the second follicular map FM 2 compared to the common skin area OV 1 in the first follicular map FM 1 .
  • the map processor MPP′ may be arranged to, in determining the analysis result, determine a difference in hair density in the common skin area OV 2 in the second follicular map FM 2 compared to the common skin area OV 1 in the first follicular map FM 1 .
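The identification of appeared and disappeared hair roots and the density difference can be sketched as follows, assuming the related hair root positions have already been determined by the matching unit (function names and the area argument are assumptions for this example):

```python
def hair_change(map1_positions, map2_positions, matched_pairs, area_mm2):
    """Compare the common skin area of two follicular maps.

    Illustrative sketch: `matched_pairs` lists (pos1, pos2) for each
    hair root found in both maps, so positions present in only one map
    count as disappeared hair (hair loss) or newly appeared hair
    (new growth).
    """
    matched1 = {p1 for p1, _ in matched_pairs}
    matched2 = {p2 for _, p2 in matched_pairs}
    disappeared = set(map1_positions) - matched1          # hair loss
    appeared = set(map2_positions) - matched2             # new growth
    density_change = (len(map2_positions) - len(map1_positions)) / area_mm2
    return appeared, disappeared, density_change

map1 = [(0, 0), (10, 10), (20, 20)]             # 3 hair roots before
map2 = [(1, 1), (11, 11), (30, 30), (40, 40)]   # 4 hair roots after
pairs = [((0, 0), (1, 1)), ((10, 10), (11, 11))]
new, lost, dd = hair_change(map1, map2, pairs, area_mm2=2.0)
# 2 new hairs, 1 lost hair, density up by 0.5 hairs per square mm
```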
  • the map processor MPP′ is arranged to at least analyse differences between at least the common skin area in the first videodermoscopy image IM 1 and the common skin area in the second videodermoscopy image IM 2 to determine the analysis result suitable for assessment of hair condition.
  • the map processor MPP′ may thus analyze differences between hair in the common skin area of the first videodermoscopy image IM 1 and hair in the common skin area in the second videodermoscopy image IM 2 . Individual hairs may be compared since, for each hair in the first videodermoscopy image IM 1 , the related hair in the second videodermoscopy image IM 2 can be identified via the related hair root positions.
  • the first videodermoscopy image IM 1 may have been registered before the start of a treatment, and the second videodermoscopy IM 2 image may have been registered after a certain duration of the treatment. Analysing the differences may then give an analysis result suitable for supporting the examination of hair condition, in particular whether symptoms have changed as a result of the treatment.
  • the map processor MPP′ may hereto be arranged to, in analysing differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image, determine differences in at least one parameter of a group of parameters consisting of average hair diameter, hair diameter distribution, average hair length, hair length distribution, hair colors, hair color distribution, and/or at least one hair density.
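Because individual hairs are related between the two images, such parameter differences can be computed per hair rather than only in aggregate. A minimal sketch with invented diameter values (the data and names are assumptions, not measurements from the source):

```python
import statistics

# Sketch of comparing hair parameters in the common skin area of two
# videodermoscopy images.  The diameters below (in micrometres) are
# invented example data; in practice each value would be measured for
# a matched hair in IM1 and IM2 respectively.
diam_im1 = [62, 58, 71, 65, 60]   # matched hairs in the first image
diam_im2 = [64, 61, 73, 66, 63]   # the same hairs in the second image

avg_change = statistics.mean(d2 - d1 for d1, d2 in zip(diam_im1, diam_im2))
# Per-hair comparison is possible because each hair in IM1 is related
# to the same hair in IM2 via the matched follicular maps.
```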
  • FIG. 6 shows another analysis unit ANA′′′ for assessment of hair condition according to another embodiment.
  • the analysis unit ANA′′′ shown in FIG. 6 differs from the analysis unit ANA′′ shown in FIG. 4 in that the analysis unit ANA′′′ further comprises a storage interface unit SIF arranged to cooperate with a storage unit STOR.
  • Storage unit STOR is shown to be external to the analysis unit ANA′′′, but may in alternative embodiments be an integral part of analysis unit ANA′′′.
  • Storage unit STOR may be a cloud device, and may as such be connected to the analysis unit ANA′′′ permanently or only when the analysis unit ANA′′′ connects to the storage unit STOR.
  • the storage interface unit SIF is arranged to store the follicular maps FM 1 , FM 2 , and optionally the videodermoscopy images IM 1 , IM 2 , in the storage unit STOR after the follicular maps FM 1 , FM 2 have been obtained from the image processor IMP′ or the map modification unit MOD.
  • the storage interface unit SIF is further arranged to retrieve the follicular maps FM 1 , FM 2 , and optionally the videodermoscopy images IM 1 , IM 2 , from the storage unit STOR for analysis.
  • FIG. 7 shows another analysis unit ANA′′′′ for assessment of hair condition according to another embodiment.
  • the analysis unit ANA′′′′ shown in FIG. 7 differs from the analysis unit ANA′′′ shown in FIG. 6 in that the analysis unit ANA′′′′ does not comprise the image processor IMP′ and the map modification unit MOD′.
  • the image processor IMP′ and the map modification unit MOD′ are instead provided as part of a separate unit shown as image provider IMPRO.
  • the image provider IMPRO further comprises a first storage interface unit SIF 1 arranged to cooperate with a storage unit STOR.
  • Storage unit STOR may be a cloud device, and may as such be connected to the image provider IMPRO and the analysis unit ANA′′′′ permanently or only when the image provider IMPRO or the analysis unit ANA′′′′ connects to the storage unit STOR.
  • the first storage interface unit SIF 1 is arranged to store the follicular maps FM 1 , FM 2 , and optionally the videodermoscopy images IM 1 , IM 2 , in the storage unit STOR after the follicular maps FM 1 , FM 2 have been obtained from the image processor IMP′.
  • the analysis unit ANA′′′′ comprises a second storage interface unit SIF 2 arranged to cooperate with the storage unit STOR.
  • the second storage interface unit SIF 2 is arranged to retrieve the follicular maps FM 1 , FM 2 , and optionally the videodermoscopy images IM 1 , IM 2 , from the storage unit STOR for analysis.
  • FIG. 8 schematically shows a system SYS for assessment of hair condition.
  • the system SYS comprises an upload unit UPL, an analysis unit ANU, and a presentation unit PRES.
  • the upload unit UPL is arranged to receive one or more videodermoscopy images, e.g. from a dermatologist or an assistant thereof who registered the videodermoscopy images on a patient's scalp, e.g. by feeding them into a scanner, by retrieving them from a storage, for example at the dermatologist's clinic, or by retrieving them from a communication network.
  • the one or more videodermoscopy images comprise at least the first videodermoscopy image.
  • the upload unit UPL is further arranged to upload the one or more videodermoscopy images to the analysis unit, for example via a communication network, or via e-mail, or as a hardcopy via surface mail or a delivery service.
  • the analysis unit ANU is arranged to receive the one or more videodermoscopy images from the upload unit UPL.
  • the analysis unit ANU is arranged to obtain a videodermoscopic analysis result from the one or more videodermoscopy images.
  • The videodermoscopic analysis result comprises at least one of the analysis result suitable for assessment of hair condition as obtained by one of the analysis units described with reference to FIG. 1 - FIG. 7 , and an examination result derived from the analysis result.
  • the presentation unit PRES is arranged to receive the videodermoscopic analysis result from the analysis unit ANU and to present at least part of the analysis result to a user.
  • the presentation unit PRES may be arranged to present at least part of the analysis result to a user on, for example, a display, on paper, in a computer-readable data format, in a human-readable form or on a data storage medium, in a qualitative or quantitative manner, as a graphical or textual, such as table or phrases, representation.
  • the user may, e.g., be a patient, a general practitioner, a dermatology nurse, a dermatologist, or a scientist.
  • FIG. 8 shows that the system SYS may further comprise a result check unit RCHK.
  • the result check unit RCHK is arranged to receive the videodermoscopic analysis result from the analysis unit ANU.
  • the result check unit RCHK is arranged to review the videodermoscopic analysis result and to modify the videodermoscopic analysis result.
  • the videodermoscopic analysis result may be changed, supplemented, summarized or reformatted.
  • the videodermoscopic analysis result may e.g. be supplemented with a diagnosis of a hair disorder, a treatment proposal or a treatment change.
  • an expert system or a human expert dermatologist may draw a diagnosis, such as the presence or absence of AGA.
  • the result check unit RCHK is arranged to provide the videodermoscopic analysis result as modified to the presentation unit to allow the presentation unit to present at least part of the videodermoscopic analysis result as modified to the user.
  • the result check unit RCHK may be connected to the analysis unit ANU via a communication network such as the internet, whereby the result check and the image processing may take place at different geographical locations.
  • the result check unit RCHK may alternatively be directly connected to the analysis unit ANU and part of a single unit, e.g., the analysis unit ANU and the result check unit RCHK may be implemented in a personal computer of a dermatologist.
  • the upload unit UPL may be connected to the analysis unit ANU via a communication network COMM.
  • the communication network COMM may be a virtual private network.
  • the communication network COMM may be the Internet.
  • videodermoscopic images registered at various places, e.g. at various dermatologic clinics may be sent via the internet to the analysis unit ANU at a centralized location, at which centralized location a consistent and quality-controlled processing may be performed to obtain the follicular maps and the analysis results.
  • the processing at the centralized location simplifies the use of a pool of well-trained staff to do the review and modification of the follicular maps described above with reference to the map modification unit MOD.
  • the presentation unit PRES may be connected to the analysis unit ANU via a communication network.
  • the communication network may be the same virtual private network, another virtual private network, or, for example, the Internet.
  • the analysis unit ANU may thus be at a central location, and the presentation unit PRES may be at a general practitioner, a nurse, a patient, or elsewhere.
  • the system SYS comprises a user terminal TERM.
  • the user terminal TERM comprises the upload unit UPL and the presentation unit PRES.
  • the user terminal TERM is connected to the analysis unit ANU via a communication network COMM.
  • the user terminal TERM may, e.g., be a computer at a dermatologist's clinic that can connect via the internet, e.g. using a virtual private network, to the analysis unit ANU at a centralized location.
  • FIG. 9 schematically shows a method M_ANA for assessment of hair condition according to an embodiment.
  • the method M_ANA comprises obtaining OBT a first follicular map FM 1 representing a first plurality of hair root positions in a first videodermoscopy image.
  • the method M_ANA comprises analysing M_MPP at least the first follicular map to determine an analysis result ANR 1 suitable for assessment of hair condition. Reference is further made to FIG. 1 .
  • FIG. 10 schematically shows a method M_ANA′ for assessment of hair condition according to a further embodiment.
  • the method M_ANA′ comprises performing M_IMP an image processing algorithm on a first videodermoscopy image IM 1 to obtain a first follicular map FM 1 representing a first plurality of hair root positions in the first videodermoscopy image.
  • the method M_ANA′ further comprises analysing M_MPP at least the first follicular map to determine an analysis result ANR 1 suitable for assessment of hair condition.
  • the method M_ANA′ may further comprise a present-and-modify option M_MOD comprising presenting the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and allowing the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
  • FIG. 11 schematically shows a method M_ANA′′ for assessment of hair condition according to yet a further embodiment.
  • the method M_ANA′′ comprises performing M_IMP′ an image processing algorithm on a first videodermoscopy image IM 1 to obtain a first follicular map FM 1 representing a first plurality of hair root positions in the first videodermoscopy image, and performing an image processing algorithm on a second videodermoscopy image to obtain a second follicular map FM 2 representing a second plurality of hair root positions in the second videodermoscopy image.
  • the method M_ANA′′ may further comprise a present-and-modify option M_MOD′ comprising presenting the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, allowing the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map, and presenting the second follicular map as obtained from the performing of the image processing algorithm on the second videodermoscopy image to a human assistant, allowing the human assistant to review the second follicular map and to modify the second follicular map such as to, at least, add and/or remove one or more hair root positions from the second follicular map.
  • the method M_ANA′′ may comprise determining a common skin area from the first follicular map FM 1 and the second follicular map FM 2 .
  • the method may comprise uploading one or more videodermoscopy images to an analysis unit via a communication network, for letting the analysis unit perform the method according to any one of the embodiments above, and receiving the videodermoscopic analysis result from the analysis unit via the communication network.
  • the method may further comprise receiving one or more videodermoscopy images by an upload unit, uploading the one or more videodermoscopy images from the upload unit to an analysis unit via a communication network, for letting the analysis unit perform the method, and presenting at least part of the videodermoscopic analysis result to a user.
  • FIG. 12 shows a computer readable medium CRMED comprising a computer program product CPP, the computer program product CPP comprising instructions for causing a processor apparatus to perform a method according to any one embodiment or a part thereof.
  • the computer program product CPP may be embodied on the computer readable medium CRMED as physical marks or by means of magnetization of the computer readable medium CRMED.
  • any other suitable embodiment is conceivable as well.
  • while the computer readable medium CRMED is shown in FIG. 12 as an optical disc, the computer readable medium CRMED may be any suitable computer readable medium, such as a hard disk, solid state memory, flash memory, etc., and may be non-recordable or recordable.
  • the computer program product CPP may thus comprise a computer program comprising instructions arranged to, when executed by a computer, execute at least part of the method of any one of the embodiments described above.
  • the invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system, or enabling a programmable apparatus to perform functions of a device or system according to the invention.
  • the computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • the computer program may be provided on a data carrier, such as a CD-type optical disc, a DVD-type optical disc, a hard disk, or diskette, stored with data loadable in a memory of a computer system, the data representing the computer program.
  • the data carrier may thus be a tangible data carrier.
  • the data carrier may be a data connection, such as a telephone cable or a network cable.
  • the data carrier may further be a non-tangible data carrier such as a wireless connection.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim.
  • the article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.


Abstract

A system for assessment of hair condition. The system has an analysis unit arranged to at least obtain a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image. The analysis unit is arranged to analyse at least the first follicular map to determine an analysis result suitable for assessment of hair condition. The system may be arranged to perform an image processing algorithm on a first videodermoscopy image to obtain the first follicular map.

Description

    FIELD

    The invention relates to an analysis unit for assessment of hair condition, a system for assessment of hair condition, a method for assessment of hair condition, and a computer program product.

    BACKGROUND ART
  • Hair condition, and in particular hair disorders, has traditionally been assessed by clinical inspection and a number of invasive methods, including a pull-test, a trichogram obtained by extraction of approximately one hundred hairs for microscopic inspection of their roots, and pathomorphology, which uses biopsy after extraction of skin tissue for microscopic inspection.
  • In 2006, Ross, E K, Vincenzi, C I, and Tosti, A. proposed that a dermoscope or videodermoscope, traditionally used for skin lesion observations, may be used for diagnosing hair disorders. Since then their method, usually referred to as trichoscopy, has gained some popularity due to its non-invasiveness. A number of studies have been carried out to provide guidelines for disease diagnosis based on visual, qualitative inspection of the videodermoscopy images by a trained dermatologist. Visual trichoscopy has generally focused on setting the initial diagnosis based on certain characteristic features observed in the dermoscopy images of the scalp, such as broken hair, yellow dots, black dots, tulip hair, arborizing vessels, etc. This qualitative inspection of the videodermoscopy images did not provide tools to, for example, clearly distinguish between the most common conditions, such as distinguishing Androgenetic alopecia (AGA) from diffuse Alopecia areata (AA) and Telogen effluvium (TE), precisely measure the advancement of AGA, or precisely measure therapy efficiency once medication is introduced.
  • The term trichoscopy may further be used to refer to a technique used in the assessment of hair condition, examination of symptoms of hair disorder, diagnosis of hair disorders, and monitoring of hair treatment efficiency. Trichoscopy uses a microscopic camera, a so-called videodermoscope, to register high resolution images of hair and scalp or other skin. Such images may further be referred to as videodermoscopy images. In known methods, the videodermoscopy images are subject to manual or computer-assisted analysis to try to identify all hair shafts and measure hair diameters. A statistical analysis of images registered before and after the treatment allows assessment of the response to treatment in terms of, for example, hair number or hair density, hair thickness and hair volume. In order to try to detect therapeutic effects in the pre- and post-image comparison, multiple micro-tattoo markings are used to help identify the same skin location and field of view, with the aim of positioning the videodermoscope at the same position after the treatment as before. Known methods suffer from various limitations. For example, it may be difficult or even impossible to draw any conclusions if the overall hair density change is statistically insignificant. Also, currently used methods cannot ensure that the pre- and post-images really represent the same skin area. Further, with known methods, the precision of the analysis relies strongly on exactly the same positioning of the camera on the skin and the same field of view.
  • Known trichoscopy techniques used in the assessment of hair condition, examination of symptoms of hair disorder, diagnosis of hair disorders and monitoring hair treatment efficiency thus still suffer from various limitations.
  • SUMMARY
  • A first aspect of the invention provides an analysis unit for assessment of hair condition, the analysis unit comprising a map processor, the map processor being arranged to at least obtain a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and analyse at least the first follicular map to determine an analysis result suitable for assessment of hair condition. Assessment of hair condition may comprise supporting examination of symptoms of hair disorder, supporting examination of symptoms of skin disorder, supporting diagnosis of hair disorder, supporting diagnosis of skin disorder, supporting examination of treatment, supporting examination of a change in hair condition, or supporting examination and/or evaluation of treatment efficiency. The analysis result suitable for assessment of hair condition may relate to, consist of or comprise a parameter known and used in trichoscopy, such as hair density. The analysis result suitable for assessment of hair condition may relate to, consist of or comprise any other analysis result for assessment of hair condition, such as, for example, one of the analysis results described with reference to embodiments below, for example being indicative of AGA. The analysis unit may be arranged to support diagnosis of hair disorder. The analysis unit may additionally or alternatively be arranged to support examination and/or evaluation of treatment efficiency. The analysis unit may be used in trichoscopy to assess hair condition. Using the first follicular map representing the first plurality of hair root positions as an alternative to using a corresponding first videodermoscopy image, or in addition to using the corresponding first videodermoscopy image, may provide an analysis result that is better suited for assessment of hair condition than an analysis result obtained according to known methods.
The analysis result may, for example, comprise a known type of analysis result, such as hair density, that is more accurately determined than with at least some known methods. The analysis result may, additionally or alternatively, comprise a new type of result that is better suited than known types, such as, for example, a result indicative of a degree of AGA. Examples are described below with reference to various embodiments. The analysis unit may be arranged to analyse a plurality of follicular maps, the plurality of follicular maps comprising the first follicular map, to determine an analysis result suitable for assessment of hair condition. The analysis unit may be arranged to analyse at least the first follicular map and a corresponding first videodermoscopy image, to determine an analysis result suitable for assessment of hair condition. The various embodiments described below may be used autonomously or in combination with one or more other embodiments. The embodiments described may overcome, reduce or alleviate various limitations of known trichoscopy techniques. The specific limitation or limitations that are overcome, reduced or alleviated by a specific embodiment may be different for the different embodiments and any combinations thereof.
  • In an embodiment, the analysis unit further comprises an image processor, the image processor being arranged to perform an image processing algorithm on a first videodermoscopy image to generate the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image, and the map processor being arranged to obtain the first follicular map from the image processor.
  • In an embodiment, the image processor is arranged to, as part of obtaining the first follicular map, cooperate with a map modification unit, the map modification unit being arranged to present the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and allow the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
  • In an embodiment, the map processor is arranged to, as part of analysing at least the first follicular map to determine the analysis result, perform a statistical analysis of hair root distances between hair root positions of the first plurality of hair root positions.
  • In an embodiment, the map processor is arranged to, as part of performing the statistical analysis of hair root distances between hair root positions, determine a hair root distance distribution, and determine at least a first and a second relative contribution to the hair root distance distribution of at least a first and a second distribution component function.
  • In an embodiment, the relative contribution of the first distribution component function is an indication for a degree of a hair disorder of a first type. For example, the relative contribution of the first distribution component function is an indication for a degree of AGA.
  • In an embodiment, the map processor is further arranged to at least obtain a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and determine a common skin area from the first follicular map and the second follicular map. The map processor may further be arranged to use the common skin area in analysing at least the first follicular map to determine the analysis result suitable for assessment of hair condition. The map processor may be arranged to use the common skin area in analysing at least the first and the second follicular map to determine the analysis result suitable for assessment of hair condition. For example, comparing the common skin area of a first follicular map obtained from a first videodermoscopy image recorded at a first moment in time with the common skin area of the second follicular map obtained from a second videodermoscopy image recorded at a second, later, moment in time may provide an analysis result that is better suited for assessment of hair condition than known methods, such as for assessment of a change in hair condition between the first moment and the second moment, for example as a result of a treatment. For example, the analysis result may comprise a change in hair density and/or a change in the number of hairs and/or identification of hairs that have appeared or disappeared, which may be more accurately obtained using the common skin area than from a mere comparison of the first and second videodermoscopy images or the corresponding hair densities determined therefrom. Other examples are described below with reference to various embodiments.
  • In an embodiment, the image processor is further arranged to perform an image processing algorithm on a second videodermoscopy image to generate the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image, and the map processor is arranged to obtain the second follicular map from the image processor.
  • In an embodiment, the map processor comprises a matching unit, the matching unit being arranged to at least relate hair root positions in the second follicular map to hair root positions of the first follicular map in the common skin area to determine a plurality of related hair root positions, each related hair root position of a hair root in the second follicular map being related to a hair root position in the first follicular map of the same hair root. A related hair root position of a hair root in the second follicular map may hereby be related to a hair root position in the first follicular map of the same hair root representing presumably the same hair follicle in the second and first videodermoscopy images. The map processor may further be arranged to compare a change in condition of individual hair between the first and second videodermoscopy image to determine the analysis result suitable for assessment of hair condition.
  • In an embodiment, the matching unit is arranged to initialize a transformation function, and to iteratively adapt the transformation function, the iterative adaptation comprising applying the transformation function to the first plurality of hair root positions of the first follicular map to obtain a first plurality of transformed hair root positions, relating the first plurality of transformed hair root positions to the second plurality of hair root positions of the second follicular map, determining relative distances between transformed hair root positions of the first plurality of transformed hair root positions and the related hair root positions of the second plurality of hair root positions to obtain a correspondence metric, and adapting the transformation function to minimize the correspondence metric.
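The iterative adaptation of the transformation function described in the embodiment above closely resembles iterative closest point (ICP) registration. The following is a minimal sketch of one possible implementation in Python, assuming a rigid (rotation plus translation) transformation model and a root-mean-square distance as the correspondence metric; the function name `adapt_transformation` and these modelling choices are illustrative and not taken from the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def adapt_transformation(map1, map2, iterations=20):
    """Iteratively fit a rigid transform mapping map1 onto map2 (ICP-style).

    map1, map2 -- (N, 2) arrays of hair root positions.
    Returns (R, t, rms): a 2x2 rotation R, a translation t, and the final
    correspondence metric rms (root-mean-square matched distance).
    """
    R = np.eye(2)               # initialize the transformation to identity
    t = np.zeros(2)
    tree = cKDTree(map2)        # fast nearest-neighbour lookup in map2
    rms = np.inf
    for _ in range(iterations):
        # 1. apply the current transformation to the first follicular map
        transformed = map1 @ R.T + t
        # 2. relate each transformed root to its nearest root in map2
        dists, idx = tree.query(transformed)
        matched = map2[idx]
        # 3. correspondence metric: RMS of the relative distances
        rms = float(np.sqrt(np.mean(dists ** 2)))
        # 4. re-estimate the rigid transform minimizing the metric
        #    (Kabsch/Procrustes solution via SVD)
        c1, c2 = map1.mean(axis=0), matched.mean(axis=0)
        H = (map1 - c1).T @ (matched - c2)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T   # keep a proper rotation
        t = c2 - R @ c1
    return R, t, rms
```

In practice the adaptation would stop once the correspondence metric falls below a tolerance rather than after a fixed number of iterations.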
  • In an embodiment, the matching unit is arranged to, as part of iteratively adapting the transformation function, further use at least one parameter of hair associated with the transformed hair root positions and hair associated with the related hair root positions to obtain the correspondence metric, the at least one parameter comprising at least one parameter from a group consisting of hair shaft diameter, hair length, hair growth, hair color.
  • In an embodiment, the matching unit is arranged to, as part of initializing the transformation function, detect positions of a first plurality of reference symbols on the skin in the first videodermoscopy image, detect positions of a second plurality of reference symbols on the skin in the second videodermoscopy image, and initialize the transformation function to reflect a transformation from the positions of a first plurality of reference symbols to the positions of a second plurality of reference symbols.
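Once the positions of the reference symbols have been detected in both videodermoscopy images, the initial transformation may, for example, be estimated by a least-squares fit between the two point sets. The sketch below assumes an affine transformation model and at least three detected reference symbols per image, matched in the same order; the function name and parameterization are illustrative.

```python
import numpy as np

def init_transformation(ref1, ref2):
    """Least-squares affine transform mapping reference-symbol positions
    ref1 (image 1) onto ref2 (image 2).

    ref1, ref2 -- (N, 2) arrays of matched reference positions, N >= 3.
    Returns (A, t) such that p2 ~= A @ p1 + t for each matched pair.
    """
    n = ref1.shape[0]
    # design matrix for x' = a11*x + a12*y + tx and y' = a21*x + a22*y + ty
    X = np.hstack([ref1, np.ones((n, 1))])               # shape (N, 3)
    params, *_ = np.linalg.lstsq(X, ref2, rcond=None)    # shape (3, 2)
    A = params[:2].T   # 2x2 linear part of the affine transform
    t = params[2]      # translation part
    return A, t
```

The resulting (A, t) can then serve as the starting point for the iterative adaptation described above.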
  • In an embodiment, the map processor is further arranged to at least analyse differences between at least the common skin area in the first follicular map and the common skin area in the second follicular map to determine the analysis result suitable for assessment of hair condition.
  • In an embodiment, the map processor is further arranged to, in determining the analysis result, identify an appearing of new hair shafts in the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • In an embodiment, the map processor is further arranged to, in determining the analysis result, identify a disappearing of hair from the common skin area in the second follicular map compared to the common skin area in the first follicular map.
  • In an embodiment, the map processor is further arranged to at least analyse differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image to determine the analysis result suitable for assessment of hair condition.
  • In an embodiment, the map processor is further arranged to, in analysing differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image, determine differences in at least one parameter of a group of parameters consisting of average hair diameter, hair diameter distribution, average hair length, hair length distribution, hair colors, hair color distribution, and/or at least one hair density.
  • A second aspect of the invention provides a system for assessment of hair condition, the system comprising an upload unit, an analysis unit according to any one of the preceding claims, and a presentation unit, the upload unit being arranged to receive one or more videodermoscopy images, the one or more videodermoscopy images comprising at least the first videodermoscopy image and to upload the one or more videodermoscopy images to the analysis unit, the analysis unit being arranged to receive the one or more videodermoscopy images from the upload unit and to obtain a videodermoscopic analysis result from the one or more videodermoscopy images, the videodermoscopic analysis result comprising the analysis result suitable for assessment of hair condition and/or an examination result derived from the analysis result, and the presentation unit being arranged to receive the videodermoscopic analysis result from the analysis unit and to present at least part of the analysis result to a user.
  • In an embodiment, the system further comprises a result check unit, the result check unit being arranged to receive the videodermoscopic analysis result from the analysis unit, review the videodermoscopic analysis result and modify the videodermoscopic analysis result, and provide the videodermoscopic analysis result as modified to the presentation unit to allow the presentation unit to present at least part of the videodermoscopic analysis result as modified to the user.
  • In an embodiment, the upload unit is connected to the analysis unit via a communication network.
  • In an embodiment, the presentation unit is connected to the analysis unit via a communication network.
  • In an embodiment, the system further comprises a user terminal, the user terminal comprising the upload unit and the presentation unit, the user terminal being connected to the analysis unit via a communication network.
  • A third aspect of the invention provides a method for assessment of hair condition, the method comprising obtaining a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and analysing at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
  • In an embodiment, the method further comprises performing an image processing algorithm on a first videodermoscopy image to obtain the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image.
  • In an embodiment, the method further comprises obtaining a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and determining a common skin area from the first follicular map and the second follicular map.
  • In an embodiment, the method further comprises performing an image processing algorithm on a second videodermoscopy image to obtain the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image.
  • In an embodiment, the method comprises uploading one or more videodermoscopy images to an analysis unit via a communication network, for letting the analysis unit perform the method according to any one of the embodiments above, and receiving the videodermoscopic analysis result from the analysis unit via the communication network.
  • In an embodiment, the method further comprises receiving one or more videodermoscopy images by an upload unit, uploading the one or more videodermoscopy images from the upload unit to an analysis unit via a communication network, for letting the analysis unit perform the method according to an embodiment, and presenting at least part of the videodermoscopic analysis result to a user.
  • A fourth aspect of the invention provides a computer program product comprising a computer program comprising instructions arranged to, when executed by a computer, execute at least part of the method of any one of the embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings,
  • FIG. 1 shows an analysis unit for assessment of hair condition according to an embodiment,
  • FIG. 2 shows an analysis unit for assessment of hair condition according to a further embodiment,
  • FIG. 3a and FIG. 3b schematically show distributions measured in follicular maps of a first and a second person respectively,
  • FIG. 4 shows an analysis unit for assessment of hair condition according to another embodiment,
  • FIG. 5 schematically shows a first and second follicular map and the common skin area,
  • FIG. 6 shows an analysis unit for assessment of hair condition according to again another embodiment,
  • FIG. 7 shows an analysis unit for assessment of hair condition according to again another embodiment,
  • FIG. 8 schematically shows a system SYS for assessment of hair condition,
  • FIG. 9-FIG. 11 schematically show methods for assessment of hair condition according to embodiments, and
  • FIG. 12 shows a computer readable medium comprising a computer program product.
  • It should be noted that items which have the same reference numbers in different Figures, have the same or corresponding structural features and the same or corresponding functions, or are the same or corresponding signals. Where the function and/or structure of such an item has been explained, there is no necessity for repeated explanation thereof in the detailed description.
  • DETAILED DESCRIPTION
  • FIG. 1 shows an analysis unit ANA for assessment of hair condition according to an embodiment. The analysis unit ANA comprises a map processor MPP. The map processor is arranged to at least obtain a first follicular map FM1 representing a first plurality of hair root positions in a first videodermoscopy image. The map processor is further arranged to analyse at least the first follicular map FM1 to determine an analysis result ANR1 suitable for assessment of hair condition.
  • The map processor MPP may be arranged to obtain the first follicular map FM1 from a storage. The map processor MPP may be arranged to obtain the first follicular map FM1 from receiving the first follicular map FM1 over a communication channel, such as from a communication network. The map processor may obtain the first follicular map from a storage, such as from a patient database wherein the first follicular map is stored. The map processor may alternatively obtain the first follicular map from an image processor that is arranged to generate the first follicular map from a first videodermoscopy image.
  • Examination of hair condition may relate to diagnosis of hair disorders. Examination of hair condition may additionally or alternatively relate to identification and/or measurement of an advancement of hair disorder, measurement of a result of a treatment of a hair disorder, measurement of an effect and/or effectiveness of a medical treatment, or measurement of an effect and/or effectiveness of a cosmetic treatment.
  • The analysis result may, e.g., comprise an average hair root density, an average distance between hair roots, statistical parameters representing a statistics of distances between hair roots, or another parameter derivable from hair root positions.
  • FIG. 2 shows an analysis unit ANA′ for assessment of hair condition according to a further embodiment. The analysis unit ANA′ comprises an image processor IMP and a map processor MPP. The image processor IMP is arranged to perform an image processing algorithm on a first videodermoscopy image IM1 to generate a first follicular map FM1 representing a first plurality of hair root positions in the first videodermoscopy image IM1. The map processor MPP is arranged to obtain the first follicular map from the image processor IMP. As in the embodiment shown in FIG. 1, the map processor is further arranged to analyse at least the first follicular map FM1 to determine an analysis result ANR1 suitable for assessment of hair condition.
  • The map processor may be connected to the image processor and arranged to obtain the first follicular map directly from the image processor. The map processor may alternatively be connected to the image processor via one or more intermediate devices or channels, in which case the map processor is arranged to obtain the first follicular map from the image processor via the one or more intermediate devices or channels. In an embodiment, the map processor is connected to a storage unit, the image processor is connected to the storage unit, the image processor is arranged to store the first follicular map in the storage unit, and the map processor is arranged to obtain the first follicular map from the image processor by retrieving it from the storage unit. The retrieval from the storage unit may occur substantially immediately after the first follicular map was stored in the storage unit by the image processor. The retrieval from the storage unit may alternatively occur at a much later moment in time than when the first follicular map was stored in the storage unit by the image processor, to allow a later analysis of the first follicular map, for example when a second follicular map has become available after a period of time, so as to allow comparison of a change of the follicular map over time to support the examination of symptoms of hair diseases.
  • The image processing algorithm performed on the first videodermoscopy image IM1 to generate the first follicular map FM1 representing the first plurality of hair root positions in the first videodermoscopy image IM1 may comprise any combination of suitable pattern recognition algorithms and qualification algorithms, such as binarization, adaptive thresholding, noise detection, blob detection, blob recombination, line tracking, hair crossing recombination, end detection, watershed division, and tip-follicle qualification. The image processing algorithm may be supplemented by a manual correction by operators, for, e.g., removal of mistakes, addition of non-detected hair, removal of falsely detected hair, or addition or removal of hair follicles. The spatial coordinates of the hair follicles identified in the field of view of the first videodermoscopy image may be referred to as the first follicular map FM1. The first follicular map FM1 thus represents the first plurality of hair root positions in the first videodermoscopy image IM1. The first follicular map FM1 may be stored and/or presented as a list of spatial coordinates, such as (x, y) coordinates in the first videodermoscopy image IM1, as a graphical representation, or in any other suitable form. The first follicular map FM1 may, e.g., be presented on screen together with the first videodermoscopy image, such as side-by-side with the first videodermoscopy image or as an overlay on the first videodermoscopy image.
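By way of illustration only, the binarization and blob-detection stage of such a pipeline might be sketched as follows, assuming dark hair roots on a lighter background. The threshold value, minimum blob area and function name are illustrative choices; a practical pipeline would add the adaptive thresholding, line tracking and tip-follicle qualification steps mentioned above.

```python
import numpy as np
from scipy import ndimage

def extract_follicular_map(image, threshold=128, min_area=4):
    """Return an (N, 2) array of (x, y) blob centroids as a follicular map.

    image -- 2-D grayscale array with dark hair roots on a light background.
    """
    binary = image < threshold          # binarization: keep dark pixels
    labels, n = ndimage.label(binary)   # blob detection: connected components
    coords = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size >= min_area:         # noise detection: drop tiny blobs
            coords.append((xs.mean(), ys.mean()))  # blob centroid as (x, y)
    return np.array(coords)
```

The resulting array of (x, y) coordinates corresponds to the list-of-coordinates representation of the follicular map described above.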
  • FIG. 2 thus shows an embodiment of an analysis unit ANA′ for assessment of hair condition, the analysis unit comprising an image processor IMP arranged to at least perform an image processing algorithm on a first videodermoscopy image IM1 to obtain a first follicular map FM1 representing a first plurality of hair root positions in the first videodermoscopy image IM1, and a map processor MPP arranged to analyse at least the first follicular map FM1 to determine an analysis result ANR1 suitable for assessment of hair condition.
  • FIG. 2 further shows that, in further embodiments, the analysis unit ANA′ may comprise a map modification unit MOD. The image processor IMP is arranged to, as part of obtaining the first follicular map FM1, cooperate with the map modification unit MOD. The map modification unit MOD is arranged to present the first follicular map FM1 as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and to allow the human assistant to review the first follicular map FM1 and to modify the first follicular map FM1 such as to, at least, add and/or remove one or more hair root positions from the first follicular map FM1. The first follicular map as reviewed and modified is thereafter used for analyzing at least the first follicular map to determine the analysis result suitable for assessment of hair condition. Using review by human assistants may improve the quality of the follicular map significantly. In further embodiments, the map modification unit MOD is further arranged to present the first follicular map FM1 as obtained from the performing of the image processing algorithm on the first videodermoscopy image IM1 to a plurality of human assistants, to allow each of the human assistants to review the first follicular map and to propose modifications to the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map. In these embodiments, the map modification unit MOD may be arranged to compare the proposals from the plurality of human assistants for removing one or more hair root positions from the first follicular map FM1, and to decide from the comparison which hair root position of the proposed one or more hair root positions is to be deleted from the first follicular map. For example, the map modification unit MOD may be arranged to use majority voting in deciding which of the proposed one or more hair root positions is to be deleted.
Using majority voting of a plurality of reviews by human assistants may improve the quality of the follicular map even further.
  • In the embodiments shown in FIG. 1 and FIG. 2, as well as in further embodiments, the map processor MPP may be arranged to, as part of analysing at least the first follicular map to determine the analysis result, perform a statistical analysis of hair root distances between hair roots positions of the first plurality of hair root positions.
  • Herein, the map processor MPP may be arranged to, as part of performing the statistical analysis of hair root distances between hair root positions, determine a hair root distance distribution, and determine at least a first and a second relative contribution to the hair root distance distribution of at least a first and a second distribution component function.
  • For example, the first and second relative contributions may be obtained from, a two-component fit to the distribution, with the first contribution reflecting the dominant component for a specific hair disorder and the second contribution reflecting the dominant component for healthy hair. Optionally more contributions may be used reflecting respective dominant component for other specific hair disorders.
  • In an embodiment, the relative contribution of the first distribution component function is an indication for a degree of a hair disorder of a first type. E.g., when the relative contribution is found to be in a first indicator range, such as larger than 35%, this may be an indication of androgenetic alopecia (AGA).
  • An example is shown in FIG. 3a and FIG. 3b. FIG. 3a shows the distribution of distances to all other hair follicles in the follicular map of a first, healthy, person. FIG. 3b shows the distribution of distances to all other hair follicles in the follicular map of a second person who has AGA in an advanced stage. In each Figure, the resulting distribution, indicated as points P(r), is fitted in the low distance range (in this example, distances r<600 μm) by a model consisting of a sum of two components: a first component labeled AGA that is a linear distribution, and a second component labeled non-AGA that is peaked at low values and has a varying width. The first component represents a distribution that is characteristic for AGA. The second component represents a distribution that is characteristic for healthy individuals. The relative contribution of the first component provides a measure to assess AGA advancement.
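One possible realization of such a two-component fit is sketched below, assuming the AGA-like component is a normalized linear density on the fit range of 0 to 600 μm and the non-AGA component is a half-normal peak of adjustable width. The pairwise hair root distances may, for example, be computed from a follicular map with scipy.spatial.distance.pdist. The model parameterization and function names are illustrative, not taken from the embodiment.

```python
import numpy as np
from scipy.optimize import curve_fit

R_MAX = 600.0  # fit range in micrometres, as in the example above

def model(r, f, sigma):
    """Two-component model of the hair-root distance density on [0, R_MAX].

    f     -- relative contribution of the linear ("AGA") component
    sigma -- width of the low-distance peak ("non-AGA") component
    """
    linear = 2.0 * r / R_MAX ** 2                 # normalized linear density
    peak = np.exp(-0.5 * (r / sigma) ** 2)
    peak = peak / (sigma * np.sqrt(np.pi / 2.0))  # normalized half-normal
    return f * linear + (1.0 - f) * peak

def fit_distance_distribution(distances, bins=30):
    """Histogram pairwise hair-root distances below R_MAX and fit the model.

    distances -- 1-D array, e.g. pdist() of an (N, 2) follicular map.
    Returns (f, sigma): f is the AGA-like relative contribution.
    """
    d = distances[distances < R_MAX]
    density, edges = np.histogram(d, bins=bins, range=(0, R_MAX), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    (f, sigma), _ = curve_fit(model, centers, density,
                              p0=(0.5, 100.0), bounds=([0.0, 1.0], [1.0, R_MAX]))
    return f, sigma
```

The fitted f could then be compared against an indicator range such as the 35% threshold mentioned above.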
  • FIG. 4 shows an analysis unit ANA″ for assessment of hair condition according to another embodiment. The analysis unit ANA″ comprises a map processor MPP′. The map processor MPP′ is arranged to at least obtain a first follicular map FM1 representing a first plurality of hair root positions in a first videodermoscopy image and a second follicular map FM2 representing a second plurality of hair root positions in a second videodermoscopy image. The map processor MPP′ is further arranged to analyse at least the first follicular map FM1 and the second follicular map FM2 to determine an analysis result ANR2 suitable for assessment of hair condition.
  • The map processor MPP′ may, similarly as described with respect to the map processors shown in FIG. 1 and FIG. 2, obtain the first and second follicular maps FM1, FM2 by retrieving the maps from a storage, by receiving them from a communication network, or by receiving them from the image processor IMP′. For example, the image processor IMP′ may be further arranged to perform an image processing algorithm on a second videodermoscopy image to generate the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image, and the map processor MPP′ may be arranged to obtain the second follicular map from the image processor.
  • Similar to analysis unit ANA′ shown in FIG. 2, the analysis unit ANA″ may comprise a map modification unit MOD′. The image processor IMP′ is arranged to, as part of obtaining the first follicular map FM1 as well as part of obtaining the second follicular map FM2, cooperate with the map modification unit MOD′. The map modification unit MOD′ is arranged to present the first and second follicular maps FM1, FM2 as obtained from the performing of the image processing algorithm on the first and second videodermoscopy images to a human assistant, and to allow the human assistant to review the first and second follicular maps FM1, FM2 and to modify them such as to, at least, add and/or remove one or more hair root positions from the first and/or second follicular map FM1, FM2. The first and second follicular maps as reviewed and modified are thereafter used for analyzing at least the first and second follicular map to determine the analysis result suitable for assessment of hair condition.
  • In an embodiment, the map processor MPP′ is further arranged to determine a common skin area from the first follicular map FM1 and the second follicular map FM2. This is illustrated in FIG. 5, which schematically shows a first and second follicular map FM1, FM2 and the common skin area. The common skin area thus corresponds to a part OV1 of the first follicular map FM1 that corresponds to a part OV2 of the second follicular map FM2 covering the same skin area. These parts may further be referred to as the common skin area OV1 of the first follicular map FM1 and the common skin area OV2 of the second follicular map FM2. The map processor MPP′ may be arranged to, as part of determining the common skin area from the first follicular map FM1 and the second follicular map FM2, determine a transformation function TF12 which relates hair root positions in the first follicular map FM1 to hair root positions of the same hair in the second follicular map FM2. The common skin area OV1 of the first follicular map FM1 may thus be related to the common skin area OV2 of the second follicular map FM2 by the transformation function TF12, as schematically illustrated in FIG. 5. When the first follicular map FM1 relates to a first videodermoscopy image registered at a first moment in time, such as before a treatment, and the second follicular map FM2 relates to a second videodermoscopy image registered at a second, later moment in time, such as after the treatment, analyzing differences between the common skin area of the first follicular map FM1 and the common skin area of the second follicular map FM2, and/or analyzing differences between the common skin area of the first videodermoscopy image IM1 and the common skin area of the second videodermoscopy image IM2, may allow a largely improved precision compared to known techniques.
Using the follicular maps to identify the common skin area, i.e., corresponding skin areas in both follicular maps, largely improves the precision of the analysis. Whereas prior art techniques relied strongly on exactly the same positioning of the camera on the skin and the same field of view, the use of the matching follicular maps makes the analysis largely independent of the size, shape and distortion of the area used for the videodermoscopic analysis. The size, shape and distortion of the area of the skin registered on a videodermoscopy image may vary significantly when two images are registered at different moments in time and/or different locations. For example, if the skin is stretched or displaced by pressing the videodermoscopic lens, the actual measurement area may differ by up to 30%, which results in an inaccurate analysis with known techniques.
  • For example, identifying which of the hair root positions in the first follicular map FM1, and hence which hair in the first videodermoscopy image IM1, corresponds to which of the hair root positions in the second follicular map FM2, and hence which hair in the second videodermoscopy image IM2, allows an accurate determination of which hairs have appeared and which hairs have disappeared, based on tracking individual hairs rather than mere statistics over the overlap area. E.g., instead of determining that the number of hairs has increased from 100 to 105 for a specific subject after a certain period of time, it may be determined that 5 hairs were lost and 10 new hairs appeared. Such knowledge may be of relevance when assessing certain kinds of hair disorder. For determination of therapeutic effects of new substances in clinical trials, this technique, and the corresponding precision improvement, may allow a reduction of the number of test patients necessary to obtain a conclusive result.
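The per-hair bookkeeping described above can be sketched as follows, assuming the transformation function TF12 is already available. The matching radius and all names are illustrative assumptions, not part of the described method.

```python
# Hedged sketch: given a transformation tf12 mapping hair root positions of the
# first follicular map into the coordinates of the second, track individual
# hairs. A position in map 2 with no match among the transformed map-1
# positions counts as new growth; an unmatched map-1 position counts as loss.
import numpy as np

def track_hairs(fm1, fm2, tf12, max_dist=50.0):
    """fm1, fm2: (N,2)/(M,2) arrays of hair root positions (e.g. in um).
    tf12: callable mapping a map-1 position to map-2 coordinates.
    Returns (n_persisting, n_disappeared, n_appeared)."""
    t1 = np.asarray([tf12(p) for p in fm1])
    matched2 = set()
    persisting = 0
    for p in t1:
        d = np.linalg.norm(fm2 - p, axis=1)
        j = int(np.argmin(d))
        # a hair persists if a map-2 root lies within the matching radius
        if d[j] <= max_dist and j not in matched2:
            matched2.add(j)
            persisting += 1
    disappeared = len(fm1) - persisting   # hairs lost since the first image
    appeared = len(fm2) - persisting      # hairs newly grown
    return persisting, disappeared, appeared
```

In the 100-to-105 example above, such per-hair tracking is what distinguishes "5 lost, 10 new" from a bare net increase of 5.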
  • As shown in FIG. 4, the map processor MPP′ may comprise a matching unit MAT. The matching unit MAT may be arranged to at least relate hair root positions in the second follicular map to hair root positions of the first follicular map in at least the common skin area to determine a plurality of related hair root positions. Each related hair root position of a hair root in the second follicular map may thus be related to a hair root position in the first follicular map of the same hair root. This may be performed as part of determining the common skin area, or after the common skin area has been determined. Hereby, hair root positions of the second plurality of hair root positions in the second videodermoscopy image are related to hair root positions of the first plurality of hair root positions in the first videodermoscopy image in at least the common skin area to determine related hair root positions.
  • The map processor MPP′ may be arranged to, in determining the common skin area from at least analyzing the first plurality of positions of hair roots and the second plurality of positions of hair roots, find corresponding positions of hair roots by minimizing their relative distance in one or more iterations.
  • In embodiments, the matching unit MAT is arranged to, as part of relating hair root positions and/or while determining a common skin area from the first follicular map FM1 and the second follicular map FM2, initialize a transformation function TF12 and iteratively adapt the transformation function TF12. The iterative adaptation comprises:
      • applying the transformation function TF12 to the first plurality of hair root positions of the first follicular map FM1 to obtain a first plurality of transformed hair root positions,
      • relating the first plurality of transformed hair root positions to the second plurality of hair root positions of the second follicular map FM2,
      • determining relative distances between transformed hair root positions of the first plurality of transformed hair root positions and the related hair root positions of the second plurality of hair root positions to obtain a correspondence metric, and
      • adapting the transformation function TF12 to minimize the correspondence metric.
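The four steps above can be sketched as follows. For simplicity, the transformation function is restricted here to a pure translation, which admits a closed-form update step; more general transformation functions are possible, and all names in this sketch are illustrative assumptions.

```python
# Hedged sketch of the iterative adaptation above, restricted to a pure
# translation. Each iteration: apply the current transformation to the map-1
# positions, relate each transformed position to its nearest map-2 position,
# score the correspondence by the mean relative distance, and adapt the
# translation to minimize that metric.
import numpy as np

def adapt_transformation(fm1, fm2, n_iter=20):
    """fm1, fm2: (N,2)/(M,2) arrays of hair root positions.
    Returns a translation t such that fm1 + t best overlays fm2."""
    t = np.zeros(2)  # initialization, e.g. from reference-symbol positions
    for _ in range(n_iter):
        transformed = fm1 + t  # apply the transformation function
        # relate each transformed position to its nearest neighbour in fm2
        idx = np.array([np.argmin(np.linalg.norm(fm2 - p, axis=1))
                        for p in transformed])
        related = fm2[idx]
        # correspondence metric: mean distance between related positions
        metric = np.mean(np.linalg.norm(related - transformed, axis=1))
        # adapt the transformation (closed form for a translation)
        t = t + np.mean(related - transformed, axis=0)
        if metric < 1e-6:
            break
    return t
```

This resembles an iterative closest point scheme; in practice the nearest-neighbour relation is only reliable when the initialization (e.g. from reference symbols, as described below) brings corresponding hair roots closer together than the typical inter-follicle spacing.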
  • In further embodiments, the matching unit MAT is arranged to, as part of iteratively adapting the transformation function, further use at least one parameter of hair associated with the transformed hair root positions and hair associated with the related hair root positions to obtain the correspondence metric, the at least one parameter comprising at least one parameter from a group consisting of hair shaft diameter, hair length, hair growth, and hair color.
  • In further embodiments, the matching unit MAT is arranged to, as part of initializing the transformation function TF12, detect positions of a first plurality of reference symbols REF1 on the skin in the first videodermoscopy image IM1, detect positions of a second plurality of reference symbols REF2 on the skin in the second videodermoscopy image IM2, and initialize the transformation function TF12 to reflect a transformation from the positions of the first plurality of reference symbols REF1 to the positions of the second plurality of reference symbols REF2.
  • The first and second plurality of reference symbols REF1, REF2 may be a plurality of micro-tattoos on the skin, for example 2, 3, 4, 6, 9, 16 or any suitable number of micro-tattoos. The micro-tattoos may, as in known methods, be used to roughly position the videodermoscope at roughly corresponding positions on the skin to register suitable videodermoscopy images at subsequent moments in time.
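Initializing the transformation function from the detected reference-symbol positions can be sketched as a least-squares fit. An affine transformation model and at least three non-collinear reference symbols are assumed here for illustration; no particular transformation model is prescribed, and the function name is hypothetical.

```python
# Hedged sketch: initialize TF12 from detected reference-symbol positions by
# solving a least-squares affine transformation (assumes >= 3 non-collinear
# micro-tattoos; the transformation model is an assumption of this sketch).
import numpy as np

def init_transformation(ref1, ref2):
    """ref1, ref2: (K,2) arrays of corresponding reference-symbol positions
    detected in the first and second videodermoscopy image.
    Returns (A, t) such that ref2 ~= ref1 @ A.T + t."""
    ref1 = np.asarray(ref1, dtype=float)
    ref2 = np.asarray(ref2, dtype=float)
    # homogeneous coordinates [x, y, 1] so the translation is fitted jointly
    X = np.hstack([ref1, np.ones((len(ref1), 1))])
    coeffs, *_ = np.linalg.lstsq(X, ref2, rcond=None)  # shape (3, 2)
    A = coeffs[:2].T  # 2x2 linear part (rotation, scale, shear)
    t = coeffs[2]     # translation part
    return A, t
```

The resulting (A, t) can serve as the starting point for the iterative adaptation of TF12 described above.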
  • The map processor MPP′ may be further arranged to at least analyse differences between at least the common skin area OV1 in the first follicular map FM1 and the common skin area OV2 in the second follicular map FM2 to determine the analysis result suitable for assessment of hair condition. The map processor MPP′ may thus analyze differences between hair root positions, number of hair roots and hair root density. The first follicular map FM1 may, e.g., be associated with a first videodermoscopy image IM1 registered before the start of a treatment, and the second follicular map FM2 may be associated with a second videodermoscopy image IM2 registered after a certain duration of the treatment. Analysing the differences may then give an analysis result suitable for supporting the examination of hair condition, in particular whether symptoms have changed as a result of the treatment. The map processor MPP′ may be arranged to, in determining the analysis result, identify an appearing of new hair roots in the common skin area in the second follicular map compared to the common skin area in the first follicular map. The map processor MPP′ may provide the appearing of new hair roots as an indication of new growth as part of the analysis result. The map processor MPP′ may be arranged to, in determining the analysis result, identify a disappearing of hair roots from the common skin area in the second follicular map compared to the common skin area in the first follicular map. The map processor MPP′ may provide the disappearing of hair roots as an indication of hair loss as part of the analysis result. The map processor MPP′ may be arranged to, in determining the analysis result, determine a difference in the total number of hair roots in the common skin area OV2 in the second follicular map FM2 compared to the common skin area OV1 in the first follicular map FM1.
The map processor MPP′ may be arranged to, in determining the analysis result, determine a difference in hair density in the common skin area OV2 in the second follicular map FM2 compared to the common skin area OV1 in the first follicular map FM1.
  • In further or alternative embodiments, the map processor MPP′ is arranged to at least analyse differences between at least the common skin area in the first videodermoscopy image IM1 and the common skin area in the second videodermoscopy image IM2 to determine the analysis result suitable for assessment of hair condition. The map processor MPP′ may thus analyze differences between hair in the common skin area of the first videodermoscopy image IM1 and hair in the common skin area in the second videodermoscopy image IM2. Individual hairs may be compared, as for each hair in the first videodermoscopy image IM1 the related hair in the second videodermoscopy image IM2 can be identified, e.g. by applying the transformation function TF12 to the hair root position from the first follicular map FM1 to find the related hair root position in the second follicular map FM2. The first videodermoscopy image IM1 may have been registered before the start of a treatment, and the second videodermoscopy image IM2 may have been registered after a certain duration of the treatment. Analysing the differences may then give an analysis result suitable for supporting the examination of hair condition, in particular whether symptoms have changed as a result of the treatment. The map processor MPP′ may hereto be arranged to, in analysing differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image, determine differences between at least one parameter of a group of parameters consisting of average hair diameter, hair diameter distribution, average hair length, hair length distribution, hair colors, hair color distribution, and/or at least one hair density.
  • FIG. 6 shows another analysis unit ANA′″ for assessment of hair condition according to another embodiment. The analysis unit ANA′″ shown in FIG. 6 differs from the analysis unit ANA″ shown in FIG. 4 in that the analysis unit ANA′″ further comprises a storage interface unit SIF arranged to cooperate with a storage unit STOR. Storage unit STOR is shown to be external to the analysis unit ANA′″, but may in alternative embodiments be an integral part of analysis unit ANA′″. Storage unit STOR may be a cloud device, and may as such be connected to the analysis unit ANA′″ permanently or only when the analysis unit ANA′″ connects to the storage unit STOR. The storage interface unit SIF is arranged to store the follicular maps FM1, FM2, and optionally the videodermoscopy images IM1, IM2, in the storage unit STOR after the follicular maps FM1, FM2 have been obtained from the image processor IMP′ or the map modification unit MOD′. The storage interface unit SIF is further arranged to retrieve the follicular maps FM1, FM2, and optionally the videodermoscopy images IM1, IM2, from the storage unit STOR for analysis.
  • FIG. 7 shows another analysis unit ANA″″ for assessment of hair condition according to another embodiment. The analysis unit ANA″″ shown in FIG. 7 differs from the analysis unit ANA′″ shown in FIG. 6 in that the analysis unit ANA″″ does not comprise the image processor IMP′ and the map modification unit MOD′. The image processor IMP′ and the map modification unit MOD′ are instead provided as part of a separate unit shown as image provider IMPRO. The image provider IMPRO further comprises a first storage interface unit SIF1 arranged to cooperate with a storage unit STOR. Storage unit STOR may be a cloud device, and may as such be connected to the image provider IMPRO and the analysis unit ANA″″ permanently or only when the image provider IMPRO or the analysis unit ANA″″ connects to the storage unit STOR. The first storage interface unit SIF1 is arranged to store the follicular maps FM1, FM2, and optionally the videodermoscopy images IM1, IM2, in the storage unit STOR after the follicular maps FM1, FM2 have been obtained from the image processor IMP′. The analysis unit ANA″″ comprises a second storage interface unit SIF2 arranged to cooperate with the storage unit STOR. The second storage interface unit SIF2 is arranged to retrieve the follicular maps FM1, FM2, and optionally the videodermoscopy images IM1, IM2, from the storage unit STOR for analysis.
  • FIG. 8 schematically shows a system SYS for assessment of hair condition. The system SYS comprises an upload unit UPL, an analysis unit ANU, and a presentation unit PRES. The upload unit UPL is arranged to receive one or more videodermoscopy images, e.g. from a dermatologist or an assistant thereof who registered the videodermoscopy images on a scalp of a patient, e.g. by feeding them into a scanner, by retrieving them from a storage, for example at the dermatologist's clinic, or by retrieving them from a communication network. The one or more videodermoscopy images comprise at least the first videodermoscopy image. The upload unit UPL is further arranged to upload the one or more videodermoscopy images to the analysis unit, for example via a communication network, or via e-mail, or as a hardcopy via surface mail or a delivery service. The analysis unit ANU is arranged to receive the one or more videodermoscopy images from the upload unit UPL. The analysis unit ANU is arranged to obtain a videodermoscopic analysis result from the one or more videodermoscopy images. The videodermoscopic analysis result comprises at least one of the analysis result suitable for assessment of hair condition as obtained by one of the analysis units described with reference to FIG. 1-FIG. 7, and an examination result derived from the analysis result. The presentation unit PRES is arranged to receive the videodermoscopic analysis result from the analysis unit ANU and to present at least part of the analysis result to a user. The presentation unit PRES may be arranged to present at least part of the analysis result to a user on, for example, a display, on paper, in a computer-readable data format, in a human-readable form or on a data storage medium, in a qualitative or quantitative manner, and in a graphical or textual representation, such as a table or phrases. The user may, e.g., be a patient, a general practitioner, a dermatology nurse, a dermatologist, or a scientist.
  • FIG. 8 shows that the system SYS may further comprise a result check unit RCHK. The result check unit RCHK is arranged to receive the videodermoscopic analysis result from the analysis unit ANU. The result check unit RCHK is arranged to review the videodermoscopic analysis result and to modify the videodermoscopic analysis result. For example, the videodermoscopic analysis result may be changed, supplemented, summarized or reformatted. The videodermoscopic analysis result may e.g. be supplemented with a diagnosis of a hair disorder, a treatment proposal or a treatment change. E.g., an expert system or a human expert dermatologist may draw a diagnosis, such as the presence or absence of AGA. The result check unit RCHK is arranged to provide the videodermoscopic analysis result as modified to the presentation unit to allow the presentation unit to present at least part of the videodermoscopic analysis result as modified to the user. The result check unit RCHK may be connected to the analysis unit ANU via a communication network such as the internet, whereby the result check and the image processing may take place at different geographical locations. The result check unit RCHK may alternatively be directly connected to the analysis unit ANU and part of a single unit, e.g., the analysis unit ANU and the result check unit RCHK may be implemented in a personal computer of a dermatologist.
  • As shown in FIG. 8, the upload unit UPL may be connected to the analysis unit ANU via a communication network COMM. The communication network COMM may be a virtual private network. The communication network COMM may be the Internet. Hereby, videodermoscopic images registered at various places, e.g. at various dermatologic clinics, may be sent via the internet to the analysis unit ANU at a centralized location, at which centralized location a consistent and quality-controlled processing may be performed to obtain the follicular maps and the analysis results. The processing at the centralized location simplifies the use of a pool of well-trained staff to do the review and modification of the follicular maps described above with reference to the map modification unit MOD.
  • As shown in FIG. 8, the presentation unit PRES may be connected to the analysis unit ANU via a communication network. The communication network may be the same virtual private network, another virtual private network, or, for example, the Internet. The analysis unit ANU may thus be at a central location, and the presentation unit PRES may be at a general practitioner, a nurse, a patient, or elsewhere.
  • In embodiments, the system SYS comprises a user terminal TERM. The user terminal TERM comprises the upload unit UPL and the presentation unit PRES. The user terminal TERM is connected to the analysis unit ANU via a communication network COMM. The user terminal TERM may, e.g., be a computer at a dermatologist's clinic that can connect via the internet, e.g. using a virtual private network, to the analysing unit ANU at a centralized location.
  • FIG. 9 schematically shows a method M_ANA for assessment of hair condition according to an embodiment. The method M_ANA comprises obtaining OBT a first follicular map FM1 representing a first plurality of hair root positions in a first videodermoscopy image. The method M_ANA comprises analysing M_MPP at least the first follicular map to determine an analysis result ANR1 suitable for assessment of hair condition. Reference is further made to FIG. 1.
  • FIG. 10 schematically shows a method M_ANA′ for assessment of hair condition according to a further embodiment. The method M_ANA′ comprises performing M_IMP an image processing algorithm on a first videodermoscopy image IM1 to obtain a first follicular map FM1 representing a first plurality of hair root positions in the first videodermoscopy image. The method M_ANA′ further comprises analysing M_MPP at least the first follicular map to determine an analysis result ANR1 suitable for assessment of hair condition. The method M_ANA′ may further comprise a present-and-modify option M_MOD comprising presenting the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and allowing the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
  • FIG. 11 schematically shows a method M_ANA″ for assessment of hair condition according to again a further embodiment. The method M_ANA″ comprises performing M_IMP′ an image processing algorithm on a first videodermoscopy image IM1 to obtain a first follicular map FM1 representing a first plurality of hair root positions in the first videodermoscopy image, and performing M_IMP′ an image processing algorithm on a second videodermoscopy image to obtain a second follicular map FM2 representing a second plurality of hair root positions in the second videodermoscopy image. The method M_ANA″ may further comprise a present-and-modify option M_MOD′ comprising presenting the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, allowing the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map, presenting the second follicular map as obtained from the performing of the image processing algorithm on the second videodermoscopy image to the human assistant, and allowing the human assistant to review the second follicular map and to modify the second follicular map such as to, at least, add and/or remove one or more hair root positions from the second follicular map. The method M_ANA″ may comprise determining a common skin area from the first follicular map FM1 and the second follicular map FM2.
  • The method may comprise uploading one or more videodermoscopy images to an analysis unit via a communication network, for letting the analysis unit perform the method according to any one of the embodiments above, and receiving the videodermoscopic analysis result from the analysis unit via the communication network.
  • The method may further comprise receiving one or more videodermoscopy images by an upload unit, uploading the one or more videodermoscopy images from the upload unit to an analysis unit via a communication network, for letting the analysis unit perform the method, and presenting at least part of the videodermoscopic analysis result to a user.
  • FIG. 12 shows a computer readable medium CRMED comprising a computer program product CPP, the computer program product CPP comprising instructions for causing a processor apparatus to perform a method according to any one embodiment or a part thereof. The computer program product CPP may be embodied on the computer readable medium CRMED as physical marks or by means of magnetization of the computer readable medium CRMED. However, any other suitable embodiment is conceivable as well. Furthermore, it will be appreciated that, although the computer readable medium CRMED is shown in FIG. 12 as an optical disc, the computer readable medium CRMED may be any suitable computer readable medium, such as a hard disk, solid state memory, flash memory, etc., and may be non-recordable or recordable. The computer program product CPP may thus comprise a computer program comprising instructions arranged to, when executed by a computer, execute at least part of the method of any one of the embodiments described above.
  • It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments.
  • The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system or enabling a programmable apparatus to perform functions of a device or system according to the invention. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. The computer program may be provided on a data carrier, such as a CD-type optical disc, a DVD-type optical disc, a hard disk, or diskette, stored with data loadable in a memory of a computer system, the data representing the computer program. The data carrier may thus be a tangible data carrier. The data carrier may be a data connection, such as a telephone cable or a network cable. The data carrier may further be a non-tangible data carrier such as a wireless connection.
  • In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (21)

1. An analysis unit for assessment of hair condition, the analysis unit comprising a map processor, the map processor being configured to:
obtain a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and
analyze at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
2. The analysis unit according to claim 1, further comprising:
an image processor,
the image processor configured to perform an image processing algorithm on a first videodermoscopy image to generate the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image, and
the map processor being configured to obtain the first follicular map from the image processor.
3. The analysis unit according to claim 2, the image processor being configured to, as part of obtaining the first follicular map, cooperate with a map modification unit, the map modification unit being configured to:
present the first follicular map as obtained from the performing of the image processing algorithm on the first videodermoscopy image to a human assistant, and
allow the human assistant to review the first follicular map and to modify the first follicular map such as to, at least, add and/or remove one or more hair root positions from the first follicular map.
4. The analysis unit according to claim 1, the map processor being configured to, as part of analyzing at least the first follicular map to determine the analysis result, perform a statistical analysis of hair root distances between hair root positions of the first plurality of hair root positions.
5. The analysis unit according to claim 4, the map processor being configured to, as part of performing the statistical analysis of hair root distances between hair root positions:
determine a hair root distance distribution, and
determine at least a first and a second relative contribution to the hair root distance distribution of at least a first and a second distribution component function.
6. The analysis unit according to claim 5, the relative contribution of the first distribution component function being an indication for a degree of a hair disorder of a first type.
7. The analysis unit according to claim 1, the map processor being further configured to at least:
obtain a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and
determine a common skin area from the first follicular map and the second follicular map.
8. The analysis unit according to claim 7, the image processor being further configured to perform an image processing algorithm on a second videodermoscopy image to generate the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image, and
the map processor being configured to obtain the second follicular map from the image processor.
9. The analysis unit according to claim 7, the map processor including a matching unit, the matching unit being configured to at least:
relate hair root positions in the second follicular map to hair root positions of the first follicular map in the common skin area to determine a plurality of related hair root positions, each related hair root position of a hair root in the second follicular map being related to a hair root position in the first follicular map of the same hair root.
10. The analysis unit according to claim 7, the map processor being further configured to at least:
analyze differences between the common skin area in the first follicular map and the common skin area in the second follicular map to determine the analysis result suitable for assessment of hair condition.
11. The analysis unit according to claim 10, the map processor being further configured to, in determining the analysis result:
identify an appearing of new hair roots in the common skin area in the second follicular map compared to the common skin area in the first follicular map, and/or
identify a disappearing of hair roots from the common skin area in the second follicular map compared to the common skin area in the first follicular map.
12. The analysis unit according to claim 7, the map processor being further configured to at least analyze differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image to determine the analysis result suitable for assessment of hair condition.
13. The analysis unit according to claim 12, the map processor being further configured to, in analyzing differences between at least the common skin area in the first videodermoscopy image and the common skin area in the second videodermoscopy image, determine differences between at least one parameter of a group of parameters consisting of average hair diameter, hair diameter distribution, average hair length, hair length distribution, hair colors, hair color distribution, and/or at least one hair density.
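The parameter comparison of claim 13 can be sketched as per-visit statistics subtracted over the same common skin area. The helper below assumes per-hair diameter lists and a known area size; all names and units are illustrative, not taken from the application:

```python
def parameter_differences(first, second, area_mm2):
    """Compare per-visit hair statistics over the same common skin area.

    `first` and `second` are lists of per-hair diameters (e.g. in mm)
    measured in the common skin area of each videodermoscopy image;
    `area_mm2` is the size of that area in square millimetres.
    """
    def stats(diameters):
        n = len(diameters)
        return {
            "count": n,
            "avg_diameter": sum(diameters) / n if n else 0.0,
            "density_per_mm2": n / area_mm2,  # hairs per unit skin area
        }

    s1, s2 = stats(first), stats(second)
    return {k: s2[k] - s1[k] for k in s1}  # second visit minus first
```

The same pattern extends to the other claimed parameters (length, colour distributions) given per-hair measurements for each visit.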
14. A system for assessment of hair condition, comprising:
an upload unit;
an analysis unit for assessment of hair condition, the analysis unit comprising:
a map processor, the map processor being configured to:
obtain a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and
analyze at least the first follicular map to determine an analysis result suitable for assessment of hair condition; and
a presentation unit,
the upload unit being arranged to receive one or more videodermoscopy images, the one or more videodermoscopy images comprising at least the first videodermoscopy image, and to upload the one or more videodermoscopy images to the analysis unit, and
the analysis unit being arranged to receive the one or more videodermoscopy images from the upload unit and to obtain a videodermoscopic analysis result from the one or more videodermoscopy images, the videodermoscopic analysis result comprising at least one of:
the analysis result suitable for assessment of hair condition, and
an examination result derived from the analysis result suitable for assessment of hair condition, and
the presentation unit being arranged to receive the videodermoscopic analysis result from the analysis unit and to present at least part of the analysis result to a user.
15. The system according to claim 14, further comprising a result check unit, the result check unit being configured to:
receive the videodermoscopic analysis result from the analysis unit,
review the videodermoscopic analysis result and modify the videodermoscopic analysis result, and
provide the videodermoscopic analysis result as modified to the presentation unit to allow the presentation unit to present at least part of the videodermoscopic analysis result as modified to the user.
16. The system according to claim 14, at least one of the upload unit and the presentation unit being connected to the analysis unit via a communication network.
17. A method for assessment of hair condition, the method comprising:
obtaining a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image, and
analyzing at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
18. The method of claim 17, further comprising:
performing an image processing algorithm on a first videodermoscopy image to obtain the first follicular map representing the first plurality of hair root positions in the first videodermoscopy image.
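One toy instance of the image processing algorithm of claim 18: hair roots appear as dark spots under videodermoscopy, so thresholding a grayscale image and taking the centroid of each dark connected component yields candidate root positions. The threshold value and 4-connectivity below are illustrative assumptions; a production system would use a far more robust detector:

```python
import numpy as np

def follicular_map(image, threshold=0.3):
    """Toy follicular-map extraction from a grayscale image in [0, 1].

    Thresholds dark pixels and returns the (x, y) centroid of each
    4-connected dark blob, found by an iterative flood fill.
    """
    mask = image < threshold
    seen = np.zeros_like(mask, dtype=bool)
    roots = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, pixels = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:  # flood-fill one dark blob
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                roots.append((sum(xs) / len(xs), sum(ys) / len(ys)))  # (x, y) centroid
    return roots
```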
19. The method of claim 17, further comprising:
obtaining a second follicular map representing a second plurality of hair root positions in a second videodermoscopy image, and
determining a common skin area from the first follicular map and the second follicular map.
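Determining the common skin area of claim 19 can be approximated, purely for illustration, as the overlap of the two maps' bounding boxes. A real system would first register the two images against each other (for example by matching root constellations); the helper names here are assumptions:

```python
def common_skin_area(map_a, map_b):
    """Approximate the common skin area of two follicular maps.

    Each map is a list of (x, y) hair root positions. Returns the
    bounding-box overlap as (xmin, ymin, xmax, ymax), or None when
    the two maps share no area.
    """
    def bbox(points):
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return min(xs), min(ys), max(xs), max(ys)

    ax0, ay0, ax1, ay1 = bbox(map_a)
    bx0, by0, bx1, by1 = bbox(map_b)
    x0, y0 = max(ax0, bx0), max(ay0, by0)  # overlap lower-left corner
    x1, y1 = min(ax1, bx1), min(ay1, by1)  # overlap upper-right corner
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None
```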
20. The method of claim 19, further comprising:
performing an image processing algorithm on a second videodermoscopy image to obtain the second follicular map representing the second plurality of hair root positions in the second videodermoscopy image.
21. A non-transitory computer readable storage medium having computer-executable instructions configured to, when executed by a processor, perform steps comprising:
obtaining a first follicular map representing a first plurality of hair root positions in a first videodermoscopy image; and
analyzing at least the first follicular map to determine an analysis result suitable for assessment of hair condition.
US15/418,549 2016-12-29 2017-01-27 Analysis unit and system for assessment of hair condition Abandoned US20180189976A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
PL42002316 2016-12-29
PLP.420023 2016-12-29

Publications (1)

Publication Number Publication Date
US20180189976A1 true US20180189976A1 (en) 2018-07-05

Family

ID=57944261

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/418,549 Abandoned US20180189976A1 (en) 2016-12-29 2017-01-27 Analysis unit and system for assessment of hair condition
US15/675,117 Active 2038-05-10 US10573026B2 (en) 2016-12-29 2017-08-11 Analysis unit and system for assessment of hair condition
US16/744,049 Active US11080893B2 (en) 2016-12-29 2020-01-15 Analysis unit and system for assessment of hair condition

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/675,117 Active 2038-05-10 US10573026B2 (en) 2016-12-29 2017-08-11 Analysis unit and system for assessment of hair condition
US16/744,049 Active US11080893B2 (en) 2016-12-29 2020-01-15 Analysis unit and system for assessment of hair condition

Country Status (3)

Country Link
US (3) US20180189976A1 (en)
EP (2) EP3342331B1 (en)
ES (2) ES2791523T3 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020251938A1 (en) 2019-06-09 2020-12-17 Canfield Scientific, Incorporated Hair analysis methods and apparatuses
EP3786884B1 (en) * 2019-08-28 2023-07-12 TrichoLAB GmbH Hair transplant planning system
CA3163846A1 (en) * 2020-01-10 2021-07-15 Michael Rabin Method and device for evaluating hair growth treatments
EP4113436A4 (en) * 2020-02-27 2023-08-16 Panasonic Intellectual Property Management Co., Ltd. IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
CN112084965B (en) * 2020-09-11 2024-07-02 义乌市悦美科技有限公司 Scalp hair detection device and system
US20240032856A1 (en) * 2021-02-23 2024-02-01 Becon Co., Ltd. Method and device for providing alopecia information
US11682143B2 (en) * 2021-04-27 2023-06-20 Revieve Oy System and method for hair analysis of user
US20250108529A1 (en) * 2023-10-02 2025-04-03 The Gillette Company Llc Hair cutting appliance and associated power transform module

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5331472A (en) 1992-09-14 1994-07-19 Rassman William R Method and apparatus for measuring hair density
US20040201694A1 (en) 2001-02-07 2004-10-14 Vladimir Gartstein Noninvasive methods and apparatus for monitoring at least one hair characteristic
US10299871B2 (en) * 2005-09-30 2019-05-28 Restoration Robotics, Inc. Automated system and method for hair removal
US7962192B2 (en) 2005-09-30 2011-06-14 Restoration Robotics, Inc. Systems and methods for aligning a tool with a desired location or object
US20080049993A1 (en) * 2006-08-25 2008-02-28 Restoration Robotics, Inc. System and method for counting follicular units
US7477782B2 (en) 2006-08-25 2009-01-13 Restoration Robotics, Inc. System and method for classifying follicular units
US8115807B2 (en) 2007-03-06 2012-02-14 William Rassman Apparatus and method for mapping hair metric
US20090036800A1 (en) 2007-07-30 2009-02-05 Michael Rabin Hair Densitometer
WO2009124146A2 (en) * 2008-04-01 2009-10-08 Bella Nella Salon & Day Spa, Inc. Method and system for applying beauty treatments to an individual by generalizing their features
US8652186B2 (en) 2008-06-04 2014-02-18 Restoration Robotics, Inc. System and method for selecting follicular units for harvesting
US8848974B2 (en) * 2008-09-29 2014-09-30 Restoration Robotics, Inc. Object-tracking systems and methods
US9498289B2 (en) * 2010-12-21 2016-11-22 Restoration Robotics, Inc. Methods and systems for directing movement of a tool in hair transplantation procedures
KR101194773B1 (en) 2011-03-29 2012-10-26 강진수 Measurement method of alopecia progression degree
US8945150B2 (en) * 2011-05-18 2015-02-03 Restoration Robotics, Inc. Systems and methods for selecting a desired quantity of follicular units
EP2732284A4 (en) 2011-07-12 2015-04-01 Procter & Gamble Method for assessing condition of skin and/or scalp
US20140028822A1 (en) 2012-07-30 2014-01-30 Alex A. Khadavi Hair loss monitor
WO2015066618A1 (en) * 2013-11-01 2015-05-07 The Florida International University Board Of Trustees Context based algorithmic framework for identifying and classifying embedded images of follicle units
CN107106560A (en) 2014-10-29 2017-08-29 萨姆森临床私人有限公司 Detection and treatment of excessive hair loss
US10013642B2 (en) 2015-07-30 2018-07-03 Restoration Robotics, Inc. Systems and methods for hair loss management

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rahman, US Patent Application Publication No. US 2016/0253799 A1 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220277442A1 (en) * 2019-09-20 2022-09-01 Koninklijke Philips N.V. Determining whether hairs on an area of skin have been treated with a light pulse
US12243225B2 (en) * 2019-09-20 2025-03-04 Koninklijke Philips N.V. Determining whether hairs on an area of skin have been treated with a light pulse
CN111489823A (en) * 2020-04-08 2020-08-04 Oppo广东移动通信有限公司 Hair health condition evaluation method and device, mobile terminal and storage medium
CN119279502A (en) * 2024-09-20 2025-01-10 广州帅威美容设备有限公司 A skin and hair analysis method and system based on mobile screen projection interconnection detector

Also Published As

Publication number Publication date
EP3659500B1 (en) 2024-03-20
ES2791523T3 (en) 2020-11-04
EP3342331A1 (en) 2018-07-04
US20180184968A1 (en) 2018-07-05
EP3342331B1 (en) 2020-02-26
US10573026B2 (en) 2020-02-25
EP3659500C0 (en) 2024-03-20
US20200202565A1 (en) 2020-06-25
US11080893B2 (en) 2021-08-03
EP3659500A1 (en) 2020-06-03
ES2980141T3 (en) 2024-09-30

Similar Documents

Publication Publication Date Title
US20180189976A1 (en) Analysis unit and system for assessment of hair condition
Mendonça et al. PH2: A dermoscopic image database for research and benchmarking
Sahu et al. Evaluation of a combined reflectance confocal microscopy–optical coherence tomography device for detection and depth assessment of basal cell carcinoma
US7233693B2 (en) Methods and systems for computer analysis of skin image
Alghamdi et al. Assessment methods for the evaluation of vitiligo
EP3424406A1 (en) Method and system for classifying optic nerve head
US11471218B2 (en) Hair transplant planning system
Heidari et al. Optical coherence tomography as an oral cancer screening adjunct in a low resource settings
JP6814172B2 (en) Skin internal structure estimation method, skin internal structure estimation program, and skin internal structure estimation device
JP2013524957A (en) Method and device for quality assessment of electrical impedance measurements in tissue
CN116509376B (en) Wound detection system, detection device and detection method
US9962089B2 (en) Methodology and apparatus for objective assessment and rating of psoriasis lesion thickness using digital imaging
Hani et al. Body surface area measurement and soft clustering for PASI area assessment
KR102239575B1 (en) Apparatus and Method for skin condition diagnosis
CN117916766A (en) Fibrotic cap detection in medical images
KR102333120B1 (en) Self Scalp Diagnostic System and Method
CN113693617A (en) Automatic measuring system and method for focus volume in vivo
Yow et al. Automated in vivo 3D high-definition optical coherence tomography skin analysis system
Agostini et al. AI powered detection and assessment of onychomycosis: A spotlight on yellow and deep learning
JP6189808B2 (en) Method for creating pore evaluation index, pore evaluation index creating system and performance evaluation method
Di Leo et al. A web-based application for dermoscopic measurements and learning
CN120154296A (en) Online monitoring and early warning method and system
KR20250038971A (en) System for monitoring internal body status for patients during radiotherapy treatment and method thereof
WO2024215969A1 (en) Accurate learning models for tissue characterization
Suchecki et al. Computer planimetry in allergology skin prick tests

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION