KR20220133834A - Data processing method using a learning model (Google Patents)
- Publication number
- KR20220133834A (application number KR1020220118766A)
- Authority
- KR
- South Korea
- Prior art keywords
- data
- target data
- image
- data processing
- learning model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/764 — Image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
- A61B34/10 — Computer-aided planning, simulation or modelling of surgical operations
- A61B6/501 — Apparatus or devices for radiation diagnosis specially adapted for diagnosis of the head, e.g. neuroimaging or craniography
- A61B6/5217 — Devices using data or image processing specially adapted for radiation diagnosis; extracting a diagnostic or physiological parameter from medical diagnostic data
- G06F18/00 — Pattern recognition
- G06N3/045 — Neural network architecture; combinations of networks (formerly G06N3/0454)
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06N3/09 — Supervised learning
- G06N3/096 — Transfer learning
- G06N3/0985 — Hyperparameter optimisation; meta-learning; learning-to-learn
- G06T5/40 — Image enhancement or restoration using histogram techniques
- G06T7/11 — Region-based segmentation
- G06T7/337 — Feature-based image registration involving reference images or patches
- G06T7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/776 — Validation; performance evaluation
- G06V10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G16H30/40 — ICT specially adapted for the handling or processing of medical images, e.g. editing
- G06T2207/30008 — Biomedical image processing; bone
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Public Health (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Surgery (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Radiology & Medical Imaging (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Optics & Photonics (AREA)
- High Energy & Nuclear Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Physiology (AREA)
- Neurology (AREA)
- Neurosurgery (AREA)
Abstract
Description
FIG. 2 shows a first example of landmarks according to an embodiment of the present invention.
FIG. 3 is a flowchart of the landmark extraction method performed by the data processing apparatus of the present invention.
FIG. 4 illustrates the landmark extraction process of an embodiment of the present invention on actual images.
FIG. 5 shows the method by which the data processing apparatus of an embodiment of the present invention trains the learning model.
FIGS. 6A to 6C show examples of data augmentation performed according to an embodiment of the present invention.
FIG. 7 shows the method by which the data processing apparatus of an embodiment of the present invention classifies a patient's skeletal pattern using the learning model.
FIG. 8 shows a second example of landmarks according to an embodiment of the present invention.
FIG. 9 shows the structure of the model provided by the data processing apparatus of an embodiment of the present invention.
| Method | Male Mean | Male SD | Female Mean | Female SD |
|---|---|---|---|---|
| ANB | 2.166 | 3.167 | 3.421 | 2.838 |
| Wits appraisal | -4.310 | 4.707 | -3.495 | 3.930 |
|  | 396.433 | 6.627 | 398.902 | 6.457 |
| Jarabak's ratio | 65.243 | 5.196 | 62.864 | 4.826 |
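ANB, Wits appraisal, and Jarabak's ratio in the table above are standard cephalometric measures. As an illustration of how one such measure maps to the sagittal skeletal classes used later in this document, here is a minimal sketch based on commonly cited textbook ANB cut-offs (0° and 4°); the thresholds are illustrative assumptions, not values taken from this patent:

```python
def classify_sagittal(anb_degrees: float) -> str:
    """Classify the sagittal skeletal pattern from the ANB angle.

    The 0-4 degree "normal" range is a commonly cited textbook
    convention, used here only for illustration; it is not a
    threshold stated in this patent.
    """
    if anb_degrees > 4.0:
        return "Class II"
    if anb_degrees < 0.0:
        return "Class III"
    return "Class I"

# The male mean ANB from the table above (2.166 degrees)
# falls inside the illustrative Class I range.
print(classify_sagittal(2.166))  # Class I
```

A real classifier would combine several measures (e.g. Wits appraisal alongside ANB) rather than a single threshold, which is part of what motivates the learned model in the claims below.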
| Model | Sagittal: Class I | Sagittal: Class II | Sagittal: Class III | Sagittal: Total | Vertical: Normal | Vertical: Hyperdivergent | Vertical: Hypodivergent | Vertical: Total |
|---|---|---|---|---|---|---|---|---|
| Model I | 4326 | 779 | 785 | 5890 | 4115 | 835 | 940 | 5890 |
| Model II | 3738 | 504 | 581 | 4823 | 3511 | 598 | 650 | 4759 |
| Model III | 3398 | 412 | 511 | 4321 | 3120 | 491 | 548 | 4159 |
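The counts above show a strong class imbalance (e.g. 4326 Class I versus 779 Class II sagittal samples in Model I). The pattern-ratio matching step described in the claims below can be sketched with plain random oversampling as a stand-in for the augmentation/SMOTE options; the `match_pattern_ratio` helper and its interface are illustrative assumptions, and only the counts come from the table:

```python
import random
from collections import Counter

def match_pattern_ratio(samples, labels, seed=0):
    """Randomly duplicate minority-pattern samples until every
    pattern has as many samples as the largest one. This is the
    simplest form of ratio matching; data augmentation or SMOTE
    (as listed in the claims) could be substituted per sample."""
    rng = random.Random(seed)
    by_label = {}
    for s, y in zip(samples, labels):
        by_label.setdefault(y, []).append(s)
    target = max(len(group) for group in by_label.values())
    out_samples, out_labels = [], []
    for y, group in by_label.items():
        # Draw extra samples with replacement from the minority group.
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels

# Model I sagittal counts from the table above.
labels = ["I"] * 4326 + ["II"] * 779 + ["III"] * 785
samples = list(range(len(labels)))
_, balanced = match_pattern_ratio(samples, labels)
print(Counter(balanced))  # every class now holds 4326 samples
```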
110: data processing apparatus
111: processor
120: patient image data
130: learning model
140: display
Claims (6)

1. A data processing method comprising:
classifying a patient's target data into one of a plurality of patterns by fusion analysis of feature information of the target data and attribute information of the patient, which have different modalities;
determining a training data set by extracting, according to a preset criterion, some of the target data of patients corresponding to the plurality of patterns, for application to a learning model that performs pattern classification of the target data;
matching the ratio between the patterns of the target data included in the training data set;
training the learning model using the target data in the training data set whose pattern ratio has been matched; and
evaluating the quality of the trained learning model using an eXplainable AI (XAI) technique.

2. The data processing method of claim 1, wherein the determining comprises:
generating a validation set for verifying the learning model by extracting an equal number of target data per pattern from the patients' target data included in each of the plurality of patterns; and
determining, as the training data set, the target data remaining after excluding the target data included in the validation set.

3. The data processing method of claim 2, wherein the generating comprises generating the validation set based on the pattern containing the fewest target data among the plurality of patterns.

4. The data processing method of claim 1, wherein the matching comprises performing at least one of data augmentation, oversampling, undersampling, and SMOTE (synthetic minority oversampling technique), taking into account the variance found in the target data.

5. The data processing method of claim 4, wherein the matching comprises, when the target data is image data, using a histogram equalization method that splits the image data into a plurality of regions and evenly distributes the intensity of the image histogram of each split region.

6. The data processing method of claim 1, further comprising down-sampling the patient's target data to reduce the load on the learning model and simplify the model, wherein the classifying comprises classifying the pattern of the target data by using the patient's attribute data together with the down-sampled target data.
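Claim 5's region-split histogram equalization can be sketched as a blockwise equalizer. This is a simplified reading of the claim under stated assumptions: the 2×2 grid and the helper name are illustrative, and a production route would typically be OpenCV's `createCLAHE`, which additionally clips histograms and interpolates between tiles:

```python
import numpy as np

def regionwise_equalize(img: np.ndarray, grid=(2, 2)) -> np.ndarray:
    """Split an 8-bit grayscale image into a grid of regions and
    apply histogram equalization independently to each region,
    spreading each region's intensity histogram over 0..255."""
    out = img.copy()
    h, w = img.shape
    rows = np.array_split(np.arange(h), grid[0])
    cols = np.array_split(np.arange(w), grid[1])
    for r in rows:
        for c in cols:
            block = img[np.ix_(r, c)]
            hist = np.bincount(block.ravel(), minlength=256)
            cdf = hist.cumsum()
            nonzero = cdf[cdf > 0]
            if nonzero.size == 0:
                continue
            cdf_min = nonzero[0]
            denom = block.size - cdf_min
            if denom == 0:
                # Constant block: nothing to equalize.
                continue
            # Map intensities so the block's CDF becomes uniform.
            lut = np.round((cdf - cdf_min) / denom * 255)
            lut = lut.clip(0, 255).astype(np.uint8)
            out[np.ix_(r, c)] = lut[block]
    return out
```

Because each region is equalized on its own histogram, local contrast is enhanced in dark and bright areas alike, which is the stated purpose of the region split in claim 5.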
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020190156762 | 2019-11-29 | ||
| KR20190156762 | 2019-11-29 | ||
| KR1020190173563 | 2019-12-24 | ||
| KR20190173563 | 2019-12-24 | ||
| KR1020200160797A KR102458324B1 (ko) | 2019-11-29 | 2020-11-26 | 학습 모델을 이용한 데이터 처리 방법 |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| KR1020200160797A Division KR102458324B1 (ko) | 2019-11-29 | 2020-11-26 | 학습 모델을 이용한 데이터 처리 방법 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| KR20220133834A true KR20220133834A (ko) | 2022-10-05 |
| KR102545906B1 KR102545906B1 (ko) | 2023-06-23 |
Family
ID=76129543
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| KR1020220118766A Active KR102545906B1 (ko) | 2019-11-29 | 2022-09-20 | 학습 모델을 이용한 데이터 처리 방법 |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR102545906B1 (ko) |
| WO (1) | WO2021107661A2 (ko) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114610911B (zh) * | 2022-03-04 | 2023-09-19 | 中国电子科技集团公司第十研究所 | 多模态知识本征表示学习方法、装置、设备及存储介质 |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20160059768A (ko) * | 2014-11-19 | 2016-05-27 | 삼성전자주식회사 | 얼굴 특징 추출 방법 및 장치, 얼굴 인식 방법 및 장치 |
| KR20190078693A (ko) * | 2017-12-13 | 2019-07-05 | 재단법인대구경북과학기술원 | 학습 데이터의 분포 특성에 기초하여 학습 데이터를 생성하는 방법 및 장치 |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20180079995A (ko) * | 2017-01-03 | 2018-07-11 | 주식회사 데일리인텔리전스 | 머신러닝을 기반으로 데이터를 분석하는 프로그램을 생성하기 위한 방법 |
| KR102016812B1 (ko) * | 2017-11-24 | 2019-10-21 | 한국생산기술연구원 | 데이터 불균형 환경에서 머신러닝 모델을 통해 공정 불량 원인을 도출하고 시각화하는 방법 |
- 2020-11-27: WO application PCT/KR2020/017032 (published as WO2021107661A2) — not active, ceased
- 2022-09-20: KR application KR1020220118766A (granted as KR102545906B1) — active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20160059768A (ko) * | 2014-11-19 | 2016-05-27 | 삼성전자주식회사 | 얼굴 특징 추출 방법 및 장치, 얼굴 인식 방법 및 장치 |
| KR20190078693A (ko) * | 2017-12-13 | 2019-07-05 | 재단법인대구경북과학기술원 | 학습 데이터의 분포 특성에 기초하여 학습 데이터를 생성하는 방법 및 장치 |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021107661A3 (ko) | 2021-07-22 |
| WO2021107661A2 (ko) | 2021-06-03 |
| KR102545906B1 (ko) | 2023-06-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102458324B1 (ko) | 학습 모델을 이용한 데이터 처리 방법 | |
| ES2997308T3 (en) | Automated determination of a canonical pose of a 3d objects and superimposition of 3d objects using deep learning | |
| Oh et al. | Deep anatomical context feature learning for cephalometric landmark detection | |
| AU2017292642B2 (en) | System and method for automatic detection, localization, and semantic segmentation of anatomical objects | |
| US11615879B2 (en) | System and method for automated labeling and annotating unstructured medical datasets | |
| CN112508965B (zh) | 医学影像中正常器官的轮廓线自动勾画系统 | |
| Ma et al. | Automatic 3D landmarking model using patch‐based deep neural networks for CT image of oral and maxillofacial surgery | |
| Wang et al. | Evaluation and comparison of anatomical landmark detection methods for cephalometric x-ray images: a grand challenge | |
| EP3195257B1 (en) | Systems and methods for segmenting medical images based on anatomical landmark-based features | |
| Vivanti et al. | Automatic liver tumor segmentation in follow-up CT studies using convolutional neural networks | |
| US20220215625A1 (en) | Image-based methods for estimating a patient-specific reference bone model for a patient with a craniomaxillofacial defect and related systems | |
| CN113096137A (zh) | 一种oct视网膜图像领域适应分割方法及系统 | |
| WO2008141293A9 (en) | Image segmentation system and method | |
| CN118279302A (zh) | 面向脑肿瘤图像的三维重建检测方法及系统 | |
| Elkhill et al. | Geometric learning and statistical modeling for surgical outcomes evaluation in craniosynostosis using 3D photogrammetry | |
| Diniz et al. | A deep learning method with residual blocks for automatic spinal cord segmentation in planning CT | |
| Hong et al. | Automated cephalometric landmark detection using deep reinforcement learning | |
| Lu et al. | Collaborative multi-metadata fusion to improve the classification of lumbar disc herniation | |
| Neeraja et al. | CephXNet: a deep convolutional squeeze-and-excitation model for landmark prediction on lateral cephalograms | |
| KR102545906B1 (ko) | 학습 모델을 이용한 데이터 처리 방법 | |
| Zhang et al. | Topology-preserving segmentation network: A deep learning segmentation framework for connected component | |
| Qian et al. | Attention-based Shape-Deformation networks for Artifact-Free geometry reconstruction of lumbar spine from MR images | |
| Niemeijer et al. | Automatic Detection of the Optic Disc, Fovea and Vacular Arch in Digital Color Photographs of the Retina. | |
| Kumar et al. | Multilevel thresholding-based medical image segmentation using hybrid particle cuckoo swarm optimization | |
| EP4364088A1 (en) | Classification of organ of interest shapes for autosegmentation quality assurance |
Legal Events

| Date | Code | Title |
|---|---|---|
|  | A107 | Divisional application of patent |
|  | PA0107 | Divisional application |
|  | PA0201 | Request for examination |
|  | PG1501 | Laying open of application |
|  | R18-X000 | Changes to party contact information recorded |
|  | PN2301 | Change of applicant |
|  | P22-X000 | Classification modified |
|  | E701 | Decision to grant or registration of patent right |
|  | PE0701 | Decision of registration |
|  | PR0701 | Registration of establishment |
|  | PR1002 | Payment of registration fee (year 1) |
|  | PG1601 | Publication of registration |
|  | P22-X000 | Classification modified |
|  | P22-X000 | Classification modified |