2015 MICCAI Challenge
Head and Neck Auto Segmentation Challenge
CITATION
To cite the challenge or the data provided here, please refer to:
Raudaschl, P. F., Zaffino, P., Sharp, G. C., Spadea, M. F., Chen, A., Dawant, B. M., ... & Jung, F. (2017). Evaluation of segmentation methods on head and neck CT: Auto-segmentation challenge 2015. Medical Physics, 44(5), 2020-2036.
CALL FOR SUBMISSION
We are pleased to announce the third Head and Neck Auto-Segmentation Challenge, which will be held in conjunction with the “Medical Image Computing and Computer Assisted Interventions” conference (MICCAI2015) in Munich, Germany.
The primary objective of this challenge is to establish an unbiased benchmark of automatic segmentation performance. It is hoped that this challenge will give valuable feedback to both researchers and the user community about algorithm performance in real-world applications. In addition, the challenge will provide medical imaging researchers with a venue to discuss and form collaborations on medical image segmentation for radiotherapy planning.
CHALLENGE FORMAT
Data description
Segmentation data can be downloaded from the PDDCA web site:
- http://www.imagenglab.com/pddca_18.html
The data set is split into 4 sub-packages:
- Images and labels of datasets 0522c001 to 0522c0328 (25 cases) have been provided as the training set
- Images and labels of datasets 0522c329 to 0522c0479 (8 cases) have been provided as optional additional training cases
- Images of datasets 0522c555 to 0522c0746 (10 cases) have been provided as the test set for the off-site part of the challenge
- Images of datasets 0522c0788 to 0522c0878 (5 cases) have been used as the test set for the on-site segmentation challenge
A detailed description of the manual delineation method is found in pddca.odt.
Training data
25 training cases (0522c001 to 0522c0328), each consisting of a CT image and manual delineations of the following structures (a minimal loading sketch is given after the list):
- brainstem
- mandible
- left and right optic nerves
- optic chiasm
- left and right parotid glands
- left and right submandibular glands
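For participants working with ITK-compatible tools, the following is a minimal sketch of loading one training case in Python with SimpleITK. The folder layout and file names used here (img.nrrd, a structures/ subfolder with one NRRD label map per organ) are assumptions about the downloaded package rather than part of the challenge specification; adapt them to the data as actually distributed.

```python
# Minimal loading sketch, assuming a per-case folder containing img.nrrd (the CT)
# and a structures/ subfolder with one binary label map per organ.
import os
import SimpleITK as sitk

case_dir = "0522c001"  # one training case folder (path is illustrative)

# Read the CT volume.
ct = sitk.ReadImage(os.path.join(case_dir, "img.nrrd"))

# Read every per-organ label map found in the structures/ subfolder.
structures = {}
struct_dir = os.path.join(case_dir, "structures")
for fname in sorted(os.listdir(struct_dir)):
    name, _ = os.path.splitext(fname)
    structures[name] = sitk.ReadImage(os.path.join(struct_dir, fname))

print("CT size:", ct.GetSize(), "spacing (mm):", ct.GetSpacing())
print("Structures found:", list(structures))
```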
Test data: part 1
10 test cases without manual segmentations (from 0522c555 to 0522c0746).
These cases are segmented by participants, and the results are returned to the challenge organizers. Participants may choose to segment some or all of the structures. Submissions should be sent to the organizers by Sept 11, 2015. The organizers will evaluate these submissions and will present the results at the challenge on Oct 9th, 2015.
Submission of segmentation results
Please submit your results by Sept 11th, 2015, following these guidelines:
- The file format for the segmented labels can be either DICOM-RT or any file format that can be loaded by ITK (NIfTI, MetaIO, NRRD, Analyze)
- If possible, please use the same folder and file structure as the training data (i.e., one label map per organ and patient); a minimal export sketch is given after this list
- The results can be sent by e-mail or by providing a download link
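As an illustration of the "one label map per organ and patient" layout, here is a minimal export sketch in Python with SimpleITK. The output folder and file names are hypothetical; only the requirement that each file be DICOM-RT or ITK-readable comes from the guidelines above.

```python
# Minimal export sketch: write one predicted binary mask as an NRRD label map
# that reuses the geometry (spacing, origin, direction) of the test CT.
# Folder and file names below are illustrative, not a required convention.
import os
import numpy as np
import SimpleITK as sitk

def save_labelmap(mask: np.ndarray, reference_ct: sitk.Image, out_path: str) -> None:
    """Write a binary mask (NumPy array in z, y, x order) aligned with the reference CT."""
    label = sitk.GetImageFromArray(mask.astype(np.uint8))
    label.CopyInformation(reference_ct)  # copy spacing, origin, direction
    os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
    sitk.WriteImage(label, out_path, useCompression=True)

# Example usage (the prediction array comes from your own segmentation method):
# ct = sitk.ReadImage("0522c555/img.nrrd")
# save_labelmap(prediction, ct, "results/0522c555/structures/BrainStem.nrrd")
```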
Workshop article submission
In addition, all researchers who want to take part in the challenge must submit a paper describing the approach used for the challenge. Submissions ranging from extended abstracts (2 pages) to full papers (8 pages) will be accepted.
All submissions should be formatted in Lecture Notes in Computer Science style; please refer to the MICCAI 2015 submission format guidelines. Papers should be submitted by Sep 19th, 2015. After the challenge and before publication, a revised version including quantitative segmentation results should be prepared by Nov 1, 2015. Submissions will be disseminated through the Midas Journal (http://www.midasjournal.org/).
Test data: part 2
5 test cases without manual segmentations (from 0522c0788 to 0522c0878).
It is expected that these cases will be segmented by participants during the challenge. The segmentations will be given to, and evaluated by, the challenge organizers on the day of the challenge.
Testing Methodology
Evaluation and ranking for single structure segmentation
- Compute the Dice score and the 95% Hausdorff distance for all structures. For paired organs (parotid glands, submandibular glands, optic nerves), the average Dice score and 95% Hausdorff distance over the pair are used.
- For each structure or pair of structures, submissions are ranked according to the
  - average Dice score
  - average 95% Hausdorff distance
  independently for test dataset 1 and the on-site test dataset, to obtain an "overall ranking" for each structure. Both metrics contribute equally to the ranking. (A minimal sketch of both metrics is given below.)
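Both metrics can be implemented in several ways; the following is a minimal sketch, assuming binary NumPy masks on the same voxel grid and defining HD95 as the 95th percentile of the pooled symmetric surface distances. The organizers' exact implementation may differ in detail (for example, in how the percentile is combined across the two directions).

```python
# Metric sketch: Dice coefficient and a common 95% Hausdorff distance (HD95),
# assuming binary NumPy masks on the same voxel grid. 'spacing' must be given
# in the same axis order as the arrays (e.g., z, y, x for SimpleITK arrays).
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    """Border voxels of a binary mask (voxels removed by a one-voxel erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def hd95(a: np.ndarray, b: np.ndarray, spacing) -> float:
    """95th percentile of the pooled symmetric surface distances in mm (assumes non-empty masks)."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a, surf_b = _surface(a), _surface(b)
    # Distance (mm) from every voxel to the nearest surface voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    d_ab = dist_to_b[surf_a]  # A's surface -> nearest point on B's surface
    d_ba = dist_to_a[surf_b]  # B's surface -> nearest point on A's surface
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```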
Ranking for the segmentation of all structures
- For all teams that submitted labels for all structures and both test datasets (test dataset 1 and the on-site test), the individual rankings for each structure are summed, independently for each dataset, to obtain an "overall ranking" for test dataset 1 and for the on-site test dataset (see the aggregation sketch below).
- The overall ranks for both datasets are then summed to obtain an overall result for each team that submitted all structures.
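A compact way to express this aggregation is sketched below with pandas. The score tables are placeholders, and averaging the two metric ranks within each structure is an assumption; the text above only states that both metrics count equally.

```python
# Rank-aggregation sketch. Inputs: per-dataset DataFrames indexed by team,
# with one column per structure, holding mean Dice and mean HD95 values.
import pandas as pd

def dataset_rank(dice: pd.DataFrame, hd95: pd.DataFrame) -> pd.Series:
    """Sum of per-structure ranks for one test dataset (both metrics weighted equally)."""
    rank_dice = dice.rank(ascending=False)  # rank 1 = highest Dice
    rank_hd = hd95.rank(ascending=True)     # rank 1 = lowest HD95
    per_structure = (rank_dice + rank_hd) / 2.0
    return per_structure.sum(axis=1)        # sum over structures

# total = dataset_rank(dice_test1, hd95_test1) + dataset_rank(dice_onsite, hd95_onsite)
# final_order = total.sort_values()         # smallest total rank wins
```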
TIMELINE
- Sept 11, 2015 (deadline extended to Sept 19, 2015): Submission of 10 test cases to organizers
- Sept 19, 2015: Submission of workshop articles to organizers
- Oct 9, 2015: Challenge at MICCAI; evaluation of test cases presented by organizers
- Final revised papers: the original Nov 1, 2015 deadline has been dropped. There is no fixed deadline for submitting the final paper; every participant can decide when to submit his/her paper to the MIDAS challenge. Detailed information on the MIDAS submission/upload process will be distributed as soon as we have final information about the special issue for our challenge from the MIDAS journal editors.
TENTATIVE PROGRAM
- 13:30: Welcome
- 13:35 - 15:30: On-site segmentation of additional training cases
- 15:30 - 15:45: Coffee break
- 15:45 - 17:00: Oral presentations (10-12 min for each participant/team)
- 17:00 - 17:30: Qualitative evaluation
- 17:30: Announcement of the results and adjourn
CONTACT INFORMATION
The organizing committee is:
- Karl Fritscher, UMIT
- Patrik Raudaschl, UMIT
- Greg Sharp, MGH
- Paolo Zaffino, UMG
Please send questions to the organizing committee at: headnecksegchallengeMICCAI2015@gmail.com