2015 MICCAI Challenge

Revision as of 10:14, 21 October 2015

Head and Neck Auto Segmentation Challenge



CALL FOR SUBMISSION

We are pleased to announce the third Head and Neck Auto-Segmentation Challenge, which will be held in conjunction with the “Medical Image Computing and Computer Assisted Interventions” conference (MICCAI 2015) in Munich, Germany.

The primary objective of this challenge is to establish an unbiased benchmark of automatic segmentation performance. It is hoped that this challenge will give valuable feedback to both researchers and the user community about algorithm performance in real-world applications. In addition, the challenge will provide medical imaging researchers a venue where they can discuss and form collaborations on medical image segmentation for radiotherapy planning.

CHALLENGE FORMAT

[Example segmentations: mandible, optic structures, brainstem, parotid]

Training data

25 training cases, which consist of CT images and manually delineated data of

  • brainstem
  • mandible
  • left and right optic nerves
  • optic chiasm
  • left and right parotid glands
  • left and right submandibular glands

Segmentation data can be downloaded from the PDDCA web site.

The official training cases are found in the files pddca_part1.zip and pddca_part2.zip. Optional additional cases are found in pddca_part3.zip. A detailed description of the manual delineation method is found in pddca.odt.

Test data: part 1

10 test cases without manual segmentations are available for download at http://www.imagenglab.com/data/pddca/test_cases_part1.zip. These cases are segmented by participants, and the results are returned to the challenge organizers. Participants may choose to segment some or all of the structures. Submissions should be sent to the organizers by Sept 11, 2015. The organizers will evaluate these submissions and will present the results at the challenge on Oct 9th, 2015.

Test data: part 1 UPDATE

We regret that there was an error in the test data: the HU values were converted incorrectly, with negative values truncated to 0 (e.g. air had a value of 0 instead of -1024). This error affects only the test data; the training data is not affected. Participants who believe this error does not affect their results may continue to use the old dataset without redoing the segmentation on the new images.
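A quick sanity check can tell which version of the test data a volume came from. The sketch below assumes the CT has already been loaded into a NumPy array of HU values (the loading step is omitted):

```python
import numpy as np

def hu_looks_truncated(volume):
    """Heuristic check for the HU conversion bug: a head-and-neck CT
    normally contains large regions of air near -1000 HU, so a volume
    whose minimum value is >= 0 almost certainly had its negative
    values clipped to 0."""
    return float(np.min(volume)) >= 0

# Synthetic example: correct HU values vs. the reported truncation.
good = np.array([-1024, -1024, 0, 40, 1000])
bad = np.clip(good, 0, None)  # reproduces negative values clipped to 0
```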

Please download the new test data from the following link: http://www.imagenglab.com/data/pddca/test_cases_part1_v2.zip

Submission of segmentation results

Please submit your results by Sept 11th, 2015, following these guidelines:

  • The file format for the segmented labels can be either DICOM-RT or any file format that can be loaded by ITK (NIfTI, MetaIO, NRRD, Analyze)
  • If possible, please use the same folder and file structure as the training data (i.e. one labelmap per organ/patient)
  • The results can be sent by e-mail or by providing a download link
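To mirror the training layout (one labelmap per organ and patient), a small helper can enumerate the expected file paths before packaging a submission. This is only an illustrative sketch: the structure names and the `.nrrd` extension are assumptions, not an official specification — match whatever naming the training data actually uses.

```python
from pathlib import Path

# Structure names are illustrative assumptions, not an official list.
STRUCTURES = [
    "BrainStem", "Mandible", "Chiasm",
    "OpticNerve_L", "OpticNerve_R",
    "Parotid_L", "Parotid_R",
    "Submandibular_L", "Submandibular_R",
]

def submission_paths(root, case_ids, ext=".nrrd"):
    """One labelmap file per organ and patient: <root>/<case>/<organ><ext>."""
    return [Path(root) / case / (organ + ext)
            for case in case_ids
            for organ in STRUCTURES]

paths = submission_paths("results", ["case01", "case02"])
```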

Workshop article submission

In addition, all researchers who want to take part in the challenge must submit a paper describing the approach used for the challenge. Submissions ranging from extended abstracts (2 pages) to full papers (8 pages) will be accepted.

All submissions should be formatted in Lecture Notes in Computer Science style; please refer to the submission format guidelines of MICCAI 2015. All papers should be submitted by Sept 19th, 2015. After the challenge and before publication, a revised version including quantitative segmentation results should be prepared by Nov 1, 2015. Submissions will be disseminated through the Midas Journal (http://www.midasjournal.org/).

Test data: part 2

An additional set of between 5 and 10 test cases without manual segmentations will be provided to participants. It is expected that these cases will be segmented by participants during the challenge. The segmentations will be given to, and evaluated by, the challenge organizers on the day of the challenge.

Onsite segmentation

Here is the link to the data for onsite segmentation:

https://drive.google.com/file/d/0B93Xbj4Fv3ZxbTNONFp4VHZoR1E/view?usp=sharing

Testing Methodology

Evaluation and ranking for single structure segmentation

1) Compute the Dice score and 95% Hausdorff distance for all structures. For paired organs (parotid glands, submandibular glands, optic nerves), compute the average Dice score and 95% Hausdorff distance over the pair.

2) For each structure/pair of structures, submissions are ranked according to the

  • average Dice score
  • average 95% Hausdorff distance
for test dataset 1 and the on-site test dataset. Both metrics contribute equally to the ranking.

3) Submissions for both datasets (pre-test and on-site) are ranked independently. Ranks for both datasets are summed in order to get an "overall ranking" for each structure.
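The two metrics above can be sketched in NumPy. The brute-force 95% Hausdorff distance below compares all foreground voxels, which is fine for a toy example but would need a distance-transform or surface-based implementation for full CT volumes:

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (in mm, given voxel
    spacing) between two binary masks, brute-forced over foreground voxels."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

# Toy example: two 2x2x2 cubes offset by one voxel along the last axis.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
```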

Ranking for the segmentation of all structures

  1. For all teams that submitted labels for all structures and for both test datasets (test dataset 1 and the on-site test), the individual rankings for each structure are summed independently for each dataset, yielding an overall ranking for test dataset 1 and for the on-site test dataset.
  2. The overall ranks for both datasets are summed to obtain an overall result for each team that submitted all structures.
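The rank-summation scheme above can be sketched as follows for one dataset; the scores here are hypothetical, not challenge results, and tie handling is an assumption (a real implementation might average tied ranks):

```python
def rank(values, higher_is_better):
    """Rank 1 = best; ties broken by index order."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=higher_is_better)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def overall_ranks(dice_by_structure, hd_by_structure):
    """Sum each team's Dice ranks and HD95 ranks over all structures for
    one dataset; both metrics contribute equally.
    Inputs: {structure: [score for team 0, team 1, ...]}."""
    n_teams = len(next(iter(dice_by_structure.values())))
    totals = [0] * n_teams
    for s in dice_by_structure:
        for t, r in enumerate(rank(dice_by_structure[s], higher_is_better=True)):
            totals[t] += r
        for t, r in enumerate(rank(hd_by_structure[s], higher_is_better=False)):
            totals[t] += r
    return totals

# Hypothetical example: two teams, two structures, one dataset.
dice_scores = {"brainstem": [0.90, 0.85], "mandible": [0.88, 0.92]}
hd_scores = {"brainstem": [2.0, 3.0], "mandible": [1.2, 1.5]}
totals = overall_ranks(dice_scores, hd_scores)
```

The final result repeats this per dataset and sums the two per-dataset totals; the lowest total wins.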

TIMELINE

  • Sept 11, 2015 (EXTENDED to Sept 19, 2015): Submission of segmentations for the 10 test cases to organizers
  • Sept 19, 2015 Submission of workshop articles to organizers
  • Oct 9, 2015 Challenge at MICCAI and evaluation of test cases presented by organizers
  • Nov 1, 2015 Final revised papers to organizers

TENTATIVE PROGRAM

  • 13:30: Welcome
  • 13:35 - 15:30: On-site segmentation of additional test cases
  • 15:30 - 15:45: Coffee break
  • 15:45 - 17:00: Oral presentations (10-12 min for each participant/team)
  • 17:00 - 17:30: Qualitative evaluation
  • 17:30 Announcement of the results + Adjourn

CONTACT INFORMATION

The organizing committee is:

  • Karl Fritscher, UMIT
  • Patrik Raudaschl, UMIT
  • Greg Sharp, MGH
  • Paolo Zaffino, UMG

Please send questions to the organizing committee at: headnecksegchallengeMICCAI2015@gmail.com
