Can I publish papers using the data or evaluation results?
Of course you can. Please cite the following papers when you use the data in publications:
 Xiahai Zhuang and Juan Shen: Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Medical Image Analysis, 31: 77-87, 2016.
 Xiahai Zhuang: Challenges and methodologies of fully automatic whole heart segmentation: a review. Journal of Healthcare Engineering, 4(3): 371-407, 2013.
 Xiahai Zhuang, Kawal Rhode, Reza Razavi, David J. Hawkes, Sebastien Ourselin: A registration-based propagation framework for automatic whole heart segmentation of cardiac MRI. IEEE Transactions on Medical Imaging, 29(9): 1612-1625, 2010.
Can I use semi-automatic segmentation algorithms in this challenge?
The challenge targets both semi-automatic and fully automatic segmentation methods. The two types of methods will be ranked separately.
Why do some cases have poor image quality (e.g., heavy motion artifacts)?
All the data were collected in an in vivo clinical environment and were used in clinical practice, so the image quality varies and some cases are of relatively poor quality. However, it is necessary to include these datasets to validate the robustness of the developed algorithms in real clinical usage.
Are there training data available for this challenge?
For each of the 40 training datasets (20 CT and 20 MRI), we will provide one manual segmentation of the whole heart substructures.
Can I use my own training data?
For the ranking and prizes, the answer is no: competitors must use the 40 training cases we provide, so that different algorithms can be compared fairly and benchmarked on the same data. We only rank teams that train on our 40 cases, and only those teams are eligible for the prizes. However, we welcome other submissions that use their own training data (public or non-public) or that address other topics in cardiac image computing.