The following rules apply to everyone who requests to participate (individually or as a team) and downloads the data.
All information provided during registration, whether as a team or an individual, must be complete and correct.
Note: Anonymous registrations are not allowed.
To participate in the challenge and have your results visible on this website, it is mandatory to submit a 4-page paper in the appropriate ISBI style to the organizers, explaining your algorithm.
Each team may submit up to three methods for evaluation per sub-challenge.
Multiple submissions are allowed only when the algorithm differs significantly from the one used in the first submission. Each subsequent submission must include a short description of how the method differs from the previous submissions.
Participating teams maintain full ownership of their algorithms and associated intellectual property they develop in the course of participating in the challenge.
The top-performing teams and individuals (selected by the experts based on the performance of their methods) will be invited to contribute, with a maximum of two authors per team, to one or more joint journal papers describing and summarizing the methods used and results found in this challenge. The paper will be submitted to a high-impact journal in the field.
The organizers will review the paper to ensure it contains sufficient detail to understand and reproduce each method, and they reserve the right to exclude participants from the joint journal paper if their method description is inadequate.
An appropriate citation must be made in any scientific publication (journals, conference papers, technical reports, presentations at conferences and meetings) that uses the data shared in this challenge.
For all sub-challenges, participants may use other datasets for the development of a method that will be submitted to the challenge, provided that the datasets are publicly available, listed on the challenge website and clearly stated in the submitted paper.
Examples include the Kaggle DR, IDRiD, Messidor, and APTOS datasets. This list is non-exhaustive: additional data sources identified by the organizing team (either on their own or through interested participants) by December 31, 2019 will also be allowed for this challenge.
This challenge also discourages private grading of public data. That is, although teams are allowed to use public data and to create private annotations for specific tasks in order to train their models, any team doing so must release those data and annotations to the public through this challenge to have performance obtained with the private annotations counted.