Challenge
Sub-challenge 2

Image Quality Estimation

Aim

The purpose of this sub-challenge is to evaluate fundus image quality.

Background

Automatic DR (diabetic retinopathy) diagnosis has recently gained significant research and clinical interest, and image quality plays a key role in diagnostic accuracy. Even though the technology used in digital fundus imaging has improved, non-biological factors resulting from improper operation can still reduce image quality. The major factors in fundus image quality assessment are image artifact, clarity, and field definition, as shown in the Figure.

Figure. Images (a)-(c) are ungradable for DR due to artifacts (a), poor clarity (b), and poor field definition (c), respectively. Image (d) is a gradable image.

Table. Image Quality Scoring Criteria

Type             | Image quality specification                                      | Level
Overall quality  | Quality is good enough for the diagnosis of retinal diseases     | 1
                 | Quality is not good enough for the diagnosis of retinal diseases | 0
Artifact         | Do not contain artifacts                                         | 0
                 | Outside the aortic arch with range less than 1/4 of the image    | 1
                 | Do not affect the macular area with scope less than 1/4          | 4
                 | Cover more than 1/4, less than 1/2 of the image                  | 6
                 | Cover more than 1/2 without fully covering the posterior pole    | 8
                 | Cover the entire posterior pole                                  | 10
Clarity          | Only Level 1 vascular arch can be identified                     | 1
                 | Can identify Level 2 vascular arch and a small number of lesions | 4
                 | Can identify Level 3 vascular arch and some lesions              | 6
                 | Can identify Level 3 vascular arch and most lesions              | 8
                 | Can identify Level 3 vascular arch and all lesions               | 10
Field definition | Do not include the optic disc and macula                         | 1
                 | Contain either the optic disc or the macula                      | 4
                 | Contain both the optic disc and the macula                       | 6
                 | The optic disc and macula are within 2PD of the center           | 8
                 | The optic disc and macula are within 1PD of the center           | 10
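For reference, the scoring criteria above can be encoded directly as data. The following is a minimal Python sketch for checking that a submitted grade is legal for its task; the names (`RUBRIC`, `is_valid_grade`) are illustrative and not part of any official challenge kit.

```python
# Scoring criteria from the table above, transcribed as score -> description
# mappings. An illustrative encoding for validating submitted labels, not an
# official artifact of the challenge.
OVERALL_QUALITY = {
    1: "Quality is good enough for the diagnosis of retinal diseases",
    0: "Quality is not good enough for the diagnosis of retinal diseases",
}
ARTIFACT = {
    0: "Do not contain artifacts",
    1: "Outside the aortic arch with range less than 1/4 of the image",
    4: "Do not affect the macular area with scope less than 1/4",
    6: "Cover more than 1/4, less than 1/2 of the image",
    8: "Cover more than 1/2 without fully covering the posterior pole",
    10: "Cover the entire posterior pole",
}
CLARITY = {
    1: "Only Level 1 vascular arch can be identified",
    4: "Can identify Level 2 vascular arch and a small number of lesions",
    6: "Can identify Level 3 vascular arch and some lesions",
    8: "Can identify Level 3 vascular arch and most lesions",
    10: "Can identify Level 3 vascular arch and all lesions",
}
FIELD_DEFINITION = {
    1: "Do not include the optic disc and macula",
    4: "Contain either the optic disc or the macula",
    6: "Contain both the optic disc and the macula",
    8: "The optic disc and macula are within 2PD of the center",
    10: "The optic disc and macula are within 1PD of the center",
}
RUBRIC = {
    "Overall quality": OVERALL_QUALITY,
    "Artifact": ARTIFACT,
    "Clarity": CLARITY,
    "Field definition": FIELD_DEFINITION,
}

def is_valid_grade(task, score):
    """Return True if `score` is a legal grade for the given task."""
    return score in RUBRIC.get(task, {})
```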
Task

Fundus image quality assessment. The sub-challenge is divided into four tasks; participants may submit results for one or more of the following:

a) Overall image quality

b) Artifact

c) Clarity

d) Field definition

Availability of the data

Initially, 60% of the database with ground truth (i.e., the training set with labels) will be released on December 1, 2019 (four months before the date of the symposium). The validation set (20%) will be provided on January 15, 2020 (two and a half months before the day of the on-site competition), and the remaining 20% (the test set, without labels) will be released on the day of the challenge. The released data will be divided proportionately per grade. The test set results will be used for comparison and analysis of the submitted results.
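The per-grade proportional 60/20/20 split described above can be sketched as follows. The function `stratified_split` and its API are hypothetical, shown only to illustrate stratification by grade, and are not the organizers' actual splitting code.

```python
import random
from collections import defaultdict

def stratified_split(labels, fractions=(0.6, 0.2, 0.2), seed=0):
    """Split image ids into train/val/test sets, proportionately per grade.

    `labels` maps image id -> quality grade. The 60/20/20 fractions follow
    the release schedule described in the text; this is an illustrative
    sketch, not an official challenge artifact.
    """
    rng = random.Random(seed)
    by_grade = defaultdict(list)
    for image_id, grade in labels.items():
        by_grade[grade].append(image_id)
    train, val, test = [], [], []
    for grade in sorted(by_grade):
        ids = by_grade[grade]
        rng.shuffle(ids)                       # randomize within each grade
        n_train = round(len(ids) * fractions[0])
        n_val = round(len(ids) * fractions[1])
        train.extend(ids[:n_train])
        val.extend(ids[n_train:n_train + n_val])
        test.extend(ids[n_train + n_val:])     # remainder goes to the test set
    return train, val, test
```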

Evaluation metrics

For image quality estimation (sub-challenge 2), this challenge evaluates algorithm performance by image classification accuracy against the grades provided by the experts.
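Classification accuracy here can be computed as the fraction of images whose predicted grade matches the expert grade. A minimal sketch (the function name is illustrative; the organizers' exact scoring script may differ, e.g. in how multiple tasks are weighted):

```python
def classification_accuracy(predicted, expert):
    """Fraction of images whose predicted grade equals the expert grade."""
    if len(predicted) != len(expert):
        raise ValueError("prediction/label count mismatch")
    correct = sum(p == e for p, e in zip(predicted, expert))
    return correct / len(expert)
```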

Result Submission

Results will be accepted as a .csv file with the headers Test Image No. and Quality Grade.
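A submission file in that layout could be produced as follows; the exact header spelling should be confirmed against the organizers' template, and `write_submission` is a hypothetical helper, not part of the challenge kit.

```python
import csv
import io

def write_submission(rows, fileobj):
    """Write (image_no, grade) pairs in the expected .csv layout.

    Header names follow the text above ("Test Image No.", "Quality Grade");
    they are transcribed from the description, not from an official template.
    """
    writer = csv.writer(fileobj)
    writer.writerow(["Test Image No.", "Quality Grade"])
    for image_no, grade in rows:
        writer.writerow([image_no, grade])
```

For example, `write_submission([("0001", 1), ("0002", 0)], open("results.csv", "w", newline=""))` writes a two-row submission file.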