This research project focused on building a deep learning-based system to detect Down Syndrome in children using facial images. The goal was to provide a non-invasive, early screening tool that could assist clinicians or caregivers in identifying potential signs of the disorder with high accuracy.
Down Syndrome, caused by an extra copy of chromosome 21 (trisomy 21), is one of the most common genetic disorders worldwide. Early detection is crucial for timely medical and social interventions. Since characteristic facial features are often indicative of the condition, we explored whether convolutional neural networks (CNNs) could learn to distinguish these features effectively.
We used a public Kaggle dataset containing around 3,000 facial images of children, roughly half diagnosed with Down Syndrome and half without. The images cover children aged 0 to 15 years.
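For illustration, the sketch below shows one way such a dataset could be loaded in Keras, assuming a directory-per-class layout; the folder names, image size, batch size, and 80/20 split are assumptions rather than details taken from the project.

```python
import tensorflow as tf

# Hypothetical layout: one sub-folder per class, e.g.
#   data/down_syndrome/*.jpg
#   data/healthy/*.jpg
# (folder names are placeholders; the Kaggle dataset may use different ones)
IMG_SIZE = (224, 224)   # assumed input size, compatible with common ImageNet backbones
BATCH_SIZE = 32         # assumed batch size

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data",
    labels="inferred",
    label_mode="binary",        # two classes: Down Syndrome vs. not
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    validation_split=0.2,       # assumed 80/20 train/validation split
    subset="training",
    seed=42,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data",
    labels="inferred",
    label_mode="binary",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    validation_split=0.2,
    subset="validation",
    seed=42,
)
```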
To enhance model generalization, we performed several data preprocessing and augmentation steps before training.
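The snippet below sketches a typical Keras preprocessing and augmentation pipeline for face images; the specific operations and their parameters are assumptions, not a reproduction of the exact configuration used in the project.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed augmentation pipeline: typical choices for face images.
data_augmentation = keras.Sequential([
    layers.RandomFlip("horizontal"),   # mirror faces left/right
    layers.RandomRotation(0.05),       # small rotations (roughly +/- 18 degrees)
    layers.RandomZoom(0.1),            # slight zoom in/out
    layers.RandomContrast(0.1),        # mild lighting variation
])

# Pixel values rescaled to [0, 1]; note that some backbones instead expect
# their own preprocess_input(), e.g. keras.applications.xception.preprocess_input.
normalization = layers.Rescaling(1.0 / 255)
```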
We applied transfer learning using four state-of-the-art pre-trained CNN architectures, Xception among them.
We froze the early layers to preserve their general learned features and fine-tuned the deeper layers for our classification task. Careful tuning of the batch size, learning rate, and dropout rate helped optimize each model's performance.
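The sketch below illustrates this two-stage transfer-learning setup with Xception as the backbone. The classification head, the number of frozen layers, and the hyperparameter values are illustrative assumptions; `train_ds` and `val_ds` refer to the datasets loaded in the earlier sketch.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Stage 1: frozen pre-trained feature extractor with a small classification head.
base = keras.applications.Xception(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # keep the pre-trained general features intact

inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.xception.preprocess_input(inputs)  # scale pixels to [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                      # assumed dropout rate
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # assumed learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Stage 2: unfreeze the deeper blocks of the backbone and fine-tune them
# with a much smaller learning rate, keeping the earliest layers frozen.
base.trainable = True
for layer in base.layers[:100]:   # assumed cut-off; preserves low-level features
    layer.trainable = False
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

The same two-stage recipe applies to the other backbones, swapping in the corresponding `keras.applications` model and its `preprocess_input`.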
All models achieved high accuracy, with Xception delivering the best overall performance. It effectively learned to capture subtle facial differences between Down Syndrome and non-Down Syndrome children. Its architecture — based on depthwise separable convolutions — allowed efficient and fine-grained feature extraction, leading to accurate and stable predictions.
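As a rough illustration of that efficiency, the snippet below compares the parameter counts of a standard convolution and a depthwise separable convolution over the same input and output shapes; the shapes are arbitrary and chosen only for the comparison.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Depthwise separable convolutions (Xception's building block) factor a standard
# convolution into a per-channel spatial filter plus a 1x1 pointwise mix,
# which sharply reduces the parameter count for the same output width.
inp = keras.Input(shape=(32, 32, 64))

standard = keras.Model(inp, layers.Conv2D(128, 3, padding="same")(inp))
separable = keras.Model(inp, layers.SeparableConv2D(128, 3, padding="same")(inp))

print("standard conv params: ", standard.count_params())   # 3*3*64*128 + 128 = 73,856
print("separable conv params:", separable.count_params())  # 3*3*64 + 64*128 + 128 = 8,896
```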