Devathone Task.


Title: Chest Infection Classification

Task Overview:

Develop an AI-based solution that accurately classifies chest infections from a given image dataset. The
dataset consists of chest X-ray images labeled as either "infected" or "not infected." Participants are
encouraged to apply their best machine learning, deep learning, or generative AI skills to tackle this
binary classification problem. Additionally, participants must create a user-friendly interface for
interacting with the model.

The Chest Infection Classification Challenge is an AI devathone where participants are tasked with
developing a high-performing model to classify chest X-ray images into two categories:

- Infected
- Not Infected

The goal is to encourage the development of innovative and accurate AI models for medical image
analysis.

Dataset:
Participants will be provided with a dataset containing a large number of chest X-ray images. Each image
in the dataset is labeled as either "Infected" or "Not Infected". The dataset will be split into training,
testing, and validation sets.

Data link:

https://drive.google.com/drive/folders/1YHJAK6R5oYfJMI6hkow7qK2HeuN-uXtz?usp=sharing

Data Description

- Dataset Name: Chest Infection

- Task Type: Binary Classification

- Labels: Infected, Not Infected

- Image Format: PNG or JPEG

- Resolution: Varies

Task Description:

Data Preprocessing:

Participants are required to preprocess the provided dataset; this may include data
augmentation, resizing, and normalization to prepare the images for model training.
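As a rough starting point, the sketch below shows one way to build such a pipeline in TensorFlow (the framework suggested in the rules). The directory layout (data/train/<class name>/...), the 224x224 target size, and the specific augmentations are assumptions to adapt, not part of the task specification.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed target size; the source images vary in resolution
BATCH_SIZE = 32

# Assumes images are laid out as data/train/<class name>/*.png|*.jpeg;
# adjust the path to match the provided dataset.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
)

# Rescale pixels to [0, 1] and apply light, label-preserving augmentation.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])

train_ds = train_ds.map(
    lambda x, y: (augment(normalize(x), training=True), y),
    num_parallel_calls=tf.data.AUTOTUNE,
).prefetch(tf.data.AUTOTUNE)
```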

Model Development:
Participants are expected to design and train a deep learning model for image classification. They
can use any architecture, such as Convolutional Neural Networks (CNNs), and are encouraged to
explore state-of-the-art techniques and pre-trained models.
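One common approach is transfer learning from an ImageNet pre-trained backbone, sketched below; MobileNetV2, the frozen backbone, and the head layers are illustrative choices, not requirements. Inputs are assumed to be rescaled to [0, 1] as in the preprocessing sketch above.

```python
import tensorflow as tf

# Transfer-learning sketch: frozen ImageNet backbone plus a small binary head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze the backbone for the first training phase

inputs = tf.keras.Input(shape=(224, 224, 3))
# Inputs are assumed rescaled to [0, 1]; MobileNetV2 expects [-1, 1].
x = tf.keras.layers.Rescaling(2.0, offset=-1.0)(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
```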

Model Training:

Participants will train their models using the training dataset. They are encouraged to optimize
their models by experimenting with hyper-parameters and other relevant techniques.
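Continuing the sketches above, a minimal training setup might look like this; the optimizer, learning rate, epoch count, and early-stopping patience are illustrative hyper-parameters to tune, and val_ds is assumed to be built the same way as train_ds, without augmentation.

```python
# All hyper-parameter values below are starting points, not recommendations.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

callbacks = [
    # Stop when validation performance plateaus and keep the best weights.
    tf.keras.callbacks.EarlyStopping(
        monitor="val_accuracy", patience=5, restore_best_weights=True
    ),
    tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
]

history = model.fit(
    train_ds,
    validation_data=val_ds,  # assumed: built like train_ds, without augmentation
    epochs=30,
    callbacks=callbacks,
)
```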

Model Evaluation:

Participants will evaluate their models on a separate validation dataset to measure their
performance. The evaluation metric will be accuracy, but participants are also encouraged to
report additional metrics like precision, recall, and F1-score.
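A minimal way to compute and report these metrics on the validation set, assuming a sigmoid-output model as in the sketches above and an (assumed) 0.5 decision threshold:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Collect ground-truth labels and thresholded predictions from the validation set.
y_true, y_pred = [], []
for images, labels in val_ds:
    probs = model.predict(images, verbose=0).ravel()
    y_true.extend(labels.numpy().ravel().astype(int))
    y_pred.extend((probs >= 0.5).astype(int))  # 0.5 threshold is an assumption

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
```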

User Interface to Interact with the Model:

A user interface based on Flask, Django, or Streamlit is required for deploying the model and
making predictions on new X-ray images.
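Of the suggested frameworks, Streamlit typically needs the least code. A minimal sketch follows; the model path, input size, and 0/1-to-label mapping are assumptions that must match your training setup.

```python
# app.py -- minimal Streamlit sketch; run with: streamlit run app.py
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

st.title("Chest Infection Classifier")

model = tf.keras.models.load_model("best_model.keras")  # assumed model path

uploaded = st.file_uploader("Upload a chest X-ray", type=["png", "jpg", "jpeg"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded X-ray")

    # Match the assumed training-time preprocessing: rescale to [0, 1].
    batch = np.expand_dims(np.asarray(image) / 255.0, axis=0)
    prob = float(model.predict(batch, verbose=0)[0][0])

    # The 0/1 -> class mapping depends on how labels were encoded during
    # training (e.g. alphabetical directory order); verify before relying on it.
    label = "Infected" if prob >= 0.5 else "Not Infected"
    st.write(f"Prediction: {label} (score: {prob:.2f})")
```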

Submission:

Participants will be required to submit their trained models, along with predictions on the
provided test set, for final evaluation.
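Since the task does not specify a submission layout, the sketch below simply writes one prediction per test image to a CSV file; the test directory path, file name, and column names are assumptions.

```python
import csv
import pathlib

import numpy as np
import tensorflow as tf

# Assumed layout: data/test/ contains unlabeled PNG/JPEG images.
test_paths = sorted(pathlib.Path("data/test").glob("*"))

with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "predicted_label"])
    for path in test_paths:
        img = tf.keras.utils.load_img(path, target_size=(224, 224))
        batch = np.expand_dims(np.asarray(img) / 255.0, axis=0)
        prob = float(model.predict(batch, verbose=0)[0][0])
        # Assumed 0/1 -> class mapping; verify against the training encoding.
        writer.writerow([path.name, "Infected" if prob >= 0.5 else "Not Infected"])
```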

- Documentation:

Include a README file explaining:

- How to set up and run your solution.

- A brief description of the approach used, including any unique aspects or innovations.

- Instructions for using the user interface.

- Video Description of Project:

Create a one-minute video explaining the project's key aspects and usability.

Evaluation Criteria:

- Model Accuracy:

The primary metric for evaluating the performance of the models.

- Model Efficiency:

How well the model performs with limited computational resources.

- Innovation:

Any novel techniques or approaches used in the solution.

- User Interface (UI) Usability:

Ease of use and aesthetics of the user interface.

Competition Rules:
Participants are expected to adhere to ethical guidelines and follow all applicable laws and regulations
regarding the use of medical data.

Collaboration among participants is not allowed; each participant should work independently.

The competition will have a specified timeline for model development, training, and submission.

The models should be implemented using open-source deep learning frameworks such as TensorFlow.

Plagiarism or the use of pre-trained models without proper attribution will result in disqualification.

Final Evaluation:

The models will be evaluated primarily based on accuracy, with additional consideration for the quality
and novelty of the approach as outlined in the documentation.
