A multi-modal, interpretable AI system for glaucoma detection and clinical insight generation
Overview • Architecture • Technologies • Project Structure • Installation • Usage • Results • Contributing • Snapshots
Explainable AI-Based Glaucoma Detection leverages Transfer Learning, LIME, and LLMs to deliver accurate and interpretable glaucoma screening from fundus images.
- LLAMA 3.2 90B Vision Model: Analyzes LIME-based visual explanations to refine interpretability.
- LLAMA 3.1 70B LLM: Generates detailed textual reasoning, combining model outputs and vision insights.
System Components:
- Frontend: React.js-based interface for image upload and result visualization
- Backend: Express.js and FastAPI for handling predictions and hosting LLMs
- Image Analysis: Combines CNN-based classification with LIME and LLAMA models for interpretability
- Database: MongoDB stores user feedback to refine predictions
- Image Upload: Users upload fundus images for evaluation
- Classification: Pre-trained CNN model classifies images as Normal or Glaucoma
- LIME Explanation: Highlights critical regions influencing the model’s decision
- Vision Model Analysis: LLAMA 3.2 90B Vision Model refines LIME heatmaps and extracts superpixel importance
- Reasoning and Explanation: LLAMA 3.1 70B LLM generates detailed textual explanations
- Feedback: Users can submit feedback for continuous improvement
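The classification step of this workflow can be sketched as follows (a minimal sketch; the 0.5 decision threshold and sigmoid-style probability output are assumptions for illustration, not taken from the repository's code):

```python
THRESHOLD = 0.5  # assumed decision boundary for the CNN's sigmoid output

def classify(predict_fn, image):
    """Map the CNN's glaucoma probability to a label.

    predict_fn stands in for the trained model's predict method and
    should return a probability in [0, 1] for the Glaucoma class.
    """
    score = float(predict_fn(image))
    label = "Glaucoma" if score >= THRESHOLD else "Normal"
    return label, score

# Example with a stub model in place of the real CNN:
label, score = classify(lambda img: 0.87, None)
```

In the real pipeline, `predict_fn` would be the loaded transfer-learning model, and the resulting label and score are what the LIME and LLAMA stages go on to explain.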
- Frontend: React.js
- Backend: Express.js, FastAPI
- AI Models: Transfer Learning (CNN, e.g., VGG16), LIME, LLAMA 3.2 90B Vision Model, LLAMA 3.1 70B LLM
- Database: MongoDB
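As a concrete example of how an uploaded image moves into the model stack, a VGG16-style CNN expects a fixed-size, normalized tensor; a minimal preprocessing sketch (the 224×224 input size and [0, 1] scaling are typical VGG16 conventions, assumed rather than taken from the repository):

```python
import numpy as np

def preprocess(image, size=(224, 224)):
    """Scale a fundus image to float32 in [0, 1] and check its shape.

    `image` is an HxWx3 uint8 array; a real pipeline would also resize
    (e.g., with PIL or OpenCV) rather than reject mismatched shapes.
    """
    img = np.asarray(image, dtype=np.float32) / 255.0
    if img.shape[:2] != tuple(size):
        raise ValueError(f"expected {size}, got {img.shape[:2]}")
    return img
```

The normalized array is what the CNN classifies and what LIME perturbs when building its explanation.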
```
Project/
│
├── Backend_Model/                                # Backend for the machine learning model and API
│   ├── model/                                    # Model-related files
│   ├── Notebooks/                                # Model training notebooks
│   ├── predictenv/                               # Environment setup for model predictions
│   ├── server/                                   # FastAPI backend for model deployment
│
├── Glaucoma-Detection-using-Transfer-Learning/   # Main application
│   ├── node_modules/                             # Node.js dependencies
│   ├── public/                                   # Static files for frontend
│   ├── server/                                   # Node.js and Express.js backend configuration
│   ├── server.js                                 # Express.js API server entry point
│   ├── src/                                      # React.js source code (frontend)
│   ├── .env                                      # Environment variables for configuration
│   ├── package.json                              # NPM package configuration
│   ├── vite.config.js                            # Vite configuration for frontend
│
├── Test Images/                                  # Test images for model validation
│   ├── Glaucoma/                                 # Images of glaucoma-affected eyes
│   └── Normal/                                   # Images of normal eyes
│
├── .gitignore                                    # Files to be ignored by git
└── README.md                                     # Project documentation
```

1. Clone the repository:

```bash
git clone https://github.com/VaibhavDaveDev/Explainable-AI-Based-Glaucoma-Detection-using-Transfer-Learning-LIME-and-LLMs.git
cd Explainable-AI-Based-Glaucoma-Detection-using-Transfer-Learning-LIME-and-LLMs
```

2. Install the Python dependencies:

```bash
cd Backend_Model/predictenv
pip install -r requirements.txt
```

3. Install the Node.js dependencies:

```bash
cd ../../Glaucoma-Detection-using-Transfer-Learning
npm install
```

4. Install and start MongoDB locally, or configure a remote MongoDB instance.
Create a `.env` file in both the `Backend_Model/server` and `Glaucoma-Detection-using-Transfer-Learning` directories. Example for the Express.js backend:

```
MONGO_URI=mongodb://localhost:27017/glaucomaDetection
PORT=5000
```

Ensure MongoDB is running on your machine.
- Express.js:

```bash
cd ../../Glaucoma-Detection-using-Transfer-Learning
node server.js
```

- FastAPI:

```bash
cd Backend_Model/server/
uvicorn main:app --reload
```

- Frontend:

```bash
cd ../../Glaucoma-Detection-using-Transfer-Learning
npm run dev
```

Access the application at: http://localhost:3000
- Classification Result: Identifies the input image as Normal or Glaucoma
- LIME Heatmaps: Highlights important regions in the fundus image
- LLAMA-Enhanced Explanation:
- Visual Analysis (LLAMA 3.2 90B): Refines LIME outputs by identifying key superpixels and their relevance
- Reasoning (LLAMA 3.1 70B): Provides detailed natural language explanations, offering clinical insights
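The superpixel-importance step can be illustrated with a small helper. It assumes the `(superpixel_id, weight)` pair format that `lime_image`'s `Explanation.local_exp[label]` returns, and is a sketch rather than the repository's actual code:

```python
def top_superpixels(local_exp, k=5):
    """Rank superpixels by the magnitude of their LIME weight.

    local_exp: list of (superpixel_id, weight) pairs. Positive weights
    push the prediction toward the explained class; negative weights
    push against it. Only the k most influential regions are kept.
    """
    return sorted(local_exp, key=lambda pair: abs(pair[1]), reverse=True)[:k]

# Example: superpixel 7 carries the strongest signal, for or against.
ranked = top_superpixels([(3, 0.12), (7, -0.41), (9, 0.30)], k=2)
# ranked == [(7, -0.41), (9, 0.30)]
```

A ranking like this, together with the heatmap, is the kind of structured input the vision model can turn into a region-by-region clinical narrative.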
- Upload a test image (e.g., from the `Test Images` folder)
- Receive the classification result and LIME-based visual heatmap
- View the detailed reasoning generated by the LLAMA models
- Optionally submit feedback to improve the system
We welcome contributions! Follow these steps:
- Fork the repository
- Create a new branch (`git checkout -b feature/YourFeature`)
- Commit your changes (`git commit -m 'Add your feature'`)
- Push to the branch (`git push origin feature/YourFeature`)
- Open a Pull Request
Here are some snapshots of the system in action: