A Docker Compose-based MVP application for AI-powered mock exam generation using LangChain, LangGraph, MCP (Model Context Protocol), and Ollama with Gemma2.
- AI-Powered Exam Generation: Uses Gemma2 via Ollama for intelligent question generation
- Multi-Agent Architecture: LangGraph-orchestrated agents for syllabus analysis, question generation, and feedback
- Realistic Exam Experience: Proper timing, sections, negative marking, and navigation
- Comprehensive Feedback: Topic-wise analysis with Fast Track and Deep Mastery study plans
- Chat Integration: WhatsApp/Telegram-ready natural language exam generation
- Syllabus-True Content: Questions aligned to latest exam patterns and blueprints
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    Flask API    │     │    LangGraph    │     │     Ollama      │
│    (app.py)     │────►│    Workflow     │────►│    (Gemma2)     │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │
         ▼                       ▼
┌─────────────────┐     ┌─────────────────┐
│     MongoDB     │     │      Redis      │
│   (Database)    │     │     (Cache)     │
└─────────────────┘     └─────────────────┘
```
- Docker (20.10+)
- Docker Compose (1.29+)
- 4GB+ RAM available for containers
- 10GB+ disk space for models and data
```bash
# Clone the repository files into your project directory
mkdir pymock-mvp
cd pymock-mvp

# Copy all the provided files into appropriate locations
# (use the artifacts provided above)
mkdir -p app/agents app/database app/services logs
touch app/__init__.py app/agents/__init__.py app/database/__init__.py app/services/__init__.py

# Make the startup script executable
chmod +x startup.sh

# Run the startup script (recommended)
./startup.sh

# OR start manually
docker-compose up -d
```

```bash
# Check health
curl http://localhost:5000/health

# Generate a test exam
curl -X POST http://localhost:5000/api/exam/generate \
-H "Content-Type: application/json" \
-d '{
"exam_type": "JEE_MAIN",
"subject_areas": ["Mathematics", "Physics"],
"difficulty": "medium",
"duration": 60,
"num_questions": 10
}'
```

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | System health check |
| POST | `/api/exam/generate` | Generate a new mock exam |
| GET | `/api/exam/{exam_id}` | Retrieve exam details |
| POST | `/api/exam/{exam_id}/submit` | Submit exam answers |
| GET | `/api/feedback/{submission_id}` | Get detailed feedback |
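For illustration, the `/health` endpoint above could aggregate per-dependency pings along these lines. This is a hypothetical sketch, not the app's actual route: the connection URLs and the `check()` helper are assumptions.

```python
# Illustrative /health aggregation; URLs and the check() helper are
# assumptions (the real route lives in the Flask app).
import requests
import redis
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
mongo = MongoClient("mongodb://pymock:pymock123@mongodb:27017/?authSource=admin")
cache = redis.Redis.from_url("redis://redis:6379")

def check(probe):
    # Run a dependency probe, mapping success/failure to a status string
    try:
        probe()
        return "healthy"
    except Exception:
        return "unhealthy"

@app.route("/health")
def health():
    status = {
        "database": check(lambda: mongo.admin.command("ping")),
        "cache": check(cache.ping),
        "llm": check(lambda: requests.get("http://ollama:11434/api/tags", timeout=5).raise_for_status()),
    }
    code = 200 if all(v == "healthy" for v in status.values()) else 503
    return jsonify(status), code
```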
Chat integration:

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/chat/generate-exam` | Natural language exam generation |
Exam generation request (`POST /api/exam/generate`):

```json
{
  "exam_type": "NEET",
  "subject_areas": ["Physics", "Chemistry", "Biology"],
  "difficulty": "medium",
  "duration": 180,
  "num_questions": 50,
  "negative_marking": true
}
```

Submission request (`POST /api/exam/{exam_id}/submit`), with per-question time in seconds:

```json
{
  "answers": {
    "Q001": "A",
    "Q002": "B",
    "Q003": "C"
  },
  "time_taken": {
    "Q001": 45,
    "Q002": 60,
    "Q003": 30
  }
}
```

Chat request (`POST /api/chat/generate-exam`):

```json
{
  "message": "Generate a medium difficulty JEE Main mock test",
  "user_id": "user123"
}
```

**SyllabusAgent**

- Analyzes exam syllabi and topic weights
- Provides blueprint-aligned content structure
- Caches syllabus data for efficiency

**PaperAgent**

- Generates questions based on syllabus requirements
- Maintains difficulty distribution
- Creates realistic question formats

**MockAgent**

- Assembles the complete exam structure
- Handles sections, timing, and navigation
- Creates exam sessions

**FeedbackAgent**

- Analyzes performance across multiple dimensions
- Generates actionable improvement plans
- Provides time management insights
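The generation-time agents above are chained by LangGraph. A minimal sketch of how such a pipeline can be wired; node logic is stubbed and the state fields are illustrative, not the project's actual code in `app/agents/workflow.py` (FeedbackAgent runs on the separate submission path):

```python
# Sketch of a LangGraph pipeline chaining the generation-time agents;
# node bodies are stubs and ExamState fields are illustrative.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class ExamState(TypedDict, total=False):
    exam_type: str
    difficulty: str
    syllabus: dict
    questions: list
    exam: dict

def syllabus_agent(state: ExamState) -> ExamState:
    # Analyze syllabus / topic weights for the requested exam type
    return {"syllabus": {"topics": ["Algebra", "Kinematics"]}}

def paper_agent(state: ExamState) -> ExamState:
    # Generate questions from the syllabus (LLM call stubbed out)
    return {"questions": [{"id": "Q001", "topic": "Algebra"}]}

def mock_agent(state: ExamState) -> ExamState:
    # Assemble sections, timing, and navigation into a complete exam
    return {"exam": {"questions": state["questions"], "duration": 180}}

graph = StateGraph(ExamState)
graph.add_node("syllabus", syllabus_agent)
graph.add_node("paper", paper_agent)
graph.add_node("mock", mock_agent)
graph.set_entry_point("syllabus")
graph.add_edge("syllabus", "paper")
graph.add_edge("paper", "mock")
graph.add_edge("mock", END)

workflow = graph.compile()
result = workflow.invoke({"exam_type": "NEET", "difficulty": "medium"})
```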
Exam document:

```json
{
  "exam_id": "uuid",
  "exam_type": "JEE_MAIN",
  "title": "JEE Main Mock Test",
  "duration": 180,
  "questions": {...},
  "sections": {...},
  "created_at": "ISO_DATE"
}
```

Submission document:

```json
{
  "submission_id": "uuid",
  "exam_id": "uuid",
  "student_id": "string",
  "answers": {...},
  "performance": {...},
  "submitted_at": "ISO_DATE"
}
```
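These documents could be expressed as Pydantic models along the following lines. Field types are assumed; the project's actual models live in `mcp_server/models.py`:

```python
# Pydantic models mirroring the documents above; field types are assumed.
from datetime import datetime
from typing import Any, Dict
from pydantic import BaseModel

class Exam(BaseModel):
    exam_id: str
    exam_type: str
    title: str
    duration: int                # minutes
    questions: Dict[str, Any]
    sections: Dict[str, Any]
    created_at: datetime

class Submission(BaseModel):
    submission_id: str
    exam_id: str
    student_id: str
    answers: Dict[str, str]      # question_id -> chosen option
    performance: Dict[str, Any]
    submitted_at: datetime
```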
Key environment variables:

```bash
# Database
MONGODB_URI=mongodb://pymock:pymock123@mongodb:27017/pymock?authSource=admin
REDIS_URL=redis://redis:6379
# AI Model
OLLAMA_BASE_URL=http://ollama:11434
MODEL_NAME=gemma2:2b
# Application
FLASK_DEBUG=1
MAX_QUESTIONS_PER_EXAM=100
DEFAULT_EXAM_DURATION=180
```

Supported exam types:

- NEET: Medical entrance (Physics, Chemistry, Biology)
- JEE_MAIN: Engineering entrance (Math, Physics, Chemistry)
- SSC_CGL: Government jobs (4 sections)
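These settings and exam definitions might be loaded by `config.py` along these lines. A sketch only: the defaults and the exam-type registry below are illustrative assumptions, not the repo's actual file.

```python
# config.py sketch; defaults and EXAM_TYPES are illustrative assumptions.
import os

MONGODB_URI = os.environ.get("MONGODB_URI", "mongodb://localhost:27017/pymock")
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
MODEL_NAME = os.environ.get("MODEL_NAME", "gemma2:2b")
MAX_QUESTIONS_PER_EXAM = int(os.environ.get("MAX_QUESTIONS_PER_EXAM", "100"))
DEFAULT_EXAM_DURATION = int(os.environ.get("DEFAULT_EXAM_DURATION", "180"))

# Hypothetical registry consulted when generating exams for each type
EXAM_TYPES = {
    "NEET": {"subjects": ["Physics", "Chemistry", "Biology"], "duration": 180},
    "JEE_MAIN": {"subjects": ["Mathematics", "Physics", "Chemistry"], "duration": 180},
    "SSC_CGL": {"sections": 4},
}
```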
```bash
# Application logs
docker-compose logs -f pymock-api
# Ollama logs
docker-compose logs -f ollama
# Database logs
docker-compose logs -f mongodb
```

```bash
# Check service health
curl http://localhost:5000/health
# Monitor resource usage
docker-compose top
```

- Install Python dependencies

```bash
pip install -r requirements.txt
```

- Run individual services

```bash
# Start only infrastructure
docker-compose up -d mongodb redis ollama

# Run the Flask app locally
cd app
python main.py
```

- Testing

```bash
# Run tests (when implemented)
pytest

# Manual API testing
python -m pytest tests/ -v
```

To add a new exam type:

- Update `config.py` with the new exam configuration
- Add syllabus templates in the Syllabus Agent
- Test with the new exam type

To add a new agent:

- Create the new agent in `app/agents/`
- Add it to the workflow in `agents/workflow.py`
- Update API endpoints as needed
**Ollama model not found**

```bash
# Manually pull the model
curl -X POST http://localhost:11434/api/pull -d '{"name": "gemma2:2b"}'
```

**MongoDB connection failed**

```bash
# Check MongoDB logs
docker-compose logs mongodb

# Reset MongoDB data
docker-compose down -v
docker-compose up -d mongodb
```

**Redis connection issues**

```bash
# Test the Redis connection
docker-compose exec redis redis-cli ping

# Clear the Redis cache
docker-compose exec redis redis-cli FLUSHALL
```

- Scale services

```yaml
# In docker-compose.yml
pymock-api:
  deploy:
    replicas: 3
```

- Optimize model usage
  - Use smaller models for development
  - Implement request batching
  - Add response caching (see the sketch after this list)
- Database optimization
  - Add appropriate indexes
  - Implement connection pooling
  - Use read replicas for scaling
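For the response-caching item above, a minimal Redis-backed sketch; the key scheme (`exam:<id>`) and one-hour TTL are assumptions for illustration:

```python
# Minimal Redis-backed response cache for generated exams;
# key scheme and TTL are illustrative assumptions.
import json
import redis

r = redis.Redis.from_url("redis://redis:6379", decode_responses=True)

def cache_exam(exam_id: str, exam: dict, ttl: int = 3600) -> None:
    r.setex(f"exam:{exam_id}", ttl, json.dumps(exam))

def get_cached_exam(exam_id: str):
    raw = r.get(f"exam:{exam_id}")
    return json.loads(raw) if raw else None
```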
- Change default passwords in production
- Use environment-specific configurations
- Implement rate limiting for API endpoints
- Add authentication for sensitive operations
- Secure token-based exam access
```yaml
# docker-compose.prod.yml
version: '3.8'
services:
  pymock-api:
    deploy:
      replicas: 3
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
```

- Split agents into separate services
- Use message queues for communication
- Implement circuit breakers
- Add service discovery
- Fork the repository
- Create feature branch
- Add tests for new functionality
- Submit pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- WhatsApp/Telegram bot integration
- Real-time proctoring features
- Advanced analytics dashboard
- Multi-language support
- Mobile app (React Native)
- Certification partnerships
- Institute management features
Ready to revolutionize exam preparation with AI! 🚀
```bash
# Clone repo
git clone <your-repo-url>
cd pymock

# Build and start all services
docker-compose up -d

# Wait 2-5 minutes for Ollama to pull and load gemma2:2b
# Check if the model is ready:
curl http://localhost:11434/api/tags

# Check app health:
curl http://localhost:5000/health
```

> ✅ The first run triggers the `model-init` service, which pulls `gemma2:2b` into Ollama automatically.
| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Returns status of DB, cache, LLM |
| POST | `/api/exam/generate` | Generate a new mock exam |
| POST | `/api/exam/<id>/submit` | Submit answers → get AI feedback |
| GET | `/api/exam/<id>` | Retrieve exam |
| GET | `/api/feedback/<id>` | Get feedback report |
| POST | `/api/chat/generate-exam` | Natural language → exam (e.g., WhatsApp) |
| GET | `/exam/<token>` | Resolve secure token → exam |
| Service | Role | Port | Image / Build |
|---|---|---|---|
| `pymock-api` | Main Flask app | `5000:5000` | Built from `Dockerfile` |
| `ollama` | Local LLM server (gemma2:2b) | `11434:11434` | `ollama/ollama:latest` |
| `mongodb` | Stores exams, submissions, feedback | `27017:27017` | `mongo:7.0` |
| `redis` | Caching, secure tokens, sessions | `6379:6379` | `redis:7-alpine` |
| `model-init` | Auto-pulls `gemma2:2b` on first boot | → | `curlimages/curl` |
Persistent volumes:

- `ollama_data` → stores downloaded models
- `mongodb_data` → database files
- `redis_data` → cache persistence
```
[Flask App] → [Agent Workflow] → [MCP Client Calls] → [MCP Server (FastAPI)] → [MongoDB] [Redis] [Internal Logic] → [Return Structured JSON] → [LLM uses data to generate Qs/Feedback]
```
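As a rough illustration of the MCP server side of that flow, a single tool endpoint might look like the sketch below. The route path, request model, and blueprint data are assumptions; the real tools live in `mcp_server/tools/`.

```python
# Hypothetical MCP tool endpoint on the FastAPI server; route path,
# request shape, and blueprint data are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SyllabusRequest(BaseModel):
    exam_type: str

@app.post("/tools/syllabus")
def syllabus_tool(req: SyllabusRequest) -> dict:
    # Return structured JSON the LLM grounds its questions on
    blueprints = {"NEET": {"Physics": 0.25, "Chemistry": 0.25, "Biology": 0.50}}
    return {"exam_type": req.exam_type, "topic_weights": blueprints.get(req.exam_type, {})}
```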
```
.
├── app/
│   ├── agents/            # SyllabusAgent, PaperAgent, MockAgent, FeedbackAgent
│   ├── database/          # MongoDBClient.py
│   ├── services/          # CacheService.py
│   └── __init__.py
├── logs/                  # App logs (mounted from host)
├── config.py              # Loads env vars (OLLAMA_BASE_URL, REDIS_URL, etc.)
├── app.py                 # Flask app + routes
├── Dockerfile             # Builds pymock-api image
└── docker-compose.yml     # Defines all services & networking
```

```
./mcp_server/
├── Dockerfile
├── main.py                # FastAPI/MCP server
├── tools/
│   ├── syllabus_tool.py
│   ├── question_tool.py
│   ├── feedback_tool.py
│   └── __init__.py
├── models.py              # Pydantic models for MCP
└── requirements.txt
```
Set in `docker-compose.yml` and passed to the Flask app:

```bash
# Flask
FLASK_ENV=development
FLASK_DEBUG=1
# Database
MONGODB_URI=mongodb://pymock:pymock123@mongodb:27017/pymock?authSource=admin
# Cache
REDIS_URL=redis://redis:6379
# AI / Ollama
OLLAMA_BASE_URL=http://ollama:11434
MODEL_NAME=gemma2:2b
```

> ✅ Your agents in `app/agents/workflow.py` use these settings to call Ollama's `/api/generate` endpoint.
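For reference, the underlying HTTP call to Ollama looks roughly like this; the prompt and timeout are illustrative (the agents normally go through LangChain rather than calling the endpoint directly):

```python
# Direct call to Ollama's /api/generate endpoint; prompt and timeout
# are illustrative.
import os
import requests

base_url = os.environ.get("OLLAMA_BASE_URL", "http://ollama:11434")
model = os.environ.get("MODEL_NAME", "gemma2:2b")

resp = requests.post(
    f"{base_url}/api/generate",
    json={
        "model": model,
        "prompt": "Write one medium-difficulty NEET Biology MCQ with four options.",
        "stream": False,  # single JSON response instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```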
Handles Redis operations:

- `create_secure_token(exam_params)` → returns a UUID and stores the params in Redis with a TTL
- `validate_secure_token(token)` → retrieves and validates the exam params
- `cache_exam(exam_id, exam_data)` → caches the exam JSON
- `health_check()` → pings Redis
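A sketch of how the token helpers could be implemented; the key prefix and TTL are assumptions (the real code is `app/services/CacheService.py`):

```python
# Sketch of the secure-token helpers; key prefix and TTL are assumptions.
import json
import uuid
from typing import Optional

import redis

r = redis.Redis.from_url("redis://redis:6379", decode_responses=True)
TOKEN_TTL = 24 * 60 * 60  # 24 hours, matching "expires_in" in API responses

def create_secure_token(exam_params: dict) -> str:
    token = str(uuid.uuid4())
    r.setex(f"token:{token}", TOKEN_TTL, json.dumps(exam_params))
    return token

def validate_secure_token(token: str) -> Optional[dict]:
    raw = r.get(f"token:{token}")
    return json.loads(raw) if raw else None
```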
MongoDB wrapper:

- `store_exam()`, `get_exam()`
- `store_submission()`, `get_feedback()`
- `health_check()` → runs the `ping` command
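A sketch of the wrapper; the collection names and projection are assumptions (the real code is `app/database/MongoDBClient.py`):

```python
# Sketch of the MongoDB wrapper; collection names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://pymock:pymock123@mongodb:27017/?authSource=admin")
db = client["pymock"]

def store_exam(exam: dict) -> None:
    db.exams.insert_one(exam)

def get_exam(exam_id: str):
    # Exclude Mongo's internal _id so the result is JSON-serializable
    return db.exams.find_one({"exam_id": exam_id}, {"_id": 0})

def health_check() -> bool:
    return client.admin.command("ping")["ok"] == 1.0
```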
LangChain-based agentic pipeline:

```python
workflow = create_exam_workflow()  # chains the 4 agents
result = workflow.invoke({
    "exam_type": "NEET",
    "subject_areas": ["Biology"],
    "difficulty": "medium",
    "duration": 180
})
```

> ✅ Uses `gemma2:2b` via Ollama for question and feedback generation.
Generate an exam:

```bash
curl -X POST http://localhost:5000/api/exam/generate \
  -H "Content-Type: application/json" \
  -d '{
    "exam_type": "JEE Main",
    "subject_areas": ["Physics", "Maths"],
    "difficulty": "hard",
    "duration": 180,
    "num_questions": 25
  }'
```

Generate via chat (natural language):

```bash
curl -X POST http://localhost:5000/api/chat/generate-exam \
  -H "Content-Type: application/json" \
  -d '{"message": "Generate a 30-min Biology mock for NEET"}'
```

> ✅ Returns a secure link: `{"exam_url": "/exam/abc123-xyz789", "expires_in": "24 hours"}`

Resolve the secure link:

```bash
curl http://localhost:5000/exam/abc123-xyz789
```

> ✅ Triggers exam generation and returns the full exam object.
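Internally, the token route might look like this sketch. It reuses the hypothetical `validate_secure_token()` from the CacheService section above, and `generate_exam()` is a stub standing in for the real workflow invocation:

```python
# Hypothetical sketch of the /exam/<token> route; helper names are assumed.
from flask import Flask, abort, jsonify

app = Flask(__name__)

def validate_secure_token(token: str):
    # See the CacheService sketch above; returns exam params or None
    ...

def generate_exam(params: dict) -> dict:
    # Stub: the real app invokes the LangGraph workflow here
    return {"exam_id": "demo", "params": params}

@app.route("/exam/<token>")
def resolve_exam(token: str):
    params = validate_secure_token(token)
    if params is None:
        abort(404, description="Token expired or invalid")
    return jsonify(generate_exam(params))
```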
Rebuild and restart:

```bash
docker-compose build pymock-api
docker-compose up -d
```

Tail logs:

```bash
docker-compose logs -f pymock_api        # Flask app
docker-compose logs -f ollama            # LLM server
docker-compose logs pymock_model_init    # Model pull status
```

Verify the model is loaded:

```bash
curl http://localhost:11434/api/tags
# Should include: { "name": "gemma2:2b", ... }
```

Check app health:

```bash
curl -f http://localhost:5000/health
# Returns 200 if MongoDB, Redis, and the app are healthy
```

Stop containers and delete volumes (models, DB, cache):

```bash
docker-compose down -v
```

Troubleshooting:

- Model not loading? → Check the `model-init` logs; if the pull failed, run it manually: `docker-compose run --rm pymock_model_init`
- LLM timeout? → Increase the timeout in the agent, or try a smaller model like `phi3:mini`.
- Connection refused? → Wait ~60s after `up`; Ollama needs time to load the model.
- Cache/DB not connecting? → Verify the service names in env vars match the Docker Compose service names (`mongodb`, `redis`, `ollama`).
- Docker & Docker Compose
- Minimum 8GB RAM (Ollama + gemma2:2b is memory-heavy)
- Python 3.9+ (for local dev outside container)
- ~5GB disk space for models + DB
> 💡 Tip: For faster iteration, swap `gemma2:2b` for `phi3:mini`; if you have more RAM, `llama3:8b-instruct-q4_K_M` offers better reasoning for exam generation.
Happy coding! 🚀