Pymock MVP - AI Mock Exam Platform

A Docker Compose-based MVP application for AI-powered mock exam generation using LangChain, LangGraph, MCP (Model Context Protocol), and Ollama with Gemma2.

🎯 Features

  • AI-Powered Exam Generation: Uses Gemma2 via Ollama for intelligent question generation
  • Multi-Agent Architecture: LangGraph-orchestrated agents for syllabus analysis, question generation, and feedback
  • Realistic Exam Experience: Proper timing, sections, negative marking, and navigation
  • Comprehensive Feedback: Topic-wise analysis with Fast Track and Deep Mastery study plans
  • Chat Integration: WhatsApp/Telegram-ready natural language exam generation
  • Syllabus-True Content: Questions aligned to latest exam patterns and blueprints

πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Flask API     β”‚    β”‚   LangGraph     β”‚    β”‚     Ollama      β”‚
β”‚   (FastAPI)     │◄──►│   Workflow      │◄──►│   (Gemma2)      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚                       β”‚
         β–Ό                       β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚    MongoDB      β”‚    β”‚     Redis       β”‚
β”‚   (Database)    β”‚    β”‚    (Cache)      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

🚀 Quick Start

Prerequisites

  • Docker (20.10+)
  • Docker Compose (1.29+)
  • 4GB+ RAM available for containers
  • 10GB+ disk space for models and data

1. Clone and Setup

# Clone the repository files into your project directory
mkdir pymock-mvp
cd pymock-mvp

# Copy all the provided files into appropriate locations
# (Use the artifacts provided above)

2. Create Directory Structure

mkdir -p app/agents app/database app/services logs
touch app/__init__.py app/agents/__init__.py app/database/__init__.py app/services/__init__.py

3. Start the Application

# Make startup script executable
chmod +x startup.sh

# Run the startup script (recommended)
./startup.sh

# OR start manually
docker-compose up -d

4. Verify Installation

# Check health
curl http://localhost:5000/health

# Generate a test exam
curl -X POST http://localhost:5000/api/exam/generate \
     -H "Content-Type: application/json" \
     -d '{
       "exam_type": "JEE_MAIN",
       "subject_areas": ["Mathematics", "Physics"],
       "difficulty": "medium",
       "duration": 60,
       "num_questions": 10
     }'

📡 API Endpoints

Core Exam APIs

Method  Endpoint                        Description
GET     /health                         System health check
POST    /api/exam/generate              Generate new mock exam
GET     /api/exam/{exam_id}             Retrieve exam details
POST    /api/exam/{exam_id}/submit      Submit exam answers
GET     /api/feedback/{submission_id}   Get detailed feedback

Chat Integration

Method  Endpoint                  Description
POST    /api/chat/generate-exam   Natural-language exam generation

Request Examples

Generate Exam

{
  "exam_type": "NEET",
  "subject_areas": ["Physics", "Chemistry", "Biology"],
  "difficulty": "medium",
  "duration": 180,
  "num_questions": 50,
  "negative_marking": true
}

Submit Exam

{
  "answers": {
    "Q001": "A",
    "Q002": "B",
    "Q003": "C"
  },
  "time_taken": {
    "Q001": 45,
    "Q002": 60,
    "Q003": 30
  }
}
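A submission like the one above can be scored with negative marking along these lines. This is a minimal sketch assuming a +4/−1 marking scheme (the common JEE/NEET convention) and a hypothetical answer-key dict; the actual marks per question live in the exam configuration.

```python
# Scoring sketch: +4 for correct, -1 for wrong, 0 for unattempted.
# The marks values and answer-key shape are illustrative assumptions,
# not the app's actual schema.

def score_submission(answers, answer_key, correct_marks=4, negative_marks=1):
    score = 0
    for qid, correct in answer_key.items():
        given = answers.get(qid)
        if given is None:          # unattempted: no penalty
            continue
        score += correct_marks if given == correct else -negative_marks
    return score

# Example: Q001 and Q003 correct, Q002 wrong -> 4 - 1 + 4 = 7
print(score_submission(
    {"Q001": "A", "Q002": "B", "Q003": "C"},
    {"Q001": "A", "Q002": "D", "Q003": "C"},
))  # 7
```

The `time_taken` map in the request is not used for scoring; the Feedback Agent consumes it separately for time-management analysis.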

Chat Generation

{
  "message": "Generate a medium difficulty JEE Main mock test",
  "user_id": "user123"
}
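Turning a chat message like this into structured exam parameters could be sketched with simple keyword matching. The keyword tables below are illustrative stand-ins; in the real pipeline the LLM handles this extraction.

```python
import re

# Toy parser sketch: pull exam type, difficulty, and duration out of a
# chat message. Keyword lists are illustrative, not the app's actual logic.
EXAM_TYPES = {"jee main": "JEE_MAIN", "neet": "NEET", "ssc cgl": "SSC_CGL"}
DIFFICULTIES = ("easy", "medium", "hard")

def parse_exam_request(message):
    text = message.lower()
    params = {"difficulty": "medium"}          # sensible default
    for phrase, code in EXAM_TYPES.items():
        if phrase in text:
            params["exam_type"] = code
    for level in DIFFICULTIES:
        if level in text:
            params["difficulty"] = level
    m = re.search(r"(\d+)[- ]?min", text)      # e.g. "30-min" or "30 min"
    if m:
        params["duration"] = int(m.group(1))
    return params

print(parse_exam_request("Generate a medium difficulty JEE Main mock test"))
# {'difficulty': 'medium', 'exam_type': 'JEE_MAIN'}
```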

🤖 Agent Architecture

1. Syllabus Agent

  • Analyzes exam syllabi and topic weights
  • Provides blueprint-aligned content structure
  • Caches syllabus data for efficiency

2. Paper Agent

  • Generates questions based on syllabus requirements
  • Maintains difficulty distribution
  • Creates realistic question formats

3. Mock Agent

  • Assembles complete exam structure
  • Handles sections, timing, and navigation
  • Creates exam sessions

4. Feedback Agent

  • Analyzes performance across multiple dimensions
  • Generates actionable improvement plans
  • Provides time management insights

🗃️ Database Schema

Exams Collection

{
  "exam_id": "uuid",
  "exam_type": "JEE_MAIN",
  "title": "JEE Main Mock Test",
  "duration": 180,
  "questions": {...},
  "sections": {...},
  "created_at": "ISO_DATE"
}

Submissions Collection

{
  "submission_id": "uuid",
  "exam_id": "uuid",
  "student_id": "string",
  "answers": {...},
  "performance": {...},
  "submitted_at": "ISO_DATE"
}

⚙️ Configuration

Environment Variables

# Database
MONGODB_URI=mongodb://pymock:pymock123@mongodb:27017/pymock?authSource=admin
REDIS_URL=redis://redis:6379

# AI Model
OLLAMA_BASE_URL=http://ollama:11434
MODEL_NAME=gemma2:2b

# Application
FLASK_DEBUG=1
MAX_QUESTIONS_PER_EXAM=100
DEFAULT_EXAM_DURATION=180

Supported Exam Types

  • NEET: Medical entrance (Physics, Chemistry, Biology)
  • JEE_MAIN: Engineering entrance (Math, Physics, Chemistry)
  • SSC_CGL: Government jobs (4 sections)
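In code, each exam type could map to a registry entry along these lines. The field names and section lists are illustrative guesses at what `config.py` might hold, not its actual schema.

```python
# Hypothetical exam-type registry; subjects follow the list above,
# durations are illustrative defaults.
EXAM_CONFIGS = {
    "NEET":     {"subjects": ["Physics", "Chemistry", "Biology"], "duration": 180},
    "JEE_MAIN": {"subjects": ["Mathematics", "Physics", "Chemistry"], "duration": 180},
    "SSC_CGL":  {"subjects": ["Reasoning", "Quantitative Aptitude",
                              "English", "General Awareness"], "duration": 60},
}

def get_exam_config(exam_type):
    """Look up an exam type, failing loudly for unsupported ones."""
    if exam_type not in EXAM_CONFIGS:
        raise ValueError(f"Unsupported exam type: {exam_type}")
    return EXAM_CONFIGS[exam_type]

print(get_exam_config("NEET")["subjects"])  # ['Physics', 'Chemistry', 'Biology']
```

Adding a new exam type (see Development below) then amounts to adding one entry here plus its syllabus template.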

🔍 Monitoring and Logs

View Logs

# Application logs
docker-compose logs -f pymock-api

# Ollama logs
docker-compose logs -f ollama

# Database logs
docker-compose logs -f mongodb

Health Monitoring

# Check service health
curl http://localhost:5000/health

# Monitor resource usage
docker-compose top

🛠️ Development

Local Development Setup

  1. Install Python Dependencies

pip install -r requirements.txt

  2. Run Individual Services

# Start only infrastructure
docker-compose up -d mongodb redis ollama

# Run Flask app locally
cd app
python main.py

  3. Testing

# Run tests (when implemented)
pytest

# Verbose test run
python -m pytest tests/ -v

Adding New Exam Types

  1. Update config.py with new exam configuration
  2. Add syllabus templates in Syllabus Agent
  3. Test with new exam type

Extending Agents

  1. Create new agent in app/agents/
  2. Add to workflow in agents/workflow.py
  3. Update API endpoints as needed

🚨 Troubleshooting

Common Issues

Ollama Model Not Found

# Manually pull model
curl -X POST http://localhost:11434/api/pull -d '{"name": "gemma2:2b"}'

MongoDB Connection Failed

# Check MongoDB logs
docker-compose logs mongodb

# Reset MongoDB data
docker-compose down -v
docker-compose up -d mongodb

Redis Connection Issues

# Test Redis connection
docker-compose exec redis redis-cli ping

# Clear Redis cache
docker-compose exec redis redis-cli FLUSHALL

Performance Optimization

  1. Scale Services

# In docker-compose.yml
pymock-api:
  deploy:
    replicas: 3

  2. Optimize Model Usage

  • Use smaller models for development
  • Implement request batching
  • Add response caching

  3. Database Optimization

  • Add appropriate indexes
  • Implement connection pooling
  • Use read replicas for scaling

🔒 Security Considerations

  • Change default passwords in production
  • Use environment-specific configurations
  • Implement rate limiting for API endpoints
  • Add authentication for sensitive operations
  • Secure token-based exam access

📈 Scaling for Production

Horizontal Scaling

# docker-compose.prod.yml
version: '3.8'
services:
  pymock-api:
    deploy:
      replicas: 3
    
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf

Microservices Architecture

  • Split agents into separate services
  • Use message queues for communication
  • Implement circuit breakers
  • Add service discovery

🤝 Contributing

  1. Fork the repository
  2. Create feature branch
  3. Add tests for new functionality
  4. Submit pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🎯 Roadmap

  • WhatsApp/Telegram bot integration
  • Real-time proctoring features
  • Advanced analytics dashboard
  • Multi-language support
  • Mobile app (React Native)
  • Certification partnerships
  • Institute management features

Ready to revolutionize exam preparation with AI! 🚀


🚀 Quick Start

# Clone repo
git clone <your-repo-url>
cd pymock

# Build and start all services
docker-compose up -d

# Wait 2-5 minutes for Ollama to pull and load gemma2:2b
# Check if model is ready:
curl http://localhost:11434/api/tags

# Check app health:
curl http://localhost:5000/health

✅ First run triggers the model-init service, which pulls gemma2:2b into Ollama automatically.


🌐 Core Endpoints

Method  Endpoint                  Description
GET     /health                   Returns status of DB, cache, and LLM
POST    /api/exam/generate        Generate new mock exam
POST    /api/exam/<id>/submit     Submit answers → get AI feedback
GET     /api/exam/<id>            Retrieve exam
GET     /api/feedback/<id>        Get feedback report
POST    /api/chat/generate-exam   Natural language → exam (e.g., WhatsApp)
GET     /exam/<token>             Resolve secure token → exam

🐳 Docker Services Overview

Service     Role                                  Port          Image / Build
pymock-api  Main Flask app                        5000:5000     Built from Dockerfile
ollama      Local LLM server (gemma2:2b)          11434:11434   ollama/ollama:latest
mongodb     Stores exams, submissions, feedback   27017:27017   mongo:7.0
redis       Caching, secure tokens, sessions      6379:6379     redis:7-alpine
model-init  Auto-pulls gemma2:2b on first boot    —             curlimages/curl

Persistent volumes:

  • ollama_data → stores downloaded models
  • mongodb_data → database files
  • redis_data → cache persistence

Architecture

[Flask App] → [Agent Workflow]
        ↓
[MCP Client Calls]
        ↓
[MCP Server (FastAPI)]
        ↓
[MongoDB] [Redis] [Internal Logic]
        ↓
[Return Structured JSON]
        ↓
[LLM uses data to generate Qs/Feedback]

📁 Code Structure

.
├── app/
│   ├── agents/           # SyllabusAgent, PaperAgent, MockAgent, FeedbackAgent
│   ├── database/         # MongoDBClient.py
│   ├── services/         # CacheService.py
│   └── __init__.py
├── logs/                 # App logs (mounted from host)
├── config.py             # Loads env vars (OLLAMA_BASE_URL, REDIS_URL, etc.)
├── app.py                # Flask app + routes
├── Dockerfile            # Builds pymock-api image
└── docker-compose.yml    # Defines all services & networking

./mcp_server/
├── Dockerfile
├── main.py          # FastAPI/MCP server
├── tools/
│   ├── syllabus_tool.py
│   ├── question_tool.py
│   ├── feedback_tool.py
│   └── __init__.py
├── models.py        # Pydantic models for MCP
└── requirements.txt

⚙️ Environment Variables

Set in docker-compose.yml → passed to the Flask app:

# Flask
FLASK_ENV=development
FLASK_DEBUG=1

# Database
MONGODB_URI=mongodb://pymock:pymock123@mongodb:27017/pymock?authSource=admin

# Cache
REDIS_URL=redis://redis:6379

# AI / Ollama
OLLAMA_BASE_URL=http://ollama:11434
MODEL_NAME=gemma2:2b

✅ Your agents in app/agents/workflow.py will use these to call Ollama's /api/generate endpoint.


🧠 Key Modules

➤ services/cache_service.py

Handles Redis operations:

  • create_secure_token(exam_params) → returns UUID, stores in Redis with TTL
  • validate_secure_token(token) → retrieves + validates exam params
  • cache_exam(exam_id, exam_data) → caches exam JSON
  • health_check() → pings Redis
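The token helpers can be sketched with a plain dict standing in for Redis. This is a simplified illustration under stated assumptions: the real service stores entries in Redis with a native TTL, whereas here expiry is emulated with timestamps.

```python
import time
import uuid

# In-memory stand-in for the Redis-backed token store described above.
_store = {}  # token -> (exam_params, expiry_timestamp)

def create_secure_token(exam_params, ttl_seconds=24 * 3600):
    """Mint a UUID token and remember its params until the TTL elapses."""
    token = str(uuid.uuid4())
    _store[token] = (exam_params, time.time() + ttl_seconds)
    return token

def validate_secure_token(token):
    """Return the stored params, or None if the token is unknown or expired."""
    entry = _store.get(token)
    if entry is None:
        return None
    params, expiry = entry
    if time.time() > expiry:        # expired: behave like a Redis TTL eviction
        del _store[token]
        return None
    return params

tok = create_secure_token({"exam_type": "NEET", "duration": 30})
print(validate_secure_token(tok))  # {'exam_type': 'NEET', 'duration': 30}
```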

➤ database/mongodb.py

MongoDB wrapper:

  • store_exam(), get_exam()
  • store_submission(), get_feedback()
  • health_check() → runs ping command

➤ agents/workflow.py

LangChain-based agentic pipeline:

workflow = create_exam_workflow()  # Chains 4 agents

result = workflow.invoke({
    "exam_type": "NEET",
    "subject_areas": ["Biology"],
    "difficulty": "medium",
    "duration": 180
})

→ Uses gemma2:2b via Ollama for question & feedback generation.


🧪 Sample API Usage

1. Generate Exam

curl -X POST http://localhost:5000/api/exam/generate \
  -H "Content-Type: application/json" \
  -d '{
    "exam_type": "JEE_MAIN",
    "subject_areas": ["Physics", "Mathematics"],
    "difficulty": "hard",
    "duration": 180,
    "num_questions": 25
  }'

2. Chat-Based Exam (e.g., for WhatsApp)

curl -X POST http://localhost:5000/api/chat/generate-exam \
  -H "Content-Type: application/json" \
  -d '{"message": "Generate a 30-min Biology mock for NEET"}'

→ Returns secure link: {"exam_url": "/exam/abc123-xyz789", "expires_in": "24 hours"}

3. Resolve Token

curl http://localhost:5000/exam/abc123-xyz789

→ Triggers exam generation and returns the full exam object.


🛠️ Development Tips

Rebuild After Code Changes

docker-compose build pymock-api
docker-compose up -d

View Logs

docker-compose logs -f pymock-api     # Flask app
docker-compose logs -f ollama         # LLM server
docker-compose logs model-init        # Model pull status

Verify Model Loaded

curl http://localhost:11434/api/tags
# Should include: { "name": "gemma2:2b", ... }

Health Check

curl -f http://localhost:5000/health
# Returns 200 if MongoDB, Redis, and app are healthy
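The aggregate health response could be assembled from per-dependency checks along these lines. This is a sketch with stubbed check functions; the real endpoint actually pings MongoDB, Redis, and Ollama as described in the Key Modules section.

```python
# Aggregate-health sketch: run each dependency check, report a per-service
# status plus an overall flag. The check functions are stubs for illustration.

def check_mongodb():
    return True   # stub: real code runs MongoDB's "ping" command

def check_redis():
    return True   # stub: real code sends a Redis PING

def check_ollama():
    return True   # stub: real code GETs /api/tags

def health_report():
    checks = {"mongodb": check_mongodb, "redis": check_redis, "ollama": check_ollama}
    statuses = {}
    for name, fn in checks.items():
        try:
            statuses[name] = "up" if fn() else "down"
        except Exception:
            statuses[name] = "down"   # a crashing check counts as down
    statuses["healthy"] = all(v == "up" for v in statuses.values())
    return statuses

print(health_report())
# {'mongodb': 'up', 'redis': 'up', 'ollama': 'up', 'healthy': True}
```

An HTTP layer would return 200 when `healthy` is true and 503 otherwise.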

🧹 Cleanup

Stop containers and delete volumes (models, DB, cache):

docker-compose down -v

🛑 Troubleshooting

  • Model not loading? Check the model-init logs. If the pull failed, run it manually:
    docker-compose run --rm model-init
  • LLM timeout? Increase the timeout in the agent, or try a smaller model like phi3:mini.
  • Connection refused? Wait ~60s after `docker-compose up`; Ollama needs time to load the model.
  • Cache/DB not connecting? Verify the hostnames in your env vars match the Docker Compose service names (mongodb, redis, ollama).

✅ System Requirements

  • Docker & Docker Compose
  • Minimum 8GB RAM (Ollama + gemma2:2b is memory-heavy)
  • Python 3.9+ (for local dev outside container)
  • ~5GB disk space for models + DB

💡 Tip: For faster iteration, swap gemma2:2b for a smaller model like phi3:mini; if you have more RAM, llama3:8b-instruct-q4_K_M offers better reasoning for exam generation.


