This Docker container provides a minimal environment for running FastAI course notebooks with local NVIDIA GPU support. It has been specifically tested with the NVIDIA RTX 5070 Ti (sm_120).
## Prerequisites

- Docker installed on your system
- NVIDIA GPU drivers installed
- NVIDIA Container Toolkit (formerly nvidia-docker2)
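Before building, it can be worth sanity-checking that the NVIDIA Container Toolkit is working by running `nvidia-smi` inside a stock CUDA image (the image tag below is only an example — substitute any CUDA base image matching your driver):

```bash
# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
```

If this fails with an error about `--gpus`, the toolkit is not installed or the Docker daemon was not restarted after installing it.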
## Features

- Minimal base image to reduce container size
- CUDA support for GPU acceleration
- Jupyter Lab environment
- FastAI library and dependencies
- Python 3.x environment
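The features above can be sketched as a minimal Dockerfile. This is an illustrative sketch, not the repository's actual Dockerfile — the base image tag, package choices, and `jupyter` user name are assumptions:

```dockerfile
# Minimal CUDA runtime base to keep the image small (tag is an example)
FROM nvidia/cuda:12.8.0-runtime-ubuntu22.04

# Python 3 plus pip; no build toolchain, to reduce size and build time
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# FastAI pulls in PyTorch; JupyterLab provides the notebook UI
RUN pip3 install --no-cache-dir fastai jupyterlab

# Non-root user whose home directory is the notebook volume mount point
RUN useradd -m jupyter
USER jupyter
WORKDIR /home/jupyter

EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser"]
```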
## Running the Container

- Using Docker Compose (recommended):

```bash
docker-compose up --build
```

- Or manually build and run:

```bash
docker volume create fastai-notebooks
docker build -t fastai-local-v2 .
docker run --gpus all -p 8888:8888 -v fastai-notebooks:/home/jupyter/ fastai-local-v2
```

This will:

- Enable GPU access with `--gpus all`
- Map port 8888 for Jupyter Lab access
- Mount the `fastai-notebooks` volume at `/home/jupyter` inside the container
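The `docker-compose up --build` path implies a compose file along these lines. This is a hedged sketch — the service name and GPU stanza are assumptions; only the image name, port, and volume come from the manual commands:

```yaml
# docker-compose.yml (illustrative sketch, not the repository's actual file)
services:
  fastai:
    build: .
    image: fastai-local-v2
    ports:
      - "8888:8888"
    volumes:
      - fastai-notebooks:/home/jupyter
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  fastai-notebooks:
```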
## Accessing Jupyter Lab

Once the container is running:

- Open your browser to `http://localhost:8888`
- The Jupyter Lab interface will load with access to your notebooks
- Notebooks saved under `/home/jupyter` persist in the `fastai-notebooks` volume across container restarts
## Verifying GPU Access

To verify GPU access inside the container, you can run:

```python
import torch

print(torch.cuda.is_available())     # True if the GPU is visible to PyTorch
print(torch.cuda.get_device_name())  # e.g. the RTX 5070 Ti
```

## Notes

- This container is optimized for the NVIDIA RTX 5070 Ti with Compute Capability 12.0 (sm_120)
- Adjust the CUDA version in the Dockerfile if you are using a different GPU model
- The container includes minimal dependencies to reduce size and build time
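One way to make the CUDA version easy to adjust, as the note above suggests, is a build argument for the base image. The `CUDA_IMAGE` argument name and tags here are illustrative, not taken from the repository's Dockerfile:

```dockerfile
# Make the CUDA base image overridable at build time
ARG CUDA_IMAGE=nvidia/cuda:12.8.0-runtime-ubuntu22.04
FROM ${CUDA_IMAGE}
```

You could then build for an older-architecture GPU with, for example, `docker build --build-arg CUDA_IMAGE=nvidia/cuda:12.4.1-runtime-ubuntu22.04 -t fastai-local-v2 .`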
## License

This project is open-source and available under the MIT License.