This repository contains code for training and testing Deep Q-Network (DQN) agents on Atari Breakout using Gymnasium and PyTorch. It supports both automated agent play and human play via keyboard controls.
## Features

- DQN Training: Train agents using experience replay and a target network (a minimal sketch of the update step follows this list).
- Model Saving: Periodically saves model checkpoints during training.
- Testing: Evaluate trained models in human-rendered mode.
- Human Play: Play Breakout interactively using keyboard controls.
- Frame Preprocessing: Converts frames to grayscale and crops/resizes for input to the neural network.
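The full training loop lives in main.ipynb; the sketch below shows only the core DQN update on a replay batch with a target network. Every name here (`policy_net`, `target_net`, the batch layout, the loss choice) is an assumption and may differ from the notebook:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the core DQN update. The batch is assumed to be
# tensors sampled from the replay buffer, with `dones` as 0/1 floats.
def dqn_update(policy_net, target_net, optimizer, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken
    q_values = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped targets come from the frozen target network
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1 - dones)
    loss = F.smooth_l1_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```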
## Project Structure

```
.
├── main.ipynb      # DQN training notebook
├── test_model.py   # Test trained DQN agent (4 actions)
├── test_model2.py  # Test trained DQN agent (3 actions)
├── play_human.py   # Play Breakout with keyboard
├── models/         # Saved model checkpoints (DQN)
├── models2/        # Saved model checkpoints (alternate DQN)
├── runs/           # Training logs (optional)
├── README.md       # This file
└── .gitignore
```
## Installation

```bash
pip install torch "gymnasium[atari]" ale-py opencv-python matplotlib pygame
```

You may need to install the Atari ROMs using AutoROM:
```bash
pip install autorom
AutoROM --accept-license
```

## Training

Open and run main.ipynb to train a DQN agent. Model checkpoints are saved in models/ as dqn_model_episode_{N}.pth.
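A minimal sketch of the periodic checkpointing, assuming the notebook saves every fixed number of episodes (the interval is an assumption; the filename pattern matches the one described above):

```python
import torch

# Hypothetical sketch: save a checkpoint every SAVE_EVERY episodes.
SAVE_EVERY = 100  # assumed interval; main.ipynb may use a different value

def maybe_save_checkpoint(model, episode):
    if episode % SAVE_EVERY == 0:
        torch.save(model.state_dict(),
                   f"models/dqn_model_episode_{episode}.pth")
```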
Training metrics such as loss curves and episode rewards are logged using TensorBoard. To view these graphs:
```bash
tensorboard --logdir runs
```

Then open the displayed URL in your browser to explore the training progress interactively.
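For reference, the logging inside the notebook presumably looks something like the following; the tag names are assumptions and may not match those used in main.ipynb:

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical sketch of per-episode logging to the runs/ directory.
writer = SummaryWriter(log_dir="runs")
for episode in range(100):
    episode_reward, loss = 0.0, 0.0  # placeholders for real training values
    writer.add_scalar("reward/episode", episode_reward, episode)
    writer.add_scalar("loss/td_error", loss, episode)
writer.close()
```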
## Testing

To test a trained model:

```bash
python test_model.py
```

or

```bash
python test_model2.py
```

Note:

- test_model.py runs a full episode until the game ends (all lives lost).
- test_model2.py terminates the episode immediately upon life loss (for more granular evaluation).
Edit the model path in the script to select a specific checkpoint.
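A hedged sketch of what such an evaluation script might look like, mirroring test_model.py with test_model2.py's life-loss check added. The `model` module path, the `DQN` constructor signature, and the checkpoint filename are all assumptions; `preprocess_image` is the repo's preprocessing helper, whose location is also assumed:

```python
from collections import deque

import numpy as np
import torch
import gymnasium as gym
import ale_py

from model import DQN, preprocess_image  # hypothetical module path

gym.register_envs(ale_py)  # needed on recent Gymnasium/ale-py versions

env = gym.make("ALE/Breakout-v5", render_mode="human")
model = DQN(num_actions=env.action_space.n)  # assumed constructor
model.load_state_dict(torch.load("models/dqn_model_episode_1000.pth",
                                 map_location="cpu"))
model.eval()

obs, info = env.reset()
frames = deque([preprocess_image(obs)] * 4, maxlen=4)  # stack of 4 frames
lives = info.get("lives", 0)
done = False
while not done:
    state = torch.from_numpy(np.stack(frames)).unsqueeze(0).float()
    with torch.no_grad():
        action = model(state).argmax(dim=1).item()  # greedy action
    obs, reward, terminated, truncated, info = env.step(action)
    frames.append(preprocess_image(obs))
    done = terminated or truncated
    # test_model2.py-style evaluation: stop at the first lost life
    if info.get("lives", lives) < lives:
        done = True
env.close()
```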
## Human Play

Play Breakout using your keyboard:

```bash
python play_human.py
```

Controls:
- Left / A: Move left
- Right / D: Move right
- Space: FIRE (launch ball)
- Esc / Window close: Quit
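A hypothetical sketch of the key-to-action mapping play_human.py might use; the function name is an assumption, but the action indices follow Breakout's standard action set (0=NOOP, 1=FIRE, 2=RIGHT, 3=LEFT):

```python
import pygame

# Map the currently pressed keys to a Breakout action index.
def keys_to_action(keys) -> int:
    if keys[pygame.K_SPACE]:
        return 1  # FIRE: launch the ball
    if keys[pygame.K_RIGHT] or keys[pygame.K_d]:
        return 2  # move right
    if keys[pygame.K_LEFT] or keys[pygame.K_a]:
        return 3  # move left
    return 0      # NOOP

# Inside the game loop: action = keys_to_action(pygame.key.get_pressed())
```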
## Model Architecture

See the DQN papers listed under References below for details. The agent uses a convolutional neural network with stacked grayscale frames as input.
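For reference, a sketch of the convolutional architecture from Mnih et al. (2015); the repo's exact layer sizes and class name may differ:

```python
import torch.nn as nn

# Mnih et al. (2015) architecture, assuming 4 stacked 84x84 grayscale frames.
class DQN(nn.Module):
    def __init__(self, num_actions, in_frames=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 feature map at 84x84 input
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))
```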
Frames are converted to grayscale, cropped, and resized by the preprocess_image function before being stacked and fed to the network.
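A minimal sketch of that preprocessing step; the crop region and the 84×84 output size are assumptions based on the standard Atari setup, not values taken from the repo:

```python
import cv2
import numpy as np

def preprocess_image(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)  # RGB frame -> grayscale
    cropped = gray[34:194, :]                       # assumed crop: drop score bar
    resized = cv2.resize(cropped, (84, 84), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0       # scale to [0, 1]
```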
## References

- Playing Atari with Deep Reinforcement Learning — Mnih et al., 2013
- Human-level control through deep reinforcement learning — Mnih et al., 2015
## License

MIT License
## Contact

For questions or issues, please open an issue or discussion.