# GenAI PR Summarizer

A CLI tool to generate professional summaries for GitHub pull requests using LLMs (via Ollama) and OpenMP spec context.

## Features
- Summarizes each file in a PR using a local LLM (Ollama, e.g., Llama3).
- Automatically references relevant OpenMP specification context for more accurate and professional summaries.
- Exports summaries as Markdown, plain text, or JSON.
- Works offline once the LLM model has been downloaded.
- Available as a CLI tool (`pip install`) and as a standalone Windows executable.
- Supports image attachments in summaries (see Exporting & Including Images).
## Requirements

- Python 3.8+ (if using the pip version)
- Ollama LLM server ([Download & Install Ollama](https://ollama.com/download))

  You must have Ollama installed and running on your machine for this tool to generate AI summaries.

- Llama3 model (or another supported model), pulled via `ollama pull llama3`
- GitHub Personal Access Token (for authenticated API access)
- Windows: for the `.exe`, no Python is needed. For source/pip: Python 3.8+.
## Installation

Clone this repo and install locally:

```bash
git clone https://github.com/Manoj-Kumar-BV/GenAI-Powered-Pull-Request-Summaries.git
cd GenAI-Powered-Pull-Request-Summaries
pip install .
```

For an editable (development) install, use `pip install -e .` instead. You can also install directly from GitHub:

```bash
pip install git+https://github.com/Manoj-Kumar-BV/GenAI-Powered-Pull-Request-Summaries.git
```

**Note:** `pip install genai-pr-summarizer` will not work unless the package is published on PyPI. If you want to install with that command, the maintainer must first publish the package to PyPI.
## Ollama Setup

- Go to https://ollama.com/download and download and install Ollama for your platform (Windows, Mac, Linux).
- Open a new terminal and run:

  ```bash
  ollama serve
  ```

  Make sure this terminal stays open while you use GenAI PR Summarizer.

- If you have not already pulled a model (e.g. llama3):

  ```bash
  ollama pull llama3
  ```

## Usage (CLI)

- Install the tool (see Installation above).
- Set up your `.env` file with your GitHub token (`cp .env.example .env` and edit).
- Start Ollama (see above).
- Run the CLI:

```bash
genai-pr-summarizer
```
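Under the hood, the CLI needs the changed files and their diffs from GitHub before anything is summarized. A minimal sketch of that step, using the documented `GET /repos/{owner}/{repo}/pulls/{number}/files` endpoint of the GitHub REST API (the helper names here are illustrative, not the tool's actual internals):

```python
import json
import urllib.request
from typing import Dict, List

API = "https://api.github.com"

def fetch_pr_files(owner: str, repo: str, number: int, token: str) -> List[dict]:
    """Fetch the list of changed files for a pull request via the GitHub REST API."""
    req = urllib.request.Request(
        "{}/repos/{}/{}/pulls/{}/files".format(API, owner, repo, number),
        headers={
            "Authorization": "Bearer {}".format(token),
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_patches(files: List[dict]) -> Dict[str, str]:
    """Map each filename to its unified diff.

    Binary files (e.g. images) have no "patch" field in the API
    response, so they map to an empty string here.
    """
    return {f["filename"]: f.get("patch", "") for f in files}
```

`extract_patches` is a pure function, so it can be exercised on canned API output without any network access.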
## Windows Executable

- Download and extract `genai-pr-summarizer.zip` from GitHub Releases.
- Copy `.env.example` to `.env` and add your GitHub token.
- Start Ollama (`ollama serve`).
- Open Command Prompt in the extracted folder.
- Run:

```bash
genai-pr-summarizer.exe
```
## Exporting & Including Images

- When exporting summaries, you can attach images by placing them in the `summaries/` folder (created on first export).
- Images will be referenced in the Markdown export.
- Number images in reverse order of upload (the last image is `image1.png`, the second last is `image2.png`, etc.).
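The reverse-numbering convention can be expressed as a tiny helper. This is purely illustrative (the tool itself just expects the files in `summaries/` to be named this way):

```python
def image_name(upload_position: int, total_images: int) -> str:
    """Return the export filename for an image by upload position.

    Images are numbered in reverse order of upload: with 3 images,
    the last one uploaded (position 3) becomes image1.png and the
    first one uploaded (position 1) becomes image3.png.
    Hypothetical helper, for illustration only.
    """
    return "image{}.png".format(total_images - upload_position + 1)
```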
## How It Works

For each file changed in a pull request, the tool:

- extracts the code diff (patch),
- runs a semantic search (via a helper script and FAISS) over a pre-indexed OpenMP specification,
- retrieves the most relevant OpenMP spec sections,
- feeds both the diff and the spec context to the LLM, and
- generates a summary that references the spec, making reviews more accurate.
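The steps above can be sketched roughly as follows. This is a simplified stand-in, not the tool's actual implementation: the real tool uses FAISS over embedding vectors, while this sketch ranks sections with a plain bag-of-words cosine similarity from the standard library; the LLM call uses Ollama's documented `/api/generate` endpoint. All function names are illustrative.

```python
import json
import math
import urllib.request
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_spec_sections(diff: str, sections: List[str], k: int = 2) -> List[str]:
    """Stand-in for the FAISS lookup: rank pre-indexed OpenMP spec
    sections by word overlap with the diff and return the top k."""
    query = Counter(diff.lower().split())
    ranked = sorted(
        sections,
        key=lambda s: cosine(query, Counter(s.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(filename: str, diff: str, context: List[str]) -> str:
    """Combine the diff and retrieved spec context into one LLM prompt."""
    return (
        "Summarize the change to {} for a code review.\n"
        "Relevant OpenMP spec context:\n{}\n\nDiff:\n{}"
    ).format(filename, "\n".join(context), diff)

def summarize_with_ollama(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to a local Ollama server via /api/generate."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

The retrieval and prompt-building steps are pure functions, so they can be tested offline; only `summarize_with_ollama` needs a running Ollama server.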
## Troubleshooting

- **Ollama not running:** The CLI checks for Ollama. If it is not running, you'll see an error like:

  ```
  Error: Ollama server is not running at http://localhost:11434. Please start Ollama (see Requirements above).
  ```

- **First-time model load:** The first time a model is used, Ollama downloads it. This may take several minutes.
- **No AI summaries:** If Ollama is not running, or the required model is missing, the tool cannot generate summaries.
- **pip install error:** If you see `ERROR: Could not find a version that satisfies the requirement genai-pr-summarizer`, the package is not published to PyPI. Use `pip install .` from your local directory instead.
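The startup check described above can be approximated like this (a hypothetical helper; the CLI's actual check may differ). Ollama's root endpoint responds with a short "Ollama is running" message when the server is up, so any connection error means it is unreachable:

```python
import urllib.error
import urllib.request

def ollama_running(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at the given URL.

    Any connection failure (server down, wrong port, DNS error)
    is treated as "not running" rather than raised to the caller.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```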
For issues, please open an issue in this repository.