ribomo/llama-cpp-cuda-build-linux
llama.cpp CUDA Build Script for Linux

This script builds the llama.cpp binaries with CUDA support.

It follows the official llama.cpp build guide, but uses a Conda-managed CUDA toolchain instead of the system one for the Linux setup:

https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md

Requirements

Conda is required and must be available in your PATH before running this script.

If you do not already have Conda installed, follow the install instructions from Miniforge:

https://github.com/conda-forge/miniforge
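To confirm Conda really is on your PATH before running the build script, a quick check along these lines works (an illustrative snippet, not part of the script itself):

```shell
# Report whether conda is reachable on PATH (illustrative check).
if command -v conda >/dev/null 2>&1; then
  CONDA_OK=yes
  conda --version
else
  CONDA_OK=no
  echo "conda not found in PATH; install Miniforge first" >&2
fi
echo "conda available: $CONDA_OK"
```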

Why I made this

Ubuntu 26.04, at the time of writing, only provides CUDA 13.1 through its APT repositories, which caused build issues for llama.cpp, including:

/usr/include/x86_64-linux-gnu/bits/mathcalls.h: error:
  exception specification is incompatible with that of previous function "rsqrt"
  exception specification is incompatible with that of previous function "rsqrtf"

This script sidesteps the system CUDA toolkit entirely by using Conda to install cuda-toolkit=13.2 in an isolated environment.

Usage

bash build-llama-cpp-cuda.sh

The script will:

  • clone or update llama.cpp
  • create a Conda environment named llama-cpp-cuda132
  • install CUDA 13.2, cmake, and a C++ compiler
  • build llama.cpp with GGML_CUDA=ON

Build output:

./llama.cpp/build/bin
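The steps above could be sketched roughly as follows. This is a hedged illustration, not the script's actual contents: the conda-forge channel, the cxx-compiler package name, the exact cmake flags, and the RUN_BUILD guard are all assumptions.

```shell
# Illustrative sketch only: channel names, package names, and flags are
# assumptions; the real build-llama-cpp-cuda.sh may differ.
set -euo pipefail

build_llama_cpp_cuda() {
  # Clone or update llama.cpp
  if [ -d llama.cpp ]; then
    git -C llama.cpp pull
  else
    git clone https://github.com/ggml-org/llama.cpp
  fi

  # Create the Conda environment with the CUDA 13.2 toolchain
  conda create -y -n llama-cpp-cuda132 -c conda-forge \
    cuda-toolkit=13.2 cmake cxx-compiler

  # Configure and build with CUDA enabled, inside the environment
  conda run -n llama-cpp-cuda132 \
    cmake -S llama.cpp -B llama.cpp/build -DGGML_CUDA=ON
  conda run -n llama-cpp-cuda132 \
    cmake --build llama.cpp/build --config Release -j
}

# Guarded invocation: the build downloads several gigabytes, so only run
# when explicitly requested (RUN_BUILD is a hypothetical opt-in flag).
if [ "${RUN_BUILD:-0}" = "1" ]; then
  build_llama_cpp_cuda
fi
```

Running the commands through conda run keeps the system compiler and any APT-installed CUDA out of the build, which is the point of the isolated environment.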
