
Add llama-cpp service (OpenAI-compatible LLM inference server) #26

Triggered via push: February 22, 2026 17:21
Status: Failure
Total duration: 9s
Job: update-rsync-exclude (4s)

Annotations

1 error
update-rsync-exclude: Process completed with exit code 128.
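Exit code 128 in a GitHub Actions step is typically the fatal-error status of a git command (for example, running git outside a repository, or a checkout with missing credentials or bad refs). The run log does not show which command in update-rsync-exclude failed, so the snippet below is only a hypothetical reproduction of the exit status, not the actual failing step:

```shell
# Hypothetical illustration: git exits with status 128 on fatal errors,
# such as running a repo command in a directory that is not a repository.
tmp=$(mktemp -d)          # empty temp directory, definitely not a git repo
cd "$tmp"
git log >/dev/null 2>&1   # fails: "fatal: not a git repository"
echo "exit code: $?"      # prints: exit code: 128
```

Checking the step's full log (and any `git` invocation it wraps, e.g. inside an rsync-exclude update script) for a line starting with `fatal:` is usually the quickest way to locate the actual cause.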