Add llama-cpp service (OpenAI-compatible LLM inference server) #26
Workflow: concatenate_rsync_exclude.yml
Trigger: on: push
Job: update-rsync-exclude (4s)

Annotations
1 error
update-rsync-exclude: Process completed with exit code 128.
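Exit code 128 is the status git uses for fatal errors, so this failure most likely comes from a git command in the job (for example, a missing checkout or insufficient push permissions); the log excerpt above does not show the actual failing command. As an illustration only, pointing git at a directory that is not a repository reproduces the same exit status:

```shell
# Illustrative: the failing command in the workflow is unknown.
# Running git against a non-existent repository directory produces
# a fatal error, and git reports fatal errors with exit status 128.
git --git-dir=/nonexistent/.git status
echo "exit code: $?"    # prints "exit code: 128"
```

Checking the full step output for a line starting with `fatal:` usually pinpoints which git operation failed.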