```yaml
models:
  - name: <MODEL_NAME>
    provider: llama.cpp
    model: <MODEL_ID>
    apiBase: http://localhost:8080
```
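For this config to work, a llama.cpp server must be listening on the `apiBase` address. A minimal sketch of launching one, assuming llama.cpp is built locally and `./models/model.gguf` is a hypothetical path to your GGUF model file:

```shell
# Start llama.cpp's built-in OpenAI-compatible HTTP server
# on the port referenced by apiBase above.
# ./models/model.gguf is a placeholder; point it at your model.
llama-server -m ./models/model.gguf --port 8080
```

Once the server is up, the `apiBase` URL in the YAML above should respond to requests.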