Integrations
Ollama
Use local Ollama models in GridWork HQ pipelines.
Connect GridWork HQ to Ollama to use local or self-hosted language models in your pipelines, with no external API costs.
Prerequisites
- Ollama running locally or on a server accessible from your GridWork pipeline server
- At least one model installed in Ollama (e.g., qwen2.5-coder, mistral, llama2)
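Before connecting, you can confirm both prerequisites from the pipeline server. The sketch below is a minimal check, assuming the `requests` package and a base URL of http://localhost:11434; it calls Ollama's documented /api/tags endpoint, which lists the models installed on the instance.

```python
# Minimal prerequisite check, assuming OLLAMA_BASE_URL points at your instance.
# Uses Ollama's /api/tags endpoint to list installed models.
import requests

OLLAMA_BASE_URL = "http://localhost:11434"  # adjust for a remote server

def list_installed_models(base_url: str) -> list[str]:
    """Return the names of models installed in the Ollama instance."""
    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()
    return [model["name"] for model in resp.json().get("models", [])]

if __name__ == "__main__":
    models = list_installed_models(OLLAMA_BASE_URL)
    if not models:
        print("Ollama is reachable, but no models are installed yet.")
    else:
        print("Installed models:", ", ".join(models))
```

If the request fails with a connection error, Ollama is not running or the base URL is wrong; if the list is empty, install at least one model before proceeding.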
Connect to Ollama
- Ensure Ollama is running on your local machine or on an accessible server.
- Note your Ollama base URL:
  - Local: http://localhost:11434
  - Remote: https://ollama.yourserver.com
- In GridWork HQ, go to Settings > Integrations > Ollama.
- Enter your Ollama base URL.
- Click Fetch Models to populate the model dropdown.
- Select your preferred default model.
- Click Connect.
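To confirm the connection end to end, you can send a test prompt to the same instance GridWork will use. This is a hedged sketch, not a GridWork feature: it assumes the base URL and model name from the steps above and calls Ollama's documented /api/generate endpoint directly.

```python
# Smoke test against the instance GridWork is connected to, assuming the
# base URL and a MODEL you selected in the dropdown. Streaming is disabled
# so the full reply arrives as a single JSON object.
import requests

OLLAMA_BASE_URL = "http://localhost:11434"
MODEL = "qwen2.5-coder"  # replace with the model you selected

payload = {
    "model": MODEL,
    "prompt": "Reply with the single word: ready",
    "stream": False,  # one JSON object instead of a token stream
}
resp = requests.post(f"{OLLAMA_BASE_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```

A reply within a few seconds means GridWork's pipeline nodes will be able to reach the same model.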
What It Enables
Once connected, LLM pipeline nodes can use local Ollama models. You can:
- Use any Ollama model installed on your instance.
- Run large models on your hardware without external API costs.
- Keep all requests and data on your infrastructure.
- Customize models with your own prompts and fine-tuning.
Model availability depends on what you've installed in Ollama. Pull new models into Ollama before selecting them in GridWork.
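If you manage models from scripts rather than the Ollama CLI (`ollama pull <model>`), the sketch below pulls a model through Ollama's /api/pull endpoint, assuming the same base URL. The `model` request field matches current Ollama API docs; older versions used `name`.

```python
# Pull a new model programmatically. /api/pull streams newline-delimited
# JSON progress updates until the download completes.
import json
import requests

OLLAMA_BASE_URL = "http://localhost:11434"
MODEL = "mistral"  # any model from the Ollama library

with requests.post(
    f"{OLLAMA_BASE_URL}/api/pull",
    json={"model": MODEL},
    stream=True,
    timeout=None,  # large models can take a while to download
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(json.loads(line).get("status", ""))
```

Once the pull reports success, click Fetch Models in GridWork again to refresh the model dropdown.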
Disconnect from Ollama
- Go to Settings > Integrations > Ollama.
- Click Remove or Disconnect.
- Your Ollama instance continues running; only the GridWork connection is removed.