
Ollama

Use local Ollama models in GridWork HQ pipelines.

Connect GridWork HQ to Ollama to use local or self-hosted language models in your pipelines without paying for external API calls.

Prerequisites

  • Ollama running locally or on a server accessible from your GridWork pipeline server
  • At least one model installed in Ollama (e.g., qwen2.5-coder, mistral, llama2); see the quick check after this list
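
Before connecting, it can help to confirm that Ollama is reachable and has at least one model installed. The sketch below uses Python and the default local base URL http://localhost:11434 (an assumption; substitute your own); GET /api/tags is Ollama's endpoint for listing installed models.

    import requests

    # Default local Ollama base URL (assumption; substitute your own).
    BASE_URL = "http://localhost:11434"

    # GET /api/tags lists the models installed in this Ollama instance.
    resp = requests.get(f"{BASE_URL}/api/tags", timeout=10)
    resp.raise_for_status()

    models = [m["name"] for m in resp.json().get("models", [])]
    if models:
        print("Installed models:", ", ".join(models))
    else:
        print("No models installed yet; pull one before connecting GridWork.")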

Connect to Ollama

  1. Ensure Ollama is running on your local machine or accessible server.
  2. Note your Ollama base URL:
    • Local: http://localhost:11434
    • Remote: https://ollama.yourserver.com
  3. In GridWork HQ, go to Settings > Integrations > Ollama.
  4. Enter your Ollama base URL.
  5. Click Fetch Models to populate the model dropdown. If no models appear, see the connectivity check after these steps.
  6. Select your preferred default model.
  7. Click Connect.
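
If Fetch Models returns an empty list, the most common cause is that the base URL is not reachable from the GridWork pipeline server. Below is a minimal reachability check, assuming Python on that server and a remote base URL like https://ollama.yourserver.com; GET /api/version returns the Ollama server version.

    import requests

    # Base URL exactly as entered in GridWork (example value; substitute your own).
    BASE_URL = "https://ollama.yourserver.com"

    try:
        # /api/version is a lightweight endpoint that doubles as a connectivity check.
        resp = requests.get(f"{BASE_URL}/api/version", timeout=5)
        resp.raise_for_status()
        print("Ollama reachable, version:", resp.json().get("version"))
    except requests.RequestException as exc:
        print("Cannot reach Ollama from this host:", exc)

Run the check from the machine that executes your pipelines, since that is the host that needs network access to Ollama.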

What It Enables

Once connected, LLM pipeline nodes can use local Ollama models; see the request sketch after the list below. You can:

  • Use any Ollama model installed on your instance.
  • Run large models on your hardware without external API costs.
  • Keep all requests and data on your infrastructure.
  • Customize model behavior with your own prompts, or run your own fine-tuned models.
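
GridWork's internal request format is not documented here, but a pipeline node ultimately issues a request like the one below against your Ollama base URL. This is a sketch of a non-streaming chat completion, assuming the qwen2.5-coder model from the prerequisites is installed.

    import requests

    BASE_URL = "http://localhost:11434"  # or your remote Ollama base URL

    # POST /api/chat runs a chat completion against a locally installed model.
    payload = {
        "model": "qwen2.5-coder",  # must already be installed in Ollama
        "messages": [
            {"role": "user", "content": "Summarize this pipeline step in one sentence."}
        ],
        "stream": False,  # return one JSON response instead of a stream
    }

    resp = requests.post(f"{BASE_URL}/api/chat", json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json()["message"]["content"])

Nothing in this exchange leaves your infrastructure; the request and response stay between the pipeline server and the Ollama host.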

Model availability depends on what you've installed in Ollama. Pull new models into your Ollama instance before selecting them in GridWork.
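
Models are typically pulled with the Ollama CLI (for example, ollama pull followed by a model name), but the HTTP API offers the same operation. The sketch below assumes the mistral model from the prerequisites; POST /api/pull downloads a model into the Ollama instance and, with streaming disabled, blocks until the pull completes.

    import requests

    BASE_URL = "http://localhost:11434"

    # POST /api/pull downloads a model into this Ollama instance.
    # "stream": False makes the call block until the download finishes.
    resp = requests.post(
        f"{BASE_URL}/api/pull",
        json={"model": "mistral", "stream": False},
        timeout=None,  # large models can take several minutes to download
    )
    resp.raise_for_status()
    print(resp.json())  # {"status": "success"} once the model is available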

Disconnect from Ollama

  1. Go to Settings > Integrations > Ollama.
  2. Click Remove or Disconnect.
  3. Your Ollama instance continues running; only the GridWork connection is removed.
