Ollama
Leverage Ollama to run LLMs locally, directly on your device.
What it does
Provides access to locally-run Ollama models for AI agent nodes without external API calls.
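Under the hood, every request goes to the local Ollama HTTP API rather than a hosted provider. As a rough sketch (assuming a default Ollama install listening on port 11434 and an example model name such as `llama3` that you have already pulled), a chat completion looks like this:

```ts
// Minimal sketch: a chat completion against a local Ollama server.
// Assumes Ollama runs on its default port (11434) and that a model
// such as "llama3" was already pulled with `ollama pull llama3`.
async function localChat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",                              // example model name
      messages: [{ role: "user", content: prompt }],
      stream: false,                                // single JSON response
    }),
  });
  const data = await res.json();
  return data.message.content;                      // assistant reply text
}

localChat("Summarize this page in one sentence.").then(console.log);
```

Because the endpoint is localhost, no data leaves your machine during inference.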
When to use it
Connect this to AI agents when you want to use local, privacy-focused LLMs running on your machine via Ollama.
Inputs and settings
| Setting | Notes |
|---|---|
| Model | The local Ollama model to use; available values are sourced from the node schema (see the listing sketch after this table). |
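Valid values for Model correspond to the models you have pulled locally. One quick way to see them, assuming the default local endpoint, is Ollama's `/api/tags` listing (the same information `ollama list` prints):

```ts
// Sketch: list the model names available on the local Ollama server.
// Assumes the default endpoint http://localhost:11434.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  const data = await res.json();
  // Each entry has a "name" such as "llama3:latest".
  return data.models.map((m: { name: string }) => m.name);
}

listLocalModels().then((names) => console.log(names));
```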
Outputs
Returns a chat model dependency that can be connected to AI agent nodes.
Dependencies and credentials
- No explicit credential or node dependency is declared in the node description.
Example workflow
Connect Ollama to an AI agent or dependency input that accepts this dependency type, then run the agent with data from previous nodes.
Troubleshooting
- Check that required settings are present before running the node.
- If the node uses browser page data, run it on the target tab after the page has loaded.
- If it calls an external service, verify credentials, permissions, and rate limits; for Ollama, confirm the local server is running (see the check after this list).
- No dedicated source test was found next to this node; verify behavior manually when changing this page.
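If the node cannot reach a model, a quick sanity check (assuming a default local install) is to confirm the Ollama server itself is up; its root endpoint answers with a plain "Ollama is running" message:

```ts
// Sketch: verify that the local Ollama server is reachable.
// Assumes the default endpoint http://localhost:11434.
async function checkOllama(): Promise<void> {
  try {
    const res = await fetch("http://localhost:11434/");
    console.log(await res.text()); // expected: "Ollama is running"
  } catch {
    console.error("Ollama is not reachable; start it with `ollama serve`.");
  }
}

checkOllama();
```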