Web LLM
Use LLMs locally, directly on your device
What it does
Runs language models locally in the browser, using WebGPU for hardware acceleration.
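Because execution relies on WebGPU, the surrounding environment must expose it. A minimal feature check looks like the sketch below (`hasWebGPU` is an illustrative helper, not part of the node's own source, which performs its own detection):

```typescript
// Minimal sketch: detect whether the current environment exposes WebGPU.
// `navigator.gpu` is the standard WebGPU entry point in browsers.
function hasWebGPU(): boolean {
  return typeof navigator !== "undefined" && "gpu" in navigator;
}

console.log(hasWebGPU() ? "WebGPU available" : "WebGPU not available");
```

In a browser without WebGPU (or outside a browser entirely), the check returns `false` and model execution cannot proceed.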
When to use it
Connect it to AI agent nodes when you need privacy-focused, offline inference that runs entirely in the browser, with no external services or dependencies.
Inputs and settings
| Setting | Notes |
|---|---|
| Model | Which language model to load and run in the browser. |
| Temperature | Sampling temperature; higher values produce more varied output, lower values more deterministic output. |
| Options | Additional model and generation options exposed by the node schema. |
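The Temperature setting controls how sharply the model's next-token probability distribution is peaked. The real sampling happens inside the model runtime; `softmaxWithTemperature` below is a hypothetical helper that only illustrates the math:

```typescript
// Sketch: temperature-scaled softmax over raw token scores (logits).
// Dividing logits by the temperature before normalizing sharpens the
// distribution when temperature < 1 and flattens it when temperature > 1.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((s) => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2.0, 1.0, 0.1];
console.log(softmaxWithTemperature(logits, 1.0)); // moderate spread of probability mass
console.log(softmaxWithTemperature(logits, 0.2)); // nearly all mass on the top token
```

At low temperatures the node behaves almost deterministically; at high temperatures it samples more broadly across plausible tokens.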
Outputs
Returns a chat model dependency that can be connected to AI agent nodes.
Dependencies and credentials
- No explicit credential or node dependency is declared in the node description.
Example workflow
Connect Web LLM to an AI agent or dependency input that accepts this dependency type, then run the agent with data from previous nodes.
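In code terms, the wiring above amounts to handing the agent an object it can call for completions. A minimal sketch with hypothetical names (`ChatModelDependency` and `runAgent` are illustrative; the workflow engine's actual interfaces are not shown here):

```typescript
// Hypothetical dependency shape: the agent only needs something it can
// call to turn a message history into a completion.
interface ChatModelDependency {
  complete(messages: { role: string; content: string }[]): Promise<string>;
}

// Stand-in for the Web LLM node's output: a local model wrapped behind
// the dependency interface (a real implementation would run inference
// via WebGPU instead of echoing).
const webLlmDependency: ChatModelDependency = {
  async complete(messages) {
    const last = messages[messages.length - 1];
    return `echo: ${last.content}`; // placeholder instead of real inference
  },
};

// Illustrative agent runner: takes the dependency and runs one turn.
async function runAgent(model: ChatModelDependency, input: string): Promise<string> {
  return model.complete([{ role: "user", content: input }]);
}

runAgent(webLlmDependency, "hello").then(console.log); // → "echo: hello"
```

The point of the dependency pattern is that the agent node never needs to know whether completions come from a local WebGPU model or a remote API.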
Troubleshooting
- Check that required settings are present before running the node.
- This node requires WebGPU support; in a browser or environment without WebGPU, the model cannot run.
- If the node uses browser page data, run it on the target tab after the page has loaded.
- If it calls an external service, verify credentials, permissions, and rate limits.
- No dedicated source test was found next to this node; verify behavior manually when changing this page.