
Web LLM

Run an LLM locally, directly on your device

Runs language models locally in the browser using WebGPU for hardware acceleration.

Connect this to AI agents for privacy-focused, offline AI that runs entirely in the browser without external dependencies.

Settings:

  • Model: Source-backed field from the node schema.
  • Temperature: Source-backed field from the node schema.
  • Options: Source-backed field from the node schema.
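As a rough illustration of how these settings could feed a local inference call, the sketch below assembles an OpenAI-style chat request from the node's Model, Temperature, and Options fields. The field names and helper are assumptions, not the node's actual implementation; a WebGPU-backed runtime such as @mlc-ai/web-llm exposes an OpenAI-compatible chat API that accepts a payload of this shape.

```javascript
// Hedged sketch: build a chat request from the node's settings.
// "settings" mirrors the fields above (model, temperature, options);
// these names are illustrative assumptions.
function buildChatRequest(settings, messages) {
  return {
    model: settings.model,                 // locally cached model id
    temperature: settings.temperature ?? 1.0,
    messages,
    ...(settings.options ?? {}),           // extra generation options pass through
  };
}

const request = buildChatRequest(
  { model: "Llama-3-8B-Instruct-q4f16_1", temperature: 0.7, options: { max_tokens: 256 } },
  [{ role: "user", content: "Hello" }]
);
```

In a browser with WebGPU, a request like this would be handed to the local engine rather than a remote API, which is what keeps the data on-device.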

Returns a chat model dependency that can be connected to AI agent nodes.

  • No explicit credential or node dependency is declared in the node description.

Connect Web LLM to an AI agent or to any dependency input that accepts a chat model dependency, then run the agent with data from previous nodes.

  • Check that required settings are present before running the node.
  • If the node uses browser page data, run it on the target tab after the page has loaded.
  • If it calls an external service, verify credentials, permissions, and rate limits.
  • No dedicated source test was found next to this node; verify behavior manually when changing this page.