AI Nodes

AI nodes are split into agent nodes and dependency nodes. Agent nodes execute AI tasks, while dependencies provide chat models, memory, tools, output parsers, vector stores, embeddings, and text splitters.

Start with Basic LLM Chain for simple text generation, Tools Agent when the model should use tools, or RAG Agent when the workflow needs retrieval from indexed content.
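The selection rule above can be sketched as a small helper. This is an illustrative sketch only; the function name and flags are hypothetical, not part of the product:

```python
def pick_agent_node(needs_tools: bool, needs_retrieval: bool) -> str:
    """Hypothetical helper mirroring the selection guidance above."""
    if needs_retrieval:
        return "RAG Agent"      # workflow needs retrieval from indexed content
    if needs_tools:
        return "Tools Agent"    # the model should use tools
    return "Basic LLM Chain"    # simple text generation

print(pick_agent_node(needs_tools=False, needs_retrieval=False))  # → Basic LLM Chain
```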

```mermaid
flowchart LR
  Input["Workflow data"] --> Agent["AI agent or chain"]
  LLM["Chat model dependency"] --> Agent
  Parser["Optional output parser"] --> Agent
  Tools["Optional tools"] --> Agent
  Store["Optional vector store"] --> Agent
  Agent --> Output["Structured or text output"]
```

Agent nodes do the work. Dependency nodes are reusable configuration blocks that agent nodes rely on; one dependency (for example, a chat model) can serve several agent nodes.

  1. Pick the agent node that matches the task.
  2. Connect the required model dependency.
  3. Add embeddings, vector stores, memory, tools, or parsers only when the agent node needs them.
  4. Test the output shape before connecting the AI result to browser actions or integrations.
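The four steps above can be sketched as plain Python objects. Every class, field, and value here is hypothetical and only illustrates the wiring pattern, assuming an agent node holds a reference to its model dependency:

```python
from dataclasses import dataclass, field

@dataclass
class ChatModelDependency:
    """Hypothetical dependency node: a reusable model configuration."""
    model: str
    temperature: float = 0.0

@dataclass
class AgentNode:
    """Hypothetical agent node wired to its dependencies."""
    name: str
    llm: ChatModelDependency              # step 2: required model dependency
    tools: list = field(default_factory=list)  # step 3: add only when needed

    def run(self, prompt: str) -> dict:
        # Placeholder: a real agent node would call the chat model here.
        return {"agent": self.name, "model": self.llm.model, "output": f"echo: {prompt}"}

llm = ChatModelDependency(model="example-chat-model")
agent = AgentNode(name="Tools Agent", llm=llm)  # step 1: pick the agent node
result = agent.run("hello")
assert isinstance(result["output"], str)  # step 4: test the output shape first
```

Checking the output shape before wiring the result into downstream actions (step 4) catches mismatches early, since structured and plain-text outputs need different handling.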

For concept-level guidance, see Model Dependencies and Prompting and Structured Outputs.