# AI Agents
Agents coordinate reasoning, tool use, and multi-step decisions.
AI workflows are not magic boxes. They are systems made from several smaller parts: a model, a prompt, data context, optional memory, optional tools, and a way to check whether the result is good enough.
This section explains those parts so you can design AI workflows deliberately instead of debugging vague model behavior after the fact.
```mermaid
flowchart TB
    Goal["User goal"] --> Prompt["Prompt and instructions"]
    Data["Workflow data"] --> Context["Context window"]
    Retrieval["RAG / vector search"] --> Context
    Memory["Memory"] --> Context
    Prompt --> Model["Chat model"]
    Context --> Model
    Model --> Output["Generated output"]
    Output --> Parser["Optional parser / validation"]
    Parser --> Action["Next workflow action"]
    Tools["Tools"] <--> Model
    style Model fill:#e1f5fe,stroke:#0277bd
    style Context fill:#e8f5e9,stroke:#2e7d32
    style Parser fill:#fff3e0,stroke:#ef6c00
```
Use this stack as a checklist. If an AI step behaves poorly, identify which layer is weak: unclear prompt, missing context, wrong model, noisy retrieval, no parser, or missing evaluation.
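The stack above can be sketched as a few plain functions. This is a minimal illustration, not a real integration: `call_model` is a stub standing in for any chat-model API, and all names and strings here are invented for the example.

```python
# Minimal sketch of one AI step: prompt + context -> model -> parser.
# call_model is a placeholder for a real chat-model call.

def build_prompt(goal: str, context: str) -> str:
    """Combine instructions, context, and the user goal into one prompt."""
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Task: {goal}"
    )

def call_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an HTTP request to an API).
    return "ACME launched the product in 2021."

def parse_output(raw: str) -> dict:
    """Validate the raw output before later workflow nodes consume it."""
    text = raw.strip()
    if not text:
        raise ValueError("empty model output")
    return {"answer": text}

prompt = build_prompt(
    "When was the product launched?",
    "ACME launched the product in 2021.",
)
result = parse_output(call_model(prompt))
print(result["answer"])
```

Each function maps to one layer of the checklist: if the step misbehaves, you can inspect the prompt, the context, the model call, or the parser in isolation.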
## Models & Dependencies
Chat models, embeddings, vector stores, memory, tools, parsers, and splitters are dependency nodes.
## Prompting & Outputs
Prompts define the task. Parsers and schemas make outputs usable by later workflow nodes.
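A parser turns free-form model text into validated, typed data. The sketch below uses a hand-rolled schema check with the standard `json` module; the `raw` string and field names are invented for the example, and a real workflow would receive `raw` from the chat-model node.

```python
import json

# Hypothetical raw model output; in a workflow this comes from the model node.
raw = '{"name": "Invoice 42", "total": 19.99, "currency": "EUR"}'

# Expected fields and their types -- a tiny stand-in for a JSON schema.
REQUIRED = {"name": str, "total": float, "currency": str}

def parse_structured(raw: str) -> dict:
    """Parse and validate model output so downstream nodes get typed fields."""
    data = json.loads(raw)  # raises on malformed JSON
    for field, ftype in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise TypeError(f"wrong type for field: {field}")
    return data

row = parse_structured(raw)
print(row["total"])  # 19.99
```

Failing loudly here is the point: a missing or mistyped field should stop the workflow (or trigger a retry) rather than silently corrupt later steps.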
## Embeddings & Vectors
Embeddings convert meaning into vectors so documents can be searched semantically.
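Semantic search works by comparing embedding vectors, most commonly with cosine similarity. The vectors below are toy 3-dimensional stand-ins (real embedding models produce hundreds or thousands of dimensions), but the comparison logic is the same.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings"; semantically close texts get nearby vectors.
doc_cat     = [0.9, 0.1, 0.0]  # "a cat sat on the mat"
doc_feline  = [0.8, 0.2, 0.1]  # "felines enjoy resting on rugs"
doc_invoice = [0.0, 0.1, 0.9]  # "invoice total due in 30 days"

# The two animal texts should score closer than the invoice text.
print(cosine_similarity(doc_cat, doc_feline) > cosine_similarity(doc_cat, doc_invoice))  # True
```

A vector store does exactly this comparison at scale, with an index so the nearest vectors can be found without scanning every document.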
## RAG
Retrieval-Augmented Generation gives the model source material before it answers.
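The RAG shape is: retrieve the most relevant documents, then put them into the prompt before the question. To stay self-contained, this sketch scores documents by word overlap instead of embedding similarity; the documents and query are invented, and a real pipeline would use a vector store for `retrieve`.

```python
# Minimal RAG sketch; word overlap stands in for vector similarity.

DOCS = [
    "The refund window is 30 days from delivery.",
    "Shipping to the EU takes 3-5 business days.",
    "Support is available on weekdays from 9 to 17.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 1):
    """Return the k best-scoring documents for the query."""
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    """Ground the model by placing retrieved text ahead of the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_rag_prompt("How long is the refund window?"))
```

The key property is that the model's answer can be checked against the retrieved context, which is what makes groundedness testable later.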
## Memory & Context
Memory preserves useful conversation state. Context is what the model can see right now.
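The simplest memory is a sliding window: keep the most recent messages and drop the oldest so the context stays within the model's window. The message limit and message shape below are illustrative only; real systems often count tokens rather than messages.

```python
# Sliding-window memory sketch: the context window forces a choice about
# what to keep. MAX_MESSAGES is an illustrative limit.

MAX_MESSAGES = 4

def remember(history: list, role: str, text: str) -> list:
    """Append a message, then trim to the most recent MAX_MESSAGES."""
    history = history + [{"role": role, "text": text}]
    return history[-MAX_MESSAGES:]  # oldest messages fall out first

history = []
for i in range(1, 6):
    history = remember(history, "user", f"message {i}")

print([m["text"] for m in history])  # ['message 2', 'message 3', 'message 4', 'message 5']
```

Anything trimmed out of the window is invisible to the model, which is why "the agent forgot" is usually a context problem, not a model problem.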
## Tool Selection
Tool selection explains when an agent should search, extract, transform, call APIs, or stop.
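In a real agent the model chooses the tool, but the decision has a simple shape: match the task to at most one tool, or decide that no tool fits and stop. This rule-based sketch makes that shape explicit; the tools, rules, and task strings are all invented for illustration.

```python
# Rule-based sketch of tool selection. A real agent delegates this choice
# to the model, but the outcome is the same: one tool, or none.

def search_web(query: str) -> str:
    return f"search results for {query!r}"

def call_api(query: str) -> str:
    return f"API response for {query!r}"

def choose_tool(task: str):
    """Pick a tool for the task, or None if the agent should stop/answer directly."""
    if task.startswith("find"):
        return search_web
    if task.startswith("fetch"):
        return call_api
    return None

task = "find recent pricing pages"
tool = choose_tool(task)
print(tool(task) if tool else "answer directly")
```

The "return None" branch matters as much as the tools: agents without a clear stop condition keep calling tools long after the task is done.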
## Workflow Intelligence
Intelligent workflows combine branching, retrieval, AI reasoning, and feedback loops.
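A feedback loop is the smallest useful example of this: generate, validate, and retry with the validation error fed back into the prompt. Here `fake_model` is a stub rigged to "improve" once the prompt mentions the previous error, which is obviously not how a real model behaves; only the loop structure is the point.

```python
import json

# Feedback-loop sketch: generate -> validate -> retry with the error in the
# prompt. fake_model is a rigged stand-in for a real chat model.

def fake_model(prompt: str) -> str:
    return '{"ok": true}' if "error" in prompt else "not json"

def validate(output: str):
    """Return None if output is valid JSON, else the error message."""
    try:
        json.loads(output)
        return None
    except ValueError as exc:
        return str(exc)

def run_with_feedback(task: str, max_tries: int = 3) -> str:
    prompt = task
    for _ in range(max_tries):
        output = fake_model(prompt)
        problem = validate(output)
        if problem is None:
            return output
        # Branch back with the failure appended, so the next attempt is informed.
        prompt = f"{task}\nPrevious attempt failed with error: {problem}"
    raise RuntimeError("no valid output after retries")

print(run_with_feedback("Return a JSON object."))
```

The bounded retry count is deliberate: loops without a cap turn one bad output into an unbounded bill.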
## Evaluation & Testing
Evaluation checks whether AI outputs are correct, useful, grounded, and stable.
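Evaluation starts smaller than it sounds: a fixed set of realistic cases and an automatic check per case. The sketch below uses a substring check for groundedness and a stubbed `answer` function in place of the real workflow step; the cases and answers are invented for the example.

```python
# Minimal evaluation harness: run the AI step on fixed cases and score it.
# answer() is a stand-in for the real workflow step under test.

CASES = [
    {"question": "refund window?", "must_contain": "30 days"},
    {"question": "EU shipping?",   "must_contain": "3-5 business days"},
]

def answer(question: str) -> str:
    kb = {
        "refund window?": "Refunds are accepted within 30 days of delivery.",
        "EU shipping?": "EU shipping takes 3-5 business days.",
    }
    return kb[question]

def evaluate() -> float:
    """Fraction of cases whose answer contains the required grounding text."""
    passed = sum(case["must_contain"] in answer(case["question"]) for case in CASES)
    return passed / len(CASES)

print(evaluate())  # 1.0
```

Running the same harness after every prompt or model change is also how stability gets measured: the score should not swing between runs.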
| Goal | Start with | Add next |
|---|---|---|
| Summarize page text | Basic LLM Chain | Prompting and output structure |
| Answer questions from docs | RAG Agent | Embeddings, vector store, evaluation |
| Research across websites | Tools Agent | Browser context, tool selection, rate limits |
| Keep conversation context | Q&A Agent | Local Memory, context boundaries |
| Produce rows or JSON | Structured Output Parser | Validation and fallback handling |
If the answer must be based on source material, use retrieval. If the answer must trigger downstream automation, structure the output. If the task requires choosing between actions, use an agent. If the workflow must be reliable, evaluate it with realistic examples.