Basic LLM Chain

A simple chain to prompt an LLM and return the response.
What it does
Sends a simple prompt to a language model and returns the generated response.
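Conceptually, the chain just wraps the prompt in a chat-style message list and hands it to the connected model. The sketch below illustrates that flow under stated assumptions; the function names (`buildChatMessages`, `runBasicLlmChain`) are hypothetical and do not reflect n8n internals.

```javascript
// Illustrative sketch of what a basic LLM chain does: assemble messages, call the model.
// These helpers are hypothetical; they are not part of n8n or LangChain.
function buildChatMessages(systemMessage, userPrompt) {
  const messages = [];
  if (systemMessage) {
    // The system message, if set, precedes the user prompt.
    messages.push({ role: "system", content: systemMessage });
  }
  messages.push({ role: "user", content: userPrompt });
  return messages;
}

// `model` stands in for the connected chat model; a real chain would call its API here.
async function runBasicLlmChain(model, systemMessage, userPrompt) {
  const messages = buildChatMessages(systemMessage, userPrompt);
  return model(messages); // resolves to the generated text
}
```

With a stub model such as `async (msgs) => msgs.at(-1).content.toUpperCase()`, the chain returns the transformed prompt, which makes the data flow easy to test without network access.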
When to use it
Use this node for straightforward LLM interactions that need no tools or knowledge retrieval, such as text generation, summarization, or simple Q&A.
Inputs and settings
| Setting | Notes |
|---|---|
| System message | System-level instructions sent to the model ahead of the user prompt. |
| Options | Additional, model-specific parameters defined in the node schema. |
Outputs
The model's generated response, returned as node output and visible in the workflow execution data.
Dependencies and credentials
- No explicit credential or node dependency is declared in the node description.
Example workflow
Connect Basic LLM Chain to an AI agent or other input that accepts this node type, then run the workflow so the chain receives data from previous nodes.
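When the chain's prompt references data from previous nodes, placeholders are filled from the incoming item. The sketch below uses n8n-style `{{field}}` syntax for illustration; the helper `fillPrompt` is hypothetical, not an n8n API.

```javascript
// Hypothetical sketch: filling {{field}} placeholders from a previous node's item.
// `fillPrompt` is illustrative; n8n performs its own expression resolution.
function fillPrompt(template, item) {
  // Replace each {{key}} with the matching item field, or "" if absent.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => item[key] ?? "");
}

const item = { subject: "quarterly report" };
const prompt = fillPrompt("Summarize the {{subject}} in one sentence.", item);
// prompt: "Summarize the quarterly report in one sentence."
```

The resolved prompt is what the chain ultimately sends to the connected model.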
Troubleshooting
- Check that required settings are present before running the node.
- If the node uses browser page data, run it on the target tab after the page has loaded.
- If it calls an external service, verify credentials, permissions, and rate limits.
- This node has source tests; use them as the reference for edge-case behavior during maintenance.