Understanding workflows (graphs) and how they relate to agents.
What is a Workflow?
A workflow (also called a graph) is a process that orchestrates multiple steps to accomplish a task. Unlike a simple LLM call, workflows can:
- Call different models for different steps
- Use tools and external APIs
- Make decisions and route logic
- Manage conversation state
- Handle complex multi-turn interactions
Simple example:
```markdown
User: "Search for pricing info and summarize it"

Workflow steps:
1. Router → detects "search" intent
2. Tool call → searches knowledge base
3. LLM call → summarizes results
4. Return → formatted response to user
```
Different models can be used at each step: one chat model for routing and reasoning, another for summarization, and an embedding model for search.
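For instance, here is a sketch of two nodes initializing different chat models through the SDK (`initializeChatModel` is shown in context later in this guide; the model IDs and prompts are illustrative):

```typescript
// Router step: a small, fast model is enough for intent classification
async function routerNode(state) {
  const routerLlm = await flutch.models.initializeChatModel({
    modelId: "gpt-5-mini",   // illustrative: cheap model for routing
    temperature: 0
  });
  const intent = await routerLlm.invoke([
    { role: "system", content: "Classify the intent as 'search' or 'respond'." },
    ...state.messages
  ]);
  return { currentIntent: intent.content };
}

// Summarization step: a stronger model for user-facing output
async function summarizeNode(state) {
  const summarizer = await flutch.models.initializeChatModel({
    modelId: "claude-4-5-sonnet", // illustrative: higher-quality summaries
    temperature: 0.3
  });
  const response = await summarizer.invoke(state.messages);
  return { messages: [response] };
}
```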
Workflow vs Agent
Workflow (Graph) = Code/process template
- Defines execution logic (what steps to run)
- Deployed once: `flutch graph deploy`
- Versioned: 1.0.0, 1.1.0, 2.0.0
- Reusable across many agents
Agent = Configured instance for users
- Uses a specific workflow version
- Has unique configuration (prompts, models, tools)
- Has user-facing properties (name, description, avatar)
- Has access controls and rate limits
- Created in console UI
Key Difference
Workflow = "How to process requests" Agent = "Who processes requests and with what settings"
Example
```markdown
Workflow: "acme.support::1.0.0"

Defines process:
1. Classify user intent
2. Search knowledge base if needed
3. Generate response
4. Validate before sending

This workflow powers multiple agents:

Agent "General Support Bot"
- Workflow: acme.support::1.0.0
- Config: { model: "gpt-4o", systemPrompt: "You are helpful..." }
- Access: Public
- Rate limit: 100 msgs/day

Agent "Premium Support Bot"
- Workflow: acme.support::1.0.0
- Config: { model: "claude-4-5-sonnet", systemPrompt: "You are premium..." }
- Access: Paid users only
- Rate limit: 1000 msgs/day

Agent "Internal Support Bot"
- Workflow: acme.support::1.0.0
- Config: { model: "gpt-5", systemPrompt: "You are internal...", tools: ["crm_access"] }
- Access: Company employees only
- Rate limit: Unlimited
```
Same workflow code, different configurations and policies.
Runtime Configuration
When a user sends a message to an agent, the platform:
1. Looks up agent → Gets agent ID
2. Loads agent config → Gets prompts, model settings, tools
3. Finds workflow → Determines which graph version to use
4. Injects config into workflow → Passes agent config to graph
5. Executes workflow → Runs with agent-specific settings
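As a rough sketch of those five steps (hypothetical platform internals; `agentStore`, `graphRegistry`, and the record shape are illustrative, not actual platform APIs):

```typescript
// Hypothetical sketch of the dispatch flow — all names here are illustrative.
interface AgentRecord {
  workflowRef: string;              // e.g. "acme.support::1.0.0"
  config: Record<string, unknown>;  // prompts, model settings, tools
}

declare const agentStore: { get(agentId: string): Promise<AgentRecord> };
declare const graphRegistry: {
  load(ref: string): Promise<{ invoke(input: object): Promise<unknown> }>;
};

async function handleMessage(agentId: string, threadId: string, message: string) {
  const agent = await agentStore.get(agentId);               // steps 1-2: agent + config
  const graph = await graphRegistry.load(agent.workflowRef); // step 3: resolve graph version
  // Steps 4-5: inject the agent's config and execute the graph
  return graph.invoke({ message, threadId, agentConfig: agent.config });
}
```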
What gets injected:
```typescript
// Platform sends to workflow
{
  "message": "Hello",
  "threadId": "thread_123",
  "agentConfig": {
    "systemPrompt": "You are a helpful assistant...",
    "modelSettings": {
      "modelId": "gpt-5",
      "temperature": 0.7,
      "maxTokens": 2000
    },
    "tools": ["web_search", "knowledge_base"],
    "customSettings": {
      "responseStyle": "concise",
      "language": "en"
    }
  }
}
```
How the workflow uses it:
```typescript
async function generateNode(state) {
  // Access injected config via SDK
  const config = flutch.config.get();

  // Initialize model with agent's settings
  const llm = await flutch.models.initializeChatModel({
    modelId: config.modelSettings.modelId,
    temperature: config.modelSettings.temperature,
    maxTokens: config.modelSettings.maxTokens
  });

  // Use agent's system prompt
  const messages = [
    { role: "system", content: config.systemPrompt },
    ...state.messages
  ];

  // Call LLM with agent-specific configuration
  const response = await llm.invoke(messages);
  return { messages: [response] };
}
```
Benefits:
- One workflow code serves many use cases
- Configuration managed in UI (no code changes)
- Easy A/B testing (same workflow, different configs)
- Role-based customization (sales vs support)
Workflow Structure
Workflows consist of three components:
1. Nodes
Nodes are steps in your process:
```typescript
// LLM node - calls language model
async function generateNode(state) {
  const response = await llm.invoke(state.messages);
  return { messages: [response] };
}

// Tool node - calls external API
async function searchNode(state) {
  const results = await searchAPI.query(state.query);
  return { searchResults: results };
}

// Router node - decides what to do next
async function routerNode(state) {
  if (needsSearch(state.messages)) {
    return "search";
  }
  return "respond";
}
```
2. Edges
Edges connect nodes and define flow:
```typescript
// Fixed flow: A → B
workflow.addEdge("nodeA", "nodeB");

// Conditional: router decides where to go
workflow.addConditionalEdge(
  "router",
  routingFunction,
  {
    "search": "searchNode",
    "respond": "generateNode"
  }
);
```
3. State
State is the shared data that flows through the workflow:
```typescript
interface WorkflowState {
  messages: Message[];     // Conversation history
  searchResults?: any[];   // From search node
  currentIntent?: string;  // From router
}
```
Each node reads the state, does its work, and returns a partial update that is merged back into the shared state.
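For example, a node that only produces search results returns just that one field; assuming LangGraph-style merge semantics (which the examples in this guide follow), the rest of the state carries forward unchanged:

```typescript
// Before: { messages: [...], currentIntent: "search" }
async function searchNode(state: WorkflowState) {
  const lastMessage = state.messages[state.messages.length - 1];
  const results = await searchAPI.query(lastMessage.content);
  // Return only the field this node changes;
  // messages and currentIntent pass through untouched.
  return { searchResults: results };
}
// After: { messages: [...], currentIntent: "search", searchResults: [...] }
```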
Simple Workflow Example
Goal: Support bot that searches docs when needed
```typescript
// Define state
interface State {
  messages: Message[];
  searchResults?: any[];
}

// Node 1: Router
async function router(state) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (needsDocSearch(lastMessage.content)) {
    return "search";
  }
  return "respond";
}

// Node 2: Search docs
async function search(state) {
  const query = extractQuery(state.messages);
  const results = await docsAPI.search(query);
  return { searchResults: results };
}

// Node 3: Generate response
async function generate(state) {
  const config = flutch.config.get();
  const llm = await flutch.models.initializeChatModel({
    modelId: config.modelSettings.modelId,
    temperature: config.modelSettings.temperature
  });

  const messages = [
    { role: "system", content: config.systemPrompt },
    ...state.messages
  ];

  // Add search results as context if available
  if (state.searchResults) {
    messages.push({
      role: "system",
      content: `Search results: ${JSON.stringify(state.searchResults)}`
    });
  }

  const response = await llm.invoke(messages);
  return { messages: [response] };
}

// Build workflow
const workflow = new StateGraph(State);
workflow.addNode("router", router);
workflow.addNode("search", search);
workflow.addNode("generate", generate);

// Execution starts at the router
workflow.setEntryPoint("router");

// Router decides: search first, or respond directly
workflow.addConditionalEdge("router", router, {
  "search": "search",
  "respond": "generate"
});
workflow.addEdge("search", "generate");

export const graph = workflow.compile();
```
Execution example:
markdownUser: "What is our pricing?" 1. Router → detects "pricing" needs docs search 2. Search → finds pricing documentation 3. Generate → GPT-5 summarizes docs with agent's prompt 4. Response: "Our pricing starts at..."
Same workflow, different agents:
- Agent A: Uses GPT-5, formal tone
- Agent B: Uses Claude, casual tone
- Agent C: Uses GPT-5-mini, concise responses
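Concretely, those could be three agent records pointing at the same graph; the field names follow the injected `agentConfig` shape shown earlier, and the values are illustrative:

```typescript
// Three agents, one workflow - only the injected configuration differs.
const agents = [
  {
    name: "Agent A",
    workflow: "acme.support::1.0.0",
    config: { modelSettings: { modelId: "gpt-5" }, systemPrompt: "Respond in a formal tone." }
  },
  {
    name: "Agent B",
    workflow: "acme.support::1.0.0",
    config: { modelSettings: { modelId: "claude-4-5-sonnet" }, systemPrompt: "Respond in a casual, friendly tone." }
  },
  {
    name: "Agent C",
    workflow: "acme.support::1.0.0",
    config: { modelSettings: { modelId: "gpt-5-mini" }, systemPrompt: "Keep responses short and concise." }
  }
];
```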