About this section
n8n is a powerful open-source automation platform with a visual workflow builder. By connecting n8n to Flutch, you get:
- Multi-channel distribution — web chat, Telegram, Slack, Discord from a single workflow
- Full analytics — tokens, costs, response time, user metrics
- Conversation history — thread management and conversation context
- LLM tracing — detailed analysis of language model calls
- Ready-to-use testing UI — no need to write curl requests
- Configuration versioning — agent version management with rollback capability
- A/B testing — run experiments with different agent configurations
Prerequisites
On the n8n side
- Working n8n instance — cloud (n8n.cloud) or self-hosted
- Activated workflow with a Webhook trigger
- Production Webhook URL — publicly accessible HTTPS address
On the Flutch side
- Flutch account — register at console.flutch.ai
- Created company — automatically created during onboarding
Important to know
- Webhook URL must be publicly accessible (not localhost)
- Workflow in n8n must be activated — Production URL only works for active workflows
- Workflow response must be returned via the "Respond to Webhook" node
Step 1: Setting up workflow in n8n
Minimum requirements
To connect n8n to Flutch, the only thing you strictly need is a Webhook. For full functionality, however, you should also configure the response via the Respond to Webhook node.
| Component | Required? | Purpose |
|---|---|---|
| Webhook | ✅ Yes | Receive messages from Flutch |
| Respond to Webhook | ⚠️ Recommended | Return response to user |
| Build Trace Data | ❌ Optional | Detailed analytics and LLM tracing |
Creating a basic workflow
- Open n8n and create a new workflow
- Add Webhook trigger:
  - Select the "Webhook" node
  - Method: `POST`
  - Path: any convenient path, e.g., `/flutch-chat`
  - Response Mode: When Last Node Finishes (important!)
- Add processing logic:
  - This can be OpenAI, Claude, Gemini, or any other logic
  - Flutch sends the user message in the `message` field
- Add "Respond to Webhook" node at the end of the workflow:
  - This node returns the response back to Flutch
  - See the Setting up Respond to Webhook section below
Simple workflow example
[Webhook] → [OpenAI Chat Model] → [Respond to Webhook]
Getting the Webhook URL
- Click on the Webhook node
- Copy the Production Webhook URL
  - Format: `https://yourname.app.n8n.cloud/webhook/abc123`
- Activate the workflow — click the "Active" toggle
⚠️ Important: Production URL only works when the workflow is activated!
Setting up Respond to Webhook
The Respond to Webhook node returns the response back to Flutch.
Minimal setup (text response only):
{{ { "text": $json.output } }}
Full setup (with metrics for analytics):
{{ {
  "text": $json.response.text,
  "usageMetrics": $json.response.usageMetadata,
  "trace": {
    "modelCalls": $json.modelCalls,
    "totalMetrics": $json.totalMetrics
  }
} }}
📖 n8n Documentation: Respond to Webhook
Setting up Build Trace Data (for analytics)
To get detailed LLM usage analytics (tokens, cost, execution time), add a Code node before Respond to Webhook.
Workflow example with tracing:
[Webhook] → [Gemini/OpenAI] → [Build Trace Data] → [Respond to Webhook]
Code for Build Trace Data node:
```javascript
// Build trace data from LLM response (example for Gemini)
const llmResponse = $input.first().json;

// Extract token usage from the response
const promptTokens = llmResponse.usageMetadata?.promptTokenCount || 0;
const completionTokens = llmResponse.usageMetadata?.candidatesTokenCount || 0;
const totalTokens = llmResponse.usageMetadata?.totalTokenCount || 0;

// Get execution time from the workflow
const executionTime = $execution.executionTime || 0;

// Build the trace data structure
const traceData = {
  modelCalls: [
    {
      model: llmResponse.modelVersion || "gemini-2.0-flash",
      promptTokens: promptTokens,
      completionTokens: completionTokens,
      totalTokens: totalTokens,
      executionTimeMs: executionTime,
      responseId: llmResponse.responseId || "",
      finishReason: llmResponse.candidates?.[0]?.finishReason || "STOP",
    },
  ],
  totalMetrics: {
    promptTokens: promptTokens,
    completionTokens: completionTokens,
    totalTokens: totalTokens,
    totalExecutionTimeMs: executionTime,
    requestCount: 1,
    errorCount: 0,
  },
  response: {
    text: llmResponse.candidates?.[0]?.content?.parts?.[0]?.text || "",
    usageMetadata: {
      promptTokens: promptTokens,
      completionTokens: completionTokens,
      totalTokens: totalTokens,
      totalCost: 0,
      requestCount: 1,
      totalExecutionTimeMs: executionTime,
      errorCount: 0,
    },
  },
};

return traceData;
```
📖 n8n Documentation: Code Node
What tracing provides:
- Token counting (input and output)
- Request execution time
- Model information
- Cost calculation in Flutch analytics
Step 2: Creating an agent in Flutch
Via onboarding (for new users)
- Start onboarding — you will be automatically redirected after registration
- At the agent type selection step choose "External Agent"
- Select n8n from the list of external services
- Enter the Webhook URL from n8n
- Name your agent and complete creation
Via admin panel (for existing users)
- Go to "My Agents" section in the sidebar menu
- Click "+ Create New Agent"
- Select the "External Agent" template or similar
- In settings:
- External service type: n8n
- Webhook URL: paste URL from n8n
- Agent name: descriptive name
- Click "Create Agent"
After creation
- Agent will appear in your agents list
- Web chat for testing will be automatically generated
- In agent settings, you can connect Telegram, Slack, Discord
Step 3: Testing
Testing via web chat
- Find the agent in "My Agents" list
- Click the "Open" button on the agent card
- Web chat will open in a new tab
- Send a test message: "Hello!"
- Wait for the response — it will come from your n8n workflow
Verification in n8n
- Open execution history in n8n
- Make sure the workflow was triggered when the message was sent
- Check input data — your message should be there
What Flutch sends to n8n
For each user message, n8n receives a POST request:
json{ "message": "User message text", "requestId": "a6409910-9574-4e10-9177-2c11f3164b8e", "threadId": "692c8ba7aa4e1b7bd34e737b", "userId": "692c71f46f5a1417003c1d5c", "agentId": "692c8ab6aa4e1b7bd34e733c", "graphType": "flutch.n8n::1.0.0", "graphSettings": { "webhookUrl": "https://your_account.app.n8n.cloud/webhook-test/887acd6e-5e10-4cbc-84aa-35bf3cf598dd" }, "context": { "configurable": { "thread_id": "692c8ba7aa4e1b7bd34e737b", "userId": "692c71f46f5a1417003c1d5c", "agentId": "692c8ab6aa4e1b7bd34e733c", "graphSettings": { "webhookUrl": "https://your_account.app.n8n.cloud/webhook-test/887acd6e-5e10-4cbc-84aa-35bf3cf598dd" } } } }
| Field | Description |
|---|---|
| `message` | User message text |
| `requestId` | Unique request ID (UUID) |
| `threadId` | Thread/conversation ID |
| `userId` | User ID in Flutch |
| `agentId` | Your agent ID |
| `graphType` | Graph type (always `flutch.n8n::1.0.0` for n8n) |
| `graphSettings` | Graph settings, including your webhook URL |
| `context` | Execution context with configuration |
Using request data in n8n
In n8n, you can use any parameters from the incoming request via expressions. Data is available in the $json.body object:
| Expression | Returns |
|---|---|
| `{{ $json.body.message }}` | User message text |
| `{{ $json.body.userId }}` | User ID |
| `{{ $json.body.threadId }}` | Current thread ID |
| `{{ $json.body.agentId }}` | Agent ID |
| `{{ $json.body.requestId }}` | Request ID |
Example usage in OpenAI/Claude prompt:
The user asks a question: {{ $json.body.message }}. Use a sarcastic tone in your response.
Example conditional logic:
```javascript
// In Code node or in expressions
const userId = $json.body.userId;
const message = $json.body.message;

// Can be used for personalization, logging, etc.
```
Response format from n8n
Flutch understands several response formats:
```json
// Recommended format
{ "text": "Agent response" }

// Alternative formats (also work)
{ "output": "Agent response" }
{ "response": "Agent response" }
{ "message": "Agent response" }
{ "content": "Agent response" }

// Or just a string
"Agent response"
```
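If your LLM node returns its answer under a different key, a small Code node placed before Respond to Webhook can normalize the payload into the recommended format. This is a minimal sketch; the set of keys it checks simply mirrors the alternative formats listed above.

```javascript
// Normalize whatever key the previous node used into { "text": ... },
// the recommended format for Flutch.
const data = $input.first().json;

// Keys mirror the alternative formats listed above (adjust to your workflow).
const text =
  data.text ?? data.output ?? data.response ?? data.message ?? data.content ?? "";

return { text };
```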
Step 4: Connecting channels
After creating the agent, you can distribute it to different platforms.
Telegram
- Go to agent settings → "Channels" tab
- Select Telegram
- Create a bot via @BotFather in Telegram
- Copy the bot token and paste it into settings
- Save — the bot is ready!
Slack
- Go to agent settings → "Channels" tab
- Select Slack
- Follow the instructions to create a Slack App
- Set up OAuth and add the bot to your workspace
Discord
- Go to agent settings → "Channels" tab
- Select Discord
- Create an application in Discord Developer Portal
- Set up the bot and add it to your server
One workflow — all channels: Your n8n workflow handles messages from all channels equally. Logic stays in n8n, Flutch handles delivery.
Analytics and monitoring
What is tracked automatically
For each message, Flutch collects:
| Metric | Description |
|---|---|
| Response time | Time to receive a response from n8n, in seconds |
| Request status | Success or error |
| Message history | All messages in the thread |
| User metadata | ID, channel, timestamp |
LLM tracing (optional)
If your n8n workflow returns LLM usage information, Flutch will show:
- Model — which model was used (GPT-4, Claude, etc.)
- Input tokens — number of tokens in the request
- Output tokens — number of tokens in the response
- Cost — cost calculation in USD
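For reference, a fully traced response returned from Respond to Webhook might look roughly like this. The field names follow the Build Trace Data sketch above; the numbers are placeholders, and the cost is calculated by Flutch from the token counts.

```json
{
  "text": "Agent response",
  "usageMetrics": {
    "promptTokens": 120,
    "completionTokens": 45,
    "totalTokens": 165,
    "totalCost": 0,
    "requestCount": 1,
    "totalExecutionTimeMs": 1800,
    "errorCount": 0
  },
  "trace": {
    "modelCalls": [
      {
        "model": "gemini-2.0-flash",
        "promptTokens": 120,
        "completionTokens": 45,
        "totalTokens": 165,
        "executionTimeMs": 1800
      }
    ],
    "totalMetrics": {
      "promptTokens": 120,
      "completionTokens": 45,
      "totalTokens": 165,
      "totalExecutionTimeMs": 1800,
      "requestCount": 1,
      "errorCount": 0
    }
  }
}
```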
Where to view analytics
- Agent dashboard — click on the agent in the list
- Message Audit — detailed message history
- Company analytics — overall statistics for all agents
Troubleshooting
Problem: Messages not sending / timeout
Possible causes:
- Workflow not activated in n8n
  - Solution: Open n8n, enable the "Active" toggle
- Incorrect Webhook URL
  - Solution: Check the URL in agent settings, it must exactly match the URL in n8n
- n8n unavailable
  - Solution: Check n8n is working, try sending a test request via curl
- Workflow takes too long (more than 3 minutes)
  - Solution: Optimize the workflow or split into parts
Problem: Empty response
Possible causes:
- No "Respond to Webhook" node
  - Solution: Add the node at the end of the workflow
- Incorrect response format
  - Solution: Use the format `{ "text": "response" }`
- Workflow error
  - Solution: Check execution history in n8n for errors
Problem: Error when creating agent
Possible causes:
- Invalid URL
  - URL must start with `https://`
  - URL must be publicly accessible
- URL not reachable
  - Check that the URL responds to POST requests
How to verify n8n is working
Run a test request:
```bash
curl -X POST "YOUR_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"message": "test"}'
```
If you get a response — n8n is working correctly.
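To exercise your workflow with a payload closer to what Flutch actually sends (see "What Flutch sends to n8n" above), you can post the main fields yourself; the IDs below are placeholders:

```bash
curl -X POST "YOUR_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "test",
    "requestId": "00000000-0000-0000-0000-000000000000",
    "threadId": "test-thread",
    "userId": "test-user",
    "agentId": "test-agent",
    "graphType": "flutch.n8n::1.0.0"
  }'
```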
Advanced scenarios
Connecting Flutch Knowledge Base
You can use Flutch Knowledge Base directly from your n8n workflow:
- Create a knowledge base in Flutch and upload documents
- Get an API key in company settings
- In n8n, add an HTTP Request node:
  - URL: `https://api.flutch.ai/kb/search`
  - Method: POST
  - Body: `{"query": "{{ $json.message }}", "kbId": "your-kb-id"}`
  - Headers: `Authorization: Bearer YOUR_API_KEY`
- Use the results in your LLM prompt (see the sketch below)
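A Code node between the HTTP Request and your LLM node can fold the search results into a prompt. This is a sketch only: the response shape (a `results` array with `text` fields) and the node name in `$('Webhook')` are assumptions; check the actual output in the n8n execution view and adjust.

```javascript
// Combine knowledge base search results with the user's question.
// Assumption: the search endpoint returns { results: [{ text: "..." }, ...] }.
const kbResponse = $input.first().json;

// Assumption: the trigger node keeps its default name "Webhook".
const userMessage = $('Webhook').first().json.body.message;

const contextText = (kbResponse.results || [])
  .map((r) => r.text)
  .join("\n---\n");

// Hand a ready-made prompt to the next (LLM) node.
return {
  prompt: `Answer using only the context below.\n\nContext:\n${contextText}\n\nQuestion: ${userMessage}`,
};
```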
Passing variables from Flutch to n8n
In agent settings, you can set environment variables. They will be passed in every request to n8n:
json{ "message": "user question", "config": { "COMPANY_NAME": "Your Company", "SUPPORT_EMAIL": "[email protected]" } }
In n8n, access them via {{ $json.config.COMPANY_NAME }}.
⚠️ Do not pass API keys through Flutch variables. Use Credentials in n8n for secrets.
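Beyond single expressions, a Code node can combine these variables into a system prompt for your LLM node. A minimal sketch, using the field names from the example payload above:

```javascript
// Build a personalized system prompt from variables passed by Flutch.
const config = $json.config || {};

const systemPrompt =
  `You are a support assistant for ${config.COMPANY_NAME || "our company"}. ` +
  `If you cannot help, refer the user to ${config.SUPPORT_EMAIL || "support"}.`;

return { systemPrompt, message: $json.message };
```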
Error handling
It is recommended to add error handling to your n8n workflow:
- Use the "Error Trigger" node to catch errors
- Set up a fallback response for errors
- Log errors for debugging
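A minimal sketch of a fallback: route failing nodes to a branch that still ends in Respond to Webhook, with a Code node producing a safe payload so the user gets a message instead of a timeout. The wording and structure here are illustrative only.

```javascript
// Fallback response: always return the { text: ... } format Flutch expects,
// even when the main branch failed.
const errorInfo = $input.first().json;

// Log details for debugging (visible in the n8n execution view / console).
console.log("Workflow error:", JSON.stringify(errorInfo));

return {
  text: "Sorry, something went wrong while processing your request. Please try again later.",
};
```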
When to use n8n
n8n is suitable when:
- Team does not write code — product managers, marketers, analysts
- Quick prototype needed — visual builder speeds up development
- Many integrations — n8n has 400+ ready connectors
- Simple to medium complexity — linear workflows, conditional branching
Consider LangGraph (code) when:
- Complex state management — multi-step conversations with context
- Performance is critical — high load, low latency
- Developer team — version control, code review, tests
- Advanced AI patterns — ReAct agents, multi-agent systems
Hybrid approach
Many teams use both tools:
- n8n for simple agents — FAQ bots, lead qualification
- LangGraph for complex ones — technical support, code generation
Both work with Flutch infrastructure the same way.
What is next?
After setting up your n8n agent, we recommend:
- Configure channels — connect Telegram, Slack, Discord
- Explore analytics — monitor metrics
- Create knowledge base — for RAG scenarios
- Configure agent — detailed configuration
Useful links
- n8n Documentation — official n8n guides
- n8n Community — questions and answers
- Workflow Library — ready-made templates