[Editor preview] Start (webhook) → LLM Call (gpt-4o) → If/Else (score > 0.8) → Tool Call (web_search) → Response (json)

Visual Flow Runtime

Wire agents visually with a node-based editor

Drag and drop 20+ node types to build multi-step agent pipelines. No code required.
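Under the hood, a flow is just nodes plus edges. A rough sketch of what the editor might serialize the pipeline above to (this schema is illustrative, not the actual file format):

```typescript
// Hypothetical flow schema -- node/field names here are illustrative only.
type FlowNode = { id: string; type: string; config: Record<string, unknown> };
type FlowEdge = { from: string; to: string };

const flow: { nodes: FlowNode[]; edges: FlowEdge[] } = {
  nodes: [
    { id: "start", type: "start", config: { trigger: "webhook" } },
    { id: "llm", type: "llm_call", config: { model: "gpt-4o" } },
    { id: "branch", type: "if_else", config: { condition: "score > 0.8" } },
    { id: "search", type: "tool_call", config: { tool: "web_search" } },
    { id: "respond", type: "response", config: { format: "json" } },
  ],
  edges: [
    { from: "start", to: "llm" },
    { from: "llm", to: "branch" },
    { from: "branch", to: "search" }, // true branch
    { from: "branch", to: "respond" }, // false branch
    { from: "search", to: "respond" },
  ],
};
```

The editor only ever manipulates this graph; the executor walks it edge by edge at runtime.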

eval — 3/4 passed
validates output schema 42ms
handles edge cases 67ms
tone matches brand 128ms
latency under 200ms 31ms

Built-in Eval

Catch regressions before they ship

Define test cases with assertions. Run suites against your agents. Track pass rates over time.
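In sketch form, a suite is a list of named assertions run against an agent's output. The shape below is illustrative (the assertion names mirror the eval run shown above; the real API may differ):

```typescript
// Hypothetical eval suite -- names and shapes are illustrative, not the product API.
type Assertion = { name: string; check: (output: string) => boolean };

const suite: Assertion[] = [
  { name: "validates output schema", check: (o) => o.trim().startsWith("{") },
  { name: "handles edge cases", check: (o) => o.length > 0 },
  { name: "tone matches brand", check: (o) => !o.includes("!!!") },
  { name: "latency under 200ms", check: () => true }, // measured separately in a real run
];

// Run every assertion against one agent output and report the pass rate.
function runSuite(output: string) {
  const results = suite.map((a) => ({ name: a.name, passed: a.check(output) }));
  const passed = results.filter((r) => r.passed).length;
  return { passed, total: results.length, results };
}

const report = runSuite('{"summary": "Revenue up 12%"}');
```

Tracking `report.passed / report.total` per commit is what surfaces a regression before it ships.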

Terminal
$ wrangler deploy --env production
Compiling worker...
Uploading bundle (24.3 KB)
Deployed to production
https://agent.workers.dev
$

One-Click Deploy

From local to 300+ edge locations in seconds

Generate a production-ready Cloudflare Worker with Bun runtime and Elysia REST server.
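The generated server builds on Elysia; stripped to its essentials, the deployed artifact has the shape of a module-syntax Worker with a `fetch` handler (the routes here are hypothetical placeholders, not the generated ones):

```typescript
// Minimal module-syntax Worker sketch; the generated code layers Elysia on this shape.
const worker = {
  fetch(req: Request): Response {
    const url = new URL(req.url);
    if (url.pathname === "/health") {
      // Simple liveness check for the deployed agent.
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Agent routes (e.g. POST /run) would be wired in here by the generator.
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```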

Claude 4.6 Opus
GPT-5.3-Codex
Gemini 3.0 Pro
Grok 4.1 Thinking
Claude 4.5 Sonnet
Gemini 3.0 Flash
KIMI K2.5 Thinking
GPT-OSS-120B
K-EXAONE-236B-A32B
GPT-5.2-Chat

Bring Your Own Model

Never locked to one provider

Every LLM node accepts any API-compatible endpoint. Swap models per-node or route by cost.
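Cost routing can be as simple as a registry of compatible endpoints with a price per node call. A sketch, using model names from the list above (the URLs and prices are illustrative assumptions, not shipped defaults):

```typescript
// Hypothetical endpoint registry -- any API-compatible base URL works per node.
type Endpoint = { model: string; baseUrl: string; costPer1kTokens: number };

const endpoints: Endpoint[] = [
  { model: "gpt-5.2-chat", baseUrl: "https://api.example.com/v1", costPer1kTokens: 0.01 },
  { model: "gemini-3.0-flash", baseUrl: "https://gemini.example.com/v1", costPer1kTokens: 0.002 },
  { model: "gpt-oss-120b", baseUrl: "http://localhost:8000/v1", costPer1kTokens: 0.0 }, // self-hosted
];

// Route-by-cost: pick the cheapest endpoint that fits a per-call budget.
function routeByCost(budgetPer1k: number): Endpoint {
  const affordable = endpoints
    .filter((e) => e.costPer1kTokens <= budgetPer1k)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens);
  if (affordable.length === 0) throw new Error("no endpoint within budget");
  return affordable[0];
}
```

Per-node overrides are the same idea in reverse: pin one node to a specific entry and let the rest route.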

[Branches & tags] main · feat/custom-nodes · fix/runtime · feat/streaming · v1.0 · v1.1

Open Source

AGPL-3.0 licensed, nothing held back

Read every line. Add custom nodes, swap the executor, embed the runtime in your product.

git clone https://github.com/MercuriusDream/AkompANI.git && cd AkompANI && bun install && bun run dev
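Adding a custom node means implementing one interface and registering it with the executor. The interface below is a hypothetical sketch of that extension point (the real one lives in the repo and may differ):

```typescript
// Hypothetical extension point -- the actual executor interface is defined in the repo.
interface CustomNode {
  type: string;
  run(input: Record<string, unknown>): Record<string, unknown>;
}

const registry = new Map<string, CustomNode>();

function registerNode(node: CustomNode): void {
  registry.set(node.type, node);
}

// Example: a node that uppercases a text field before the next step.
registerNode({
  type: "uppercase",
  run: (input) => ({ ...input, text: String(input.text).toUpperCase() }),
});
```

Swapping the executor or embedding the runtime follows the same pattern: depend on the interfaces, replace the implementation.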
You: Summarize the latest reports
Agent: Found 3 reports. Revenue up 12%, user signups increased 28%, churn reduced to 4.1%. Key takeaway: growth accelerating across all metrics.
Agent: Want me to draft an executive summary?

Chat with agents you've built

Test your agents in a real conversation. Select any finalized agent and start chatting — see how it handles real inputs, iterates on responses, and uses your configured tools.

Open Chat Console