AEVION Multichat Engine

One backend, five LLM providers, two modes. Pick single chat for quick answers, or a multi-agent pipeline when you need a second (and third) pair of eyes on the result.

Single chat
One provider · one model · fastest path

Classic chat experience. Pick Claude, GPT, Gemini, DeepSeek, or Grok, ask a question, and get an answer. Best for quick lookups and informal conversation.

Open single chat →
Multi-agent
Analyst → Writer → Critic · inspectable

Three specialized agents coordinate on every answer. Pick Sequential for a classic reflection loop, Parallel for two writers on different models merged by a Judge, or Debate, where a Pro and a Con advocate argue and a Moderator synthesizes a balanced recommendation.

3 strategies · Live streaming · Live cost + tokens · Mixed models per role · Saveable presets · Edit & resend · Webhook on done · Public share + OG preview · Export JSON + Markdown · ↩ Thread continuation · 📋 Templates · ⚡ Batch runs
Open multi-agent →
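To make the Sequential strategy concrete, here is a minimal sketch of the Analyst → Writer → Critic reflection loop. Everything here is illustrative: `call_model`, the role names, and the per-role model choices are assumptions, not the engine's actual API.

```python
def call_model(role: str, model: str, prompt: str) -> str:
    # Placeholder for a real provider call (Claude, GPT, Gemini, DeepSeek, Grok).
    # A real implementation would stream tokens and track cost here.
    return f"[{role}@{model}] response to: {prompt[:40]}"

def sequential_pipeline(question: str) -> dict:
    # Each role may run on a different model ("mixed models per role").
    analysis = call_model("analyst", "gemini", f"Break down the question: {question}")
    draft = call_model("writer", "gpt", f"Answer using this analysis:\n{analysis}")
    critique = call_model("critic", "claude", f"Critique this draft:\n{draft}")
    final = call_model("writer", "gpt", f"Revise:\n{draft}\nCritique:\n{critique}")
    # Every intermediate step is returned, which is what makes the run inspectable.
    return {"analysis": analysis, "draft": draft, "critique": critique, "final": final}

run = sequential_pipeline("Is HTTP/3 worth adopting?")
print(list(run.keys()))
```

The Parallel and Debate strategies follow the same shape: swap the middle steps for two concurrent writers plus a Judge, or for Pro/Con advocates plus a Moderator.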
📊 Analytics · 🧪 Eval harness · 📝 Prompts library · ⚡ Batch runs · 🕐 Scheduled batches
Backend health · Configured providers · Role defaults