
Core Concepts

A Unified Platform for Agentic AI

Building production-ready AI agents today often requires juggling multiple fragmented tools: some platforms focus only on agent creation, others only on workflows, while critical aspects like observability, evaluation, and integrations are scattered across disconnected systems. This fragmentation makes it difficult to deliver scalable, reliable AI solutions.

The Ubility open-source platform addresses this gap by bringing everything together under one roof. It enables developers to design, run, and scale conversational and background agents with:

  • Hybrid orchestration: combining deterministic state machines with LLM-driven reasoning.

  • Multi-agent collaboration: agents specialize in different domains and coordinate seamlessly.

  • Integrated tools and protocols: including MCP support and hundreds of native integrations.

  • Built-in observability and evaluation: offering the deep insights required for production readiness.

The result is an all-in-one environment where teams can confidently build AI solutions that are not just intelligent, but also controllable, efficient, and scalable.

Why Multi-Agent Systems Matter

No single AI agent can handle the full diversity of real-world tasks. A customer support interaction may involve:

  • Recommending a product.

  • Checking inventory.

  • Processing a payment.

  • Following up with logistics.

Each step requires specialized capabilities. Our platform supports a multi-agent approach, where agents with distinct roles collaborate dynamically. Conversations remain fluid and natural while each agent contributes its expertise to achieve the user’s goal.
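The multi-agent division of labor described above can be sketched in a few lines. This is an illustrative toy, not the platform's actual API: the agent names, the `Agent` dataclass, and the keyword-based router (standing in for an LLM intent classifier) are all assumptions for the example.

```python
# Hypothetical sketch: specialized agents coordinated by a simple router.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes the user message, returns a reply

def recommend(msg: str) -> str:
    return "Based on your needs, I recommend the X200 model."

def check_inventory(msg: str) -> str:
    return "The X200 is in stock at 2 nearby warehouses."

def process_payment(msg: str) -> str:
    return "Payment accepted; your order is confirmed."

AGENTS: Dict[str, Agent] = {
    "recommendation": Agent("recommendation", recommend),
    "inventory": Agent("inventory", check_inventory),
    "payment": Agent("payment", process_payment),
}

def route(msg: str) -> Agent:
    # In a real system an LLM would classify intent; a keyword
    # lookup stands in for that step here.
    if "stock" in msg or "inventory" in msg:
        return AGENTS["inventory"]
    if "pay" in msg or "checkout" in msg:
        return AGENTS["payment"]
    return AGENTS["recommendation"]

print(route("Is the X200 in stock?").name)  # → inventory
```

Each agent stays small and testable; the router is the only component that needs to know the full roster.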

The Challenge of Conversational State

Unlike static workflows, conversations are non-linear and stateful:

  • A user might ask about a product, then suddenly switch to a payment question, and then return to product comparisons.

  • Each stage requires maintaining context (preferences, selections, prior answers).

  • Agents must adapt in real time, not restart from scratch.
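The state requirements above can be made concrete with a minimal sketch, assuming a plain dict-backed context store (the field names are illustrative, not part of any platform API). The key property is that switching topics never discards accumulated context.

```python
# Minimal sketch of stateful conversation context.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    topic: str = "product"
    context: dict = field(default_factory=dict)  # preferences, selections, prior answers

    def switch_topic(self, new_topic: str) -> None:
        # Switching topics must NOT clear accumulated context:
        # the user can return to product comparisons later.
        self.topic = new_topic

state = ConversationState()
state.context["selected_product"] = "X200"  # user picks a product
state.switch_topic("payment")               # sudden payment question
state.switch_topic("product")               # back to comparisons
print(state.context["selected_product"])    # → X200 (context survived)
```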

Traditional workflow orchestrators (e.g., n8n) are powerful for sequential automation but not suited for dynamic conversations:

  • They restart from the beginning for every new input.

  • They are stateless per message unless paired with external storage.

  • They struggle with topic-switching and interruptions.

  • They consume more tokens and increase latency, leading to generic or disjointed experiences.

Conversational AI Orchestrator

Our Conversational AI Orchestrator is designed specifically for these challenges. It enables agents to:

  • Maintain state: each agent manages the conversation with the user until its goal is fulfilled.

  • Route dynamically: hand off control between agents without returning to the first node or losing context.

  • Adapt seamlessly: respond intelligently to unexpected user inputs.

  • Optimize token usage: minimize token consumption by only involving the responsible agent.

This ensures conversations feel natural, personalized, and efficient: a core requirement for production-grade AI applications.
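The "maintain state, route dynamically" behavior can be sketched as a session that resumes from the currently active agent instead of re-running the flow from the first node. The `Session` structure below is an assumption for illustration, not the platform's real interface.

```python
# Illustrative sketch of stateful handoff between agents.
from typing import Optional

class Session:
    def __init__(self) -> None:
        self.active_agent = "product"  # where the conversation currently stands
        self.history = []              # turns seen so far

    def handle(self, message: str, next_agent: Optional[str] = None) -> str:
        self.history.append(message)
        if next_agent:                 # hand off without restarting the flow
            self.active_agent = next_agent
        # Only the responsible agent runs, so only its context is sent
        # to the model, which keeps token usage low.
        return f"[{self.active_agent}] handled: {message}"

s = Session()
s.handle("Compare the X200 and X300")
reply = s.handle("Actually, how do refunds work?", next_agent="refunds")
print(reply)           # → [refunds] handled: Actually, how do refunds work?
print(len(s.history))  # → 2 (earlier turns are retained, not replayed)
```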

Key Advantages:

  • Dynamic routing between agents based on real-time inputs.

  • Stateful context retained across multiple conversation stages.

  • Personalization through memory-driven continuity.

  • Efficiency with reduced latency and lower token use.

  • Scalability across thousands of parallel conversations.

Pure LLM vs Hybrid Approach

There are two primary approaches to building conversational agents:

  • Pure LLM: The model handles every step of the lifecycle: understanding, reasoning, deciding, and responding.

    • Strengths: Flexible, creative, good for open-ended use cases (e.g., brainstorming, tutoring, storytelling).

    • Limitations: Harder to control, costly in tokens, and less predictable in high-stakes environments.

  • Hybrid: Combines LLM interpretation with deterministic orchestration (state machines, workflows, business rules).

    • Strengths: Reliable, efficient, and production-ready; ensures critical steps follow controlled logic.

    • Use cases: Task-oriented scenarios like booking, refunds, or troubleshooting where accuracy is vital.

Our platform supports both, but emphasizes a hybrid model to balance creativity with control.
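The hybrid pattern can be sketched as follows: a stubbed "LLM" interprets free text, while a deterministic state machine guarantees that the critical steps of a refund flow run in order. The intent labels, transition table, and `fake_llm_intent` stub are assumptions for illustration only.

```python
# Sketch of the hybrid pattern: LLM interpretation + deterministic control.
REFUND_FLOW = {                      # allowed transitions (controlled logic)
    "start": "verify_order",
    "verify_order": "approve_refund",
    "approve_refund": "done",
}

def fake_llm_intent(text: str) -> str:
    # Stand-in for an LLM call: maps free text to an intent label.
    return "refund" if "refund" in text.lower() else "other"

def advance(state: str, user_text: str) -> str:
    if fake_llm_intent(user_text) != "refund":
        return state                 # off-topic input never skips a step
    return REFUND_FLOW.get(state, state)

state = "start"
state = advance(state, "I want a refund")        # → verify_order
state = advance(state, "Tell me a joke")         # → verify_order (unchanged)
state = advance(state, "Yes, refund order 123")  # → approve_refund
print(state)  # → approve_refund
```

The LLM is free to interpret messy human input, but it can only nudge the machine along edges the business logic permits, which is what makes the approach predictable in high-stakes flows.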

Orchestration Beyond Workflows

Workflows excel at task automation: repetitive, predictable, structured jobs. They are designed to be triggered by machines and apps (email, CRM, tickets, and so on) and to follow a predefined process. Human interactions are different: as described above, conversations are non-linear and stateful, with users switching topics mid-flow while expecting their context (preferences, selections, prior answers) to be preserved.


Our vision goes beyond workflows:

  • Agentic orchestration: build agents once, reuse them across tasks.

  • Single state and memory: ensuring consistency across multi-agent interactions.

  • Integrated ecosystem: MCP and other tools available natively.

  • Human-in-the-loop: combine automation with oversight where needed.

  • Multimodal + integrations: not just language, but also data manipulation and external system actions.

This approach ensures you can deliver AI agents that are both intelligent and trustworthy, capable of solving real-world problems at scale.

Dynamic Routing in Conversational AI

Conversational AI uses stateful routing, meaning each AI Agent understands where it stands within the broader conversation and can hand over to another agent without restarting or reprocessing.

Routing decisions are made contextually based on user intent, previous exchanges, and the agent’s defined state.

  • Routing Type: Dynamic and context-driven

  • Context Handling: Stateful, each agent maintains its own memory

  • Behavior: Agents can transfer control fluidly, skipping or reordering steps as needed

  • Advantage: Users can change topics or goals mid-conversation without interruption

Example: In an e-commerce scenario, if the user starts tracking an order and then decides to browse new products, the orchestrator routes them instantly from the Order Tracking Agent to the Product Browsing Agent.

No steps are repeated, and the conversation continues seamlessly from the user’s latest intent.
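The handoff in this example can be sketched with a stubbed intent detector (a real deployment would use an LLM classifier; the agent names follow the scenario above, everything else is an illustrative assumption).

```python
# Hedged sketch of intent-driven routing between two agents.
def detect_intent(msg: str) -> str:
    # Stand-in for an LLM intent classifier.
    return "browse" if "browse" in msg or "new products" in msg else "track"

AGENT_FOR_INTENT = {"track": "OrderTrackingAgent", "browse": "ProductBrowsingAgent"}

def orchestrate(turns: list) -> list:
    # Route each turn to the agent matching the user's latest intent;
    # no completed step is ever re-run.
    return [AGENT_FOR_INTENT[detect_intent(msg)] for msg in turns]

log = orchestrate(["Where is my order?", "Show me new products"])
print(log)  # → ['OrderTrackingAgent', 'ProductBrowsingAgent']
```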

Comparison Summary

| Capability              | Workflow-Based Routing      | Conversational AI Routing |
|-------------------------|-----------------------------|---------------------------|
| Routing Type            | Sequential, fixed order     | Dynamic, intent-based     |
| Context Memory          | Stateless                   | Stateful                  |
| Adaptability            | Low, must restart on change | High, adapts in real time |
| User Flow               | Linear, predefined          | Fluid, context-aware      |
| Conversation Experience | Mechanical, step-by-step    | Natural and continuous    |