Ardoq’s platform was built from day one to be data-driven, graph-based, and architecture-native. We aren’t bolting AI onto static diagrams; we’re embedding it into a live, structured, permission-aware model of the enterprise.
With our proprietary graph engine, flexible metamodels, and deep domain knowledge, we provide AI with the data and structured input it needs to generate meaningful, explainable, and trustworthy insights. Ardoq isn’t just using AI. We’re elevating EA with it.
It means our AI augments, not replaces, the architect. AI can generate value streams, propose capability models, and surface the reasoning behind decisions. But final judgment and validation always stay with the human. We see AI as the first responder for EAs, not the final authority.
No. While we do integrate with LLMs like Claude or OpenAI models, Ardoq's value comes from what surrounds the model—structured enterprise data and context, a purpose-built query interface (MCP), and strict governance layers. We bring the architecture intelligence. LLMs bring language fluency. Together, they power meaningful EA conversations.
We use a federated AI approach. Ardoq doesn’t build the foundation models. Instead, we use the best available model (e.g., Claude, GPT) for each task. This gives us flexibility, rapid innovation, and cost-efficiency, and it lets us focus our IP on context modeling, graph structure, and intelligent orchestration.
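As a rough illustration of what task-based model selection can look like, here is a minimal routing-table sketch. The task names and model identifiers are purely illustrative assumptions, not Ardoq's actual configuration:

```python
# Hypothetical task-to-model routing table; all names are illustrative only.
MODEL_BY_TASK = {
    "summarize_landscape": "claude-sonnet",
    "draft_capability_model": "gpt-4o",
}
DEFAULT_MODEL = "claude-sonnet"

def pick_model(task: str) -> str:
    """Route each task to the best available model, falling back to a default."""
    return MODEL_BY_TASK.get(task, DEFAULT_MODEL)

print(pick_model("draft_capability_model"))  # gpt-4o
print(pick_model("unrecognized_task"))       # claude-sonnet
```

Keeping routing outside the model layer is what makes it cheap to swap providers as better models appear.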
We will fine-tune the reasoning layer, not the model itself. Our value lies in how we curate prompts, structure model context, apply architectural constraints, and govern how models interact with your data.
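To make "curate prompts, structure context, apply constraints" concrete, here is a minimal sketch of a prompt-assembly step. The component fields, constraint wording, and function names are hypothetical assumptions for illustration, not Ardoq's actual reasoning layer:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """A hypothetical, simplified slice of a graph component."""
    name: str
    type: str
    lifecycle: str

def build_prompt(question: str, context: list[Component], constraints: list[str]) -> str:
    """Assemble a model prompt from curated graph context plus architectural constraints."""
    ctx_lines = "\n".join(f"- {c.name} ({c.type}, lifecycle: {c.lifecycle})" for c in context)
    rules = "\n".join(f"- {r}" for r in constraints)
    return (
        "You are assisting an enterprise architect. Answer ONLY from the context below.\n\n"
        f"Context:\n{ctx_lines}\n\n"
        f"Constraints:\n{rules}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(
    "Which applications look like decommissioning candidates?",
    [Component("LegacyCRM", "Application", "end-of-life"),
     Component("BillingHub", "Application", "active")],
    ["Make suggestions only; never propose automatic changes.",
     "Cite the component names your answer relied on."],
)
print(prompt)
```

The point of the sketch: the model never sees raw enterprise data, only a curated, constrained view of it, which is where the governance lives.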
The AI Gateway (MCP Server) is a new way of securely exposing your architecture data to AI tools. MCP provides structured, contextual data that enables deep reasoning, letting AI answer questions like: “Which apps should we decommission, and why?” It’s also governed: read-only, permission-aware, and 100% explainable.
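As a sketch of what "read-only and permission-aware" can mean in practice, assuming a simple in-memory graph and per-user access lists (this is an illustration of the pattern, not Ardoq's actual gateway or the MCP wire protocol):

```python
class ReadOnlyGateway:
    """Hypothetical sketch of an MCP-style gateway: read-only and permission-aware."""

    def __init__(self, graph: dict, acl: dict):
        self.graph = graph  # component id -> attribute dict
        self.acl = acl      # user -> set of component ids that user may read

    def query(self, user: str, component_id: str) -> dict:
        """Return a copy of a component's data, only if the caller may read it."""
        if component_id not in self.acl.get(user, set()):
            raise PermissionError(f"{user} may not read {component_id}")
        return dict(self.graph[component_id])  # copy: callers cannot mutate the model

    def mutate(self, *args, **kwargs):
        """Writes are rejected unconditionally: the gateway is read-only by design."""
        raise NotImplementedError("gateway is read-only by design")

gw = ReadOnlyGateway(
    graph={"app-1": {"name": "LegacyCRM", "lifecycle": "end-of-life"}},
    acl={"alice": {"app-1"}},
)
print(gw.query("alice", "app-1"))  # alice can read app-1
```

Because every answer is derived from explicit, permission-filtered queries, each AI response can be traced back to the exact data it saw.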
Not yet, but we’re laying the groundwork. We’re not pushing for full autonomy in the short term. What we are doing is enabling agentic workflows where AI can reason through structured models, suggest next steps, and eventually trigger guided workflows. But a human is always in the loop for now. Think of it as going from insight to orchestration, one decision at a time.