Ardoq AI Principles

Built for trust. Grounded in context.
Designed for business impact.



Smarter Decisions.
Faster Transformation.
Less Grunt Work.

We didn’t just embed AI into Ardoq. We reimagined how EAs work and how AI-assisted augmentation can amplify insight, accelerate action, and unlock new ways to drive business value, without sacrificing trust, context, or control.

We’re building deeply embedded, context-aware AI that helps EAs model change, predict outcomes, and guide business decisions. This isn’t AI for show. It’s AI for workflow.

Our Commitment to Responsible AI

Every AI capability we build is grounded in one core belief: Enterprise Architects must stay in control. We develop AI that’s explainable, accountable, and useful.


Our AI Principles

1. Human-in-the-Loop

We never automate strategic decisions. AI assists, but humans review, adapt, and approve.

Example:
Scenario-branching keeps AI-generated content separate until approved.

2. Context-Rich Reasoning

We believe smarter outputs require smarter inputs. Our AI understands the structure, relationships, and business context in your live architecture.

Example:
MCP queries powered by graph-native architecture context.

3. Design With, Not For

We don’t build AI because it’s trendy; we build it because it solves real EA challenges.

Example:
AI-generated value propositions and capability models built around customer needs.

4. AI Without Red Tape

Ardoq is shifting toward running LLMs in-house, giving existing customers less contractual friction and a more flexible foundation for future AI deployments.

Example:
Q4 2025 explorations into in-house LLM hosting for improved privacy and choice.

5. Transparency & Explainability

Every AI output should be understandable and traceable, not a black box.

Example:
All AI-generated summaries are tied to source metadata and can be regenerated, traced, or revised by the user.

6. Secure & Governed by Default

We apply the same permissions and API guardrails to AI as to the rest of the platform.

Example:
MCP only accesses what users already have permission to see.
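
As a rough illustration of that principle, here is a minimal sketch of an AI query path that reuses the platform’s existing read permissions instead of introducing a separate AI-specific access layer. This is not Ardoq’s actual implementation; all type and function names below are hypothetical.

```typescript
// Hypothetical sketch: the AI query path reuses the platform's existing
// permission checks rather than a separate AI-specific ACL.
// All names (Component, WorkspacePermissions, buildAiContext) are illustrative.

type Component = { id: string; name: string; workspaceId: string };

interface WorkspacePermissions {
  canRead(workspaceId: string, userId: string): boolean;
}

// The AI layer only ever sees components the user could already open in the UI.
function filterForUser(
  components: Component[],
  userId: string,
  permissions: WorkspacePermissions
): Component[] {
  return components.filter((c) => permissions.canRead(c.workspaceId, userId));
}

// Any AI/MCP request is resolved through the same filter before a model sees data.
function buildAiContext(
  allComponents: Component[],
  userId: string,
  permissions: WorkspacePermissions
): string {
  const visible = filterForUser(allComponents, userId, permissions);
  return JSON.stringify(visible); // context handed to the model, and nothing more
}
```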

AI @ Ardoq (FAQs)

Why is Ardoq well-positioned to deliver AI for Enterprise Architecture?

Ardoq’s platform was built from day one to be data-driven, graph-based, and architecture-native. We aren’t bolting AI onto static diagrams; we’re embedding it into a live, structured, permission-aware model of the enterprise.

With our proprietary graph engine, flexible metamodels, and deep domain knowledge, we provide AI with the data and structured input it needs to generate meaningful, explainable, and trustworthy insights. Ardoq isn’t just using AI. We’re elevating EA with it.

What does human-in-the-loop mean in practice?

It means our AI augments—not replaces—the architect. AI can generate value streams, propose capability models, and surface reasoning behind decisions. But final judgment and validation always stay with the human. We see AI as the first responder for EAs, not the final authority.

Is Ardoq just a wrapper around off-the-shelf LLMs?

No. While we do integrate with LLMs like Claude or OpenAI models, Ardoq’s value comes from what surrounds the model—structured enterprise data and context, a purpose-built query interface (MCP), and strict governance layers. We bring the architecture intelligence. LLMs bring language fluency. Together, they power meaningful EA conversations.

Does Ardoq build its own foundation models?

No. We use a federated AI approach: Ardoq doesn’t create the foundation models. Instead, we use the best available models (e.g., Claude, GPT) depending on the task. This approach gives us flexibility, rapid innovation, and cost-efficiency, and it allows us to focus our IP on context modeling, graph structure, and intelligent orchestration.

Will Ardoq fine-tune models?

We will fine-tune the reasoning layer, not the model itself. Our value lies in how we curate prompts, structure model context, apply architectural constraints, and govern how models interact with your data.
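
To make the idea of a reasoning layer concrete, here is a hedged sketch of how prompts can be curated around structured graph context and explicit constraints before anything reaches a model. The names (GraphSlice, buildPrompt) and the constraint wording are invented for this example and make no claim to match Ardoq’s internals.

```typescript
// Hypothetical sketch of a "reasoning layer": curate the prompt and structure
// the model context instead of fine-tuning model weights.
// GraphSlice and buildPrompt are invented names, not Ardoq's implementation.

interface GraphSlice {
  components: { id: string; type: string; name: string }[];
  references: { source: string; target: string; type: string }[];
}

function buildPrompt(question: string, slice: GraphSlice): string {
  // Architectural constraints applied to every request, regardless of model.
  const constraints = [
    "Only reason over the components and references provided below.",
    "Cite component ids for every claim so the answer stays traceable.",
    "If the provided context is insufficient, say so instead of guessing.",
  ];
  return [
    "You are assisting an enterprise architect.",
    `Constraints:\n- ${constraints.join("\n- ")}`,
    `Architecture context (JSON):\n${JSON.stringify(slice, null, 2)}`,
    `Question: ${question}`,
  ].join("\n\n");
}
```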

What is the AI Gateway (MCP Server)?

The AI Gateway (MCP Server) is a new way of securely exposing your architecture data to AI tools. MCP provides structured, contextual data that enables deep reasoning, letting AI answer questions like: “Which apps should we decommission—and why?” It’s also governed: read-only, permission-aware, and 100% explainable.
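
For illustration only, the sketch below shows what an MCP-style, read-only, permission-aware gateway tool could look like. The tool, query syntax, and field names are invented for this example and are not Ardoq’s published MCP interface.

```typescript
// Hypothetical sketch of an MCP-style gateway tool: read-only, permission-aware,
// and returning the evidence used so answers stay traceable.
// The tool, query syntax, and field names are invented for this example.

type ToolResult = { answer: unknown[]; sources: string[] };

interface ArchitectureStore {
  // Read-only query, scoped to what this user is allowed to see.
  query(graphQuery: string, userId: string): { rows: unknown[]; componentIds: string[] };
}

// The gateway exposes read operations only; there is no write or update tool.
function decommissionCandidates(store: ArchitectureStore, userId: string): ToolResult {
  // Illustrative query: applications past end-of-life with no incoming references.
  const { rows, componentIds } = store.query(
    "applications WHERE incomingReferences = 0 AND endOfLife < today()",
    userId
  );
  return {
    answer: rows,          // structured data handed back to the AI assistant
    sources: componentIds, // so every claim can be traced back to components in Ardoq
  };
}
```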

Is Ardoq building autonomous AI agents?

Not yet, but we’re laying the groundwork. We’re not pushing for full autonomy in the short term. What we are doing is enabling agentic workflows where AI can reason through structured models, suggest next steps, and eventually trigger guided workflows. But for now, a human is always in the loop. Think of it as going from insight to orchestration, one decision at a time.


What's Next?

We’re evolving Ardoq AI toward even more guided, conversational capabilities:

  • Conversational Assistant inside Ardoq
  • AI Agent Governance Solutions
  • AI Labs experiments tested in public and built with your feedback