This quarter, we shifted gears with a different ambition: to make AI in the enterprise governable, explainable, and accountable by design.
In this quarter’s roundup, you’ll see how we turned that vision into action: embedding AI governance, removing hidden pricing traps, converting visuals into data, and enabling query-driven discovery. Let’s dive in.
AI Lens provides a system of record for the AI landscape.
By grounding AI discovery in the enterprise graph, AI Lens helps organizations innovate responsibly, ensuring that innovation doesn’t outpace control.
👉 Read: AI Lens: The First Step to Controlling AI Chaos
👉 Watch: Manage and Govern AI With Confidence
Architecture doesn’t start in a tool; it starts on whiteboards and in diagramming sessions. The AI Visual Importer (open beta) converts those visuals (PNG or JPEG images of slides, screenshots, or Visio diagrams) into structured, queryable data models inside Ardoq.
Using AI vision processing, the importer recognizes shapes, labels, and relationships, automatically mapping them to the correct components and references in your workspace.
It may seem like a small feature, but it delivers outsized impact: cutting hours of rework, reducing data loss, and accelerating modernization projects by turning informal visuals into living, connected architecture.
How teams are using it:
By automating this first mile of data capture, the AI Visual Importer bridges the gap between human design and machine understanding, transforming scattered visuals into actionable enterprise intelligence.
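For a concrete picture of that first mile, here is a minimal sketch of the general idea, not the importer’s actual pipeline: take the kind of output a vision model might return for a diagram (shapes plus connectors) and map it to component and reference records. The vision output format and field names are assumptions made for the example.

```python
# Illustrative sketch, not the AI Visual Importer's internals: map shapes and
# connectors extracted from a diagram image into component and reference
# records ready to load into a workspace.

vision_output = {
    "shapes": [
        {"id": "s1", "label": "CRM", "type": "Application"},
        {"id": "s2", "label": "Billing", "type": "Application"},
    ],
    "connectors": [
        {"source": "s1", "target": "s2", "label": "sends invoices to"},
    ],
}

# Each recognized shape becomes a component with a name and a type.
components = [{"name": s["label"], "typeName": s["type"]} for s in vision_output["shapes"]]

# Each connector becomes a reference between the components it links.
id_to_name = {s["id"]: s["label"] for s in vision_output["shapes"]}
references = [
    {
        "source": id_to_name[c["source"]],
        "target": id_to_name[c["target"]],
        "displayText": c["label"],
    }
    for c in vision_output["connectors"]
]

print(components)
print(references)
```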
👉 Read: Ardoq + Lucidchart Integration and AI-Powered Importer (a related feature)
👉 Watch: AI Visual Importer Demo (1 min)
Q4 also introduced Chat With Ardoq, a native conversational interface that lets users query live enterprise architecture in plain English.
Ask questions like “Which applications support our finance transformation initiative?” or “What’s the dependency chain if we decommission Salesforce?” and Chat With Ardoq returns a graph-grounded, policy-aware answer you can trust.
This Natural Language Assistant stands out for a few key reasons:
Powered by our new model routing engine, Ardoq intelligently chooses the best model for each task, from in-house, privacy-optimized LLMs to large, reasoning-heavy cloud models. It’s conversational AI, governed and context-aware, built for the complexity of real enterprise data.
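To give a rough feel for what task-based routing means in practice, here is a minimal sketch; the signals, model names, and rules are invented for illustration and are not Ardoq’s routing logic.

```python
# Hypothetical sketch of task-based model routing (not Ardoq's engine):
# keep sensitive context on an in-house, privacy-optimized model and send
# reasoning-heavy questions to a larger cloud model.

def route_model(question: str, contains_sensitive_data: bool) -> str:
    reasoning_markers = ("dependency chain", "impact", "what if", "compare")
    needs_heavy_reasoning = any(m in question.lower() for m in reasoning_markers)

    if contains_sensitive_data:
        return "in-house-llm"            # privacy first: data stays in-house
    if needs_heavy_reasoning:
        return "cloud-reasoning-model"   # deeper multi-step analysis
    return "in-house-llm"                # default: fast and private

print(route_model("What's the dependency chain if we decommission Salesforce?", False))
# -> cloud-reasoning-model
```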
👉 Read: Chat With Ardoq
👉 Watch: Chat With Ardoq Demo (1 min)
With the launch of MCP, Ardoq has created a new guide for a new discipline: context engineering. This guide shows EAs how to design and structure information so AI Agents can deliver real business value reliably, securely, and at scale. Think of it as the playbook for the future of architecture: practical techniques, proven patterns, and forward-looking guidance to help you harness AI in a way no one else in the industry is doing today.
It explains how to structure models, relationships, and metadata so AI agents can deliver value safely and consistently. This isn’t theoretical; it’s a practical discipline emerging directly from our work with customers adopting MCP and AI Lens. Context engineering will become foundational to every AI-native architecture practice in 2026.
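To give a feel for what “structured for agents” means, here is a minimal sketch of a context payload; the field names, reference types, and layout are illustrative assumptions, not a prescribed Ardoq schema.

```python
# Illustrative context for an AI agent (assumed structure, not an Ardoq schema):
# typed fields with constrained values and explicit, named relationships give
# the agent something it can ground answers in, and humans can verify.

application_context = {
    "type": "Application",
    "name": "Billing Service",
    "description": "Handles invoicing for the finance domain.",
    "fields": {
        "lifecyclePhase": "Production",   # constrained vocabulary, not free text
        "totalAnnualCost": 85_000,        # numeric, so agents can filter and sum
        "owner": "Finance Platform Team",
    },
    "references": [
        {"type": "Realizes", "target": "Capability: Invoice Management"},
        {"type": "DependsOn", "target": "Application: Payment Gateway"},
    ],
}
```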
👉 Read the guide: Context Engineering: Getting the most from Ardoq MCP
👉 Watch: Webinar - From Experiments to Enterprise Value
Enterprise data can be rich, but querying it shouldn’t require a data scientist. With Advanced Search from Natural Language (now in beta), users can create complex, graph-powered queries in plain English: no Gremlin syntax required!
Type a request like:
“Show me all applications that cost more than $100k and are marked for decommission.”
And Ardoq automatically builds the equivalent Advanced Search query, ready to run, review, or save as a report.
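To make the translation tangible, here is a minimal sketch of the kind of graph traversal such a request could map to; the component type, field names, and rendering are assumptions for illustration, not the query Advanced Search actually generates.

```python
# Illustrative only: render a parsed natural-language request as a
# Gremlin-style traversal string. "Application", "totalAnnualCost", and
# "lifecyclePhase" are hypothetical names.

from dataclasses import dataclass

@dataclass
class Filter:
    field: str
    operator: str  # "gt" or "eq" in this sketch
    value: object

def to_graph_query(component_type: str, filters: list[Filter]) -> str:
    steps = [f"g.V().hasLabel('{component_type}')"]
    for f in filters:
        if f.operator == "gt":
            steps.append(f".has('{f.field}', gt({f.value}))")
        else:
            steps.append(f".has('{f.field}', '{f.value}')")
    return "".join(steps)

# "Show me all applications that cost more than $100k and are marked for decommission."
print(to_graph_query("Application", [
    Filter("totalAnnualCost", "gt", 100000),
    Filter("lifecyclePhase", "eq", "Decommission"),
]))
# g.V().hasLabel('Application').has('totalAnnualCost', gt(100000)).has('lifecyclePhase', 'Decommission')
```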
How it helps teams:
Common use cases include identifying redundant applications, analyzing process ownership gaps, and tracking capability maturity trends, all without touching a single line of code.
It’s a powerful step toward making architecture knowledge universally accessible and another example of Ardoq’s vision for AI-assisted, human-directed decision intelligence.
👉 Watch: Advanced Search Demo
In Q4, we expanded the MCP Server with two practical improvements driven directly by early customer feedback: support for fields in the Metamodel Tool and direct URLs to Reports and Dashboards.
These updates make MCP responses more precise, more navigable, and far easier to act on. Instead of giving users a generic description of their architecture, MCP can now surface the specific fields that matter (cost, lifecycle, owner, or technical risk) and point users directly to the report or dashboard where those insights live.
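To show the difference this makes, here is an illustrative payload shape; the field names and URLs are placeholders, not the actual MCP response schema.

```python
# Placeholder example of a field-aware, linkable MCP result (assumed shape,
# not the real schema): specific values plus a direct path to the report
# and dashboard where the insight lives.

mcp_result = {
    "component": "Salesforce",
    "fields": {
        "totalAnnualCost": 120_000,
        "lifecyclePhase": "Decommission",
        "owner": "CRM Platform Team",
        "technicalRisk": "High",
    },
    "links": {
        "report": "https://<your-ardoq-host>/reports/<report-id>",
        "dashboard": "https://<your-ardoq-host>/dashboards/<dashboard-id>",
    },
}
```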
For customers, this means:
It’s a small enhancement on paper, but a big step toward our larger vision of making AI in EA not just conversational, but actionable.
👉 Read: Context Engineering: Getting the most from Ardoq MCP
👉 Watch: Application Portfolio Management Use Case With MCP
Ardoq AI is fully embedded into our platform with no additional costs, no token caps, and no ecosystem lock-in.
Every customer gets full access to Ardoq AI features across their entire IT stack, with transparent pricing that is not tied to AI usage. This means teams can explore, build, and experiment without worrying about hidden costs or restricted access.
It’s not just a pricing choice; it’s a philosophical one: AI should amplify enterprise value, not gate it.
Behind every AI-powered feature at Ardoq is a systematic evaluation process that ensures reliability and accountability. AI Evaluations are automated quality checks that continuously test AI outputs for accuracy, consistency, and stability.
Think of it like software testing, but for intelligence:
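As a rough illustration (a sketch of the idea, not Ardoq’s internal evaluation framework), here are two checks you might automate: grounding, so answers only cite components that exist in the graph, and stability, so repeated runs of the same question agree.

```python
# Hypothetical evaluation checks, in the spirit of unit tests for AI outputs.

def check_grounding(cited_components: list[str], graph_components: set[str]) -> bool:
    """Accuracy: every component the answer cites must exist in the graph."""
    return all(name in graph_components for name in cited_components)

def check_stability(runs: list[list[str]]) -> bool:
    """Stability: repeated runs of the same question should return the same set."""
    return all(set(run) == set(runs[0]) for run in runs)

graph = {"CRM", "Billing", "Payment Gateway"}
assert check_grounding(["CRM", "Billing"], graph)
assert check_stability([["CRM", "Billing"], ["Billing", "CRM"]])
```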
This is how Ardoq operationalizes “trustworthy AI.” Not as a tagline, but as a discipline baked into our development lifecycle.
As 2025 draws to a close, one theme has defined our work at Ardoq: responsibility through acceleration. We’ve released more AI-powered capabilities this year than ever before. But every launch has been guided by a single principle: build what you can trust.
Our roadmap has never been just about new features. It’s about reshaping how Enterprise Architects and technology leaders approach AI, moving from one-off copilots to connected, governed intelligence that spans the entire enterprise.
From AI Lens to Context Engineering, from the AI Visual Importer to Chat With Ardoq, we’re building an AI-native platform grounded in data accuracy, transparency, and explainability.
That’s why initiatives like AI Evaluations and AI Without Red Tape matter so much. They reflect our belief that AI innovation should never come at the cost of clarity or control.
As we move into 2026, our focus turns to AI guidance: systems that don’t just describe your organization, but help you simulate, test, and steer it responsibly.
Thank you to all our Ardoq Labs customers, partners, and community members who joined us on this journey. The next chapter of AI-native Enterprise Architecture starts here.
— The Ardoq AI & Innovation Team