At Ardoq, we’ve always believed that meaningful innovation comes from combining strong technical foundations with clear strategic intent. As AI rapidly reshapes how software platforms are built and used, it’s increasingly important for us to be explicit about how we think about AI, not as an add-on, but as a core capability.
That’s why we’re excited to introduce Jarand Narbuvold, Director of AI at Ardoq.

Jarand joined Ardoq in late 2025 and is responsible for defining and driving our AI strategy across product and platform. He works closely with our CTO, CPO, and product teams to ensure AI is applied thoughtfully, responsibly, and most importantly, in ways that create real value for customers.
Before Ardoq, Jarand held senior AI and data leadership roles across a range of data-intensive SaaS environments. His background spans machine learning, analytics, and large-scale data platforms, including time as Chief AI Officer and Co-founder at Zerolytics, and Head of Data, Insight & Research at Inspera Assessment. Across these roles, his focus has consistently been on turning complex data into systems that people can trust to make better decisions.
We sat down with Jarand to explore how he sees the AI landscape evolving and what that means for data-driven platforms like Ardoq.
Q&A With Jarand Narbuvold, Director of AI
1. You’ve spent your career building AI and data-driven platforms across very different domains. What patterns do you consistently see in the companies that are actually getting value from AI, versus those that are just experimenting with it?
First, successful companies don’t just apply AI to existing features; they look past the surface of their workflows. They stop thinking about buttons, screens, and forms. Instead, they zoom out and ask: what is the actual outcome we’re trying to achieve? Then they design the process and the AI system together to get the most out of both.
In a way, what I often see from companies who do not succeed is analogous to the early days of personal computers. When computers first emerged, they were treated as specialized IT tools. People simply mapped paper-and-pencil workflows onto digital interfaces. That approach worked okay in the best case but, at worst, failed—because it assumed the old way of working still made sense in a new digital world. AI is no different. If you just "drizzle" AI into existing features like adding a button to summarize text or auto-highlight descriptions, then you’re not gaining real value. It’s surface-level, and it doesn’t solve the actual problem.
The real power of AI comes when you reframe the entire process. One of the most impactful examples I’ve seen reduced three days of full-time work to a 10-minute operation using AI. That wasn’t achieved by adding AI to every step; it was achieved by rethinking the end goal and designing a process where AI handles what it does best and humans handle what they do best.
Culture also plays a major role. Many companies are unwilling to experiment at all and are locked in by tradition. Others experiment but don’t set clear goals: they build prototypes and stop. This mirrors the machine learning era, when many companies spent heavily on proofs of concept and then abandoned them. The core issue? A lack of quality data or information.
Today, with large language models, the requirements are different. Unlike traditional machine learning, which needs high-quality, labeled data for training and validation, modern LLMs don’t require you to train or fine-tune them; you just need good context. But that doesn’t mean they’re omniscient. LLMs don’t know your company, your features, your background, or your goals unless you provide that information. So information quality and availability remain critical. Without them, AI produces hallucinations, not facts.
Another key difference: successful companies don’t just feed AI context; they design it with the ability to perform actions. That means empowering AI agents to carry out tasks autonomously, not just respond to queries. Instead of building brittle pipelines where an error in one step propagates through the whole system, they build agentic systems that can act independently toward a goal and recover from unforeseen challenges, along the lines of the sketch below.
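For readers who want to see the shape of that idea, here is a minimal, purely illustrative sketch of an agentic loop, assuming a generic chat API with tool calling. The names (call_llm, search_docs, create_ticket) are hypothetical placeholders, not Ardoq functionality; the key point is that tool results and errors are fed back to the model so it can adjust rather than fail silently.

```python
# Minimal sketch of an agentic loop: the model proposes an action, the system
# executes it, and the result (including errors) is fed back so the agent can
# recover instead of letting a failure propagate downstream.
import json

def call_llm(messages: list[dict]) -> dict:
    """Placeholder: send the conversation to an LLM and get back either a
    tool call ({"tool": ..., "args": ...}) or a final answer ({"answer": ...})."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

TOOLS = {
    "search_docs": lambda query: f"(stub) top results for {query!r}",
    "create_ticket": lambda title: f"(stub) created ticket: {title}",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:              # the agent decided it is done
            return decision["answer"]
        tool = TOOLS.get(decision["tool"])
        try:
            result = tool(**decision["args"]) if tool else f"unknown tool: {decision['tool']}"
        except Exception as exc:              # feed errors back so the agent can adapt
            result = f"tool failed: {exc}"
        messages.append({"role": "tool", "content": json.dumps({"result": result})})
    return "Stopped: no answer within the step budget."
```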
And here’s something often overlooked:
“Successful companies don’t treat AI as a magic solution. They establish clear boundaries because trust is essential. If users don’t trust AI, they won’t use it. And trust is fragile. Trust is earned in drops and lost in buckets.”
There are two aspects to this. One is trusting the AI itself, and the other is trusting humans to use AI responsibly. That trust is built through visibility and accountability. When you use AI, it is important that you can easily see where the information comes from and understand why you got the output you did.
When AI is used, the person using it is responsible for the output. Whether it’s a document, a code snippet, or a feature, the user owns the result. If there’s a typo or a bug, it’s the user’s fault, not the AI’s. This applies to internal tools and external features alike. To ensure accountability, companies must have clear approval processes. Nothing should go into production without human review. And that review must be easy: no 40-page legal documents. It should be simple, consumable, and focused on catching small errors. Over time, as users see that AI delivers quality and saves time, trust grows.
“Ultimately, success with AI isn’t about deploying more features. It’s about rethinking outcomes, empowering AI to act, and building trust through clear ownership and accountability.”

2. There’s a lot of noise right now about “AI plateauing” versus “AI accelerating.” From where you sit, what’s genuinely changing in the next 12–24 months, especially at the platform and data layer?
That’s a great question, and I hear it constantly: “Is AI plateauing? Is the hype cycle over?”
I totally understand why people feel that way. But the short answer is: no, the technology isn’t slowing down. If anything, it’s moving very fast.
The reason it feels like a plateau is that there is a massive disconnect between what the labs claim these models can do and what we, as users, actually experience day to day.
I think there are really three reasons for that disconnect:
1. The Black Box Problem: First, as a user, you rarely know what you’re actually interacting with. You might see a label like 'Gemini' or 'ChatGPT,' but under the hood, are you using the most capable version or a cheaper, lightweight one? Are you using a reasoning model? Many people don’t know which model type they’re using, or even what the differences between them are. On top of that, providers use a technique called quantization to save money. Basically, they take a massive, high-precision model and reduce the numerical precision of its weights, essentially shrinking its 'brain' to save on memory and energy. You can do this to a surprising degree with little loss in fidelity, but only up to a certain point; strictly speaking it is still the same model, just 'shrunk' (there is a toy illustration of this just after the list). It saves providers a lot of money, but for the user it can make the AI feel inconsistent, or a bit 'dumber' than it was a week ago.
2. The Silo Effect: Our AI tools are trapped in silos. Right now, you might have an AI in your email, a different one in your calendar, and maybe yet another one with access to the internet, but they don’t talk to each other. An AI can’t give you a brilliant answer if it doesn’t have the information needed to produce one. Until we have agents that can look across these silos, the answers will feel limited, sometimes incorrect, and of little value.
3. The Invisible Revolution: But if you look past the user interface at the actual engineering happening in the last 12 to 24 months, the progress is actually mind-blowing.
- Speed of obsolescence: Anything older than six or nine months is practically a fossil right now. The frontier moves that fast.
- Efficiency: We are realizing that the huge models of the past were 'overbuilt.' We are now seeing local models, ones you can run on private servers or even your own laptop, that outperform the original GPT-4o while being at least 10x smaller and several orders of magnitude cheaper to run.
- Coding over Chatting: The biggest shift isn’t better chatting; it’s better doing. We’ve moved to an 'orchestration' model. Instead of just answering a question, the latest models can write ad-hoc code to go out, use tools, fetch data, and combine it all. We call this 'agentic behavior': the ability to reliably coordinate tools to get a job done. It is worth noting that until quite recently, the top entry on the ARC-AGI leaderboard (a leaderboard focused on tracking LLM progress toward AGI) was not a single model but an orchestration layer that used multiple models. If you are using a frontier model, it’s less about the exact model than about how you’re using it.
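To make the quantization point from the first item above concrete, here is a toy numpy sketch (illustrative only, not how any particular provider implements it) of squeezing high-precision weights into 8-bit integers: memory drops by 4x, and most, but not all, of the information survives.

```python
# Toy illustration of post-training quantization: store float32 "weights" as
# int8 plus a scale factor, then reconstruct them. The restored values are
# close but not identical, which is why an aggressively quantized model can
# feel slightly "dumber" even though it is, strictly speaking, the same model.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)      # stand-in for model weights

scale = np.abs(weights).max() / 127.0                    # map the float range onto int8
quantized = np.round(weights / scale).astype(np.int8)    # 4x less memory than float32
restored = quantized.astype(np.float32) * scale          # what the model effectively uses

print("memory:", weights.nbytes, "->", quantized.nbytes, "bytes")
print("mean absolute error:", np.abs(weights - restored).mean())
```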
So, to answer your question: The science isn't plateauing—it's getting much more efficient and capable. The product experience is just lagging behind. There is a huge gap between implementation and the frontier.
3. Ardoq is fundamentally a data modeling and knowledge platform. Why do you believe structured, connected data is becoming more important — not less — in an AI-first world?
To understand why, you really have to look at how AI accesses information. You can think of an LLM as having two distinct sources of knowledge.
First, you have the internalized information from its training. This is quite diffuse and more like an intuition rather than memory. It’s what gives the model that human-like ability to 'get' what you mean, even if you don't describe it perfectly. That's incredibly useful for understanding intent, but it’s not something you want to rely on for hard facts.
The second source is the context you provide in the moment. And this is where we run into a major issue: hallucinations.
When people experience hallucinations, one of the key things often happening in the background is something called attention dilution. Essentially, if you dump an entire document or hundreds of pages of text into the context window, you dilute the model’s ability to focus. It gets overwhelmed by the noise. Even though a model can have a really big context window, its actual ability to use the information in that context is finite, and typically much, much lower than the context window size suggests.
So, for LLMs—and by extension, AI agents—to work effectively, you want to provide the minimum amount of relevant information to solve the problem. Too much and you get attention dilution. Too little and the agent does not have the required information to solve the task.
This is why structured data, and specifically graph data because of its flexibility, is becoming so critical. Instead of dumping a massive PDF, structured data allows you to be surgical. You can pick out the specific connections and tailored details the model needs to solve the problem, without the fluff.
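As a rough illustration of that 'surgical' approach, here is a small sketch assuming a toy in-memory graph; the entities and relations are invented for the example and are not Ardoq's data model. Instead of pasting a whole document into the prompt, you walk a couple of relationships out from the entity the question is about and pass only those facts.

```python
# Illustrative only: build a compact, targeted context for an LLM prompt by
# following a few relationships in a small knowledge graph, instead of dumping
# an entire document and diluting the model's attention.

# Hypothetical mini-graph as (source, relation, target) triples.
EDGES = [
    ("Payments Service", "runs_on", "Kubernetes Cluster A"),
    ("Payments Service", "owned_by", "Team Atlas"),
    ("Payments Service", "depends_on", "Customer Database"),
    ("Customer Database", "hosted_in", "EU Region"),
]

def neighborhood(entity: str, hops: int = 2) -> list[str]:
    """Collect facts within `hops` relationships of the entity of interest."""
    facts, frontier = [], {entity}
    for _ in range(hops):
        next_frontier = set()
        for src, rel, dst in EDGES:
            if src in frontier:
                facts.append(f"{src} {rel.replace('_', ' ')} {dst}")
                next_frontier.add(dst)
        frontier = next_frontier
    return facts

context = "\n".join(neighborhood("Payments Service"))
prompt = (
    f"Using only these facts:\n{context}\n\n"
    "Where does customer data for the Payments Service live?"
)
print(prompt)
```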
“Your agent is only going to be as good as the information you give it. That ability to have data that is both flexible and structured is exactly what Ardoq provides, and it's why it's essential for an AI-first world.”
4. Many AI initiatives fail not because of models, but because of data quality, context, and trust. How do you think about designing AI systems that decision-makers can actually rely on?
That is a great question. Fundamentally, I believe designing reliable AI boils down to a shift in mindset: moving from data quality to information quality.
If we look back at the 'traditional machine learning' era, the barrier was almost always data quality: whether the data was structured, and what format it was in. If you had a document or an image, you were stuck; the models of that era couldn’t process it. But today, large language models have largely solved that specific hurdle. They can read contracts, images, and unstructured text without issue.
However, with that ability to read everything, many organizations fall into the “data dump” trap. They think, 'Great, I can just dump all my data into the LLM, and it will solve the problem.'
But as I mentioned, you can’t expect an agent to solve a problem if it doesn’t have the right information. Just because the model can read the file doesn't mean the file is true. This is where we need to distinguish between data quality and information quality.
While data quality is about whether a file is readable, information quality is about whether the content is trustworthy. A lot of important data is still typed in manually, which means human error remains an important factor. Even in structured sales systems, if the person inputting the data made a mistake, the AI will make a mistake. For example:
- High Information Quality: A signed, active contract. We trust what is in there to be true.
- Low Information Quality: Documentation written five years ago by an employee who no longer works there. It’s readable, sure, but is it accurate? Probably not.
We need to build systems where the user can see exactly where the information comes from. If an executive sees a strange number or decision, they should be able to click a button and inspect the source immediately.
This allows them to answer the critical question: 'Did the AI make a reasoning mistake, or did we give it the wrong information?' That transparency is the only way to build genuine trust.
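One way to make that inspection possible, sketched here under the assumption of a simple source-tracking structure (the field names are illustrative, not a description of Ardoq's implementation), is to have every AI-generated claim carry references back to the records it was derived from:

```python
# Sketch of source-aware AI output: each generated claim keeps pointers to the
# records it came from, so a reviewer can inspect the evidence and decide
# whether a bad answer came from bad reasoning or bad information.
from dataclasses import dataclass, field

@dataclass
class Source:
    record_id: str   # where the fact lives, e.g. a document or component ID
    excerpt: str     # the exact text or value the model was given

@dataclass
class GeneratedClaim:
    text: str                                      # what the AI asserted
    sources: list[Source] = field(default_factory=list)

    def inspect(self) -> str:
        if not self.sources:
            return f"Claim: {self.text}\n  <- NO SOURCE: treat with caution"
        lines = [f"Claim: {self.text}"]
        lines += [f"  <- {s.record_id}: {s.excerpt}" for s in self.sources]
        return "\n".join(lines)

claim = GeneratedClaim(
    text="The Payments Service stores customer data in the EU region.",
    sources=[Source("component/customer-database", "hosted_in: EU Region")],
)
print(claim.inspect())
```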
5. When you look ahead, what’s the most misunderstood thing about applying AI inside complex enterprises, and what are you most excited to build at Ardoq as a result?
I think the biggest misunderstanding right now—especially with established products—is what I call "AI Drizzle." It’s very easy to look at existing features and ask, "Can we sprinkle a little AI on this?" You end up solving minor quality-of-life issues, shaving off ten minutes here or there. Those improvements aren't 'bad'—they do solve problems—but they ultimately fail to leverage what AI is actually capable of. They aren't the revolution; they’re just polish.
What really excites me about Ardoq is that the ambition here goes way beyond the “drizzle.” This is a company that looks past individual steps and focuses on the outcome. The parallel I like is digitalization: the transition from pen and paper to computers.
Initially, people just took the paper form and translated it onto a screen. It was marginally better, but it created convoluted workflows that didn’t deliver on the promise of digitalization. The real revolution happened when people were willing to look beyond the existing process and redesign it entirely with the computer in mind.
That’s the stage we are at with AI. To be successful, you have to be bold enough to look at the outcome and reshape the process itself. That outcome-focused design mindset—rather than just digitizing old habits—is exactly what resonated with me about Ardoq.
Looking Ahead
As enterprises move from experimenting with AI to operationalizing it, the role of strong data foundations, context, and architectural clarity becomes even more critical. Jarand’s work at Ardoq focuses on ensuring that AI enhances, rather than obscures, how organizations understand and evolve their technology landscapes.
Ashima Bhatt
Ashima is a Product Marketing Director at Ardoq. She loves turning complex technical concepts into clear, simple analogies that everyone can understand. Her favorite part of the job is connecting the dots between technical innovation and real customer results.