Generative AI and Enterprise Architecture: Modeling the Enterprise

20 Feb 2024

by Edward Granger

Will AI Be Able to Build Reliable Representations of Organizations?

Welcome back to part two of our five-part blog series “Generative AI and Enterprise Architecture.” This series provides an in-depth perspective on how generative AI will reshape EA as we know it. If you’ve eagerly awaited this follow-up but would like a refresher on the first part, hop to Generative AI and Enterprise Architecture: Impact on Enterprise Complexity.

In the first part, we introduced five axioms:
1. Enterprise complexity will tend to increase.
2. Enterprise Architecture must model the enterprise.
3. Enterprise Architecture must roadmap the future.
4. Enterprise Architecture must be driven by business outcomes.
5. Enterprise Architecture must be close to the changemakers.

Here, we delve into the second axiom, which touches on the very core of EA as a practice: modeling the enterprise. 


Axiom Two: Enterprise Architecture Must Model the Enterprise

Good principles persist. Back in 1970, cyberneticists Roger Conant and William Ross Ashby published a paper proving what is now known as the good regulator theorem: that “every good regulator of a system must be a model of that system.”

This principle is the core of our second axiom: Enterprise Architecture Must Model the Enterprise. 

But what is a “good regulator” anyway? It’s anything that positively controls or influences the operation and evolution of a system. So, if our system is the enterprise, then to guide and course-correct both its operation and its evolution, we need to model it.

Modeling has been the foundation of enterprise architecture since the first Enterprise Architecture (EA) frameworks rose to prominence thirty years ago. And while it's been an uphill battle to convince non-architects of the value of spidery Visio maps, architects are passionate about building models because, like Conant and Ashby, they know that fundamentally, models represent control.

The question for us now is what form those models will take in the age of generative AI.

We need to attack that question from two angles: How will those models be experienced? And how will they be stored?

Breaking the Language Barrier

Experience matters because understanding informs action.

Unfortunately, for a long time, Enterprise Architects have been caught in a UX trap: On the one hand, to build a holistic view of the organization, we need to standardize semantics. Standards like ArchiMate or BPMN are languages we must learn, and doing that requires us to become a little more machine-like in our thinking.

While these common languages and notations help with collaboration, they also represent a significant learning hurdle. A big reason for their lack of adoption as wider enterprise planning tools is that many changemakers just aren’t used to thinking like engineers.

Generative AI, on the other hand, offers the potential to express these concepts in the users’ own terms. It allows them to explore, reframe, and interrogate. Talk to your process map? Why not?

This is the core shift: machines that can talk like humans, rather than humans being required to think like machines.

This ability offers Enterprise Architects the tantalizing possibility of truly opening up the concept and value of architecture to a mass audience.

Establishing Ground Truth

But whatever experiences we build, they have to be grounded in truth. 

Personalization can’t mean providing different answers to the same question, only different presentations of the same facts. But where will those facts reside? Where will our models live?

Generative AI has upended many conventional assumptions around knowledge representation. The rapid growth in LLM performance, along with the emergence of new database technologies, means that the information architecture of enterprise modeling is probably more fluid than it’s been for decades. At this point, there seem to be three, maybe four candidates for where we build our models.

The first is in the LLM itself. 

Now, this one’s a little mind-blowing for those who grew up thinking that data is data and code is code. However, the LLM’s own neural net serves as both simultaneously. It is both a dynamic process and a form of knowledge representation. 

Its tokens, embeddings, parameters, and weights simulate the use of language and, therefore, an image of the world based on the contents of its training dataset. This is why ChatGPT can chat without needing to go off and look up the answer first.

One of the critical limitations of LLMs is the knowledge cutoff. GPT-4 may have an estimated 1.76 trillion parameters and have been trained on a corpus consisting of pretty much the entire public internet, but none of that was your enterprise’s data. 

So, a lot of effort has gone into developing techniques for augmenting the LLMs’ knowledge with your own. There are different and fast-evolving patterns for doing this. 

  • Fine-tuning: Extending the LLM’s own training to cover your domain-specific knowledge.
  • Grounding: Getting the relevant information into the LLM at query time. Retrieval-Augmented Generation (RAG) has become the leading pattern for this, as sketched below.
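To make grounding concrete, here’s a minimal sketch of the RAG flow in Python. The bag-of-words “embedding”, the element names, and the prompt format are all illustrative assumptions to keep the example dependency-free; a real implementation would use an embedding model, a vector store, and an actual LLM call.

```python
# Minimal RAG sketch: retrieve the most relevant knowledge, then inject
# it into the prompt at query time. The bag-of-words "embedding" is a
# toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative enterprise knowledge, persisted outside the LLM.
documents = [
    "The CRM application supports the customer onboarding process.",
    "The billing platform depends on the payments gateway.",
]
index = [(doc, embed(doc)) for doc in documents]

def grounded_prompt(question: str, top_k: int = 1) -> str:
    # Rank documents against the question and inject the best matches
    # into the prompt as context.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:top_k])
    return f"Using only this context:\n{context}\n\nQuestion: {question}"

# This prompt is what would be sent to the LLM.
print(grounded_prompt("Which application supports customer onboarding?"))
```

The LLM’s answer is then anchored to retrieved enterprise content rather than to whatever happened to be in its training data.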

Learn how you can leverage RAG to ease application governance on the Ardoq platform: AI-based reference creation in Ardoq with simple Retrieval Augmented Generation.

Even so, that augmentation has to come from an existing external knowledge source. Where should this be?

One source is unstructured content. All those documents, pictures, audio, and video files that fill your SharePoint sites and Google Drives represent a rich seam of enterprise knowledge. Until recently, this option was a non-starter, as this content was not easily machine-readable.

LLMs have changed that. It’s hard to overstate the impact of having a machine that can not only understand but apparently reason in natural language.

However, a fundamental limitation of these grounding approaches is input size — there’s a limit to how much knowledge you can feed into the LLM’s context window at query time. This, plus the need to maintain knowledge consistency across multiple conversations, means we still need to integrate and persist knowledge outside of the LLM.

This requirement has led to the rise of a relative newcomer, the vector database, which can persist representations of those knowledge assets as embeddings, ready for retrieval into the LLM at query time.

Superficially, what both LLMs and vector databases enable you to do is treat your unstructured content as though it were structured data. This is huge because, historically, extracting data that machines can process and reason about from unstructured content has been an expensive and error-prone process — think of all those enterprise ETL and information governance processes.

Not least, making information machine-readable has required us humans to think more like machines ourselves, marshaling our understanding into IDs, tables, and relations rather than our intuitive language of conversations and stories.

So, it’s not hard to see why the idea of speaking in our own human language is a seductive one, allowing us to jettison this whole painful process of formalizing and integrating knowledge. 

It’s also, we believe, wrong.

Facts Versus Probabilities

The real issue isn’t whether the information is machine-readable. It’s whether it can produce consistent and reliable results. Indexing a PDF document that describes your architecture into a vector database, or directly into an LLM, brings two big problems: hallucinations and information siloes.

Problem 1: Hallucinations

The first is that while both can represent the architectural relationships described in that document, the way those relationships are represented is inherently probabilistic. Saying that two elements, such as a business process and an application, are probably related based on the similarity of their vectors is not the same as definitively stating that they are related. At worst, this can produce hallucinations, a well-known limitation of LLMs in which false or misleading information is presented as fact, damaging users’ confidence in the knowledge base.
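A tiny sketch makes the contrast plain; the similarity score and element names here are invented purely for illustration:

```python
# The same architectural relationship, represented two ways.

# Probabilistic: vector search can only say two elements look related.
similarity = 0.83  # "probably related" - where to draw the line is a guess

# Declarative: a structured model asserts the relationship as a fact.
relationships = {("Customer Onboarding", "supported_by", "CRM Application")}

def is_supported_by(process: str, app: str) -> bool:
    # A fact lookup returns the same definite answer every time.
    return (process, "supported_by", app) in relationships

print(is_supported_by("Customer Onboarding", "CRM Application"))  # True
print(similarity > 0.8)  # True today; perhaps not after the index changes
```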

Problem 2: Information Siloes

The second problem relates to EA’s ability to look across information siloes.

EA models describe how business and technology elements are related to inform impact and risk analysis, as well as change planning. But when you’re dealing with unstructured content, those relationships may be scattered across multiple documents. So, the likelihood is high that a search won’t reveal a relationship even if there is one in reality. The LLM’s understanding is only as connected as its underlying information.

In a real operational context, that means critical impacts and dependencies are going to get missed.
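A short sketch shows why. Suppose each relationship below was extracted from a different document; a per-document search surfaces only single hops, while a connected model can walk the whole chain (the element names are illustrative):

```python
# Impact analysis over a connected model. Each edge might live in a
# different source document; only the integrated model reveals the
# full dependency chain.
edges = {
    "Onboarding Process": ["CRM Application"],   # from a process manual
    "CRM Application": ["Customer Database"],    # from an app inventory
    "Customer Database": ["EU Data Centre"],     # from a hosting contract
}

def downstream(element: str) -> list[str]:
    # Depth-first walk collecting every transitive dependency.
    found, stack = [], [element]
    while stack:
        for dep in edges.get(stack.pop(), []):
            if dep not in found:
                found.append(dep)
                stack.append(dep)
    return found

# Document-by-document search stops at the first hop; the model shows
# the onboarding process ultimately depends on a data centre.
print(downstream("Onboarding Process"))
# ['CRM Application', 'Customer Database', 'EU Data Centre']
```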

Reliability Is the Hurdle to Adoption

We’ve gone into detail here because these are real issues for enterprise adoption of LLMs. Even some of generative AI’s biggest cheerleaders admit that current accuracy sits somewhere around the 90% mark. For critical functions like cyber risk management or financial planning, a process that fails 10% of the time is unacceptable.

This is why the data foundation of our generative AI capability is likely to remain the conventional structured database — whether RDBMS or NoSQL technologies like graph databases — at least in the near term. However effort-intensive they may be to build, populate, and maintain, they make statements about facts, not probabilities.

And this seems to be reinforced by the recent surge of interest in technologies like knowledge graphs, which hold the key to unlocking the LLM’s potential through their ability to reduce the key issues discussed earlier: hallucinations, information siloes, and the knowledge cutoff.
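One way to picture that combination: keep asserted facts in a graph, and serialize only the relevant ones into the LLM’s prompt at query time. The triples and names below are illustrative assumptions, not a description of any particular product:

```python
# Grounding an LLM in asserted graph facts instead of raw text chunks.
facts = [
    ("CRM Application", "supports", "Customer Onboarding"),
    ("CRM Application", "hosted_on", "EU Data Centre"),
]

def facts_about(element: str) -> list[str]:
    # A deterministic lookup of triples mentioning the element,
    # not a similarity guess.
    return [f"{s} {p} {o}." for s, p, o in facts if element in (s, o)]

def prompt_for(question: str, element: str) -> str:
    context = "\n".join(facts_about(element))
    return f"Answer strictly from these facts:\n{context}\n\nQuestion: {question}"

# Because the context is asserted fact, the LLM has less room to
# hallucinate, and the facts can be updated past any training cutoff.
print(prompt_for("Where is the CRM hosted?", "CRM Application"))
```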

This is a fast-evolving space. Given the current pace of innovation, will we have a different answer in six months’ time? 

It would be pretty unwise to rule it out. However, as the fundamental architecture of LLMs is based on probability, until it’s proven they can more reliably represent the knowledge of the enterprise than conventional technologies, the conventional database will probably be the bedrock of our AI capability for some time to come.


The third part of this blog series looks at one of the function’s most requested artifacts, roadmaps, and whether we can or should trust generative AI’s views of the future enterprise: Generative AI and Enterprise Architecture: Roadmapping the Future

Find out more about AI and its impact on EA in our on-demand webinar: Enhancing Enterprise Architecture with AI: An Ardoq Perspective



This blog series has been co-authored with Ardoq’s Chief Enterprise Architect, Jason Baragry.

 

Edward Granger: With over 20 years of experience in the industry, Ed Granger is at the forefront of driving innovation and has a strong belief that for EA to control Digital, it must be Digital.