Generative AI and Enterprise Architecture: Driving Business Outcomes

29 Feb 2024

by Edward Granger

Can AI Help Accelerate EA's Ability to Drive Business Outcomes?

Part four of our blog series “Generative AI and Enterprise Architecture” by Ed Granger and Ardoq’s Chief Enterprise Architect Jason Baragry takes a look at how EA’s ability to drive business outcomes could change with the advent of AI.

This blog series on AI and EA provides an in-depth perspective on how generative AI will reshape EA as we know it. To catch up on the rest of the series: 

  1. Generative AI and Enterprise Architecture: Impact on Enterprise Complexity
  2. Generative AI and Enterprise Architecture: Modeling the Enterprise
  3. Generative AI and Enterprise Architecture: Roadmapping the Future



Axiom Four: Enterprise Architecture Must Be Driven by Business Outcomes

It may sound obvious that transformation investment decisions should be grounded in business performance, but it’s a lesson Enterprise Architecture has been slow to learn.

Probably because of its engineering heritage, the first waves of Enterprise Architecture initiatives focused overwhelmingly on technology planning and standardization. Unfortunately, those programs often came unstuck when challenged to justify their recommendations in terms of operating costs, revenue generation, or business risk. 

In many cases, they simply couldn’t.

The immediate result was EA initiatives being pushed to the back of the queue for funding. The longer-term fallout was the organization questioning the whole point of having an EA team in the first place. 

Thankfully, the needle has shifted. Smart EA teams have sharpened their game.

In this, they’ve been helped by industry analysts like Gartner, whose unwieldy-sounding BODEA (Business Outcome-Driven Enterprise Architecture) framework has done a lot to promote ways hard-pressed EA teams can make the value of their recommendations come alive.

So this is Axiom Number Four: Enterprise Architecture decisions must be driven by — and articulated in terms of — business performance, both now and for the foreseeable future.

Outcome-oriented Architecture

How do enterprise architecture teams do that? In two main ways:

Leverage Business Terms on Performance

The first is simply to speak the language of business performance in the first place. No maths is required here — it’s just about understanding the market drivers and performance indicators that motivate each of your stakeholders.

For example, in engineering companies, asset utilization is a key factor in operational efficiency, and Overall Equipment Effectiveness (OEE) is a key measure of asset performance. So, being able to hold a conversation about OEE — rather than just talking about data pipelines and assuming your audience knows or cares why they matter — gains you much greater traction.
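To make the metric concrete: OEE is conventionally the product of three ratios, each between 0 and 1. A minimal sketch, with purely illustrative figures:

```python
# Overall Equipment Effectiveness (OEE) as the product of three ratios.
# All figures below are illustrative, not drawn from any real plant.
availability = 0.90   # run time / planned production time
performance = 0.95    # actual output rate / ideal output rate
quality = 0.99        # good units / total units produced

oee = availability * performance * quality   # roughly 0.846, i.e. ~84.6%
```

Being able to decompose the number this way is exactly the kind of stakeholder conversation the paragraph above describes.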

Know Your Numbers

The second is actually being able to calculate the performance gains that Enterprise Architecture initiatives will enable. The bad news is this can be very difficult to do (although we can offer some guidance on developing a business case).

Let’s take a very common example: Calculating the total cost of an organization’s applications as part of an Application Portfolio Management (APM) exercise.

  • First, cost must be broken down into different expense types like Capital Expenditure (CapEx) and Operating Expense (OpEx).
  • Second, application cost is not a single line item on the purchase ledger but an aggregate of different cost components: purchase costs, internal and external support costs, and hosting costs, including hardware depreciation (one reason SaaS is so compelling is that it rolls all these into one number).
  • Third, because EA is holistic, you’re not doing this for one or even ten apps but a whole portfolio of hundreds or even thousands of apps.
  • Fourth, to articulate the benefit of application rationalization you need to provide current and estimated future costs. It’s not as simple as just striking cost line items off the list for each decommissioned application. User numbers and processing volumes may need to be migrated, changing the cost profiles of the remaining applications.
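To show the shape of this calculation, here is a minimal Python sketch of the aggregation and rationalization steps above. The application names, cost figures, and per-user migration rule are all hypothetical illustrations, not a costing methodology:

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    capex: float          # purchase / implementation costs
    support_opex: float   # internal and external support costs
    hosting_opex: float   # hosting, including hardware depreciation
    users: int

    @property
    def total_cost(self) -> float:
        return self.capex + self.support_opex + self.hosting_opex

def portfolio_cost(apps: list) -> float:
    """Aggregate cost across the whole portfolio, not app by app."""
    return sum(app.total_cost for app in apps)

def rationalize(apps, decommission, absorber, opex_per_user=50.0):
    """Estimate future cost after retiring one application.

    Retiring an app doesn't simply strike its line items off the list:
    its users migrate, changing the cost profile of the remaining app.
    The flat per-user opex rate is a deliberately crude assumption."""
    retired = next(a for a in apps if a.name == decommission)
    remaining = [a for a in apps if a.name != decommission]
    for app in remaining:
        if app.name == absorber:
            app.support_opex += retired.users * opex_per_user
            app.users += retired.users
    return portfolio_cost(remaining)

# Illustrative figures only
crm_a = Application("CRM-A", capex=100_000, support_opex=40_000,
                    hosting_opex=20_000, users=200)
crm_b = Application("CRM-B", capex=80_000, support_opex=30_000,
                    hosting_opex=15_000, users=120)

current = portfolio_cost([crm_a, crm_b])                 # 285_000
future = rationalize([crm_a, crm_b], "CRM-B", "CRM-A")   # 166_000
```

Even in this toy form, the future figure is not the current figure minus the retired app’s costs, which is the fourth point above in miniature.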

This complexity is exactly why there are whole industries, from consultants with proprietary spreadsheets to high-end applications like Apptio, whose sole purpose is to do these calculations.

Of course, not everything is about money. Any organization will focus on a wide range of business outcomes for each context it operates in — market, customer, regulatory, community, and environment — and track them via measures like net promoter score, compliance risk, or carbon footprint.

Whatever the context, Enterprise Architects (EAs) need to tell a story about how their initiatives will move the needle on these, which means both understanding current and projecting future performance.

Whatever the measure and the methodology, from KPIs and balanced scorecards to OKRs, articulating the business outcomes of EA is a challenging task.

So, can generative AI transform how we articulate benefits?

Can AI Do Sums?

Answering that question depends on answering a far more fundamental one first: Can LLMs actually do maths?

This, in turn, cuts to the heart of an even more fundamental question, one we covered in the previous article on whether AI can aid EA roadmapping: Can LLMs reason at all?

In that article, focusing on Axiom Three, Enterprise Architecture Must Roadmap the Future, we looked at how deriving a To-Be architecture from an As-Is architecture requires deductive reasoning, where we extrapolate the target architecture from the a priori causes (the current state) based on some kind of formalized logic.

So, if we want to measure the benefit of that To-Be architecture, we’re going to need another kind of reasoning: Symbolic reasoning, which is ultimately the foundation of arithmetic.

ChatGPT’s difficulty with maths problems has been widely commented on, and, when it comes to arithmetic, there do seem to be some fundamental limitations to the capabilities of current LLMs. One of the most obvious ones is that they don’t natively have direct numeric representations but instead work with tokens associated via probabilistic relationships. Another is the lack of in-built error-checking, which, given their propensity to hallucinate, can undermine confidence in the output, even when they do get the answer right.

Given these limitations, plus the obvious fact that the enterprise already has mature computing resources for the kinds of repeatable calculations that underpin business performance measures, it’s likely that in the near term we’ll be looking at a hybrid approach. Here, the LLM is used both to interpret the inputs into, and present the outputs from, an external maths service, be it a COTS application or a call to a function written in a conventional programming language via an orchestration framework like LangChain.
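The hybrid pattern can be sketched in plain Python: a deterministic function, the kind of structured tool call an LLM might emit, and the routing step that an orchestration framework such as LangChain would automate. The function name, arguments, and figures are all hypothetical:

```python
import json

# Deterministic "maths service": a conventional function the LLM never
# computes itself. The model only decides to call it and phrases the result.
def application_tco(capex: float, opex: float, years: int) -> float:
    """Hypothetical total-cost-of-ownership calculation."""
    return capex + opex * years

TOOLS = {"application_tco": application_tco}

def run_tool_call(llm_output: str):
    """Route a structured tool call (as an LLM might emit it) to the
    registered deterministic function and return its result."""
    call = json.loads(llm_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])

# In practice the orchestration framework elicits this JSON from the
# model in response to a natural-language question; here it is hard-coded.
llm_output = ('{"tool": "application_tco", '
              '"arguments": {"capex": 120000, "opex": 30000, "years": 5}}')

result = run_tool_call(llm_output)   # 270000, computed deterministically
```

The division of labour is the point: the probabilistic component handles language at both ends, while the arithmetic stays in conventional, testable code.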

All the same, as an active area of research, arithmetic reasoning for LLMs couldn’t be hotter, so in another year or two, the answer might be very different. Emerging approaches include:

  • Specialized pre-training of LLMs on benchmark quantitative reasoning problem datasets to dramatically increase the reliability of their calculations.
  • Using zero-shot Chain of Thought (CoT) prompting to force the LLM to step through a mathematical reasoning process.
  • Generating code snippets – for example, Python – in parallel to those reasoning steps to help formalize reasoning, cross-check, and verify results.
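The third approach can be illustrated with a toy example: the model’s reasoning steps are mirrored by a generated code snippet, and the final answer is taken from executing that code rather than from the model’s own arithmetic. The problem, the steps, and the snippet here are all invented for illustration:

```python
# Program-aided verification: the chain-of-thought steps are paired with
# generated code, and the answer comes from running the code.

reasoning_steps = [
    "The portfolio has 120 applications.",
    "Rationalization retires 25% of them.",
    "So 120 * 0.25 = 30 applications are retired, leaving 90.",
]

# Code the model might emit in parallel with its reasoning
# (a hypothetical snippet; in a real system this string is generated).
generated_code = """
total = 120
retired = int(total * 0.25)
remaining = total - retired
"""

namespace = {}
exec(generated_code, namespace)   # formalize and cross-check the steps

remaining = namespace["remaining"]   # 90, the verified answer
```

If the executed result disagrees with the model’s stated conclusion, that mismatch itself is a useful error signal, which is precisely the in-built checking current LLMs lack.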

Compared to human-level performance, the gains are certainly impressive, but even a pocket calculator can outperform humans. The real question is whether generative AI can outperform traditional algorithmic computing for reliability. Ultimately, its calculating ability is arrived at via a very different route from traditional computing: through training and the ability to generalize from that training dataset. But this is no guarantee it won’t encounter novel problems that fall outside that training, problems deterministic approaches are inherently able to cope with.

Breaking the Analysis Paralysis

So, given that we already have mature technologies for adding up numbers, are there any real advantages to the Enterprise Architect in bringing this capability — either natively or via integration — into an LLM? 

Well, there definitely are, but they don’t relate as much to the core ability to calculate as they do to how those calculations are consumed.

What we’re talking about here is productivity for decision-makers.

While it may be difficult to calculate the numbers in the first place, it’s often just as much work for changemakers to translate them into a decision. Execs have long complained about being swamped in metrics that carry little real information.

Enterprise Architecture represents a particularly challenging decision space as any decision is multidimensional — impacting people, processes, and technology. Of course, you can argue that’s true of any decision, but Enterprise Architecture makes it explicit.

We love our models, so we EAs tend to assume our mass of connected data represents an asset. However, a decision-maker may not see it that way. Those same connections mean that any decision option comes with a wealth of unwanted impacts and dependencies, leading to analysis paralysis, where more data actually inhibits decision-making.

Fortunately, LLMs have the potential to be very effective guides through complex decision spaces: highlighting what matters, serving up relevant detail, cutting away noise, and augmenting the picture with their own knowledge.

So, even if we’re not yet ready to trust an LLM to calculate the metrics on our dashboard, it can be very helpful in navigating one, interpreting numbers, and analyzing pros and cons. A guide and a sparring partner in synthesizing and weighing options could speed up the whole process of decision-making.

This capability needn’t be restricted to higher management. A huge variety of roles, from executive to operational, can benefit from quick access to information about impacts, risks, and performance insights. Collating that data has often taken weeks or even months, and the complexity of the architecture has conventionally made interpreting it for non-architects a major challenge.

So, while generative AI in its current form is not by itself going to immediately transform our ability to calculate benefits, it’s highly promising in its ability to speak the language of business performance by interpreting dense architecture data to enable business outcome-driven decisions.


The fifth part of this blog series looks at whether AI can empower changemakers with contextualized and personalized insights.

If you're hungry for more content on AI and EA, watch our webinar on-demand: Enhancing Enterprise Architecture with AI: An Ardoq Perspective


This blog series has been co-authored with Ardoq’s Chief Enterprise Architect, Jason Baragry.


Edward Granger: With over 20 years of experience in the industry, Ed Granger is at the forefront of driving innovation and has a strong belief that for EA to control Digital, it must be Digital.