AI Cybersecurity Risks and How to Manage Them

19 Apr 2024

by Stuart Armstrong

AI is everywhere. In 2023, one-third of respondents to a McKinsey survey reported using generative AI for at least one core business function.

The promises of AI are exciting, and the vendors of the tools your organization uses every day are adding AI integrations that can be activated with just a button press. It can be tempting to turn these features on and start innovating without considering the consequences. However, without a suitable digital security foundation in place, AI can bring cybersecurity risks to your organization, opening it up to threats from malicious actors.

We spoke to Ardoq’s CISO, Nick Murison, to learn more about these risks and how Ardoq has mitigated them across our platform and business as a whole.


AI and Cybersecurity Risks: What Should Organizations Watch Out For?

“It’s easy for businesses to turn on AI features without thinking about where the data is going and how it will be used.”
- Nick Murison, Chief Information Security Officer at Ardoq

When businesses talk about introducing AI, they're usually thinking about Large Language Models (LLMs). These models take in text, such as a question typed by a user into a chat interface, and generate text in response. LLMs have exploded in popularity over the last few years thanks to publicly available models such as ChatGPT. They can be trained on thousands of gigabytes of data, and it is in how they use this data that most of the risk lies.

Risks to Customer Data

When a customer uses an AI integration in a platform, data flows from the customer to the vendor and on to a third party, such as OpenAI. Under the General Data Protection Regulation (GDPR), which governs the personal data of individuals in the UK and EU, the data controller must instruct the vendor on how to treat this data; it is not enough for the vendor to simply be GDPR compliant. You must also tell your customers what you will do with their data and get their consent for it to be processed by the AI provider. Even if your data use is not governed by GDPR, you should consider what you are contractually permitted to do and whether your use of AI goes against this.

Training the AI using customer data is tricky. Under GDPR, a vendor that processes customer data acts as the data processor; the customer remains the controller, and the individuals the data describes are the data subjects. This means you must have your customers' permission if you wish to train the model on their data. Chances are, you won't get 100% of your customers to agree, so your model will never completely know every one of your customers. The best you can hope for is inference: if customer A has been using the platform in a way that works well for their business, you can infer that customer B will work similarly if they share characteristics such as industry, size, and goals.

When we began to offer customers the option to try our platform’s AI features, we had them sign an addendum to their agreement, which made clear that their data would be going to a third party — the host of the LLM. We also anonymize the data so users or data subjects cannot be identified.
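To illustrate what an anonymization step like this can look like, here is a minimal sketch in Python. The regex patterns, the redact function, and the field names are all hypothetical, not Ardoq's actual pipeline; real pseudonymization typically combines pattern matching with field-level allowlists and named-entity detection.

```python
import re

# Hypothetical patterns for common identifiers; a real pipeline would use
# a dedicated PII-detection library and field-level rules, not regexes alone.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace anything that looks like an identifier with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def prepare_for_llm(record: dict, allowed_fields: set[str]) -> dict:
    """Keep only explicitly allowed fields, then redact their values."""
    return {k: redact(str(v)) for k, v in record.items() if k in allowed_fields}

record = {
    "company": "Example Corp",
    "contact": "jane.doe@example.com, +47 555 12 345",
    "notes": "Prefers quarterly reviews.",
}
print(prepare_for_llm(record, allowed_fields={"company", "notes"}))
# {'company': 'Example Corp', 'notes': 'Prefers quarterly reviews.'}
```

The key design choice is the allowlist: rather than trying to spot and strip every identifier, only fields that are known to be safe ever reach the third party.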

Risks to Business Data

For an LLM to work well for your organization, it must be trained on how your business works. Third-party AIs like the publicly available ChatGPT will have been trained primarily on data from outside your business, so they will typically only need a little custom training to suit your needs.

But if you’re building something for your organization from scratch, you’ll need to consider your approach in a lot more detail.

If the data you input is being used to train the model, think about where it will go. Do you want it to be used to train the LLM globally, or should it stay within your organization's own model? Once data has been used to train a model, there is no easy way to remove it. It also poses a security risk: attackers could extract it with a malicious prompt and potentially gain access to your confidential information, which is another reason to avoid, or at least be very careful with, private data.
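Because removal after the fact is so hard, one common mitigation is to screen records before they ever reach a training set, dropping anything flagged as confidential. The sketch below is a hypothetical illustration; the is_confidential heuristics, the deny-list terms, and the classification field are assumptions, not any specific vendor's controls.

```python
# Hypothetical pre-training screen: drop records that look confidential
# before they enter a fine-tuning dataset.
DENY_TERMS = {"password", "api key", "ssn", "confidential"}

def is_confidential(record: dict) -> bool:
    """Flag records by label or by obvious sensitive terms in the text."""
    if record.get("classification") in {"secret", "restricted"}:
        return True
    text = record.get("text", "").lower()
    return any(term in text for term in DENY_TERMS)

def build_training_set(records: list[dict]) -> list[dict]:
    kept = [r for r in records if not is_confidential(r)]
    # Report what was excluded so reviewers can audit the screen itself.
    print(f"kept {len(kept)} of {len(records)} records")
    return kept

records = [
    {"text": "How to model a value stream", "classification": "public"},
    {"text": "Prod DB password is hunter2", "classification": "internal"},
]
training_set = build_training_set(records)  # kept 1 of 2 records
```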

Some platforms have also had difficulty keeping different users' data separate, meaning data intended for one user can be shown to another. Unfortunately, no AI model is 100% guaranteed not to leak data.
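One way to reduce this risk is to enforce isolation outside the model: only ever retrieve context that belongs to the requesting tenant, so the model never sees another customer's data in the first place. Here is a minimal sketch, assuming a hypothetical fetch_context lookup and tenant_id field (both illustrative names):

```python
from dataclasses import dataclass

# Toy in-memory store; a real system would apply the tenant filter
# server-side in the database query, never inside the prompt itself.
DOCUMENTS = [
    {"tenant_id": "acme", "text": "Acme's architecture overview"},
    {"tenant_id": "globex", "text": "Globex's migration roadmap"},
]

@dataclass
class Request:
    tenant_id: str
    question: str

def fetch_context(tenant_id: str) -> list[str]:
    """Return only documents owned by the requesting tenant."""
    return [d["text"] for d in DOCUMENTS if d["tenant_id"] == tenant_id]

def build_prompt(req: Request) -> str:
    # The model only ever sees context scoped to the caller's tenant, so
    # even a malicious prompt cannot pull in another tenant's documents.
    context = "\n".join(fetch_context(req.tenant_id))
    return f"Context:\n{context}\n\nQuestion: {req.question}"

print(build_prompt(Request(tenant_id="acme", question="Summarize our architecture.")))
```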

Finally, there is a risk of training your AI on biased data: biased inputs produce biased outputs. And even with the best training data, no AI is 100% accurate; it will sometimes make up (or 'hallucinate') results when it isn't sure of the answer.

For our internal AI usage, we signed an agreement with the third-party provider that our internal data would only be used to train our internal model — it would not be added to the LLM’s general public model. That way, our data would be much less likely to make it outside our organization.

Other Security Risks

Data aside, another major risk of AI is that it can sound correct even when it is not. The hallucinations LLMs produce can be misleading; they seem plausible because they are written in a confident tone and in a way that suggests the user is interacting with a human.

We used caution when introducing AI into our organization. Every Ardoqian has been trained on how to safely handle data and use AI responsibly while considering these risks and thinking critically about the results.

Balancing Risk and Reward

“Think of AI in business like any other kind of organizational change. It needs to be planned for and introduced carefully.” - Nick Murison

The cybersecurity risks of AI can sound scary, perhaps scary enough to make you think twice about using it at all. But the future is here, and even if you decide to be completely risk-averse and refuse to use AI in your organization, that may not protect your business. Chances are that at least one organization you work with will be using it, not to mention your competitors.

At Ardoq, we have prioritized agility and innovation, using AI and experimentation across the business to find out what works. From the beginning, we balanced this with safety by imposing clear guardrails: avoiding the use of customer data, sensitive data, and Personally Identifiable Information (PII), and keeping risks such as hallucinations in mind.

AI is an exciting technology with the potential to transform capabilities across the organization, but it shouldn't be taken lightly. As with any other technology your business uses, using it responsibly and having clear rules in place to govern its usage will go a long way toward helping you realize the benefits without seeing them outweighed by AI cybersecurity risks.

Learn more about how we manage AI at Ardoq in our upcoming webinar on April 30, "Unlocking AI Potential: How to Manage AI Innovation In Your Organization." This ninth episode in our Amplify Webinar series will see Senior Enterprise Architect Simon Field and Chief Information Security Officer Nick Murison discuss seven steps for effectively managing AI in large organizations and unlocking value. Register now.


Stuart Armstrong is a Senior Content Writer at Ardoq. He specializes in making the complex accessible. And puns.