AI is already embedded in most businesses. Not in a big, transformational way, but in lots of small, everyday tasks:
- Drafting emails.
- Analysing data.
- Supporting reporting.
- Helping teams move faster.
The challenge is that governance has not kept up, and that creates risk.
We recently recorded a podcast episode with Rebecca Steer from Keystone Law, where we talked about the legal and commercial realities of using AI in a business. What became abundantly clear during the conversation is that most businesses are already using AI, but very few have properly thought through the implications.
We don’t recommend you avoid AI.
Instead, we suggest understanding where the risks sit and managing them properly.
Why AI risk is different
As CFOs, we are used to managing risk. We think about financial risk, operational risk, and commercial risk all the time.
But AI introduces something slightly different.
With people, there is accountability.
If your accountant gets something wrong, there is professional indemnity insurance. If your lawyer drafts a contract incorrectly, there is a process to resolve that.
With AI, that accountability doesn’t exist.
Most AI tools explicitly limit their liability to a very small amount. In practical terms, if something goes wrong and you have relied on it, the risk sits with your business.
That is the shift that needs to be understood: AI can be incredibly powerful, but it doesn’t take responsibility for the outcome.
The 5 key AI risks CFOs need to understand
1. Data and confidentiality risk
One of the most immediate risks is how data is being used.
If your team is uploading client data, financial information or commercially sensitive material into AI tools, you need to be very clear on whether you have the right to do that.
In many cases, businesses have contracts in place that restrict how data can be used. Uploading that data into an AI tool may breach those agreements, even if the intention is simply to analyse or summarise it.
There is also the question of where that data goes and whether it is being used to train models, which is not always obvious.
2. Incorrect outputs and hallucinations
AI gets things wrong: sometimes subtly, other times completely.
This is often referred to as “hallucination”, where the tool produces an answer that looks credible but is incorrect. This creates a very practical challenge.
If you are using AI to support reporting, forecasting or decision-making, you still need a human layer of review. Otherwise, there is a risk that incorrect outputs become embedded in your numbers or your processes.
In some cases, the time saved by AI can be offset by the time required to check it properly.
3. Ownership and intellectual property
This is one of the less obvious risks.
Most AI tools say that you own the output they generate. That sounds reassuring. But legally, it isn’t that simple. Under current UK law, copyright relies on human input, skill and judgement. AI-generated content doesn’t always meet that test.
That means there is a real possibility that some AI-generated outputs are not protected by copyright at all.
For many businesses, this will not matter. But if you are developing proprietary content, creating code or producing client deliverables, this becomes much more important.
4. Liability risk
This is the one most people simply overlook.
If you use AI to produce something that is then shared externally, whether that is a report, a model or a deliverable for a client, who is responsible if it is wrong?
With a human team, you can stand behind the work.
With AI, you cannot see how the output has been generated. You do not know what data it has been trained on. And the provider is unlikely to accept any meaningful liability.
That means the risk sits with you.
This becomes particularly important when:
- outputs are shared with clients
- decisions are made based on AI-generated insights
- contracts assume a level of accuracy or ownership
5. Legal privilege risk
This is probably the least well understood risk.
If you are dealing with a sensitive or potentially contentious situation, for example a dispute, conversations with your lawyer are usually protected by legal privilege, which means those conversations do not have to be disclosed in court.
However, if you upload that information into an AI tool, it is not yet clear whether that protection still applies.
There is a risk that doing so could effectively make that information disclosable, even if the tool is set up in a “closed” environment.
The safest approach at the moment is simple: if something could become contentious, do not put it into an AI tool.
What should CFOs actually do?
This is not about shutting AI down; most businesses will benefit significantly from using it well.
But it does need structure.
A sensible starting point would be:
1. Understand current usage
Where is AI already being used across the business? Which tools are being used, and for what purpose?
In many cases, usage is far more widespread than leadership teams realise.
2. Introduce simple governance
You do not need a complex framework.
A simple red, amber, green classification of tools can be very effective:
- Green: approved for general use
- Amber: under review or limited use
- Red: not permitted
This gives teams clarity without slowing things down.
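If you want to make the classification concrete, it can live in something as simple as a shared spreadsheet or a short machine-readable register. Below is a minimal sketch in Python; the tool names and classifications are purely hypothetical examples, not recommendations:

```python
# A minimal sketch of a red/amber/green tool register.
# Tool names and their classifications are hypothetical,
# for illustration only.
TOOL_REGISTER = {
    "example-writing-assistant": "green",   # approved for general use
    "example-analytics-copilot": "amber",   # under review or limited use
    "example-unvetted-chatbot": "red",      # not permitted
}

def check_tool(name: str) -> str:
    """Return a tool's governance status, defaulting to 'red'
    so anything unknown is treated as not permitted."""
    return TOOL_REGISTER.get(name.lower(), "red")

print(check_tool("example-writing-assistant"))  # green
print(check_tool("brand-new-tool"))             # red (unreviewed)
```

The useful design choice here is the default: any tool that has not yet been reviewed is treated as red until someone classifies it, rather than quietly slipping into general use.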
3. Be clear on your data rules
What can and cannot be uploaded into AI tools?
Your AI data rules should cover:
- Client data
- Financial data
- Sensitive internal information
Clarity here reduces a significant amount of risk.
4. Review contracts and liability
If AI is being used to produce outputs for clients, it is worth reviewing what you are committing to, what liability you are taking on, and whether that still makes sense.
5. Train your team
Whether you know it or not, AI is already being used. The question is whether it is being used well.
Training helps people understand:
- Where it adds value
- Where the risks sit
- When to use it and when not to
Final thought
AI isn’t something to avoid.
It is already part of how businesses operate. The opportunity of AI is significant, but so is the risk if it isn’t managed properly.
For most CFOs, this is simply the next evolution of the role: no longer just reporting on what has happened but helping the business make better decisions, safely. And making sure the right guardrails are in place as things move forward.
Need a CFO to turn AI challenges into business growth?
AI is transforming finance, but without the right financial leadership, it is easy to miss opportunities or expose your business to risk.
At Artemis Clarke, our CFOs and FDs step in as part of your team, delivering the strategic oversight you need to:
- Close compliance gaps
- Future-proof your finance function
- Capitalise on AI’s opportunities
Let’s talk. Get in touch to discuss how we can support your business.