What risks do solicitors' firms face when using AI with client data, and how do you use it with confidence?
Why this matters right now
The Solicitors Regulation Authority is not banning AI. It is watching how firms use it. In its Risk Outlook on AI in the legal market the SRA named three concerns it expects firms to manage: confidentiality breaches when client data leaks into AI training sets, competence failures from over reliance on AI outputs without verification, and supervision gaps where junior staff use AI tools with no qualified oversight.
Two new SRA resources are due in 2026: a Generative AI FAQ and a Good Practice Note on AI use and client data. In February 2026 the SRA authorised Garfield.Law Ltd, the first AI driven law firm, under strict conditions covering confidentiality, conflicts, user approval at every stage, and a ban on the AI proposing case law. That authorisation tells you the direction of travel. AI in legal practice is now an operational standard the SRA expects firms to meet, not a novelty to experiment with informally.
According to the 2026 Clio Legal Trends report, 79% of legal professionals now use AI tools, yet 44% of firms have no formal governance policy in place. That gap is where the regulatory risk sits.
What are the actual risks to client data?
There are five risks worth naming specifically. Each has a different mitigation.
1. Client data used to train a public AI model. The free version of ChatGPT, the consumer tier of Claude, and most public chatbots may use your prompts as training data unless you have switched that off or signed the right agreement. A paragraph of a client's witness statement pasted into a consumer tool has, in effect, been disclosed to the provider. Once a model has trained on that text you cannot pull it back.
2. Privilege loss. In United States v. Heppner (February 2026) a US court ruled that 31 documents generated through a consumer AI tool were not protected by attorney client privilege. Judge Rakoff noted that enterprise grade tools with zero data retention could produce a materially different analysis. The UK courts have not ruled on this directly yet, but the logic applies: if the tool uses your inputs for training, or retains them where third parties could access them, you have weakened the confidentiality claim that privilege depends on.
3. Hallucinated outputs treated as fact. In Ayinde v London Borough of Haringey the court considered referring a barrister to the Bar Standards Board after five fabricated case citations appeared in submissions. In Ndaryiyumvire v Birmingham City University a wasted costs order was made against a firm whose administrative staff had filed pleadings containing two fictitious cases generated by AI. These are not edge cases. They are what happens when AI output reaches a court without a qualified solicitor verifying every citation.
4. Shadow AI. When a firm bans AI entirely, fee earners under pressure find workarounds on personal devices. You lose all visibility of where client data is going. A blanket ban is usually worse than a controlled adoption, because you still get the data leakage; you just do not know about it.
5. Supervision gaps. A junior lawyer running a contract through an AI tool without partner review is a regulatory problem even if the tool is enterprise grade. The SRA expects the Compliance Officer for Legal Practice to be responsible for AI governance the same way they are for any other new technology. That means the COLP should be briefed before a tool goes into use, not after.
What does safe AI adoption actually look like for a UK firm?
There are four layers to get right. None of them are particularly complex. They are just rarely all done at once.
Layer 1: the tool itself. Use enterprise or API access, not a consumer account. For Claude specifically, that means Claude for Work, Claude Enterprise, or the Anthropic API. On the Commercial Terms of Service your data is not used for training. On the consumer terms, by default since September 2025, it is.
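As a concrete illustration, here is a minimal sketch of what API access looks like in practice, assuming the official anthropic Python SDK and a key issued under the Commercial Terms of Service. The model name and prompt are illustrative only, not a recommendation for any specific matter.

```python
# Minimal sketch: routing a drafting task through the Anthropic API rather
# than a consumer chat account. Assumes the official `anthropic` Python SDK
# and an API key held under the Commercial Terms of Service.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; use your firm's approved model
    max_tokens=1024,
    messages=[{
        "role": "user",
        # Amber tier input only: redacted, non client identifiable context.
        "content": "Summarise the obligations in this redacted clause: ...",
    }],
)
print(message.content[0].text)
```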
Layer 2: zero data retention. For the most sensitive matters, ask for a zero data retention arrangement. Under ZDR, prompts and responses are not stored beyond the immediate processing window except where required by law. ZDR is available only on the Anthropic API and Enterprise tiers and requires a signed agreement.
Layer 3: governance and policy. Classify use cases by sensitivity.
- Green: internal drafting, marketing copy, non client specific research.
- Amber: redacted client context with review.
- Red: raw client identifiable data into any public tool, use of AI for case law without human verification, automated decisions affecting client outcomes.
Document this in a one page policy signed off by the COLP and reviewed every six months.
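To make the classification enforceable rather than aspirational, some firms encode it in tooling. A hypothetical sketch follows; the category names and use case labels mirror the green/amber/red scheme above and are illustrative, not drawn from any SRA guidance.

```python
# Hypothetical sketch of the traffic light policy as a checkable structure.
# All names are illustrative examples, not a real firm's policy.
from enum import Enum

class Tier(Enum):
    GREEN = "allowed"              # internal drafting, marketing, generic research
    AMBER = "allowed with review"  # redacted client context, named reviewer required
    RED = "prohibited"             # client identifiable data, unverified case law

USE_CASES = {
    "internal_drafting": Tier.GREEN,
    "marketing_copy": Tier.GREEN,
    "redacted_client_summary": Tier.AMBER,
    "raw_client_data_public_tool": Tier.RED,
    "unverified_case_law": Tier.RED,
}

def check_use_case(name: str) -> Tier:
    """Fail closed: any use case not classified in the policy is treated as red."""
    return USE_CASES.get(name, Tier.RED)
```

The fail closed default matters: a use case no one thought to classify should be blocked, not waved through.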
Layer 4: supervision and audit. Keep a register of which tools are approved, which matters use them, and who reviewed each AI assisted output. Run regular audits. If a judge asks why AI generated case citations ended up in a bundle, "no one checked" is not a defence.
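A minimal sketch of what one register entry might capture, with illustrative field names; in practice this would live in your matter management system rather than a standalone script.

```python
# Minimal sketch of a register entry for one AI assisted output.
# Field names are illustrative, not a prescribed SRA format.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsageRecord:
    matter_ref: str           # internal matter number
    tool: str                 # must appear on the approved tools list
    output_summary: str       # what the AI produced
    reviewed_by: str          # qualified solicitor who verified the output
    review_date: date
    citations_verified: bool  # every authority checked against the primary source

record = AIUsageRecord(
    matter_ref="MAT-0001",
    tool="Claude Enterprise",
    output_summary="First draft of a disclosure cover letter",
    reviewed_by="A. Partner",
    review_date=date.today(),
    citations_verified=True,
)
```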
How does Claude specifically help give UK firms confidence?
Used on the right tier, Claude offers a set of controls that map directly onto what the SRA expects firms to demonstrate. The emphasis belongs on the right tier: the tier you buy determines whether those controls apply at all.
| Concern | Claude control | Tier required |
|---|---|---|
| Data not used for training | Commercial Terms of Service explicitly prohibit training on inputs | Claude for Work, Enterprise, API |
| Short retention | 7 day default on API, down from 30 days since September 2025 | API |
| Zero data retention | Prompts and responses not stored after processing | Enterprise, API with ZDR agreement |
| UK GDPR compliance | Data Processing Addendum available | All commercial tiers |
| Security certification | SOC 2 Type II | All commercial tiers |
| Regional processing | Available via AWS Bedrock and Google Vertex AI | Platform dependent |
| Audit logs | Immutable access logs | Enterprise |
| Admin controls | SSO, role based access, usage monitoring | Team, Enterprise |
Two things to flag. First, the "Claude Pro" consumer tier sounds like a business product but is not one. A 50 person law firm that equips its fee earners with Claude Pro accounts is still on consumer terms, and client data going through those accounts is, by default, eligible for training and extended retention unless every user has manually opted out. Second, ZDR does not override every feature. Some capabilities that require storing prompts, such as long term memory, are disabled under ZDR. That is the correct trade off for a regulated firm.
What should I do this week if I run a firm?
Four specific steps, in order.
- Audit what is already in use. Ask every fee earner and support staff member what AI tools they use for client work, on which devices, and on which account type. You will find more than you expect. This is a shadow AI inventory, not a disciplinary exercise.
- Close the consumer tool gap. If anyone is using a free or Pro tier of any AI tool with client data, stop it this week. Move them to an enterprise tier of an approved tool, or back to manual processes until you have one.
- Write the one page policy. Green, amber, red use cases. Name the approved tools. Name the COLP as the accountable owner. Circulate it, have every fee earner confirm they have read it, and keep the record.
- Build the audit trail. Any AI assisted output that leaves the firm, especially anything going to court or to a client, needs a verification step by a qualified solicitor. Document it. A tick box on your matter management system is enough, provided it is actually used.
None of this requires a large budget. It requires a decision at partner level that AI governance is a real operational responsibility, and a compliance officer with the authority to enforce it.
Frequently asked questions
Is Claude safe for UK law firms?

On the right tier, yes. On Claude for Work, Claude Enterprise, or the Anthropic API with appropriate settings, client data is not used for training, retention is 7 days or less, and a UK GDPR compliant Data Processing Addendum is available. On consumer tiers, Claude Pro or Free, it is not suitable for confidential client data without further configuration.

Does the SRA allow solicitors to use AI?

Yes. The SRA has been explicit that firms are free to use AI tools provided they meet the existing Code of Conduct. Principle 7 (acting in the client's best interests), Rule 6 (confidentiality), and Rule 2 (business systems and record keeping) all apply. The SRA has not set specific operational AI standards yet, but it has signalled that a Good Practice Note on AI use and client data is coming in 2026.

What is zero data retention and do we need it?

Zero data retention is an enterprise agreement where prompts and AI responses are not stored by the provider after processing, except where required by law. For firms handling particularly sensitive matters, it removes the temporary storage window that consumer tools rely on. Not every matter needs ZDR, but if you act for regulated clients or handle sensitive commercial data, it is worth discussing with your provider.

Who in the firm is accountable for AI governance?

The Compliance Officer for Legal Practice. The SRA has been clear that COLPs are responsible for regulatory compliance when new technology is introduced. That means briefing the COLP before a tool is adopted, not after a complaint arises.

Do we need to tell clients we are using AI?

The position is still developing. Good practice is to update engagement letters to disclose that the firm uses AI tools to support efficiency while maintaining human oversight and confidentiality. The ICO has emphasised transparency around how personal data is processed. Not mentioning AI when it has materially shaped advice is a risk you should not take.

What about hallucinations in legal research?

Treat any AI generated case citation, statutory reference, or legal authority as unverified until a qualified solicitor has checked it against the primary source. That is the single most important rule. Ayinde and Ndaryiyumvire both reached the courts because this rule was not followed.

Is a firm wide ban on AI a safer option?

Usually not. Blanket bans tend to push usage underground onto personal devices and consumer accounts. You lose the ability to see what is happening with client data. Controlled adoption, with approved tools, a written policy, and enforced supervision, gives you both the efficiency gains and the regulatory cover.