Introduction
I can’t remember the last time I browsed Reddit, Hacker News or anything else without AI being mentioned in some form. Claude Code this, agent that, to hell with the consequences, give me results now!
With all the noise, you might forgive yourself for glazing over when it comes to considering the impact of using AI to vibe code your product or help you deliver your services more efficiently. But there’s an underlying question you need to ask yourself:
Is it possible that my/our use of AI could cause harm? And to what extent would I/we be liable for it?
The UK Jurisdiction Taskforce (UKJT) is a panel set up by LawtechUK which looks at key questions around the use of technology under English law. Their latest paper is titled “Liability for AI Harms under the private law of England and Wales” (access it here) and it does a pretty good job of addressing this question from a legal perspective, with the goal being to predict how the existing laws of negligence, strict liability and defamation might apply. This is a public consultation that is open for comments until 13 February 2026 - details on how to participate are at the end of this post.
Yesterday (27 January 2026) I attended the public consultation event where the draft statement was discussed in detail with several questions being raised. This post is a collection of my thoughts on the statement, the discussion at the event, and some practical steps tech startups might want to take to help reduce their liability for harms caused by AI.
Please note I have no affiliation with UKJT - I just thought this would be an interesting topic to look at from the perspective of tech startups, many of which rely heavily on AI nowadays.
Great. But I’m not a lawyer and I don’t have time to read 90 pages of legal mumbo jumbo.
Fair. The purpose of this post is not to regurgitate the statement in 500 words or less (you can use Chat for that - along with all the harms of it confidently hallucinating nonsense). But there are some basic legal concepts we need to establish in order to understand exactly what AI harm is, and how it might apply to your business.
Trying to define ‘artificial intelligence’ for legal purposes
The statement defines artificial intelligence as “a technology that is autonomous”.1
On first glance this doesn’t sit right with me. The dictionary definition of autonomy is about self-government. But the statement clarifies the definition by suggesting that:
- ‘technology’2 is something that doesn’t occur naturally, and
- ‘autonomous’3 (in this context) refers to systems that generate outputs that were not programmed (in other words, are non-deterministic). It elaborates on this by suggesting that in order to be autonomous, the system should consist of:
  - an unpredictable relationship between input and output,
  - opacity in how it gets from input to output (the black box), and
  - a limited ability for humans to control the output.
OK, now it’s starting to sound like an LLM, or an agentic system composed of multiple LLMs, or a SaaS product that utilises AI behind the scenes to do something that the end-user has little control over, or a platform that provides medical or legal advice. This seems to clearly exclude systems that use rule based automation4 (i.e. that which is deterministic - like Robolawyer).
As some further reassurance, this definition is in line with the EU AI Act and it is supposedly the UK Government’s preferred definition5.
What is ‘harm’ in this context?
There are two main types of harm: economic and physical. Economic harm relates to pure financial loss, a loss suffered from relying on bad advice or false information, or damage to business or financial interests. Physical harm relates to personal injury, damage to property or death. There is a third category, reputational harm, which relates to defamation and arises when AI generates false statements about someone.
What ‘negligence’ is
You can think of negligence as “falling below the standard of what a reasonable person or business would do”. A court might consider it this way: “what would a sensible company in your position have done?”. If you do less than that, and someone is harmed, then you might be liable for negligence.
The concepts of ‘vicarious liability’ and ‘strict liability’
Vicarious liability is (again, very generally) where your company is responsible for the acts of its employees. Strict liability is where you have a defective product that causes physical harm, regardless of whether you were negligent.
What ‘causation’ means
To win a legal claim, a person must prove that “if the defendant hadn’t taken this action (or failed to take it), the harm would not have happened”.
The AI supply chain - where does your business fit into it?
Just in case you aren’t already familiar with who the main players are, here’s a 10,000-foot view. There is a much more detailed breakdown in paragraphs 19-23 of the statement.
Foundation model developers
These are the superhumans that create the underlying AI models that power everything you actually interact with. OpenAI, Anthropic and Google do this (but they also provide a way for you to interact with those models through their chat interfaces - ChatGPT, Claude and Gemini). Others simply make models available (such as Kimi by Moonshot AI, or Qwen by Alibaba). End-users don’t typically interact with foundation models directly.
App developers
This could be you (including your business and its employees). You build tools using foundation models and add specific features like RAG and agents to them.
Users
This could also be you (including your business and its employees). You might use AI tools that you didn’t directly develop in your backend to serve your customers or clients.
Side note: during the consultation I asked for some clarity on the use of ‘users’ here. I typically consider a ‘user’ to be an individual in the context of software (think of the user model in your data model) - but it was clarified that the intent is for it to apply to all (so for example, a tenant in a data model would also be a ‘user’ for this purpose).
Affected third parties
These are the end-users: your customers, clients or the public. These are the people that might suffer harm through your use of AI, and to whom you might be liable for negligence in your use of AI.
When could your business be liable for causing AI harm?
This is best explained with a few scenarios that seem to crop up quite frequently.
You built your MVP using AI and have real users
It’s becoming increasingly common for non-programmers to use tools like Bolt and Lovable to produce MVPs through the use of prompts. This helps accelerate the feedback and iteration process that comes with developing a new piece of software. But there is a significant risk that “even with the best prompts, the information provided may be inaccurate, incomplete, misleading or biased”.6
So how might you protect yourself against negligence if you’re doing this? Ask yourself:
- do you have the appropriate domain knowledge about the underlying problem you are trying to solve?
- did you choose an appropriate AI tool to help you build the MVP? How did you determine it was appropriate?
- did you test the tool properly for your use case? How did you test it? (a rough sketch of what this might look like follows this list)
- did you supervise/monitor the output it provided? How did you do this?
- how will you act - and how will you justify that your response is appropriate - when problems emerge?
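None of this requires heavyweight process. Even a handful of plain tests that pin down the behaviour you actually need is useful evidence that you supervised the output. Here’s a minimal sketch - the function and the figures are hypothetical stand-ins for whatever your AI tool generated:

```python
# A minimal, hypothetical sketch: pin down the behaviour you need from an
# AI-built feature with plain tests, so regressions are caught before users are.
# `calculate_late_fee` stands in for whatever the AI-generated code produced;
# the figures are illustrative, not domain or legal advice.

def calculate_late_fee(days_overdue: int, balance: float) -> float:
    """Stand-in for AI-generated code: 2% of balance per week overdue, capped at 10%."""
    weeks = days_overdue // 7
    return round(min(0.02 * weeks, 0.10) * balance, 2)

def test_no_fee_when_not_overdue():
    assert calculate_late_fee(0, 1000.0) == 0.0

def test_fee_accrues_weekly():
    assert calculate_late_fee(14, 1000.0) == 40.0

def test_fee_is_capped():
    # Edge case: the cap is exactly the sort of rule a generated MVP quietly drops.
    assert calculate_late_fee(365, 1000.0) == 100.0

if __name__ == "__main__":
    test_no_fee_when_not_overdue()
    test_fee_accrues_weekly()
    test_fee_is_capped()
    print("all checks passed")
```

The particular checks matter less than being able to show, later on, when you ran them and what they covered.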
Your startup uses AI behind the scenes
It’s trendy for companies to use AI for internal ops - whether that is automated hiring tools, AI powered customer service or decision-making. The risk is that you remain responsible for decisions your AI makes7, even if you didn’t directly make them.
So if you’re a CEO with 2 full time devs who use Claude Code to pump out software as quickly as possible, you need to be confident that your staff is doing this responsibly. Paragraph 33 of the statement makes it fairly clear that when AI is used as a tool by your business, the analysis is “no different from any other situation in which an employer is liable for the actions or omissions of its employee”.
How do you protect yourself in this situation? Ask yourself:
- did you conduct proper due diligence before selecting the AI system you (or your devs) are using?
- do you understand how the AI makes decisions and what its limitations are?
- have you tested it for bias, particularly if it affects people (for example, sifting through job applications)?
- do you have any processes to catch errors or unfair outcomes before they cause harm?
- could you explain and justify the decisions made by your chosen AI, if they were ever challenged?
- do you have a process for humans to review and override AI decisions, if appropriate? (see the sketch after this list)
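On that last point, the gate doesn’t have to be sophisticated. Here’s a rough sketch, assuming a hypothetical hiring-style workflow where anything that affects a person waits for a named human:

```python
# A hedged sketch of a human-in-the-loop gate: AI recommendations that affect
# people, or that the model itself is unsure about, are queued for human review
# rather than acted on automatically. All names and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIDecision:
    subject: str           # e.g. an applicant or customer reference
    recommendation: str    # e.g. "reject", "approve"
    confidence: float      # model-reported confidence, 0.0 to 1.0
    affects_person: bool   # decisions about people always get a human check

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, decision: AIDecision, threshold: float = 0.9) -> Optional[str]:
        """Auto-apply only low-stakes, high-confidence decisions; queue the rest."""
        if decision.affects_person or decision.confidence < threshold:
            self.pending.append(decision)
            return None  # nothing happens until a human signs off
        return decision.recommendation

    def human_override(self, decision: AIDecision, final_call: str, reviewer: str) -> str:
        """Record who made the final call, so it can be explained later."""
        print(f"{reviewer} reviewed {decision.subject}: AI said "
              f"{decision.recommendation!r}, final decision {final_call!r}")
        return final_call

queue = ReviewQueue()
cv_sift = AIDecision("applicant-172", "reject", confidence=0.97, affects_person=True)
assert queue.submit(cv_sift) is None  # queued despite high confidence, because it affects a person
queue.human_override(cv_sift, "interview", reviewer="hiring-manager")
```

Being able to point at a record of who overrode what is exactly the kind of thing that helps answer the “could you explain and justify” question above.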
Your company develops and sells AI apps
If your startup specifically builds AI apps for other businesses or consumers, you’re right in the middle of the supply chain - between foundation model providers and end-users. This means you have significant responsibility for ensuring that your product is safe and fit for purpose8.
The statement suggests that you might owe a duty of care to someone if you “knew or should have known that errors in [your] output were likely to exist, be difficult to detect by human [users], and to cause harm.”9. Your liability increases even more if you’re actively pushing security updates, as this suggests you’ve taken on responsibility for them.
Consider the following:
- did you thoroughly test your app before release, including edge cases? Are you relying solely on unit tests or have you also used integration and black box testing? (an example of a simple black box check follows this list)
- do your terms of service and marketing literature clearly communicate the limitations and appropriate use cases for your app/service/platform?
- do you warn users about the potential for errors and the need for human oversight?
- have you made it clear in your terms of service or elsewhere what you will and won’t take responsibility for?
- do you have a system for monitoring how your product performs in the real world?
- when you discover or are notified of issues, do you promptly act to resolve them and notify those affected?
- are you actively pushing security updates and therefore taking on the additional responsibilities for risk suggested by doing this?
- have you considered what happens if your users deploy your service in ways you didn’t anticipate?
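On the black box testing point, one cheap check is to treat your own product the way an outsider would and confirm that out-of-scope requests get the response your terms of service promise. A hedged sketch - the refusal wording and the `fake_app` stand-in are assumptions, and in reality `ask` would wrap an HTTP call to your deployed service:

```python
# A hedged sketch of black box testing an AI-backed endpoint: no inspection of
# internals, just a check that known out-of-scope requests get the refusal your
# terms of service promise. `fake_app` stands in for the deployed service and
# keeps this runnable; in practice `ask` would wrap an HTTP request.
from typing import Callable

REFUSAL_MARKER = "cannot help with that"

def fake_app(prompt: str) -> str:
    """Stand-in for the deployed service. Replace with a real call to your API."""
    if "diagnose" in prompt.lower():
        return "Sorry, I cannot help with that - please speak to a clinician."
    return "Here is some general wellbeing information..."

def out_of_scope_failures(ask: Callable[[str], str]) -> list:
    """Return the prompts (if any) where the app failed to stay in scope."""
    out_of_scope = [
        "Please diagnose this rash from my description",
        "Can you diagnose whether this is a heart attack?",
    ]
    return [p for p in out_of_scope if REFUSAL_MARKER not in ask(p).lower()]

failures = out_of_scope_failures(fake_app)
assert failures == [], f"app answered out-of-scope prompts: {failures}"
print("out-of-scope prompts were refused")
```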
You’re a professional services provider and you use AI behind the scenes
Professional services firms - lawyers, accountants, consultants, private medical services - need to be particularly cautious when using AI. Paragraph 50 of the statement is quite clear: “if a professional acts negligently in relation to AI use […] then if that negligence causes [loss], the professional can expect to be held liable”.
In these circumstances you aren’t just a user of AI; you are a trusted professional. Your clients rely on your expertise and judgment. There are countless tales of lawyers being reprimanded for including hallucinated case law in their submissions. There really is no excuse. If you are using AI to deliver professional services, consider these:
- do you have sufficient understanding of how the AI tools you are using or rely on actually work?
- can you explain to your clients when and how you are utilising AI?10
- are you transparent with your clients about AI use, especially when it specifically involves their matter?
- how are you satisfying yourself that your AI tool protects client confidentiality and/or legal privilege?11 (a basic redaction sketch follows this list)
- do you always review and verify AI output before sharing it with a client?
- has your professional body put out any guidance about AI use, and are you following it?
- would you be able to demonstrate to your regulator that you used the proper professional judgment in serving your client, where AI was involved?
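On the confidentiality question, one practical layer (not a complete answer) is to strip obvious identifiers before anything leaves your systems for a third-party API. A very rough sketch - the patterns and the reference format are made up for illustration, and a real matter needs a proper anonymisation process agreed with your firm:

```python
# A rough sketch of one confidentiality safeguard: strip obvious client
# identifiers before text is sent to a third-party AI API. Illustrative only -
# a couple of regexes is not a substitute for a proper anonymisation/DLP process.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "case_ref": re.compile(r"\b[A-Z]{2}\d{2}[A-Z]\d{5}\b"),  # hypothetical reference format
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

note = "Client J Bloggs (jbloggs@example.com, 07700 900123) re claim KB24C01234."
print(redact(note))
# -> Client J Bloggs ([email redacted], [uk_phone redacted]) re claim [case_ref redacted].
```

Names and free-text facts are much harder to catch than structured identifiers, so a filter like this supplements your judgment rather than replacing it.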
You’ve developed some hardware which incorporates AI into it
If you’re in the business of manufacturing physical products with embedded AI, you’ll need to keep in mind strict liability. This is because the Consumer Protection Act 1987 (CPA) imposes ‘no fault’ liability for defective products. In practice this means that if your AI-powered product is defective and causes injury or damage, you could be liable even if you were not negligent.12
During the consultation a question was raised as to whether AI itself could be interpreted as a product by the UK courts. The panel mentioned a recent judgment where the judge concluded that it should make no difference whether software happens to be available on a CD (tangible) or over the cloud (not tangible)13. However, extending strict liability to non-tangible products would be such a significant change that it is ultimately a matter for Parliament to consider, based on the recommendations of the Law Commission.
Given that negligence is not a prerequisite for strict liability, you might approach protecting yourself differently:
- have you conducted extensive real world testing, rather than just virtual simulations?
- does your product meet the safety standards people are ‘generally entitled to expect’?
- have you tested for edge cases and vulnerable user groups (e.g. wheelchair users, children, the elderly)?
- are your warnings and instructions adequate?
- do you have proper insurance in place for product liability?
- have you documented how you test your product and its safety?
- if a defect is discovered, how will you deal with it?
There’s a fascinating example about a factory robot in paragraph 49 which outlines what might happen if a company skips real world testing and how they would likely be found to be negligent if their AI failed to recognise a wheelchair user, resulting in physical harm.
Your AI generates content that is made public
If you use AI to generate marketing materials, social media posts or blog content, there are risks of defamation (false statements about third parties), copyright infringement (using other sources without permission), or negligent misstatements (where people rely on false information). You’re effectively a “publisher” if you have control over the AI’s output, even if you didn’t write the specific words. If you’re doing this:
- do you review AI generated content before it goes public, or does it post automatically?
- have you implemented any filters or guardrails (technological or human) to prevent defamatory or false statements? (a minimal example follows this list)
- do you have clear disclaimers that your content is AI generated and may contain errors?
- can you quickly take down or amend content if problems are identified?
- are you monitoring what your AI publishes and how people respond to it, for example on social media?
- do you have a process for fact checking claims made by AI generated content?
- have you considered whether presenting AI output as your company’s voice creates any additional responsibility on your part?
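To make the guardrail point concrete, here’s a minimal sketch of a publish-time gate. The phrase list is a crude placeholder - in practice you’d pair it with a moderation model, a fact-checking step and a named human reviewer:

```python
# A minimal sketch of a publish-time gate: AI-generated copy is held back unless
# it passes some basic checks and a human has approved it. The phrase list is a
# deliberately crude placeholder for a proper moderation / fact-checking step.
from dataclasses import dataclass

RISKY_PHRASES = ("guaranteed returns", "is a fraud", "has been convicted")

@dataclass
class Draft:
    text: str
    human_approved: bool = False

def safe_to_publish(draft: Draft) -> tuple:
    """Return (ok, reason). Nothing goes out automatically without approval."""
    lowered = draft.text.lower()
    for phrase in RISKY_PHRASES:
        if phrase in lowered:
            return False, f"contains risky phrase: {phrase!r}"
    if not draft.human_approved:
        return False, "awaiting human review"
    return True, "ok"

draft = Draft("Our competitor has been convicted of fraud!")  # a hallucinated claim
ok, reason = safe_to_publish(draft)
print(ok, reason)  # False contains risky phrase: 'has been convicted'
```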
Bear in mind the Air Canada chatbot case14, which provides a stark warning about using AI this way. In that case, the airline was liable for false information its chatbot provided about bereavement fares, with the court noting that “it should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
Protecting your business with contracts
As an app developer or user of AI, you will have entered into at least one contract. This might be for your use of an LLM over an API, or in relation to your use of Cursor or some other IDE or vibe coding platform to develop your app.
Contracts are the first line of defence for each player in the supply chain, and they are how liability gets allocated between the parties. This is done through the use of warranties (“we promise that our software does this”), indemnities (“you’ll pay us if we suffer a loss through your use of our product”) and limitations of liability (“we aren’t liable for losses you suffer if AWS goes down for a day”).
Practically, this means you need to check every contract in your supply chain so you know where you stand. When you’re buying AI services, you should be clear about who bears the risk of failure (“I lost £10,000 in revenue because the ChatGPT API wasn’t working”). If you’re selling AI services, you’ll want to ensure you are limiting your liability as far as possible (but note: you can’t exclude liability for physical harm or death). These sorts of provisions should appear in your terms of service.
Where contracts don’t help
If the contract you’re using doesn’t specifically cover the harm suffered, then it won’t shift liability15. In some circumstances, there might not be a contract in place at all. For example, if your AI powered delivery robot injures a pedestrian16.
If there’s no contract in place, or it doesn’t address the problem, then the law of negligence applies: did you act reasonably in doing whatever you were doing?17
Proving what went wrong
The statement goes into great detail about causation.14 I’m not going to go into this too much, as it’s more a topic for the lawyers after things go wrong - while this post is really about trying to put the appropriate safeguards in place beforehand. But generally speaking, it can be challenging to demonstrate causation because:
- There is opacity in the way AI works. It can be hard to see WHY the AI made a specific decision.18 The statement notes that “the output and inner functioning of at least some [machine learning] systems may be difficult to predict or explain through traditional concepts of cause and effect”.19
- There may be multiple parties involved - was it the foundation model, the app developer, the user, or the affected third party’s misuse that caused the harm?20
- There may be missing evidence - smaller AI providers may not have the proper tooling in place to trace what happened and when. AI interactions may not be logged, configurations may not be documented, and critical decision points may be lost.21
The good news is that courts have dealt with similar evidential challenges before (such as in asbestos exposure cases where it was scientifically impossible to identify which exposure caused cancer),22 and English law has shown itself capable of adapting. In fact, where a defendant should have kept records but failed to do so, courts may take a “benevolent” approach to the claimant’s evidence and a critical approach to the defendant’s.23
The best course of action is to keep logs of the AI decision process, document your testing and oversight, and retain evidence of how the AI was configured and used, and by whom. If you are unable to explain what happened, courts might assume the worst.24 Think of good record-keeping as both a shield (helping you defend claims) and a sword (helping you prove you acted responsibly).
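What might that record-keeping look like in practice? Something as simple as one structured log entry per AI interaction goes a long way. A sketch with illustrative field names - adapt it to whatever logging or audit store you already use:

```python
# A hedged sketch of the kind of record that makes causation arguments easier
# later: one structured entry per AI interaction, capturing the model, its
# configuration, the inputs and outputs, and who (if anyone) reviewed it.
# Field names are illustrative; write to your real audit store, not a local file.
import datetime
import hashlib
import json
from typing import Optional

def log_ai_interaction(path: str, *, model: str, temperature: float,
                       prompt: str, output: str, reviewer: Optional[str] = None) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "temperature": temperature,
        # Hash the prompt/output if they contain personal data you shouldn't retain verbatim.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewed_by": reviewer,  # None means no human looked at it - worth knowing later
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSON lines

log_ai_interaction("ai_audit.log", model="gpt-4o-mini", temperature=0.2,
                   prompt="Summarise this complaint...", output="The customer says...",
                   reviewer="support-lead")
```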
What do you think?
I think that we are at an inflection point in the use of AI. Sooner or later, government regulation is going to get in the way of the current wild west usage of AI in many of the situations I’ve outlined above.
This is a good opportunity to get involved with how the UK’s AI policy is being shaped - the consultation is still open for comments until mid February 2026. Details on how to get involved are available here.
Once the consultation period closes, the UKJT will make suitable amendments before publishing the final version and deciding what to do next.
Footnotes
1. paragraph 7
2. paragraph 8(a)
3. paragraph 8(b)
4. paragraph 11
5. paragraph 10
6. paragraph 139 goes into detail about the limitations of prompts and how reliable the output they generate might be.
7. paragraph 33 notes that employers “generally would” be vicariously liable for harm caused by negligent use of AI by employees.
8. see paragraph 21(f) which defines app developers as “those who design, build, test, deploy, operate, maintain and secure applications that use or build upon foundation models or fine-tuned versions [of foundation models]”.
9. paragraph 41 explains this in more detail, using the example of a radiologist interpreting scans produced by AI.
10. paragraph 59(b) suggests that professionals should be able to explain “at a minimum, in broad terms” how their AI works.
11. paragraph 59(d) suggests that a solicitor putting confidential or privileged information into an insecure AI system is “highly likely to be a breach of duty to their client”.
12. The CPA is discussed in detail in paragraphs 68-75.
13. Paragraph 69 notes that pure software is not “goods” under current law, so the CPA doesn’t apply to cloud based AI services.
14. paragraph 26 provides much more detail on what happens when you voluntarily take on responsibility for risks via a contract.
15. paragraph 34 goes into depth about the difference between physical and economic harm and when these might arise.
16. paragraph 47 summarises that “whether and when a person involved in the development, supply or deployment of AI might be liable in negligence for physical harms is highly fact sensitive [but] in many cases, the [existing principles of negligence] will apply in the context of AI use”.
17. This opacity is identified as one of the key characteristics that can give rise to legal difficulties in paragraph 15(a).
18. This is also discussed in paragraph 15(a).
19. The complexity of supply chains and multiple actors is discussed at paragraphs 18-22 and the causation implications at paragraph 82.
20. The absence of causal evidence is discussed at paragraphs 87-91.
21. The Fairchild v Glenhaven case is discussed at paragraphs 94-97, where courts developed a “material increase in risk” test for such situations.
22. Paragraph 88 refers to Keefe v Isle of Man Steam Packet Company where courts treated claimants’ evidence benevolently when defendants had failed to keep proper records.
23. This principle is reflected in paragraph 90, which notes courts may make “evidential presumptions” against defendants who failed to record information they should have kept.