Shadow AI and Privilege Forfeiture: Non-Technical People Do Not Understand AI
In R (on the application of Munir) v Secretary of State for the Home Department, the UK Upper Tribunal addressed a development that has recently been plaguing legal proceedings – lawyers using generative AI tools with little supervision or technical understanding. This has left courts contending with hallucinated case law and non-existent authorities cited in legal arguments. The case also shows that the judiciary itself may not fully understand how AI works at a technical level.
Outlining the two cases
The judgment covered two separate cases. The first concerned a level 3 accredited adviser at TMF Immigration Lawyers who was responsible for an immigration matter. As part of an application for permission to appeal a decision to the Upper Tribunal, the adviser had cited a nonexistent case whose case number in fact referred to a different, unrelated matter. Though the adviser attested that no large language models were used in drafting the grounds of appeal, and that company policy either restricted or outright prohibited the use of such models for any legal work, they did acknowledge that publicly available AI platforms were occasionally used to assist with administrative tasks.
However, the adviser ultimately acknowledged that while no AI tools had deliberately been used in the proceedings, AI-related features are now embedded everywhere and easily encountered in ordinary web use – for example, the “AI Mode” of a Google search query – and their output can be mistaken for an authoritative source. For a non-technical person, the output of a Google search, which over its 25-plus-year existence has become a mainstay of the web, has arguably become more confusing: the embedded AI results follow a similar design template to the top results of an ordinary web query. That may also explain why the adviser neglected to double-check the hallucinated court case – top-ranked query results have long been treated as authoritative, to the point that people rely on cognitive shortcuts when gathering information.
However, there is also a second element – the Tribunal held that uploading confidential documents to what it termed “open-source” AI tools, such as ChatGPT, places that information in the public domain. On the Tribunal’s reasoning, this breaches client confidentiality and irreversibly waives legal privilege, and such breaches should be referred to the Information Commissioner’s Office. The Tribunal explicitly distinguished these platforms from what it called closed-source AI tools (e.g., Microsoft Copilot), which in its view do not publish information to the public domain and therefore avoid these specific risks.
The second case concerned an immigration matter in which, following a negative outcome for the person in question, judicial review proceedings were lodged in the Upper Tribunal. The application formed part of an 89-page bundle, which included a nominally concise Statement of Facts signed by the solicitor representing the claimant – in fact twenty pages of single-spaced type. The summary was found at various points to contain misleading statements, citing either nonexistent cases or cases that had been materially misrepresented. An order was made to identify the author of the document bundle.
Responding to the order, the law firm acknowledged that a third person – a part-time trainee lawyer – had drafted the grounds of judicial review. The responsible solicitor blamed outdated blogs and personal notes, and alluded to his mother’s recent stroke as having made it difficult for him to take proper care of the proceedings. It also emerged that the trainee lawyer in question was the solicitor’s brother. Further investigation showed that the firm was unable to verify which other cases the brother had worked on in the past, suggesting that this kind of carelessness with legal proceedings may have been more systemic within the company.
The solicitor in this case likewise showed a worrying lack of understanding of how pervasive AI has become, mirroring the accessibility challenges that surfaced in the first case.
On these findings, the Tribunal asserted that legal professionals are obliged to ensure that legal arguments and documentation provided to the First-tier Tribunal or Upper Tribunal are factually and legally accurate. This extends to work delegated to other fee-earners – supervisors remain ultimately responsible. Most important, however, is the judgment’s finding that uploading confidential documents to a publicly available AI platform such as ChatGPT (mistakenly labelled in the judgment as open-source, which ChatGPT is not) is to place that information on the internet in the public domain. This entails a breach of client confidentiality and a waiver of legal professional privilege, and such conduct can also warrant referral to a regulatory body.
What are the challenges?
The Munir judgment illustrates the severe operational risks generated when non-technical professionals engage with AI. First, users frequently conflate traditional search engine queries with generative AI outputs. This blurs the line between retrieving existing public records and generating novel, potentially hallucinated text, reinforcing the absolute necessity of independent source verification. This is the next stage of the SEO-optimised web: it is not only bad hyperlinks pushed to the top of results that act as threat vectors, but AI summaries visually designed in such a way that a non-technical person cannot immediately distinguish them from more authoritative sources.
Second, the Tribunal itself demonstrates a fundamental misunderstanding of AI infrastructure. The judgment erroneously categorises ChatGPT as an “open-source” tool that places information directly into the “public domain.” Mechanically, this is false. ChatGPT is a proprietary, closed-source model; entering a prompt transmits data to a private entity, not to a searchable public webpage. However, under standard consumer Terms of Service, AI providers may retain the right to use this input to train future model iterations. Because a model can subsequently reproduce such data to unrelated third parties, the legal expectation of confidentiality is destroyed. So while the Tribunal’s technical reasoning is flawed, its legal conclusion stands: using consumer-grade AI can constitute an irreversible waiver of privilege.
Furthermore, the Tribunal creates a false dichotomy between platforms, erroneously suggesting that tools like Microsoft Copilot are inherently safe. The critical distinction is not the brand of the AI but the licensing tier. An enterprise licence for Microsoft Copilot, ChatGPT, or Gemini – governed by strict Data Processing Agreements that prohibit model training on user data – safeguards confidentiality, at least under the current generation of models and their behaviour. Conversely, using Copilot on a personal device under consumer terms triggers exactly the same data-sharing liabilities as the free tier of ChatGPT. Given how rapidly the field is evolving, however, even this understanding may become outdated sooner rather than later.
This judicial technical illiteracy creates a distinct hazard for corporate governance. If an executive incorrectly assumes compliance by working on a corporate Copilot account in the office, but subsequently logs into a personal Copilot account at home to finish drafting a sensitive memorandum, the data is instantly compromised, resulting in the loss of privilege over all related communications. To prevent worst-case privilege forfeiture, in-house counsel must audit and strictly dictate the specific data-sharing agreements governing all internal AI usage. An absolute prohibition on consumer-grade “shadow AI,” coupled with mandatory routing through contractually ring-fenced enterprise environments, is the only viable mechanism to secure corporate data sovereignty.