
By Patrick Rode of WINDORFER RODE
This article follows the 23rd WLF Litigation Summit panel on “Artificial Intelligence Disputes: Liability, Regulation & Ethics in Litigation” | Dubai, January 20, 2026
The question is no longer whether artificial intelligence will generate litigation – it’s whether companies can prove they deployed AI responsibly when disputes inevitably arise. This was the central theme that emerged from a wide-ranging discussion with legal practitioners representing Brazil, Turkey, the UAE, Hong Kong, and the United States at this year’s WLF Litigation Summit in Dubai.
A. The Regulatory Watershed Moment
In seven months, the EU AI Act’s high-risk provisions become fully enforceable. Penalties reach €35 million or 7% of global annual turnover – potentially billions for large technology companies. But the financial risk is only part of the story.
What fundamentally changes in August 2026 is the burden of proof in AI-related litigation. The AI Act’s risk-based classification system creates mandatory obligations for AI systems deployed in eight high-risk areas: biometric identification, critical infrastructure, education, employment, access to essential services (including credit scoring), law enforcement, migration control, and administration of justice.
Companies operating in these sectors must demonstrate human oversight, maintain technical documentation, conduct algorithmic impact assessments, and ensure accuracy and robustness. These aren’t compliance checkboxes – they’re the evidence that will determine litigation outcomes.
As we’re already seeing in pre-litigation discovery, plaintiffs’ counsel are targeting AI Act compliance documentation. Without it, defendants walk into court with no defense. The “black box” argument is dead. The “trade secret, can’t disclose” defense is dead.
Come December 2026, the EU’s Revised Product Liability Directive completes this transformation by explicitly classifying AI as a “product” and reversing causation burdens. If plaintiffs can show that an AI system caused harm but that they could not access the information needed to prove the defect – the near-universal situation with proprietary algorithms – defendants must prove their system wasn’t defective.
B. Global Convergence Despite Fragmented Paths

The panel revealed a striking pattern: despite radically different legal systems and regulatory approaches, jurisdictions worldwide are converging on identical core principles.
Brazil is advancing comprehensive AI legislation (PL 2338/2023, approved by the Senate in December 2024) that adopts the EU’s risk-based model while distributing liability along the AI supply chain. The Brazilian Data Protection Authority (ANPD) will coordinate enforcement, with penalties reaching R$50 million or 2% of revenue. What’s particularly notable is how Brazil is embedding AI governance within its strong constitutional tradition of individual rights – creating a hybrid of European regulatory structure and Latin American rights-based enforcement.
Turkey is aligning with EU AI Act principles as its 2021-2025 National AI Strategy concludes. While no AI-specific law exists yet, Turkish courts are applying constitutional protections – human dignity, privacy, equality – to algorithmic decisions, and the data protection authority (KVKK Board) is gaining enforcement capability. This constitutional framework may ultimately prove more flexible than rigid statutory schemes.
The UAE presents a different model entirely: innovation first, regulation when necessary. The National AI Strategy 2031 positions the Emirates as an AI hub, relying on voluntary ethics frameworks while free zones like DIFC and ADGM experiment with tailored approaches for fintech and AI clusters. Litigation remains minimal as the focus stays on economic development – but procurement disputes for government smart city projects are emerging.
Hong Kong navigates between European principles and mainland Chinese influence. The common law system allows judicial flexibility, and the Privacy Commissioner is active, but Hong Kong appears to be watching rather than leading – a strategic wait-and-see as both the EU and China develop their frameworks.
The United States remains the outlier – no federal law, aggressive state action, and litigation creating de facto standards. Colorado’s AI Act takes effect June 30, 2026, becoming the first comprehensive state law addressing algorithmic discrimination. New York City requires bias audits for automated employment decision tools. California has multiple bills pending. But the real standardization is happening in courtrooms.
C. Litigation as Regulatory Force

Mobley v. Workday, certified as a collective action in May 2025, fundamentally changed vendor liability. The court held that Workday, whose platform screens job applicants with AI, could be liable as an “agent” of its employer customers even without a direct employment relationship. The potential class encompasses hundreds of millions of applicants rejected by the system since 2020.
Garcia v. Character.AI, where a mother sued after her 14-year-old son’s suicide following extensive chatbot interactions, saw a federal court allow claims to proceed that treat AI outputs as a “product” rather than protected speech – rejecting the First Amendment defense and opening the door to strict product liability for AI systems.
These aren’t theoretical risks. They’re setting precedent right now.
D. The Ethics Dimension: From Aspiration to Evidence
Perhaps the most significant development discussed was the transformation of ethics from aspirational principle to evidentiary practice and litigation argument.
Plaintiffs are no longer limiting claims to statutory violations – they’re alleging failures of ethical obligation: failure to audit systems for bias, failure to prevent foreseeable discriminatory harm, failure to ensure explainability. Courts are receptive. The question they’re asking is: “Can you demonstrate your system was deployed responsibly?”
Meanwhile, the legal profession faces its own reckoning. The Mata v. Avianca sanctions – where lawyers were penalized in 2023 for submitting ChatGPT-generated fake citations – established that lawyers bear professional responsibility for verifying AI output. Bar associations globally are now developing guidance, but the fundamental duty is clear: verify every AI-generated citation, case, and statute. Efficiency cannot trump accuracy.
E. Three Categories, One Question

AI disputes are crystallizing into three categories, each shaped by the converging regulatory frameworks described above:
- AI as source of harm: Hiring algorithms that discriminate, credit scoring systems that deny financing without explanation, content moderation that damages reputations.
- AI as contractual malfunction: Promised capabilities that fail, accuracy thresholds unmet, performance warranties breached.
- AI as rights infringement: GDPR Article 22 discrimination claims, data protection violations, intellectual property disputes over training data.
Across all three, courts ask the same question: “Who controlled this system, and can they prove they exercised that control responsibly?”
Liability is following control. Companies cannot contract out of responsibility by using third-party AI tools. If you deploy it, benefit from it, control its use – you bear responsibility.
F. The New Era
The era of “move fast and break things” is over. We’re now in the era of “prove you moved responsibly, or pay the consequences.”
- For companies: Redesign contracts to include AI performance warranties, risk allocation indemnities, and mandatory compliance clauses. Conduct algorithmic impact assessments not because regulators require them, but because they’re future litigation evidence. Map your AI systems by risk level now – August 2026 is not far away.
- For litigators: Pursue algorithmic audit reports, model training logs, data lineage evidence. Challenge black box defenses through explainability expectations. Reframe technical errors as governance failures.
- For all of us: Recognize that documentation isn’t bureaucracy – it’s the paper trail that will determine who wins when AI systems fail.
The global message is unified: human oversight, traceability, bias prevention, demonstrable accountability. Different paths, same destination. And that destination is enforceable liability for those who cannot prove responsible AI deployment.
Technology may automate decisions. It cannot automate responsibility. Responsibility remains human. And that’s what we prove in litigation.

