AI faces a wave of litigation: three legal battles that will determine its future

The article discusses the growing legal challenges facing AI development, highlighting three major categories of litigation that will shape its future: intellectual property disputes over training data, privacy and data protection concerns, and ethical/liability issues when AI causes harm. It draws parallels to the early internet era (e.g., Napster's copyright battles) and presents decentralized AI (DeAI) as a potential solution.

  • Intellectual Property Litigation: AI companies face lawsuits (e.g., Getty vs. Stability AI, authors vs. OpenAI/Meta) over the unauthorized use of copyrighted data for training. Rulings in favor of creators could raise costs and slow innovation, advantaging large corporations.
  • Privacy & Data Protection: Cases like Clearview AI’s fines (€30.5M), OpenAI’s GDPR violation (€15M), and Amazon’s Alexa penalties ($25M) underscore stricter data governance needs, raising compliance burdens.
  • Ethics & Liability: Examples include Google’s Gemini generating inaccurate images, ChatGPT falsely accusing an Australian mayor, and Amazon’s biased hiring tool. These cases highlight demands for accountability and bias mitigation.

The article proposes decentralized AI (DeAI)—leveraging blockchain for transparent, user-controlled data—as a way to address these challenges by ensuring fairness, reducing IP conflicts, and enhancing privacy. The legal outcomes will critically influence AI’s trajectory, balancing innovation with regulation.

The World Economic Forum, held amid the snow and ice of Davos, Switzerland, has two unmistakable core topics: Trump and AI. Behind AI's grand narrative, a wave of legal proceedings is intensifying.

A similar story played out in the early internet era. Napster, a free music-sharing service, took off in 1999, but copyright lawsuits from artists and the music industry forced it to shut down in 2001. That episode accelerated the shift to paid, centralized digital music distribution: Apple launched the iTunes Music Store as a legal purchase platform in 2003, and subscription streaming services such as Spotify followed later in the decade.

Today, a similar struggle and evolution are playing out in the AI space. This article explores three core categories of AI-related litigation and argues that an inevitable trend is taking shape: decentralized AI (DeAI) is emerging as a solution to these legal and ethical challenges.

Three major legal battle lines for AI data

  1. Intellectual Property (IP) Litigation: Who Owns the Data Used for AI Training?

  2. Privacy and Data Protection Litigation: Who Controls Personal and Sensitive Data in AI?

  3. Ethics and Liability Litigation: Who Should Be Held Responsible When AI Causes Harm?

These legal battles will profoundly affect the future development of AI. Intellectual property disputes may force AI companies to pay licensing fees for training data, increasing data collection costs. Privacy lawsuits will promote stricter data governance, making compliance a key challenge and favoring privacy-focused AI models. Liability lawsuits will require clearer accountability mechanisms, which may slow down the application of AI in high-risk industries and lead to stricter AI regulation.

Intellectual Property Litigation: Who owns AI training data?

AI models rely on huge datasets of books, articles, images, music, and more, often scraped without authorization. Copyright holders argue that AI companies profit from their works without permission, and a series of lawsuits now turns on whether AI training constitutes fair use or copyright infringement.

  • In January 2023, Getty Images filed a lawsuit against Stability AI, accusing the company of infringing intellectual property rights by crawling millions of images from the Getty platform without authorization to train its AI model Stable Diffusion.

  • OpenAI and Meta are also facing similar lawsuits, accused of using pirated book data to train AI models, allegedly infringing the authors' copyright.

If courts rule in favor of content creators, AI companies will be forced to obtain legal licenses for their training data. Operating costs would rise significantly, since companies would need to negotiate and pay for the use of copyrighted materials. Licensing requirements could also limit the availability of high-quality training data, especially for small AI startups with limited funds that would struggle to compete with large technology companies. The likely result is slower innovation across the AI industry and a market structure favoring large companies wealthy enough to afford data-licensing costs.

Privacy and data protection litigation: Who controls personal data in AI?

AI systems process vast amounts of personal data, including conversations, search histories, biometric information and even medical records. Regulators and consumers are pushing back, demanding tighter controls on data collection and use.

  • Clearview AI, an American facial recognition company, has been penalized on both sides of the Atlantic for scraping images without user consent. In 2024, the Dutch data protection authority fined it €30.5 million, while several US states objected to its privacy settlement for failing to provide fair compensation.

  • In 2024, Italy's data protection authority fined OpenAI €15 million for violating the GDPR (the EU's General Data Protection Regulation), processing personal data without a proper legal basis, and failing to provide sufficient transparency. The regulator also found OpenAI's age-verification mechanism inadequate.

  • In 2023, Amazon was fined $25 million by the Federal Trade Commission (FTC) for indefinitely storing children's Alexa voice recordings.

  • Google is also facing a lawsuit for allegedly recording users without their consent.

Stricter privacy regulations will require AI companies to obtain explicit user consent before collecting or processing data. This will require more transparent policies, stronger security measures, and greater user control over data use. While these measures can enhance user privacy and trust, they may also increase compliance costs and slow down the development of AI.

Ethics and Liability Litigation: Who is Responsible When AI Makes Mistakes?

As AI plays an increasingly important role in decision-making in hiring, medical diagnosis, content moderation, and more, a key legal question has emerged: who is liable when AI makes mistakes or causes harm? Can companies be sued over an AI system's misleading, biased, or discriminatory behavior?

  • In February 2024, Google's Gemini AI was criticized for generating historically inaccurate images, such as depicting American founding fathers and Nazi soldiers as people of color. Some accused the AI of being "overly politically correct" and distorting historical facts. Google subsequently suspended Gemini's image generation function to improve accuracy.

  • In April 2023, an Australian mayor considered suing OpenAI after ChatGPT falsely claimed he was involved in a bribery scandal. The case highlights the legal challenges that could arise from AI-generated false information and defamation.

  • In 2018, Amazon scrapped its AI recruitment tool after discovering that it discriminated against women. Trained on a decade of past resumes, most of which came from men, the model learned to prefer male candidates and downgraded resumes containing the word "women's" or the names of women's colleges. The incident highlights the fairness challenges of AI in hiring.

If stronger AI liability laws take effect, companies will be forced to improve bias detection and transparency, making AI systems fairer and more accountable. If regulation is too lax, however, the risks of misinformation and AI-driven discrimination may grow, as companies prioritize rapid product iteration over ethical safeguards. Striking the balance between regulation and innovation will be a defining challenge.

Decentralized AI (DeAI): A Viable Solution

Against the backdrop of these legal battles, decentralized AI (DeAI) offers a viable way forward. Built on blockchain and decentralized networks, DeAI lets users around the world voluntarily contribute data, making collection and processing transparent and traceable. Every act of data collection, processing, and use is recorded immutably on the blockchain. This reduces intellectual-property conflicts and, because users retain control of their own data, strengthens privacy and lowers the risk of unauthorized access or abuse.
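
To make the idea concrete, here is a minimal sketch, in Python, of the kind of tamper-evident provenance ledger described above: each contributor submits a content hash along with explicit licensing terms, and each record is chained to the previous one so any alteration is detectable. The class and field names are illustrative assumptions, not OORT's actual implementation; a real deployment would store these records on a blockchain rather than in memory.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ContributionRecord:
    """One entry in the provenance ledger (illustrative only)."""
    contributor: str      # contributor's wallet or identity address
    content_hash: str     # SHA-256 hash of the contributed data
    license_terms: str    # consent and licensing granted by the contributor
    timestamp: float
    prev_hash: str        # hash of the previous record, chaining entries together

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class ProvenanceLedger:
    """Append-only chain of contribution records, mimicking an on-chain log."""

    def __init__(self) -> None:
        self.records: list[ContributionRecord] = []

    def contribute(self, contributor: str, data: bytes, license_terms: str) -> ContributionRecord:
        record = ContributionRecord(
            contributor=contributor,
            content_hash=hashlib.sha256(data).hexdigest(),
            license_terms=license_terms,
            timestamp=time.time(),
            prev_hash=self.records[-1].record_hash() if self.records else "GENESIS",
        )
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Confirm that no record has been altered or reordered."""
        return all(
            cur.prev_hash == prev.record_hash()
            for prev, cur in zip(self.records, self.records[1:])
        )

# A contributor opts in with explicit terms before any training use.
ledger = ProvenanceLedger()
ledger.contribute("0xA1ce...", b"image bytes", "CC BY 4.0, AI training permitted")
ledger.contribute("0xB0b...", b"article text", "non-commercial training only")
print(ledger.verify())  # True while the chain is intact
```

Because every record embeds the hash of its predecessor, anyone consuming the dataset can verify what was contributed, by whom, and under which terms, which is exactly the traceability that the licensing and privacy disputes above hinge on.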

Unlike centralized AI, which relies on expensive proprietary data, DeAI gathers data from a globally distributed network, making its datasets more diverse and its collection fairer. And with blockchain-based decentralized governance, AI models are audited and improved by the community rather than controlled by a single company, as the sketch below illustrates.
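
Community auditing can be sketched in the same spirit. The hypothetical snippet below shows stake-weighted voting on whether a proposed model update passes a community audit; the quorum, threshold, and names are assumptions for illustration, not a description of any real protocol.

```python
from dataclasses import dataclass, field

@dataclass
class AuditProposal:
    """A proposed model update awaiting community approval (illustrative)."""
    model_version: str
    audit_report_hash: str  # hash of the published audit findings
    votes: dict[str, tuple[bool, int]] = field(default_factory=dict)

    def vote(self, voter: str, approve: bool, stake: int) -> None:
        # One recorded vote per address, weighted by governance stake.
        self.votes[voter] = (approve, stake)

    def passes(self, quorum: int = 100, threshold: float = 0.66) -> bool:
        total = sum(stake for _, stake in self.votes.values())
        approving = sum(stake for ok, stake in self.votes.values() if ok)
        return total >= quorum and approving / total >= threshold

proposal = AuditProposal("model-v2", "0x9f3c...")
proposal.vote("0xA1ce...", approve=True, stake=80)
proposal.vote("0xB0b...", approve=False, stake=30)
print(proposal.passes())  # True: quorum met and ~73% of stake approves
```

The point is not this specific mechanism but that the decision record, like the data provenance above, is public and auditable rather than internal to a single company.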

As AI-related legal challenges continue to unfold, decentralized AI (DeAI) is becoming an important direction for building an open, fair, and trustworthy AI future.

Author: Dr. Chong Li, founder of OORT and professor at Columbia University

Originally published in Forbes: https://www.forbes.com/sites/digital-assets/2025/01/20/from-chip-war-to-data-war-ais-next-battleground-explained/
