All involved accountable for AI-related IP violations
Editor's note: China recently released a road map aimed at promoting efforts to build itself into an intellectual property powerhouse. Xinhua Daily Telegraph spoke to some researchers about the importance of IP protection. Below are excerpts from the interview. The views don't necessarily represent those of China Daily.
The battle over IP in the AI era is about the restructuring of an entire innovation order. The contributors to AI-generated content are no longer just individual creators. Data providers, computing infrastructure suppliers, model trainers and application developers all play a role, and much of the resulting value falls into legal grey areas. The most obvious weakness in today's system is the lack of a workable framework for allocating benefits and responsibilities among the multiple actors. Responsibility in the AI ecosystem should not be determined solely by who controls the technology. It should also depend on who benefits from it. Profit distribution must become the primary benchmark for assigning legal obligations.
Contrary to some claims, society is not entering a legal vacuum. China's Civil Code, copyright law and regulations governing deep synthesis technologies have already established basic boundaries around personality rights, copyright protection and platform responsibilities.
The deeper problem is that enforcement mechanisms cannot keep pace with the speed at which infringement takes place, while coordination across platforms remains weak and public understanding is often inaccurate. Many people still mistakenly believe that "noncommercial use is exempt" or that labeling content as AI-generated removes liability. Neither is true.
For ordinary citizens, however, the most immediate threat does not come from abstract disputes over training data. It comes from AI face-swapping, voice cloning and other forms of personality-rights infringement. Such technologies are inexpensive to deploy, spread rapidly online and can inflict irreversible damage on personal dignity and reputation.
Rights protection is undeniably difficult, but placing the entire compliance burden on a single party is not the solution. Shifting all responsibility either onto the platforms or onto those who developed the application would undermine the long-term sustainability of the content industry.
A practical approach is shared responsibility. Rights holders should make fuller use of notice-and-takedown mechanisms, platforms should provide low-cost verification and complaint tools, and application developers should bear responsibility when they "know or should know" that infringement is taking place. Only a layered system of responsibility can ensure that every line of defense is covered.
The issue of training data is unavoidable. Requiring prior authorization for every piece of training data would make one-on-one negotiations nearly impossible and significantly restrict innovation. Yet allowing unrestricted data scraping would effectively turn original creators into unpaid fuel for AI development. The challenge is to draw workable boundaries between these two extremes while introducing compensation mechanisms that protect creators' interests.
The music industry offers a useful lesson: collective licensing provides a more realistic path. Platforms could pay annual blanket licensing fees to collective management organizations, which would then distribute revenue to rights holders according to usage proportions. Extending collective management systems into AI training could dramatically reduce transaction costs while ensuring creators receive compensation without having to negotiate separately.
The purpose of IP protection was never to prevent innovation, but to sustain it through credible incentives. The real test in the AI era is whether society can build a system in which creators receive fair benefits without crushing innovators under excessive compliance costs.