Artificial Intelligence (AI) is progressing faster than the legal system can keep up with, leaving the courts to decide how existing laws apply to AI. Generative AI platforms such as ChatGPT require vast amounts of data, often including unlicensed third-party content, to train their systems. Some content creators have entered into licensing agreements with AI companies, but when AI companies use unlicensed material to train their large language models (LLMs), copyright law comes into play.
The legal doctrine of fair use promotes freedom of expression by allowing the unlicensed use of copyrighted works in certain circumstances. Courts will likely distinguish between uses related to training LLMs and specific generative AI outputs. An AI company's use of unlicensed content to train its LLMs, without producing an output substantially similar to the underlying input, presents a more nuanced question of intermediate copying and whether such copying amounts to copyright infringement.
Recently, The New York Times filed a lawsuit against Microsoft and OpenAI for copyright infringement, highlighting the tension between AI companies and content owners who want to be compensated and properly attributed for the use of their works. The NYT claims that the defendants' generative AI tools rely on large language models built by copying and using millions of copyrighted Times news articles, and that the defendants are enriching themselves by using its intellectual property without paying for it. The newspaper claims it has lost billions of dollars due to this alleged exploitation and has asked for an injunction to stop the allegedly unlawful conduct from continuing. OpenAI denies the allegations and contends that the Times paid someone to hack OpenAI's products. Microsoft responded forcefully, comparing the New York Times lawsuit to the one the Motion Picture Association of America and Hollywood waged against the VCR.
As AI becomes more prevalent in the music industry, legal and ethical considerations arise. The rise of AI voice cloning is revolutionising the industry, and many artists are looking for ways to protect their voices and attributes in light of this new technology. While tools like YouTube's "Dream Track" showcase the power and potential of AI in redefining music production, they also pose legal and ethical challenges. In India, the legal framework governing AI and personality rights is still in its infancy. Although unique voice elements can in some instances be legally safeguarded, an artist's voice primarily falls within the realm of personality and publicity rights. Cloning the voices of deceased artists like Mohammed Rafi, Kishore Kumar, or Sidhu Moosewala raises several ethical and moral questions, as posthumous personality rights are not recognised under Indian law. There is a pressing need for laws that recognise personality and publicity rights over personal attributes, including voice.
When it comes to voice training, one of the most significant issues is the acquisition of data for AI training. If AI developers use existing recordings without proper licensing from music labels, they risk copyright infringement. Two categories of questions need to be addressed here: whether the training required to create complex AI models implicates copyright law, and whether a work created using AI and based on another work is a derivative work that only the original copyright owner can make. Legislation must evolve to address the unique challenges posed by AI, ensuring that the rights of artists, both living and deceased, are respected.
The use of AI in the news publishing industry has also become increasingly prevalent in recent years, accompanied by growing concern about the lack of regulation. To address this, the Indian government has proposed framing laws aimed at safeguarding the interests of news publishers and content creators. Although this is a step in the right direction, the proposed laws must account for some critical aspects of the use of AI in news publishing.
One such aspect is the protection of public service channels from hacks and false takedown claims. The proposed laws need to regulate the various ways that AI-powered automated processes can hinder the dissemination and accuracy of information, especially during key public events such as elections and budget seasons. Another critical aspect is the protection of the commercial interests of paywalled and subscription news models. The proposed Act should clearly define fair remuneration for both publicly available and paid-for information that content creators offer behind paywalls and subscription memberships. The proposed laws must ensure that content creators are fairly compensated for their work and that consumers have access to accurate and reliable information.
The tension between AI companies and content creators highlights the urgent need for updated legislation that balances innovation with the protection of intellectual property rights. Recent lawsuits and proposed regulations show that policymakers are grappling with issues such as copyright infringement, personality rights, and fair compensation for content creators. By enacting comprehensive laws that address these concerns, societies can encourage responsible AI development while protecting the rights of artists, publishers, and the public. Legal frameworks must evolve in parallel with technological advancements to promote innovation while preventing potential harm.
Dear Readers,
Thank you for your continued readership of IP Wave. We will be taking a brief pause as we are working on upgrading the website. Rest assured, we'll return with refreshed newsletters in just one month. We appreciate your patience and look forward to sharing our improved content with you soon. Thank you.