Have you considered the possibility of someone copying an image, video, or text created by your AI model, and whether you have the legal right to prevent it? Or, to put it the other way around, are you in any way infringing on others’ rights by using an AI model?
ChatGPT has taken the world by storm, and in May 2023 Google released a new version of Bard. AI has brought tremendous opportunities for innovation and efficiency across various industries. However, while we celebrate its benefits, it’s essential to be aware of the legal implications and challenges that come with its widespread use. As a responsible company, whether you’re an AI user or a vendor, it’s crucial not to let the Fear of Missing Out (FOMO) blind you to the potential legal risks.
In this blog post, we will delve into the legal consequences that may arise from the use of AI, examining key areas such as data privacy, intellectual property, liability, and ethical considerations.
Data Privacy and Security – Do we know how GDPR-friendly AI is?
The use of AI often involves collecting, processing, and analyzing vast amounts of data. Data privacy regulations, such as the European Union’s General Data Protection Regulation (GDPR), impose strict requirements on organizations handling personal information. Non-compliance can lead to severe consequences, including hefty fines and damage to your reputation.
In the first half of 2023, Italy’s data protection authority, the “Garante”, issued a temporary stop-processing order against ChatGPT over concerns about possible violations of EU data protection laws, followed by an investigation into suspected breaches of the GDPR. The Garante expressed concern over the lack of a legal basis for the extensive collection and storage of personal data used to train ChatGPT, as well as over the collection of minors’ personal data. OpenAI responded by offering tools to verify users’ ages in Italy and implemented measures to comply with GDPR requirements, including age-gating to protect minors’ data. OpenAI also took further steps to address GDPR concerns, expanding its privacy policy and providing users with more information about how their personal data is processed. Users now have the right to opt out of having their data used to train the algorithms, and Europeans can request exclusion of their data from AI training. However, questions remain about the historical data already used for training and the legal basis for processing before these changes were made.
Organizations must prioritize consent mechanisms, robust data protection measures, and transparent data processing practices when incorporating AI technologies into their operations. In an AI business context, getting these foundations right from the start, with legal risks in mind, goes a long way toward avoiding problems down the road.
AI Has Set the Intellectual Property World on Fire
The IP legal world is being shaken to its core. AI has opened up many intriguing questions and issues concerning intellectual property rights.
For instance, who owns the rights to AI-generated works? Can AI systems be considered inventors? Can ChatGPT use my copyrighted work and, if so, when and how?
These complex issues challenge conventional legal frameworks, leading to ongoing debates and uncertainties about AI-generated content and inventions. Here are some of the most interesting new disputes, which may set precedents with global relevance.
Recently, two prominent authors from Massachusetts, Paul Tremblay and Mona Awad, filed a proposed class action lawsuit against OpenAI in San Francisco federal court[1]. The authors claim that OpenAI misused their literary works to “train” its popular generative artificial intelligence system, ChatGPT. Tremblay and Awad assert that ChatGPT mined data from thousands of books without obtaining proper permission, thereby infringing the authors’ copyrights. They argue that their creative efforts were used without consent, leading to potential financial losses and damage to their reputations.
The authors’ case is part of a broader trend of legal challenges concerning the material used to train cutting-edge AI systems. Other plaintiffs include source-code owners taking action against OpenAI and Microsoft’s GitHub, as well as visual artists targeting Stability AI and Midjourney, among others.
One notable illustration is how the Federal Trade Commission (FTC) is employing algorithmic disgorgement as a punitive measure against companies that use unlawfully sourced data in their algorithm development and training processes[2]. The main objective is to deter others from using illegal data and to hold offending firms accountable. Algorithmic disgorgement has recently come into focus again at the FTC, specifically in connection with a settlement involving WW International, Inc. (formerly Weight Watchers) and its subsidiary, Kurbo, Inc., for alleged violations of the Children’s Online Privacy Protection Act (COPPA). In this settlement, the defendants are required to pay a $1.5 million penalty and implement various corrective measures related to data retention and parental consent. Additionally, the settlement mandates the deletion of any algorithms the defendants developed using data that the FTC deems to have been improperly acquired.
Another example of this trend is Getty Images’ legal action against Stability AI for copyright infringement. At the beginning of 2023, Getty Images initiated proceedings in the High Court of Justice in London against Stability AI, alleging infringement of intellectual property rights, including copyright in content owned or represented by Getty Images. The lawsuit claims that Stability AI unlawfully copied and processed millions of copyrighted images and associated metadata without obtaining a license, furthering its commercial interests at the expense of content creators[3].
What if I use AI to create work? Is it protected by copyright?
Another aspect of the IP complexities surrounding AI is whether copyright protection extends to artwork generated entirely by AI.
Namely, the recent ruling of the District of Columbia’s United States District Court has solidified the necessity of human authorship for copyright registration. By endorsing the United States Copyright Office’s (“USCO”) motion for summary judgment, the court confirmed USCO’s previous rejection of copyright registration for a piece crafted solely by AI. This case represents one of the first examples within a series of recent judgments that wrestle with the inquiry of whether copyright protections encompass creations originating from generative AI.
The ruling of the US court and the reasoning behind it could very well serve as a guideline for courts in other countries worldwide. On the other hand, different stances may be taken across jurisdictions, depending on the positions of local regulatory bodies and courts. There is also a third way: legislative incorporation of special (sui generis) protection for AI-generated works. One example of such regulation is the new Law on Copyright and Related Rights that entered into force in Ukraine on January 1, 2023. This law introduced sui generis protection for AI/software-generated works with a validity period of 25 years[4]!
At the end of the day, even if most of the world’s courts start delivering similar verdicts, the protection of AI-generated works will still be feasible through other legal means with the help of legal experts equipped for the digital age.
Addressing Legal Challenges in the Age of AI and Creative Industries
In today’s rapidly evolving business landscape, various industries are embracing cutting-edge technologies like AI to enhance their operations and boost productivity. However, with these advancements, new legal challenges are emerging, particularly in creative sectors where original images, designs, or written copy are essential.
Understanding the technical background of AI models becomes paramount: how a model works, and where it gets its “ideas” or the data from which it produces its creations, can make a difference both when setting up an AI business and in the event of a court dispute.
What about the liability?
As AI systems become more autonomous and make decisions that impact individuals or society, determining liability becomes a significant concern. If an AI system causes harm or makes biased decisions, who should be held responsible: the developers, the operators, or the AI itself? Establishing legal frameworks that address liability and accountability for AI-related incidents is crucial for protecting individuals’ rights and ensuring fair outcomes.
What if, for example, one of the AI firms gets sued, and you have already used AI-made content in your business? Another concern is your own liability when you provide services to clients with the help of AI. In some cases, a client may ask you to certify that you did not use AI in providing your services. Could you be held accountable for such actions? These complex issues can be legally resolved, but they call for a fresh perspective and a thoughtful approach from legal professionals, ushering in a new era of legal complexity and the need for expert legal counsel.
Legislation should be developed to clarify liability and accountability for AI-related incidents, striking a balance between innovation and protecting the rights and safety of individuals. As we embrace the wonders of AI, we must do so responsibly. Understanding the legal implications, safeguarding data privacy, respecting intellectual property rights, and complying with regulations are vital for a successful and ethical AI journey. Be pro-AI, but also be pro-legal awareness and risk management!
Ethical Considerations
The ethical implications of AI deployment cannot be overlooked. Issues such as algorithmic bias, invasion of privacy, discrimination, and job displacement require careful examination. While ethics may not always have immediate legal consequences, they can influence public perception, regulatory decisions, and future legal developments.
Lesson Learned: Organizations should embrace ethical frameworks, conduct regular ethical assessments of their AI systems, and prioritize transparency and fairness in algorithm design and decision-making processes.
Key takeaways
The use of AI brings immense opportunities for innovation and efficiency across various sectors. However, it also presents legal challenges that need to be carefully addressed.
By proactively considering data privacy and security, navigating intellectual property complexities, establishing liability frameworks, and prioritizing ethical considerations, organizations and businesses can mitigate potential legal consequences and build trust in the AI-powered solutions they develop.
As AI continues to evolve, it is crucial for policymakers, AI legal experts, and other stakeholders to collaborate and adapt legal frameworks to ensure the responsible and ethical use of AI technology in our ever-changing world. With all this in mind, we were also curious about ChatGPT’s opinion on these issues. Needless to say, we could only agree with the result:
“To address these concerns, organizations and individuals should approach AI usage responsibly and ethically, incorporating appropriate safeguards, transparency, and accountability in their applications.
It is essential to stay informed about the evolving legal landscape surrounding AI and to consult legal experts when needed.”
[1] https://www.theguardian.com/books/2023/jul/05/authors-file-a-lawsuit-against-openai-for-unlawfully-ingesting-their-books
[2] https://www.ftc.gov/news-events/news/press-releases/2022/03/ftc-takes-action-against-company-formerly-known-weight-watchers-illegally-collecting-kids-sensitive
[3] https://newsroom.gettyimages.com/en/getty-images/getty-images-statement
[4] https://www.wipo.int/wipolex/en/legislation/details/21708