As regulatory pressure on artificial intelligence grows, new lawsuits seek to take OpenAI to court


AI’s potential impact on data privacy and intellectual property has been a hot topic for months, but new lawsuits filed against OpenAI aim to address both issues in California courts.

In a class-action lawsuit filed last week, lawyers alleged that OpenAI violated state and federal copyright and privacy laws when it collected the data used to train the language models behind ChatGPT and other generative AI applications. According to the complaint, OpenAI is said to have stolen personal data from people across the internet and various apps, including Snapchat, Spotify, Slack and the health platform MyChart.

Rather than focusing solely on data privacy, the complaint — filed by the Clarkson Law Firm — also claims OpenAI violated copyright laws, which remain a legal gray area on multiple fronts. Intellectual property protection is also the focus of a separate lawsuit filed by a different firm last week, alleging that OpenAI misused the works of two American authors while training ChatGPT.

“Because this is happening at such a rapid pace and becoming more integrated into our daily lives, it’s important that the courts address these issues before they become too entrenched and irreversible,” Ryan Clarkson, the firm’s managing partner, told Digiday. “We’re still trying to learn our lessons from social media and its externalities, and this is pouring rocket fuel into those problems.”

Clarkson’s lawsuit does not name plaintiffs directly, instead identifying more than a dozen people by their initials. The firm is actively looking for more plaintiffs to join the class action, and has even set up a website where people can share information about how they’ve used various AI products, including ChatGPT, OpenAI’s image generator DALL-E and the voice model VALL-E, as well as AI products from other companies like Google and Meta.

OpenAI — whose tech is already used in ad platforms like Microsoft’s Bing search and a new conversational ads API for publishers — did not respond to Digiday’s request for comment. However, the company’s privacy policy was last updated on June 23. In it, the company says it does not “sell” or “share” personal information for contextual advertising and does not knowingly collect personal information from children under 13. OpenAI has a separate privacy policy for employees, applicants, contractors and guests, updated in February. In those terms, the company says it “has not sold or shared your personal information for the purposes of targeted advertising in the last 12 months,” while another section says users have the right to opt out of “cross-contextual behavioral advertising.”

In Clarkson’s complaint, attorneys also allege OpenAI violates privacy laws by collecting and sharing data for advertising, targeting minors and vulnerable people with predatory advertising, engaging in algorithmic discrimination and “other unethical and harmful practices.” Tracy Cowan, another Clarkson partner involved with the OpenAI case, said the firm represents a number of minor plaintiffs whose parents worry that AI tech is being deployed without proper protections for children. She said the cases raise a number of issues separate from the risks the technology poses for adults.

“It really shines a spotlight on the dangers that can come with unregulated and untested technologies,” Cowan said. “We think it’s very important to have some safeguards around this technology, to bring claims on behalf of minors, to get some clarity on how the companies are taking our data and how it’s being used, and to get some compensation to make sure people are protected.”

The legal challenges come as the AI industry faces heightened scrutiny. Late last week, the US Federal Trade Commission published a new blog post suggesting that generative AI raises “competitive concerns” related to data, talent, computing resources and other areas. Meanwhile, the European Union’s proposal to regulate AI with the “AI Act” prompted executives from more than 150 companies to send an open letter to the European Commission warning that the regulations could be ineffective and could harm competition. Lawmakers in the US are also exploring the possibility of regulation.

Despite the uncertain and evolving legal and regulatory landscape, many marketers are moving forward, seeing AI as more than a passing trend and as something that could meaningfully impact many parts of their business. However, that doesn’t mean many aren’t still cautious, suggesting companies proceed carefully.

Greg Swan, chief creative and strategy officer at the Minneapolis-based agency SocialLights, said the agency has set up a working group to test generative AI tools while making sure generated content isn’t copied and pasted directly into marketing materials.

“I think about AI and this whole industry as a young adult who thinks they know everything and the rules of the road, but they still need adult supervision,” Swan said. “It’s incredibly difficult to know where the line is between inspiration and plagiarism, and as with all marketing products, there are source material issues, plagiarism issues, fair compensation for creators issues, brand safety issues.”

Instead of scraping data without permission, some AI startups are taking an alternative approach. For example, the Israel-based visual AI company Bria trains its tools only on pre-licensed content. That’s more expensive but less risky — and a process the company hopes will pay off. (Bria’s partners include Getty Images, which sued Stability AI earlier this year for allegedly stealing 12 million images and using them to train its open-source AI art generator without permission.)

“The markets react much faster than the legal system,” said Vered Horesch, Bria’s head of strategic AI partnerships. “In response, the markets will force AI companies to act more responsibly… It’s a known fact by now that models are no longer the moat. The data is the moat.”
