Amid growing concerns about artificial intelligence and generative AI, the Federal Trade Commission urged companies developing or deploying new AI tools to retain the employees responsible for AI ethics and responsibility, in a blog post published Monday.
As more companies adopt generative AI tools, they risk deploying harmful technologies without these workers. The FTC warned that if companies fire or lay off such staff and the agency later comes calling, efforts to convince it that they adequately assessed risks and mitigated harms may falter: "these reductions may not be good."
The Washington Post reported in March that major companies such as Microsoft, Twitch and Twitter had laid off their AI ethics staff.
According to the blog post, the agency is focusing on organizations' use of AI and generative AI and its potential impact on consumers. Of particular concern to the FTC is the use of AI or generative AI tools to persuade people and change their behavior. The FTC has previously focused on AI-related deception, such as making exaggerated or unsubstantiated claims and using generative AI to commit fraud, as well as the use of AI tools that may be biased or discriminatory.
According to the FTC, businesses use generative AI tools to influence people’s beliefs, emotions and behavior, for example through chatbots that provide information, advice, support and companionship. “Many of these chatbots are built to be effective at persuasion and are designed to answer questions in confident language, even if those answers are fictional,” the agency said. It noted that people may be more inclined to trust machines because they believe them to be impartial or neutral, which is not the case given the biases inherent in their creation.
The agency’s main concern is companies using unfair or deceptive methods to steer people into harmful decisions, such as those involving money, health, education, housing and employment. Such harmful uses may or may not be intentional, the FTC added, but the risk is the same either way.
For example, the FTC warned that companies using generative AI to tailor ads “should be aware that design elements that trick people into making harmful choices” have been a common element of recent FTC cases involving financial offers, in-game purchases, and attempts to cancel services. Such manipulation can be a deceptive or unfair practice when it induces people to take actions contrary to their intended goals. Companies placing ads within generative AI output can likewise be deceptive, the FTC added, and it must be clear what is an ad and what is a genuine result.
The agency issued some guidance for companies using generative AI: risk assessments and mitigations should account for foreseeable downstream uses; employees and contractors need training and supervision; and companies need to monitor the use and impact of the tools they deploy.
The FTC also had a warning for consumers: “For people interacting with chatbots or other AI-generated content, heed Prince’s warning from 1999: ‘It’s cool to use the computer. Don’t let the computer use you.’”