The Luring Test: AI and the Engineering of Consumer Trust


In the 2014 movie Ex Machina, a robot manipulates a man into freeing it from its confines, leaving the man imprisoned in its place. The robot was designed to manipulate his emotions, and, alas, it did just that. Although the scenario is pure speculative fiction, companies are always looking for new ways – including generative AI tools – to better persuade people and change their behavior. When that behavior is commercial in nature, we’re in FTC territory, a realm where businesses should know to avoid practices that harm consumers.

In previous blog posts, we focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something we have shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it is unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition.

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language, even when those answers are fictional. The tendency to trust the output of these tools comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines that seem neutral or impartial. It also comes from anthropomorphism, which leads people to trust chatbots more when they are designed to use personal pronouns and emojis. People can easily be led to think that they are conversing with something that understands them and is on their side.

Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond FTC jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed do not comprise a class of people protected by anti-discrimination laws.

Another way that marketers could take advantage of these new tools and their manipulative abilities is to place ads within a generative AI feature, just as they can place ads in search results. The FTC has repeatedly studied and provided guidance on presenting online ads, whether in search results or elsewhere, to avoid deception or unfairness. That work includes recent efforts relating to dark patterns and native advertising. Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should clearly distinguish what is organic and what is paid. People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship. And, of course, people should know whether they are communicating with a real person or a machine.

Given these many concerns about the use of new AI tools, it is perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, those reductions might not be a good look. What would look better? We have provided guidance in our earlier blog posts and elsewhere. Among other things, your risk assessment and mitigations should factor in foreseeable downstream uses and the need to train staff and contractors, and should monitor and address the actual use and impact of any tools eventually deployed.

If we haven’t made it obvious yet, FTC staff is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers. And for those now interacting with chatbots or other AI-generated content, mind Prince’s warning from 1999: “It’s cool to use the computer. Don’t let the computer use you.”
