If you do not know the term “AI washing,” consider this a formal introduction. The phrase refers to an exaggerated or incorrect claim about how a good or service incorporates AI. This issue warrants particular attention from state attorneys general (AGs) for three reasons. First, federal enforcers lack the resources and knowledge required to protect consumers from all instances of AI washing. Second, AI washing will only become more common as the technology advances and consumers become more familiar with its benefits. Third, consumers stand to lose substantial sums to AI washing. Whether a user regularly pays small sums to use an AI product that is not as advanced as advertised or invests in an expensive subscription to one, the net result is a thinner wallet.

A prime example of AI washing comes from a somewhat unexpected source: OnlyFans. (As an aside, the adult entertainment business has long been on the frontier of new technologies, so it is unsurprising to see the industry pushing the bounds of AI.) OnlyFans is, per The New York Times, “[a] subscription-based platform that allows people to charge for access to the materials they share.” The site is better known as a place to follow, watch, and converse with creators who share adult content. So-called fans may pay to have personal conversations with their favorite creators. Some of the more popular creators receive more messages than they can respond to. Every missed message, though, is missed revenue. To solve this problem, a few creators hired low-wage foreign workers to respond to messages they would otherwise have ignored for lack of time.

Now, as Vice reported, some of those creators are turning to AI tools to respond to user requests for personal interactions. The use of AI tools largely deprives users of the intended experience. Though some creators may approve AI-generated messages before they are transmitted to a user, other tools respond without any meaningful human oversight. In short, there is no guarantee of a “creator in the loop” who verifies that the response is something they would actually have said or done. Creators can even deploy tools to solicit additional business. Vice journalist Luis Prada explains that Supercreator, one AI tool, “scan[s] for inactive users and then automatically initiate[s] a conversation with them as soon as they log in for the first time in a while.” If the tool succeeds in engineering a conversation, the creator may start profiting from artificial correspondence that requires none of their actual time.

Whether or not you feel sympathy for OnlyFans users unsure of whether they are talking to a human or an AI bot, you may soon find yourself in a similar position. AI customer service agents have proliferated in the last few years, and that trend will likely continue. If you’re the sort of person who would splurge for the chance to go back and forth with a human rather than a bot, then you want some assurance that your complaint over that aisle seat is being reviewed by human eyes, not AI. If you’re instead the sort who would pay extra for the savvier, kinder AI customer service agent, then you likewise want some guarantee that you are interacting with the latest and greatest tech. The upshot is that you, I, and the whole of the public have cause (or soon will) to push back on AI washing.

A critical first step in detecting AI washing is creating and adopting clear definitions of new AI tools. So far, that has not happened. 2025 promises to be the year of “AI agents,” yet what constitutes an AI agent has not been commonly agreed upon. One company’s AI agent may more accurately be described as an AI assistant. It’s a difference with a distinction, but companies seem keen to blur the line between the two products. The distinguishing characteristic of AI agents is the ability to act autonomously on behalf of a user. AI assistants, by contrast, respond to a single prompt and require continued consultation with the user. The significance becomes apparent when considering an anticipated common use of AI agents: planning a trip. While an AI assistant could provide you with suggestions for each step—the flight to book, the hotel to stay in, the restaurants to visit, and so on—an AI agent trained to know your preferences would go ahead and book the ideal flight, the hotel suited to your budget and desires, and the restaurant with that Old Fashioned cocktail you crave.
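To make the distinction concrete, here is a minimal, purely illustrative Python sketch. Every name in it (assistant_suggest, agent_plan_trip, and so on) is hypothetical rather than any vendor’s actual API: the assistant hands back a suggestion and stops, while the agent chains the whole task and acts on the user’s behalf.

```python
# Illustrative sketch only: all functions and data are hypothetical,
# not any real product's API. The point is the control flow.
from dataclasses import dataclass


@dataclass
class Suggestion:
    category: str
    option: str


def assistant_suggest(prompt: str) -> Suggestion:
    """An 'assistant' answers one prompt and stops; the user decides what to do next."""
    # A real system would query a language model here; we return a canned option.
    return Suggestion(category="flight", option="UA 123, aisle seat, $240")


def agent_plan_trip(preferences: dict) -> list[str]:
    """An 'agent' chains decisions and acts (books) without further prompts."""
    actions = []
    for step in ("flight", "hotel", "restaurant"):
        choice = f"best {step} within budget ${preferences['budget']}"
        actions.append(f"BOOKED: {choice}")  # acting autonomously for the user
    return actions


if __name__ == "__main__":
    # Assistant: one prompt in, one suggestion out; the human books it (or not).
    print(assistant_suggest("Find me a flight to Chicago"))
    # Agent: given standing preferences, it completes the whole task itself.
    for action in agent_plan_trip({"budget": 1500}):
        print(action)
```

The placeholder logic is beside the point; what matters is that the assistant returns and waits, while the agent decides and acts.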

So long as there is uncertainty among regulators and regulated entities about how to categorize certain products, consumers risk getting swindled. This is where state AGs can play a major role in protecting consumers. As Massachusetts Attorney General Andrea Campbell did in April, state AGs can release regular guidance on their understanding of the differences between AI products so that consumers and businesses can respond accordingly. Ideally, such guidance should be informed by AI stakeholders, including but not limited to representatives of AI companies and technical researchers from civil society and universities, and coordinated with other AGs so as to prevent a patchwork approach to defining these terms. Federal regulators could join these efforts to give regulated entities even more clarity.

A second step is increasing consumer awareness. Many consumers are wary of products that involve AI. This caution may drive some to miss out on products that could meaningfully improve their well-being; it may drive others to hastily buy whichever product claims to be the most advanced. State AGs and other regulators can counteract the very human aversion to new or complex things by providing consumer-friendly materials on new AI products. Such efforts would be even more effective if amplified by trusted community messengers such as higher education institutions, local governments, and community leaders. A more informed consumer community would simultaneously reduce the odds of scammers profiting from AI washing and increase reports of suspected AI washing.

A third step is offensively leveraging AI. As mentioned above, so many companies now purport to use AI that consumer protection officials cannot meaningfully review every claim. AI can give regulators a new means to scope out whether an entity is running afoul of a law, regulation, or guidance. For instance, India’s Department of Consumer Affairs is looking at how to use AI to detect dark patterns—user interfaces designed to lure users into taking an action they did not intend or want to take. The Department actively worked with civil society to think through how best to design such a tool. This collaborative, open approach may be useful in the US, where leading tech thinkers have grown accustomed to partnering with governments to spot cybersecurity vulnerabilities. Any resulting AI Washer Detector should be deployed with caution and clear guidelines, forming just one part of broader enforcement efforts.
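To ground the idea, here is a minimal, hypothetical Python sketch of the triage step such a detector might perform: scanning marketing copy for strong AI claims and flagging it for human review. The claim patterns, the threshold, and every name are illustrative assumptions, not a description of any regulator’s actual tool.

```python
# Hypothetical triage step for an "AI Washer Detector": flag marketing copy
# that makes strong AI claims so a human investigator can review it.
# The patterns and threshold below are illustrative assumptions only.
from __future__ import annotations

import re
from dataclasses import dataclass

AI_CLAIM_PATTERNS = [
    r"\bfully autonomous\b",
    r"\bAI[- ]powered\b",
    r"\bno human (?:involvement|oversight)\b",
    r"\bagent(?:ic)?\b",
]


@dataclass
class Flag:
    text: str
    matches: list[str]


def triage(marketing_copy: str, threshold: int = 2) -> Flag | None:
    """Return a Flag for human review if the copy makes enough strong AI claims."""
    matches = [
        p for p in AI_CLAIM_PATTERNS
        if re.search(p, marketing_copy, re.IGNORECASE)
    ]
    return Flag(marketing_copy, matches) if len(matches) >= threshold else None


if __name__ == "__main__":
    copy = "Our fully autonomous, AI-powered agent needs no human oversight."
    flag = triage(copy)
    if flag:
        print("Flag for human review; matched:", flag.matches)
```

Crucially, a flag here only queues a claim for a human investigator; consistent with the caution urged above, nothing in the sketch determines liability on its own.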

In conclusion, the rise of AI washing represents a significant challenge that demands immediate attention from state attorneys general and other regulatory bodies. AGs can play a pivotal role in mitigating the risks of misleading AI claims by providing clear definitions, fostering consumer awareness, and strategically leveraging AI tools. These efforts are not merely about protecting consumers’ wallets; they are about safeguarding trust in emerging technologies and ensuring that innovation continues to serve, rather than deceive, the public. As AI technologies become increasingly integrated into daily life, regulatory frameworks must keep pace, striking a balance between fostering innovation and holding companies accountable. The stakes are high, but with proactive measures, regulators can ensure that AI enhances rather than erodes consumer confidence.
