Most Internet Users Have Been Fooled by AI-Generated Content Online [Survey]

All About Cookies surveyed U.S. adults to find out how they feel about AI-generated content, what kinds of laws they think should apply to AI, and more.
We receive compensation from the products and services mentioned in this story, but the opinions are the author's own. Compensation may impact where offers appear. We have not included all available products or offers. Learn more about how we make money and our editorial policies.

Between the rise of ChatGPT and the popularity of DALL-E, AI-generated content has flooded the internet over the past year. But while many people and companies love using generative AI, not every actor online has good intentions.

This last year has also seen AI-driven scams, an uptick in deepfakes, and illicit AI-generated images of celebrities, such as those of Taylor Swift and Bobbi Althoff. AI-generated scripts and images were even used to trick customers into paying for a deceptive “Willy Wonka chocolate experience.” The world has quickly seen the negative implications of large-scale AI adoption.

In light of these events, our team at All About Cookies wanted to know how users interact with and feel about AI-generated content and images online, so we surveyed more than 1,000 internet users to find out.

In these survey results
  • Key findings
  • How many people have been fooled by AI-generated content?
  • How should the legal system handle AI-generated content?
  • Should your data be used to train AI?
  • Who should be held accountable when AI is used to break the law?
  • Bottom line
  • Methodology

Key findings

  • How many people have been fooled? 77% of Americans reported being duped by AI content online.
  • What should we do about it?
    • 93% of respondents said companies should be legally required to disclose when they use AI-generated content.
    • 82% said celebrities and public figures should have protections from their likeness being used for AI-made content.
    • Most people (72%) believe content creators should be legally responsible for the AI-made content they post.
  • AI and personal data: 97% of Americans said there should be protections against social media companies using your likeness or personal data to train AI programs.
    • 45% believe use of user data should be opt-in only.
    • 37% said user data should only be used with exclusions or updated terms of service.
    • 25% said social media companies should never be able to use user data to train AI.
  • Who’s responsible for negative outcomes from AI-made content?
    • 72% said the users who post content should have the primary legal responsibility for illegal or illicit AI-generated content.
    • 16% said the sites where AI-made content was shared should be liable.

How many people have been fooled by AI-generated content?

One popular use of AI is creating “original” images, videos, and audio files that impersonate real people, known as deepfakes. Some programs and users have gotten so good at this that it can be difficult to tell whether a piece of media was made by a real person at all.

[Chart: How many people say they've been tricked by AI-generated or deepfaked content online. The majority say yes.]

More than three-quarters of people, 77%, say they have encountered something online that they believed to be from a real person, only to learn later that it was generated by artificial intelligence. That shows how pervasive this kind of AI-generated content has become, and how good AI has gotten at producing material that is largely indistinguishable from the work of real people.

How do we combat the pervasiveness of AI-generated content? One early idea gaining popularity is to require companies and websites to label any AI-made content they publish as having been created by an artificial intelligence program.

[Chart: How many people say online platforms should be required to disclose AI-generated content. The majority say yes.]

This approach has been publicly touted by leadership at Meta (the company that owns Facebook and Instagram), and a bipartisan bill requiring the labeling of AI-generated content was introduced in Congress in 2023. This tactic has broad appeal among the general public as well: the vast majority of people we surveyed, 93%, said they are in favor of companies being legally required to disclose whenever they use AI-generated content.

[Chart: Whether people think celebrities and public figures should have special laws protecting their likeness from being used by AI models. The majority say yes.]

Another idea with a high level of support is to give celebrities special protections that would make it illegal for AI to use their likenesses without permission. More than four out of every five people (82%) support those kinds of protections being put in place for celebrities.

Should your data be used to train AI?

Celebrities aren’t the only ones who can be harmed by the nonconsensual use of their images and data. But part of what makes public figures so vulnerable to AI is the sheer volume of images, videos, and other media containing their likeness that is available online.

[Chart: How many people think online platforms should be allowed to use user-posted content to train AI models. Most say yes, but with restrictions.]

Even for non-celebrities, social media has the potential to cause problems: many people have spent years cataloging their lives on their favorite platforms, posting everything from written updates to photos, videos, and audio. So should those posts and uploads be fair game for AI tools to use?

The majority of people say yes, but not unconditionally: 72% of people feel that social media platforms should be allowed to use user-submitted content to train artificial intelligence tools, but only if certain conditions are met.

The most popular of these conditions, favored by 45% of people, is that users must opt in or give explicit permission before their content can be used. One-quarter of people do not believe social media companies should be allowed to use user-uploaded content to train AI under any circumstances.

Who should be held accountable when AI is used to break the law?

Of course, even with laws in place outlining how artificial intelligence can and cannot be used, not everyone will follow them. When laws are broken, who do people believe should be held accountable for crimes committed using AI?

[Chart: Who people think should be legally liable for AI-generated content posted online. The majority say the user who posts the content should be legally responsible.]

More than 70% of people say the individual users or companies that create illegal content with AI should bear primary responsibility. 16% believe the sites that host or publish illegal AI-generated content should be liable, while 12% believe responsibility ultimately lies with the creators of the artificial intelligence tools themselves.

Bottom line

New technology always brings new challenges and dangers, and artificial intelligence is no different. But it also brings exciting new possibilities and functionality to the lives of millions, which is why AI tools have gotten so popular so fast.

For anyone who wants to safely explore AI content, here are some tips to protect yourself and your data while using artificial intelligence tools:

  • Establish best practices. Knowing a few baseline tips on how to stay safe online will help make sure your information and devices remain protected, even when connected to public Wi-Fi.
  • Invest in identity theft protection. Identity theft protection services are a great way to monitor your data and information online, alerting you to data breaches and other issues while providing resources for restoration if anything is lost or stolen.
  • Download a VPN for browsing. Virtual private networks (VPNs) can be a great way to keep your personally identifiable information safe. Explore available options and use one of the best VPNs to improve your digital security.

Methodology

To collect the data for this survey, our team at All About Cookies surveyed 1,000 U.S. adults in February 2024 via Prolific. All respondents were U.S. citizens over the age of 18 and remained anonymous.


Author Details
Josh Koebert is an experienced content marketer who loves exploring how tech overlaps with topics such as sports, food, pop culture, and more. His work has been featured on sites such as CNN, ESPN, Business Insider, and Lifehacker.