All About Cookies is an independent, advertising-supported website. Some of the offers that appear on this site are from third-party advertisers from which All About Cookies receives compensation. This compensation may impact how and where products appear on this site (including, for example, the order in which they appear).
All About Cookies does not include all financial or credit offers that might be available to consumers nor do we include all companies or all available products. Information is accurate as of the publishing date and has not been provided or endorsed by the advertiser.
The All About Cookies editorial team strives to provide accurate, in-depth information and reviews to help you, our reader, make online privacy decisions with confidence. Here's what you can expect from us:
- All About Cookies makes money when you click the links on our site to some of the products and offers that we mention. These partnerships do not influence our opinions or recommendations. Read more about how we make money.
- Partners are not able to review or request changes to our content except for compliance reasons.
- We aim to make sure everything on our site is up-to-date and accurate as of the publishing date, but we cannot guarantee we haven't missed something. It's your responsibility to double-check all information before making any decision. If you spot something that looks wrong, please let us know.
Between the rise of ChatGPT and the popularity of DALL-E, artificial intelligence has flooded the internet over the past year. But while many people and companies love using generative AI, not every actor online has good intentions.
The past year has also seen AI-driven scams, an uptick in deepfakes, and illicit AI-generated images of celebrities such as Taylor Swift and Bobbi Althoff. AI-generated scripts and images were even used to trick customers into paying for a deceptive “Willy Wonka chocolate experience.” The world has quickly seen the negative implications of large-scale AI adoption.
In light of these events, our team at All About Cookies wanted to find out how users interact with and feel about AI-generated content and images online, so we surveyed more than 1,000 internet users to find out.
Key findings
- How many people have been fooled? 77% of Americans reported being duped by AI content online.
- What should we do about it?
  - 93% of respondents said companies should be legally required to disclose when they use AI-generated content.
  - 82% said celebrities and public figures should have protections from their likeness being used in AI-made content.
  - Most people (72%) believe content creators should be legally responsible for the AI-made content they post.
- AI and personal data: 97% of Americans said there should be protections against social media companies using users' likenesses or personal data to train AI programs.
  - 45% believe use of user data should be opt-in only.
  - 37% said user data should only be used with exclusions or updated terms of service.
  - 25% said social media companies should never be able to use user data to train AI.
- Who's responsible for negative outcomes from AI-made content?
  - 72% said the users who post content should bear the primary legal responsibility for illegal or illicit AI-generated content.
  - 16% said the sites where AI-made content was shared should be liable.
How many people have been fooled by AI-generated content?
One popular use of AI is to create “original” images, videos, and audio files that impersonate real people, known as deepfakes. Some programs and users have gotten so good at this that it can be difficult to know whether a piece of media was made by a real person.
More than three-quarters of people, 77%, say they have encountered something online that they believed to be from a real person, only to later learn it was generated by artificial intelligence. That shows just how pervasive this kind of AI-generated content has become, and how good AI has gotten at producing content that is largely indistinguishable from work made by real people.
How should the legal system handle AI-generated content?
How do we combat the pervasiveness of AI-generated content? One early idea gaining popularity is to require companies and websites to label any AI-made content they publish as having been created by an artificial intelligence program.
This approach has been publicly touted by leadership at Meta (the company that owns Facebook and Instagram), and a bipartisan bill requiring the labeling of AI-generated content was introduced in Congress in 2023. The tactic has broad appeal among the general public as well: the vast majority of people we surveyed, 93%, said they are in favor of companies being legally required to disclose whenever they use AI-generated content.
Another idea with a high level of support is to give celebrities special protections that would make it illegal for AI to use their likenesses without permission. More than four out of every five people (82%) support those kinds of protections being put in place for celebrities.
Should your data be used to train AI?
Celebrities aren’t the only ones who can be harmed by the nonconsensual use of their images and data. But part of what makes public figures so vulnerable to AI is the sheer amount of images, videos, and other media available online that contain their likeness.
Even for non-celebrities, social media has the potential to cause problems since many people have spent years cataloging their lives on their favorite social media platforms, including posting written and audio-visual content. So should those posts and uploads be fair game for AI tools to use?
The majority of people say yes, but only with conditions. 72% of respondents feel that social media platforms should be allowed to use user-submitted content to train artificial intelligence tools, but only if certain requirements are met.
The most popular of these requirements is that users must opt in or give explicit permission for their content to be used, which 45% of people favor. One-quarter of people do not believe social media companies should be allowed to use user-uploaded content to train AI under any circumstances.
Who should be held accountable when AI is used to break the law?
Of course, even with laws in place outlining how artificial intelligence can and cannot be used, not everyone will follow them. When that happens, who do people believe should be held accountable for crimes committed using AI?
More than 70% of people say the individual users or companies that create illegal content with AI should primarily be held responsible. 16% believe the sites that host or publish illegal AI-generated content should be liable for it, while 12% believe the creators of the artificial intelligence tools themselves should ultimately be held responsible.
Bottom line
New technology always brings new challenges and dangers, and artificial intelligence is no different. But it also brings exciting new possibilities and functionality to the lives of millions, which is why AI tools have gotten so popular so fast.
For anyone who wants to safely explore AI content, here are some tips to protect yourself and your data while using artificial intelligence tools:
- Establish best practices. Knowing a few baseline tips on how to stay safe online will help make sure your information and devices remain protected, even when connected to public Wi-Fi.
- Invest in identity theft protection. Identity theft protection services are a great way to monitor your data and information online, alerting you to data breaches and other issues while providing resources for restoration if anything is lost or stolen.
- Download a VPN for browsing. Virtual private networks (VPNs) can be a great way to keep your personally identifiable information safe. Explore available options and use one of the best VPNs to improve your digital security.
Methodology
To collect the data for this survey, our team at All About Cookies surveyed 1,000 U.S. adults in February 2024 via Prolific. All respondents were U.S. citizens over the age of 18 and remained anonymous.