On Monday, May 19, 2025, President Donald Trump signed into law the Take It Down Act (TIDA), a bipartisan piece of legislation introduced by Senators Ted Cruz (R) and Amy Klobuchar (D) and championed by First Lady Melania Trump. The TIDA criminalizes the use of artificial intelligence and deepfakes to create nonconsensual intimate imagery, also known as “revenge porn,” online. The law's criminal provisions take effect immediately, while platforms have one year to comply with its takedown requirements.[1]
While the idea behind the act is noble, its broad language has many free speech advocates and digital rights groups worried that loose definitions will lead to censorship and violations of First Amendment rights. The bill requires online platforms to be able to monitor all content, including encrypted content such as private messages, so that a platform can remove reported content within the mandatory 48-hour window.
While all of this may sound like a good idea, digital privacy and security experts are concerned that there is a potential for widespread misuse and abuse, as well as potential ramifications for creators who earn a living online. Let’s take a look.
1. Overly broad interpretations
2. Lack of safeguards
3. Increased dependence on automated filters
4. Time constraints and no verification
5. Threat to encryption
6. Potential for abuse
7. Restructuring of ad revenue and platform payments
What does the Take It Down Act aim to do?
FAQs
Bottom line: Can the TIDA threaten free expression, user privacy, and due process?
7 reasons digital rights groups are concerned about the Take It Down Act
The idea behind the TIDA is something most Americans can agree on: making it illegal to post illicit images of someone else online without their consent. However, digital rights groups believe the broadly worded legislation also leaves room for abuse and false reporting. There is also concern that, as written, it could lead to censorship of adult content, LGBTQ content, and criticism of the government.
Because the law gives platforms only 48 hours to act on each report, they will scramble to develop compliant removal systems. There are various ways this could backfire, and given the TIDA's sloppy language, takedown logistics aren't the only issue we now have to contend with.
1. Overly broad interpretations
While proponents of the legislation have argued that it will help keep children safe by quickly removing nonconsensual intimate images (NCII), the actual language is much broader. Anyone can report images they see online, and there are no guidelines as to what happens to someone making a false report or acting maliciously toward a creator they simply dislike.[2][3]
This means that any intimate image, no matter how mild, could be flagged simply because a viewer doesn't like it. Now consider the comments section of anything on the internet and ask whether at least one person has a problem with the image being discussed. It's easy to find at least one disgruntled person, and that person now has the power to upend an entire account.
2. Lack of safeguards
Per the Take It Down Act, the report is only required to provide:
“[A] brief statement that the identifiable individual has a good faith belief that any intimate visual depiction identified under clause (ii) is not consensual, including any relevant information for the covered platform to determine the intimate visual depiction was published without the consent of the identifiable individual.”[4]
In other words, “I promise I’m not lying, and I don’t like this image.”
The entire premise of reporting hinges on people on the internet acting honestly and in good faith. If someone reports an image that is consensual or approved by its subject, the legislation outlines no repercussions for the reporter.
3. Increased dependence on automated filters
Since there are no provisions for how a platform must police its images, we don't yet know how reporting will be handled. Broadly, platforms can rely on human evaluation or on automation, and the most efficient alternative to human evaluation is increased dependence on AI and automated filters. The problem with automated filters is that they scan for specifics, such as a percentage of exposed skin or two people kissing, rather than considering the context of the piece.
This has the potential to include fan art of your favorite book characters sharing their first kiss, or a content creator showcasing the newest swimwear line by your favorite designer. Yes, that is how broad the definitions are in the bill. And that’s why privacy advocates are concerned.
4. Time constraints and no verification
Once the takedown requirements are in force, online platforms will have 48 hours to remove reported images. This deadline is extremely tight, and since no actual verification is required to report an image, it could lead to broad content deletion across sites. While no site has released its plan for handling reports, the TIDA itself doesn't outline any verification requirements for individuals who want to flag images.
5. Threat to encryption
The TIDA requires platforms to be able to review ALL content anywhere on their services, which could include encrypted messaging. WhatsApp already lets users report private chats, which forwards the reported messages to moderators, a carve-out that critics say undercuts its end-to-end encryption promises. Digital privacy experts are concerned that this type of broad surveillance could spread to other encrypted platforms, and some apps may even be forced to abandon encryption altogether.
6. Potential for abuse
As we’ve stated, the potential for misusing this new law is far-reaching. From dishonest reporting to government surveillance of previously encrypted messaging and beyond, the lack of specific language in the TIDA leaves room for widespread abuse. Only time will tell how America’s online landscape will be reformed by this law. Still, until it’s refined and provides stronger provisions to address its gaps, there is potential for censorship and free speech violations.
7. Restructuring of ad revenue and platform payments
While the TIDA doesn't spell out fines for individual users, there are still legal repercussions for anyone whose posted image is flagged and removed. This has the potential to shake up ad revenue and payment structures on a variety of social media platforms. Whether a platform hires additional staff or uses demonetization as a punishment for violations, both stand to be an issue.
Let’s say, for example, a BookTok reviewer shares a piece of fan art from a favorite book they’ve just read. The fan art may be suggestive in nature — two people sharing a kiss or an intimate, steamy scene from a book. If even one person who views the image gets upset and reports it, the platform could demonetize the creator’s post or even their account.
Another possibility is that platforms hire more actual humans to evaluate reported images. While we don't think this is likely, it could reduce the amount of money a platform allocates to creators. Creator funds could shrink significantly if a platform needs to reallocate money to new employees.
What does the Take It Down Act aim to do?
Having laws on the books that punish people for creating and sharing NCII, especially if those images depict minors, is a good thing. Regular reports detail how teenagers are dying by suicide after being blackmailed with the release of sexual images, an epidemic that is both horrific and sad.[5] Proponents of this legislation say their main goal is to prevent this kind of exploitation and protect children and other vulnerable people from having potentially damaging images shared about them online.
Free speech advocates and digital privacy experts, on the other hand, are concerned that the bill leaves too much room for interpretation and may serve as a way to conceal government censorship under the guise of “protecting children.”
To critics, the new law echoes Putin's meme ban, enacted over a decade ago, which made it illegal to represent any public figure online in a mocking fashion.[6]
President Trump made this statement about the act in his address to Congress:
“The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”[7]
Digital privacy experts question whether his goals are actually about protecting children or about silencing critics.
Key points and objectives of the Take It Down Act
- Protect victims of nonconsensual intimate imagery (NCII) and AI “deepfakes”
- Criminalize the publication of NCII, including AI-generated NCII
- Criminalize the threat of publication of NCII, including AI-generated NCII, on social media websites and messaging services
- Require online platforms to remove such content within 48 hours of notification from a victim
- Permit good faith disclosure of NCII, such as to law enforcement
- Allow anyone to report NCII, with no proof required and no ramifications for false reporting
- Require all platforms to have access to all content at all times, regardless of security or encryption
- Create severe penalties for websites and social media platforms that don’t adhere to the law
FAQs
What is a deepfake bill?
A deepfake bill is legislation proposed in Congress to regulate the use of deepfakes: realistic images and videos that copy a person's appearance and place them in situations that never happened. Deepfakes range from fabricated political statements to fake product or service endorsements to nonconsensual adult content.
Did President Trump say he would use the Take It Down Act for himself?
In his address to Congress, President Trump stated that he would use the act for his own purposes. Digital privacy experts are concerned that the administration is using the protection of children and victims of sexually explicit content as a means to introduce censorship, and critics argue that signing the TIDA into law gives him more power to pursue his political opponents.
Why is the TIDA legislation controversial?
While the Take It Down Act itself purports to have good intentions toward protecting victims of sexual exploitation online, digital rights groups believe its language is too broad not to have overreaching ramifications that will inhibit freedom of speech and increase government censorship. Because platforms will need to take down anything reported within a 48-hour window, encrypted messaging apps or email may be affected.
Bottom line: Can the TIDA threaten free expression, user privacy, and due process?
Yes, online security experts and digital rights groups contend that the TIDA will threaten free expression, user privacy, and due process. Given the narrow window for removing content, companies will need systems that can quickly evaluate reported content and determine whether it actually violates the law or their terms and conditions.
We don’t believe platforms will double or triple their compliance teams with real human evaluators, so the job will likely be left to automation and subpar AI that cannot distinguish between harmful and authorized images.
While we don’t know how this will play out in the coming months and years, what we do know is that you’ll need increased security online. Digital privacy software, such as virtual private networks (VPNs), encrypts data, which can help mitigate some of the encryption and security losses resulting from this new law.
You'll also want to read the updated terms and conditions your favorite platforms will likely roll out. Given the potential repercussions, it pays to know what you can and can't do to avoid prosecution.
[1] ICYMI: President Trump Signs TAKE IT DOWN Act into Law
[2] What They’re Saying | The TAKE IT DOWN Act
[3] Congress Passes TAKE IT DOWN Act Despite Major Flaws
[5] These teenage boys were blackmailed online – and it cost them their lives
[6] Russia Bans Some Internet Memes That Mock Public Figures
[7] Trump Calls On Congress To Pass The “Take It Down” Act—So He Can Censor His Critics