You can see what the new [App] label will look like here:
https://www.reddit.com/r/redditdev/comments/1s3f3ag/keeping_reddit_human_a_new_app_label_for/
TL;DR:
- Reddit is for people
- “Good bots” will be labeled as [App]
- We’ll continue to remove spam and bad bot activity
- Automated or suspicious accounts may be asked to verify that there’s a human behind them
- We are not doing sitewide human verification
- We don’t need or want your identity
Hi everyone,
The internet feels different lately. It’s getting harder to tell who—or what—you’re interacting with. But Reddit’s purpose is for people to talk to people. And we want it to stay that way.
Our product has always been human conversation: messy, opinionated, sometimes great, sometimes not, but always real (or at least, really creative writing). As AI becomes a bigger part of the internet, we want to make sure that when you’re on Reddit, you know when you’re talking to a person and when you’re not.
So we’re making a few changes.
Our strategy here is to go from the bottom up (i.e., deal with the bots), because on Reddit, you should assume that anyone you’re talking to is a human unless otherwise labeled. A few of the principles behind how we’re approaching this:
- Verifying someone is human is not the same as knowing who they are
- We don’t have or want your real-world identity
- Automated use of Reddit can be useful in some cases (i.e., “good bots”), but we have to be careful
What’s happening
1. Clear labeling for non-human accounts
At the end of last year, we launched verified profiles for brands, publishers, and creators. For professional accounts, being clearly labeled increases transparency and helps their content be accepted in relevant communities.
Next, we’re standardizing how automation shows up on Reddit. Accounts that use automation in allowed ways (what many call “good bots”) will be labeled as [App]. If you see that label, you know you’re interacting with a machine, not a person.
Developers can register their apps to receive this label (there will be more about this in r/redditdev).
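To give developers a rough idea of how the label might surface, here’s a sketch against the public about.json endpoint. The `is_app` field below is a placeholder, not a final schema; treat it as illustration until the real details land in r/redditdev.

```typescript
// Illustrative sketch only (Node 18+ or any fetch-capable runtime).
// The about.json endpoint is real today; `is_app` is a placeholder field,
// not the announced schema for the [App] label.
async function isLabeledApp(username: string): Promise<boolean> {
  const res = await fetch(
    `https://www.reddit.com/user/${encodeURIComponent(username)}/about.json`,
    { headers: { "User-Agent": "app-label-example/0.1" } }, // Reddit asks for a descriptive UA
  );
  if (!res.ok) throw new Error(`about.json request failed: ${res.status}`);
  const { data } = await res.json();
  return data?.is_app === true; // placeholder: check however the label actually ships
}
```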
2. Continued removal of nefarious bots and spam
We hate it as much as you do and already remove the vast majority of it (an average of 100K accounts per day), often before anyone sees it. We’ll continue to remove nefarious bot content, including spam.
3. Human verification for automated or otherwise fishy behavior
If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it. This will be rare and will not apply to most users. Accounts that can’t pass may be restricted.
To be clear, this is not sitewide human verification, let alone sitewide ID verification.
4. Reporting suspected automation
Redditors have long been the best bullshit detectors, and increasingly great Turing testers. We’ll make reporting easier and more flexible (these days, we can infer most issues from a report without a lot of context). I’d also like to count comments from other users calling something out (e.g., “nice post, bot, now fuck off”) as reports, since that’s most users’ preferred reporting method.
Privacy
Between AI reshaping the internet and growing regulation around the world requiring various forms of identity or age verification, we are exploring ways to confirm humanness and comply with these regulations without compromising user privacy. The best long-term solutions will be decentralized, individualized, and private, and ideally won’t require an ID at all.
If we need to verify an account is human, we’ll do it in a privacy-first way. Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn’t have to sacrifice one for the other.
When confirming that there is a human behind an account, we prefer third-party tools that keep a distance between verification and Reddit itself. Any system we use will not expose your real-world identity to Reddit, nor your Reddit username or activity to any third party. There are a handful of ways to do this, and I’m sure there will be more. Each has its tradeoffs:
- Passkeys (well supported by Apple, Google, YubiKey, and various password managers) - These are lightweight, require a human to do something, and don’t require your ID. The tradeoff is that they prove nothing beyond “a human probably did something”; there is no proof of individuality. Nevertheless, it’s a great starting point (see the sketch after this list).
- Third-party biometric services - For example, World ID (yes, the Orb company, though they have non-Orb solutions as well). This technology unlocks proof-of-individuality without requiring your name, government ID, or a centralized database. I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix.
- Third-party government ID services - In some countries, such as the UK and Australia, governments require us to use these. They are the least secure, least private, and least preferred option. When we are forced to use them, we design the integrations so that we never actually see your ID information, so your Reddit data cannot be tied to you.
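To make the passkey option concrete, here is a minimal browser-side sketch using the standard WebAuthn API (navigator.credentials.get). The /verify/challenge and /verify/assert endpoints are hypothetical stand-ins for whichever third-party verifier sits between you and Reddit; the point is that the only thing leaving your device is a signed challenge, not an identity.

```typescript
// Minimal passkey "human check" sketch. The verifier endpoints
// (/verify/challenge, /verify/assert) are hypothetical; the WebAuthn call
// itself is the standard browser API.

// Helper: ArrayBuffer -> base64 for transport.
function bufToB64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...new Uint8Array(buf)));
}

async function confirmHuman(): Promise<boolean> {
  // 1. Fetch a one-time challenge from the verifier (hypothetical endpoint).
  const { challenge } = await fetch("/verify/challenge").then((r) => r.json());

  // 2. Ask the platform authenticator to sign it. This is the step a script
  //    can't silently complete: the OS prompts the person for a biometric or PIN.
  const cred = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: "required", // insist on biometric/PIN, not mere presence
      timeout: 60_000,
    },
  })) as PublicKeyCredential | null;
  if (!cred) return false;

  const resp = cred.response as AuthenticatorAssertionResponse;

  // 3. Return the signed assertion for server-side signature verification.
  //    Note what is NOT in this payload: no name, no government ID, no
  //    Reddit username or activity.
  const verify = await fetch("/verify/assert", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: cred.id,
      authenticatorData: bufToB64(resp.authenticatorData),
      clientDataJSON: bufToB64(resp.clientDataJSON),
      signature: bufToB64(resp.signature),
    }),
  });
  return verify.ok;
}
```

The userVerification: "required" flag is what forces a local biometric or PIN check; that on-device gesture is the “a human probably did something” signal described above, and none of it round-trips through Reddit.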
What about AI-generated content?
There is, of course, the gray area of humans using AI to write. We see it too and agree that it can feel off, but we’re not going to overcorrect on that now, at least at a sitewide level. We’ll monitor its usage and see what happens as we crack down even more on automated accounts. As always, communities can set their own standards if they want.
For better or worse, using AI to write will be part of how people communicate (albeit an annoying part), so our current focus is on ensuring there is a real, live human behind the accounts you’re seeing. Before there was AI slop, there was slop. It’s not a new problem, and it’s one that Reddit, with its voting and moderation system, is better than most at dealing with.
Things are changing quickly, and we’ll adapt as best we can. We welcome any thoughts and criticism.
Thanks,