This Fortune news article reports on a data breach affecting Discord users through a third-party identity verification vendor backed by Peter Thiel, highlighting privacy violations and corporate accountability issues. The reporting acknowledges privacy rights concerns and includes editorial attribution, but structural constraints (paywall access, user tracking) and limited scope of engagement with human rights frameworks result in moderate, mixed signals regarding human rights commitment.
So does this mean Discord is scrapping its new face verification requirement for users, or imply they’re no longer using this 3rd party service (Persona) to do it? The article wasn’t too clear on that.
>Nearly 2,500 accessible files were found sitting on a U.S. government-authorized endpoint, researchers pointed out on X. The files showed Persona conducted facial recognition checks against watchlists and screened users against lists of politically exposed persons.
>Persona performs 269 distinct verification checks, including screening for “adverse media”
I'm sure everyone assumed this, but it's good to have it confirmed.
>And the information was openly available. “We didn’t even have to write or perform a single exploit, the entire architecture was just on the doorstep,”
It is kind of scary how often these situations only come to light because of wild incompetence. You have to imagine that most similar setups don't suffer from the same incompetence (and thus aren't known).
>“At Discord, protecting the privacy and security of our users is a top priority.
Please, I wish companies would just stop repeating this obvious lie. You know you don't care. We know you don't care.
>It’s dystopian that we want people to facedox themselves to everyone to be real online.
...says the CEO of the company that you have to send your face ("facedox", if you will) to.
> According to Discord, only a small number of users were part of this test, in which any information submitted could be stored for up to seven days before it would be deleted.
Ah yes, we only store it for 7 days. During those 7 days, we pass it to Persona, and who knows how long they keep it!
For some reason, Discord has never asked more from me than a verified email address. No phone number or anything else. Maybe I'm being monitored and they don't want to spook me off the honeypot? Half joking..
The damage is already done, though. Discord just burned years of goodwill and trust. I'm in a few Discord communities, and while they aren't moving, I'm not looking to join any more right now because of this whole thing.
Discord already had 70k government IDs breached through age verification last year. Their fix was handing the next batch to a vendor with 2,500 files sitting on a government endpoint.
Teter Piel (don't want to use the other name) has purchased a LOT of influence via lobbyists. One lobbyist is Sebastian Lurz (also not going to use the real name here; the letter "l" is an in-country humorous take on Lüssel, Lasser and so forth, ex-politicians). The superrich buy influence and worsen the situation for the rest of us. This has to stop. The USA is currently under their direct control; this also has to stop. I don't buy Discord's attempt here, though: they 100% knew what they were doing. The only reason they're responding this way is that they alienated and scared their user base with their idea to sniff-invade everyone. It was never about protecting kids in the first place; it was about spying.
What is such a shame is, well, two things: first, that these companies even do this kind of thing (i.e., age verification) at all; and second, that it takes the kind of backlash this event has generated for them to cut ties with these vendors. Apparently it is too much to ask for any corporation to give a damn about who runs or backs another corporation it wants to associate itself with these days.
Does cutting ties with Persona actually take them out of the picture? Whoever they move to can then relay or sell data to Persona. Third-party turtles all the way down. inb4 but they pinky promised...
The appropriate solution would be to send an RTA header [1] from the servers, with the client checking whether parental controls are enabled on the device or in the application. Not perfect, but likely sufficient to protect small children, assuming the account is a child account and the parent enabled parental controls. Teens will always be able to bypass controls, whether local or third party. Teens can share porn, warez, movies and more in rated-G video games with one another and with small children. Or over SFTP/FTP/P2P/S3/HTTPS. Or a million other ways. Have fun playing whack-a-mole.
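The RTA-header scheme suggested above could look something like this minimal sketch. The `Rating` header and the standard RTA label string come from the public RTA spec; `parental_controls_enabled()` is a hypothetical placeholder for whatever OS- or app-level settings API a real client would query.

```python
# Minimal sketch of server-side RTA labeling plus a client-side check.
# Assumption: the client can query local parental-control state; the
# stub below stands in for that platform-specific API.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # standard RTA label string


def server_response_headers(channel_is_adult: bool) -> dict:
    """Server side: tag responses for adult content with the RTA header."""
    headers = {"Content-Type": "application/json"}
    if channel_is_adult:
        headers["Rating"] = RTA_LABEL
    return headers


def parental_controls_enabled() -> bool:
    # Hypothetical stand-in: a real client would ask the OS or the
    # app's child-account settings, not return a hardcoded value.
    return True


def client_should_block(headers: dict) -> bool:
    """Client side: block only when content is rated AND controls are on."""
    return headers.get("Rating") == RTA_LABEL and parental_controls_enabled()
```

The point of the design is that filtering happens locally: the server merely labels content, and no identity data ever leaves the device.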
k-ID is the vendor they were proposing, which did on-device processing. They tried to downplay the initiative by saying all the k-ID data stayed on device.
This was undermined by the fact that they were also trialling a switch to Persona (the vendor in the story), which did not uphold that guarantee. It was horrific optics to reassure people that everything was fine because no data was saved while simultaneously trialling a switch to a vendor that did save data, which I guess is a lot of the reason this vendor switch was cancelled. (Though it does call into question Discord's judgment that they ever thought this was a good idea.)
Anyway, Persona was also breached, which is how the government links were discovered and probably also part of this decision. This is not to be confused with the November breach of 5CA, _another_ vendor they used in the initial UK and Australia rollouts. The fact that two vendors were breached in four months is a good example of why this is a bad idea.
> So does this mean Discord is scrapping its new face verification requirement for users,
No, they’re outsourcing the verification to an external company. Just not this one.
Side note: The verification is only if you want to remove content filters, join adult-themed servers and a couple other features. If you only want to chat with your friends and use voice then no verification is required.
Same for me, and my account is almost a decade old. I think it depends a lot on where you're from and the kind of activity, as I've read stories of people being asked to register a number out of nowhere.
Many servers require you to have one though, due to spam protection. I just don't talk on those.
Each Discord server can decide whether to allow only people with a phone number on file. When you hit one of those, Discord will ask you for your number.
Real-time chat? Great. But entire communities, forums, and wikis moving behind the locked walls of Discord has been a disaster for information discovery.
Don't replace Discord with a similar alternative. Return to open forums and wikis!
Can someone explain to me how Discord got so big in the first place, particularly for non-gaming uses?
I saw this coming a mile away when folks started ditching Slack for Discord - Slack being problematic because a) it was profit-seeking and would use its leverage over your personal data to seek rent, and b) it was antithetical to the open web.
Discord has the exact same two issues so was obviously not a solution.
I think they have been steadily losing their years of goodwill and trust over time. Their client is becoming worse and worse every release, introduced ads, etc... Typical enshittification, it could be worse, but Discord already went from being cool to being tolerable. The age verification thing is just another step on the way down.
Isn't it a good thing? It clearly marks companies like Persona as dangerous and toxic, hopefully making an example that prevents others from working with them.
The bigger shame is that it took Peter Thiel's name to get people outraged about this. Discord handed over their users' identifications to a third party without regard for how they would be used or secured. I don't care if it was backed by Peter Thiel or Mother Teresa - it's a huge problem either way.
And they'll do it again too. They'll find a new partner - one with less baggage - to do the exact same thing and few people will bat an eye.
Thanks. I was curious if someone was going to address the weird use of “CATASTROPHIC” to describe source maps being available for front-end code. It’s already public. Minified is better for the regular user, and should be in production, but it’s like by far the least problematic thing in this article.
Discord isn't scrapping its plans, just assuring people that one of the vendors they trialed in a sub-market isn't one they're moving forward with globally. They've been trying for a multi-vendor solution from the beginning, and k-ID is the vendor they've been much more publicly happy with than Persona.
(Also, that post most notably mentions that the global rollout is delayed in light of some of these vendor verification issues, and that they hope to roll out a few more features to further lessen the need for age verification for many users. One such feature is first-class opt-in "spoiler channels"; some servers had been using age-restricted channels for that purpose rather than opt-in roles and somewhat more complex role-based permissions.)
The problem with Discord is their upcoming IPO, and reconciling the fact that their only valuable asset is their userbase - and their billions of messages - with finding a way to sell that asset and make it valuable to investors.
Article reports factually on a data breach and security vulnerability, presenting information that informs public understanding of corporate practices and security issues. Bylined reporting with attribution suggests editorial standards.
FW Ratio: 57%
Observable Facts
Article carries byline 'By Catherina Gioino' with email contact provided.
Article is dated and timestamped: 'February 24, 2026, 3:02 AM ET'.
Content is categorized under 'Cybersecurity' section, showing editorial organization.
The article reports on privacy and security failures affecting users' personal data and identity verification information, touching on dignity and protection themes underlying the Preamble.
FW Ratio: 60%
Observable Facts
Article headline states 'Discord distances itself from Peter Thiel–backed verification software after its code was found on a Google Cloud endpoint'.
Content describes a third-party vendor breach affecting 70,000 users.
Article discusses identity verification and age-verification practices by Discord.
Inferences
The reporting on data breaches and vendor failures suggests concern for user dignity and protection of personal information.
The focus on corporate responsibility and distancing indicates an implicit framework of accountability relevant to Preamble principles.
Article implicitly references age verification and identity confirmation systems, which relate to protective mechanisms for children and education access, though not directly advocating for educational rights.
FW Ratio: 60%
Observable Facts
Article discusses age-verification vendors used by Discord, with reference to child safety implications.
Page includes responsive CSS and font-loading indicating some accessibility design.
Paywall restricts free access to this information about digital safety.
Inferences
The focus on age-verification systems implies recognition of protections needed for children, aligning with Article 26 protective principles.
The paywall model reduces educational access to information about digital safety practices for broader audiences.
The article discusses privacy breaches and unauthorized collection of identity verification data. Reporting frames data breach as a privacy violation affecting users.
FW Ratio: 57%
Observable Facts
Article reports on a third-party vendor breach affecting 70,000 users' identity data.
Headline emphasizes that verification code was found on a Google Cloud endpoint, suggesting unauthorized exposure.
JSON metadata indicates Mixpanel tracking is implemented on the page.
Content type indicates 'isAccessibleForFree: false', showing paid access model.
Inferences
The breach reporting suggests recognition that user privacy violations are newsworthy and potentially harmful.
Implementation of user tracking via Mixpanel contradicts editorial focus on privacy concerns, creating a negative structural signal.
The paywall restricts wider public awareness of privacy breach reporting, limiting privacy advocacy reach.
Editorial byline present (Catherina Gioino) and publication metadata visible, indicating editorial accountability. Paywall access model and ad tracking reduce structural support for unrestricted information access.
Paywall model ('isAccessibleForFree: false') restricts free access to content about digital safety practices, limiting educational reach. Responsive font design suggests basic accessibility consideration.
Site employs Mixpanel tracking code for user analytics; paywall model restricts transparent access to privacy-related reporting; no visible privacy controls offered to users.
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 11:31:12 UTC