The article reports on a security vulnerability in an AI-generated educational app that exposed personal data of 18,697 users, including minors. The coverage advocates for stronger privacy protections (Article 12) and platform accountability (Article 28-29), while documenting failures in remedy access (Article 8) and exercising robust freedom of expression (Article 19) through multi-perspective reporting.
The hardest part about this stuff is that as a user, you don't necessarily know whether an app is vibe-coded or not. Previously, you could have _some_ reasonable expectation of security, in that trained engineers were the ones building these things, but that's no longer the case.
There's a lot of cool stuff being built, but also as a user, it's a scary time to be trying new things.
I've been thinking a bit about how to do security well with my generated code. I've been using tools that check dependencies for CVEs, static tools that check for SQL injection and similar problems, and baking some security requirements into the specs I hand Claude. I can't tell yet if this is better than what I did before or just theater. It seems like in this case you'd need/want to specify some tests around access.
I'm interested to hear how other people approach this.
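One way to make "tests around access" concrete is to pin the access-control property in a test the agent can't quietly drop. A minimal sketch, with an entirely hypothetical `check_access()` helper (none of these names come from the article):

```python
# Hypothetical access check: allow a request only for an authenticated
# user, where None represents an anonymous caller.
def check_access(user):
    return user is not None and user.get("authenticated", False)

# The security requirement, captured as tests on the actual outcome:
def test_anonymous_is_rejected():
    assert check_access(None) is False

def test_authenticated_is_allowed():
    assert check_access({"id": 1, "authenticated": True}) is True

def test_unauthenticated_account_is_rejected():
    assert check_access({"id": 2, "authenticated": False}) is False
```

Run under pytest or call the functions directly; the point is that the property lives in a test that must pass, not only in the prompt.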
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
The difference is a human is more likely to actually test the output of the change.
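The inverted-logic failure described in the quote is easy to illustrate. In this invented sketch, one stray `not` flips the decision exactly as reported: authenticated users are blocked and anonymous ones get in. Only a test that asserts on the actual outcome catches it:

```python
def is_authenticated(user):
    return user is not None

def allow_request_buggy(user):
    return not is_authenticated(user)  # BUG: one inverted condition

def allow_request(user):
    return is_authenticated(user)      # corrected check

# The buggy version does the opposite of what was intended:
assert allow_request_buggy(None) is True        # anonymous admitted
assert allow_request_buggy({"id": 1}) is False  # logged-in user blocked

# Outcome-level tests expose the flip immediately:
assert allow_request(None) is False
assert allow_request({"id": 1}) is True
```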
Lovable is marketed to non-developers, so their core users wouldn't recognize a security flaw if it flashed red. A lot of my non-dev friends were posting the cool new apps they built on LinkedIn last year [0]. Several were made on Lovable. It's not on their users to understand these flaws.
The apps all look the same with a different color palette, and they make for an engaging AI post on LinkedIn. Now they are mostly abandoned, waiting for the subscription to expire... and for their users' personal data to get exposed, I guess.
Vibe coding democratized shipping without democratizing the accountability. The 18,000 users absorbed the downside of a risk they didn't know they were taking.
One dev of a Lovable competitor pointed me to the rules that are supposed to ensure queries are limited to that user's data. This seems like "pretty please?" to my amateur eyes.
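The worry has a real basis: a restriction expressed only as a client-supplied query filter is advisory, because a hostile client simply omits it. A minimal sketch of the difference (all names invented; real Supabase rules are Postgres row-level-security policies evaluated inside the database, but the enforcement-location principle is the same):

```python
# Toy data store with rows owned by different users.
ROWS = [
    {"owner": "alice", "secret": "a"},
    {"owner": "bob", "secret": "b"},
]

def client_filtered_query(owner=None):
    # The filter is supplied by the client; an attacker just leaves it off.
    return [r for r in ROWS if owner is None or r["owner"] == owner]

def policy_enforced_query(current_user):
    # The rule is applied unconditionally on the server, per request.
    return [r for r in ROWS if r["owner"] == current_user]

assert len(client_filtered_query()) == 2         # "pretty please" ignored
assert len(policy_enforced_query("alice")) == 1  # rule always applies
```

A well-written row-level policy is not "pretty please", precisely because it runs where the attacker can't remove it; the failure mode is forgetting to enable it or writing it wrong.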
Yeah, my trust for new open source projects is in the toilet. Hopefully we will eventually start taking security seriously again after the vibe code gold rush.
Same way you handle preserving any other property you want to preserve while "vibecoding" -- ensure tests capture it, ensure the tests can't be skipped. It really is this simple.
Ask the LLM to create a POC for the vulnerability you have in mind. Last time I did this I had to repeatedly promise the LLM that it was for educational purposes, as it assumed the information was "dangerous".
Developers with decades of experience still make basic security holes. The general public are screwed once they start hosting their own apps and serving on the Internet.
I don't think you know what democracy means. Democracy means that users can reject poorly made apps; if you can't reject or destroy something, it's not a democratic process.
Having someone dump shitty wares onto the public is only democracy if you think being unaccountable is democratic.
The frequency with which I see contemporary apps updating (sometimes multiple times a day) says there's a change in culture that also makes professionals prone to mistakes.
I get that we'll never ship a perfect release, but if you have to push fixes once a day it seems you've lost perspective.
Vibe-coding sloppiness is more acceptable now because we've lowered our standards.
So the problem I'm having is that I don't know what I'm doing vis-à-vis security, so I can't audit my own understanding just by sitting in a chair, but here's what I've been doing.
I'm building a desktop app that has authentication needs because we need to connect our internal agents and also allow the user to connect theirs. We pay for our agents, the user pays for theirs (or pays us to use ours, etc.). These are, relatively speaking, VERY SIMPLE PROBLEMS; nevertheless, agents are happy to consume and leak secrets, or break things in much stranger ways, like hooking the wrong agent up to the wrong auth, which would have charged a user for our API calls. That seemed very unlikely to me until I saw it.
So far what has "worked" (made me feel less anxious, aside from the niggling worry that this is theater) is:
1. Having a really strong and correct understanding of our data flows. That's not about security per se, so at least I can be OK at it. This allows me to...
2. Be aggressive and paranoid about not handling authentication at all, where it can be helped. Where I actually do handle authentication, the surface is as minimal as possible (and you should have some reasonable way to prove that to yourself). Done right, the space is small enough to reason about.
How do I do 1 & 2 while not knowing anything? Painfully and slowly, and by reading. The web agents are good if you're honest about your level of knowledge and you ask for help in the form of sources to read. It's much more effective than googling. Ask, read what the agents say, press them for good recommendations for YOU to read, not for a generic reader. Then go out and read those sources. Have I learned enough to supervise a frontier model? No. Absolutely not. Am I doing it anyway? Yes.
The article exercises and advocates for freedom of expression and information by reporting on a security incident with multiple perspectives. Direct quotes from the researcher, journalist reporting, and company response demonstrate open discourse.
FW Ratio: 60%
Observable Facts
The article includes direct quotes from researcher Taimur Khan describing the vulnerabilities and criticizing Lovable.
The article includes direct quotes from Lovable CISO Igor Andriushchenko providing the company's perspective and response.
The article is written by journalist Connor Jones and published by The Register, a technology news publication with editorial independence.
Inferences
The inclusion of multiple stakeholder perspectives demonstrates exercise of freedom of expression and information rights.
The publication of critical reporting alongside official company response shows editorial commitment to presenting diverse viewpoints.
The article extensively documents and advocates for privacy rights by exposing a major data breach. It demonstrates how platform failures enabled unauthorized access to personal information, framing privacy as a critical right requiring protection and accountability.
FW Ratio: 60%
Observable Facts
The article documents that 18,697 user records were exposed, including 14,928 unique email addresses, 4,538 student accounts, and 870 users with full personally identifiable information.
Khan states: 'an unauthenticated attacker could trivially access every user record, send bulk emails through the platform, delete any user account, grade student test submissions, and access organizations' admin emails.'
The article quotes Khan criticizing Lovable for not taking responsibility for apps it 'showcases to 100,000 people' and 'leaking user data.'
Inferences
The article advocates for privacy rights by documenting how platform and developer failures enabled unauthorized access and retention of personal data.
The article frames data exposure as a violation of user privacy that should trigger platform accountability and remedial action.
The article advocates for a social and international order where security responsibility is built into platform structures. Khan argues Lovable should take responsibility for apps it hosts, and the article frames this as necessary for establishing accountable systems.
FW Ratio: 67%
Observable Facts
Khan states: 'If Lovable is going to market itself as a platform that generates production-ready apps with authentication included, it bears some responsibility for the security posture of the apps it generates and promotes.'
Khan criticizes Lovable for not implementing 'a basic security scan of showcased applications,' arguing responsibility is mutual.
Inferences
The article advocates for a social order where platforms bear responsibility for the security of infrastructure they host and promote.
The article frames security responsibility as a mutual duty between platforms and users. Khan argues Lovable should take responsibility; Lovable counters that users have a duty to implement recommendations. The article presents this tension between competing duties.
FW Ratio: 67%
Observable Facts
Khan argues Lovable should take responsibility and implement security measures as part of its platform responsibilities.
Lovable CISO responds: 'Ultimately, it is at the discretion of the user to implement these recommendations. In this case, that implementation did not happen.'
Inferences
The article frames security as a shared responsibility, with both platforms and users bearing duties to protect data and implement protections.
The article advocates against systems and practices that would destroy or undermine users' fundamental rights. It argues platforms should prevent vulnerabilities that enable rights destruction through unauthorized data access.
FW Ratio: 67%
Observable Facts
The article documents how platform vulnerabilities enabled attackers to access user records, send bulk emails, delete accounts, and modify grades—actions that directly destroy user privacy, security, and autonomy.
Khan argues Lovable should implement security measures to prevent such rights-destroying vulnerabilities from being published at scale.
Inferences
The article advocates for preventing platforms from enabling practices that destroy users' fundamental privacy and security rights.
The article documents a breach of the human dignity and security the Preamble calls for, but does not explicitly engage with the foundational values of the UDHR.
FW Ratio: 50%
Observable Facts
The article reports a security incident affecting 18,697 user records, including personal information of students, teachers, and minors.
Inferences
The exposure of personal data suggests institutional failure to protect the human dignity and security the Preamble emphasizes.
The article documents unequal vulnerability: some users (those with exposed PII) lack the protection others enjoy. It does not explicitly frame this as an equal-rights violation.
FW Ratio: 50%
Observable Facts
The article states 870 users had full personally identifiable information exposed, while others' data exposure varied.
Inferences
The uneven data exposure suggests unequal protection of user dignity and rights.
The article acknowledges users' identities were exposed (870 with full PII, plus email addresses) but does not frame this as a violation of the right to recognition as a person.
FW Ratio: 50%
Observable Facts
The article specifies that exposed data included 14,928 unique email addresses and 870 users with full personally identifiable information.
Inferences
The exposure of personal identifying information undermines the right to have one's identity protected.
The article identifies that minors from K-12 institutions were on the exposed platform, acknowledging implications for children, but does not explicitly frame data exposure as a violation of family/child protection rights.
FW Ratio: 50%
Observable Facts
The article states the affected platform included 'K-12 institutions with minors likely on the platform.'
Inferences
The exposure of minors' data undermines family and child protection rights, though this is not the article's primary framing.
The article acknowledges the educational context of affected users (exam platform for teachers and students) but does not frame data exposure as a violation of economic/social/cultural rights to education.
FW Ratio: 50%
Observable Facts
The article identifies the affected app as 'a platform for creating exam questions and viewing grades' used by teachers and students, including those from UC Berkeley, UC Davis, and K-12 institutions.
Inferences
The vulnerability in an educational platform undermines the right to participate in economic, social, and cultural life through safe access to education.
The article implicitly addresses developer rights by noting the vulnerability was AI-generated, but does not explicitly frame security responsibility as affecting workers' right to favorable working conditions.
FW Ratio: 50%
Observable Facts
The article discusses AI-generated code vulnerabilities and notes Lovable positions itself as enabling developers to build apps.
Inferences
The prevalence of AI-generated security vulnerabilities undermines developers' ability to exercise work rights under favorable security conditions.
The article discusses AI/code generation and platform accessibility but does not explicitly frame security vulnerabilities as impeding the right to participate in cultural/scientific life or share in scientific progress.
FW Ratio: 50%
Observable Facts
The article references Lovable's 'vibe-coding' platform and AI-generated code, positioning the platform as enabling participation in development culture.
Inferences
Widespread vulnerabilities in AI-generated code may undermine developers' ability to safely participate in and contribute to technological culture and scientific advancement.
The article describes a vulnerability that directly threatened user security—unauthenticated attackers could access records, delete accounts, modify grades—but frames this primarily as a technical/responsibility issue rather than a security rights violation.
FW Ratio: 67%
Observable Facts
The article states 18,697 user records were exposed, with attackers able to trivially access every user record, send bulk emails, delete accounts, and modify grades.
Lovable's platform is described as hosting apps that could be showcased to 100,000 viewers without adequate security verification.
Inferences
The data exposure and unauthorized access threats directly undermine user security and safety rights protected by Article 3.
The article identifies the affected platform as educational and notes minors were affected, acknowledging educational context. However, it does not explicitly frame data exposure in an educational platform as a violation of the right to participate in cultural life and enjoy scientific benefits.
FW Ratio: 50%
Observable Facts
The article states the affected app was 'a platform for creating exam questions and viewing grades' and that 'the userbase is naturally comprised of teachers and students' from universities and K-12 institutions.
Inferences
The security breach in an educational platform undermines students' right to participate safely in cultural and educational life.
The article documents a failure of effective remedy: Khan reported vulnerabilities via support and 'his ticket was reportedly closed without response.' This demonstrates inadequate access to remedial mechanisms for affected users.
FW Ratio: 67%
Observable Facts
According to the article, after Khan reported findings via Lovable's support, 'his ticket was reportedly closed without response.'
Lovable's CISO later clarified they received the 'proper disclosure report' on February 26 and acted within minutes, but this does not address the initial ticket closure.
Inferences
The initial lack of response to the vulnerability report suggests affected users lacked effective channels to seek remedy for data exposure.
Phrases like 'riddled with vulnerabilities,' 'spewing glitzy-looking apps laden with vulnerabilities,' and 'vibe-coding' (used derisively) carry editorial framing rather than neutral reporting.
Causal oversimplification
The article simplifies the responsibility question by focusing on Lovable's role, though the CISO's counterpoint (developers must implement recommendations, external code involved, database not Lovable-hosted) complicates causation.