46 points by muskanshafat 5 days ago | 22 comments on HN
Moderate positive
Contested
Editorial · v3.7 · 2026-02-26 02:31:36
Summary
Digital Governance & Transparency Advocates
This Substack article by Muskan Shafat reframes technology policy discourse by prioritizing data governance and institutional practices over AI hype, analyzing real-world implementations at city level and examining policy gaps evident at the India AI Summit. The piece engages positively with information access rights (free publication), freedom of expression through critical analysis, and public understanding of technology systems. The content demonstrates modest advocacy for transparent, functional data governance as a human rights concern.
>Here’s why this changes everything: most AI accountability frameworks assume a discrete, auditable dataset. EU’s GDPR gives you the right to erasure — the right to delete your data. But GDPR was written for databases. The Ontology is a graph. You can delete a node. You can’t easily delete the edges i.e, the inferred relationships between you and everything else the system has connected you to.
Edges are personal data according to GDPR so this is completely wrong. Almost all things to which the GDPR applies are edges.
'impossiblefork likes stories' is an edge.
Ontologies are also old. It's been a big research area since like the 90s.
The idea that it's harder to query and delete everything relating to a person from a well-organized graph than from the typical corporate patchwork of data systems seems very improbable. The post also reads like a barely tweaked Gemini output. I'm not a Palantir fan, but this feels flimsy.
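To make the parent's point concrete: in a triple-store-style graph (a sketch with made-up names, not Palantir's actual schema), erasing everything that mentions a person is a single pass over the edges, which is arguably easier than chasing that person across a patchwork of disconnected corporate systems.

```python
# Toy edge list of (subject, relation, object) triples, the shape an
# RDF-style ontology stores. Names are illustrative, not Palantir's schema.
edges = [
    ("impossiblefork", "likes", "stories"),
    ("impossiblefork", "comments_on", "HN"),
    ("alice", "likes", "stories"),
]

def erase_subject(triples, person):
    """GDPR-style erasure on a well-organized graph: one pass drops
    every triple that mentions the person -- node and edges alike."""
    return [t for t in triples if person not in (t[0], t[2])]

remaining = erase_subject(edges, "impossiblefork")
assert remaining == [("alice", "likes", "stories")]
```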
the legal question is settled. edges are personal data under gdpr. the practical question is who audits a knowledge graph to verify deletion actually happened. palantir knows the answer is nobody.
it's funny because you'd think these trillion parameter models trained on the entirety of humanity's written works would be amazing at writing, but instead all the models just converge to the same tired overly-enthusiastic phrases
Fair correction - I should have been more precise.
The point I was reaching for is a practical enforcement one: there is no standardized technical mechanism for verifying that edges have actually been deleted from an opaque, continuously updated knowledge graph. Regulators have audit powers, but no established standard covers graph deletion verification, i.e., confirming that the relational inferences are gone, not just that a node was removed. Controllers can assert compliance in ways that are genuinely difficult to challenge in practice.
The underlying research is mine (and something I am genuinely passionate about), but I did run it through an LLM to smooth the flow. Not something I will do again. Thanks for the feedback!
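For what it's worth, the verification check itself is trivial once an auditor has the graph in hand; the missing piece is standardized access to an export like the one this hypothetical sketch assumes.

```python
def audit_erasure(triples, subject):
    """Scan an exported (subject, relation, object) dump for residual
    references to a supposedly erased subject. Returns the offending
    triples; an empty list means the erasure checks out. Illustrative
    only: real deployments rarely expose such a dump to a regulator."""
    return [t for t in triples if subject in (t[0], t[2])]

dump = [
    ("alice", "likes", "stories"),
    ("bob", "flagged_by", "risk_model"),
]
assert audit_erasure(dump, "impossiblefork") == []  # erasure verified
assert audit_erasure(dump, "bob") == [("bob", "flagged_by", "risk_model")]
```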
Fair point. I realize I oversimplified. My main argument is that the Ontology isn't a clean internal graph: it ingests from sources Palantir doesn't own or control, so deleting your node doesn't touch the upstream data. And inferred edges (risk scores, behavioral patterns) were never stored as discrete objects. You can't delete an inference.
And, I will hold my hand up to say I did use an LLM (Claude, actually). But only to make the text read and flow better (something I definitely won't do again). The underlying research is my own and something I am very passionate about. Thank you for your feedback! I appreciate it. :)
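The point about inferences can be shown in a few lines (hypothetical names and numbers, nothing to do with the real Ontology): once an inference is materialized as a derived value, deleting the source node doesn't retract it.

```python
# Per-person signals feeding a materialized inference.
signals = {"alice": 3, "bob": 1, "carol": 2}

# The system computes and stores a value derived from everyone's data.
neighborhood_score = sum(signals.values()) / len(signals)  # 2.0

# Honor bob's node-level erasure request...
del signals["bob"]

# ...but the stored inference still encodes his contribution.
assert "bob" not in signals
assert neighborhood_score == 2.0  # unchanged: the inference survives deletion
```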
Thank you for your new substack, very illuminating (and scary) reading.
It's as if someone used the show "Person of Interest" as a guidebook for how to build (and weaponize) the all-seeing Eye of Sauron we have today.
Article examines data governance and technology policy (Palantir Ontology, India AI Summit), implying engagement with information systems and public discourse about tech. Title reframes tech anxiety, suggesting critique of mainstream narratives. Encourages readers to think critically about institutional data practices
FW Ratio: 50%
Observable Facts
Article is freely accessible without paywall
Content examines institutional data systems (Palantir, city governments, AI policy)
Author identified by name (Muskan Shafat) with social media link (Twitter)
Inferences
Free access model supports information dissemination rights central to Article 19
Focus on data governance critique suggests engagement with systemic issues affecting information flows and surveillance
Individual authorship and byline enable attribution and accountability for published information
Article examines data literacy, technology governance, and institutional practices around AI/data—components of informed participation in technological society. Discussion of how cities use data well implies evaluation of technology for human benefit
FW Ratio: 60%
Observable Facts
Article focuses on data governance practices and technology policy
Subtitle mentions evaluation of cities 'using data well' implying standards for beneficial technology use
Publication tagged as 'Exploring the Frontier of Data, Tech, and Society' suggests educational mission
Inferences
Emphasis on data governance and policy contributes to public understanding of technology systems affecting society
Evaluation of institutional practices promotes informed citizenry regarding technological systems
Article examines cultural and technical dimensions of data systems and AI governance, including policy landscape (India AI Summit). Engages with knowledge systems and institutional practices
FW Ratio: 60%
Observable Facts
Article discusses India AI Summit and what it 'didn't say' suggesting analytical engagement with policy discourse
Content examines institutional approaches to data and technology governance
Author bio indicates 'Learning and unlearning on loop' suggesting engagement with knowledge production
Inferences
Analysis of policy gaps contributes to shared cultural understanding of technology governance
Examination of institutional practices engages with collective knowledge about tech and society
Title frames AI as less urgent concern than underlying data governance and infrastructure issues; suggests reframing of tech anxiety toward institutional practices
FW Ratio: 67%
Observable Facts
Article headline is 'The AI Is the Last Thing to Worry About'
Subtitle references Palantir's Ontology, city-level data use, and India AI Summit
Inferences
Title implies deprioritization of AI concerns in favor of other systemic issues, which could reflect broader concern for human dignity and collective welfare
Article focuses on data governance and surveillance infrastructure (Palantir Ontology, city-level data use) which relates to privacy. However, framing emphasizes institutional data practices rather than privacy protection explicitly
FW Ratio: 67%
Observable Facts
Subtitle references 'Palantir's Ontology' which is data infrastructure software
Content discusses 'two cities using data well' indicating municipal data governance focus
Inferences
Emphasis on data governance infrastructure could suggest concern with privacy invasion through surveillance systems, though this is not explicitly stated in available text
Privacy Policy
—
Substack privacy policy standard; no on-domain disclosure visible in page content
Terms of Service
—
Substack terms standard; no specific editorial guidelines visible
Identity & Mission
Mission
+0.15
Article 19 Article 27
Publication tagline 'Exploring the Frontier of Data, Tech, and Society' suggests alignment with information access and knowledge; modest positive modifier applied
Editorial Code
—
No editorial code of conduct visible on-domain
Ownership
—
Independent Substack newsletter; no corporate ownership signals that would affect HRCB
Access & Distribution
Access Model
+0.10
Article 19 Article 26
Article marked as free/open access ('isAccessibleForFree':true); supports information access rights
Ad/Tracking
—
No explicit ad or tracking disclosure visible; Substack standard platform behavior
Accessibility
—
Standard Substack article format; no explicit accessibility statements observed
Published on Substack as open-access article (isAccessibleForFree=true), enabling free dissemination of information and analysis. Platform enables individual voice and publication without institutional gatekeeping
Substack platform allows community engagement through comments and interaction; note indicates 10 subscribers, suggesting low but present community formation
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 13:57:54 UTC