Model Comparison
Model                                          Editorial  Structural  Class          Conf  SETL   Theme
@cf/meta/llama-3.3-70b-instruct-fp8-fast lite     0.00       ND       Neutral        0.90   0.00  Tech Development
@cf/meta/llama-4-scout-17b-16e-instruct lite      0.00       ND       Neutral        0.90   0.00  Technology
deepseek/deepseek-v3.2-20251201                  +0.17      +0.09     Mild positive  0.14         Digital Infrastructure
claude-haiku-4-5-20251001                        +0.15      +0.09     Mild positive  0.33  -0.02  Knowledge Sharing & Digital Collaboration
meta-llama/llama-3.3-70b-instruct:free             ND         ND
Section-level scores by model (ND = no data). Columns: Section · @cf/meta/llama-3.3-70b-instruct-fp8-fast lite · @cf/meta/llama-4-scout-17b-16e-instruct lite · deepseek/deepseek-v3.2-20251201 · claude-haiku-4-5-20251001 · meta-llama/llama-3.3-70b-instruct:free
Preamble ND ND ND ND ND
Article 1 ND ND ND ND ND
Article 2 ND ND ND ND ND
Article 3 ND ND ND ND ND
Article 4 ND ND ND ND ND
Article 5 ND ND ND ND ND
Article 6 ND ND ND ND ND
Article 7 ND ND ND ND ND
Article 8 ND ND ND ND ND
Article 9 ND ND ND ND ND
Article 10 ND ND ND ND ND
Article 11 ND ND ND ND ND
Article 12 ND ND ND ND ND
Article 13 ND ND ND ND ND
Article 14 ND ND ND ND ND
Article 15 ND ND ND ND ND
Article 16 ND ND ND ND ND
Article 17 ND ND ND ND ND
Article 18 ND ND ND ND ND
Article 19 ND ND ND 0.38 ND
Article 20 ND ND ND 0.14 ND
Article 21 ND ND ND ND ND
Article 22 ND ND ND ND ND
Article 23 ND ND ND ND ND
Article 24 ND ND ND ND ND
Article 25 ND ND ND 0.28 ND
Article 26 ND ND ND 0.32 ND
Article 27 ND ND ND 0.39 ND
Article 28 ND ND ND ND ND
Article 29 ND ND ND ND ND
Article 30 ND ND ND ND ND
Show HN: PgDog – Scale Postgres without changing the app (github.com) · Editorial +0.15 · Structural +0.17
324 points by levkk 6 days ago · 64 comments on HN · Mild positive (Mixed) · v3.7 · 2026-02-26 03:26:27
Summary Knowledge Sharing & Digital Collaboration Advocates
The pgdog GitHub repository is a public, freely accessible open-source database tool that exemplifies advocacy for knowledge sharing, scientific progress, and collaborative participation. The content demonstrates strong positive alignment with Articles 19–20 (freedom of expression, association, and information access) and Articles 26–27 (education and cultural participation), plus structural support for equitable access through GitHub's accessibility features and open-source distribution model. However, structural constraints on intellectual-property ownership (Article 17) and privacy concerns around platform analytics tracking (Article 12) create modest counterbalancing signals.
Article Heatmap
Preamble: ND · Articles 1–18: ND · Article 19: +0.38 (Freedom of Expression) · Article 20: +0.14 (Assembly & Association) · Articles 21–24: ND · Article 25: +0.28 (Standard of Living) · Article 26: +0.32 (Education) · Article 27: +0.39 (Cultural Participation) · Articles 28–30: ND
Negative Neutral Positive No Data
Aggregates
Editorial Mean +0.15 Structural Mean +0.17
Weighted Mean +0.30 Unweighted Mean +0.30
Max +0.39 Article 27 Min +0.14 Article 20
Signal 5 No Data 26
Volatility 0.09 (Low)
Negative 0 Channels E: 0.6 S: 0.4
SETL -0.02 Structural-dominant
FW Ratio 55% 41 facts · 34 inferences
Evidence 33% coverage
5H 8M 2L 16 ND
Theme Radar
Foundation: 0.00 (0 articles) · Security: 0.00 (0 articles) · Legal: 0.00 (0 articles) · Privacy & Movement: 0.00 (0 articles) · Personal: 0.00 (0 articles) · Expression: 0.26 (2 articles) · Economic & Social: 0.28 (1 article) · Cultural: 0.35 (2 articles) · Order & Duties: 0.00 (0 articles)
HN Discussion 20 top-level · 21 replies
mijoharas 2026-02-23 17:29 UTC link
Happy pgdog user here, I can recommend it from a user perspective as a connection pooler to anyone checking this out (we're also running tests and positive about sharding, but haven't run it in prod yet, so I can't 100% vouch for it on that, but that's where we're headed.)

@Lev, how is the 2pc coming along? I think it was pretty new when I last checked, and I haven't looked into it much since then. Is it feeling pretty solid now?

cpursley 2026-02-23 17:53 UTC link
Looks great - I'd love to include it in https://postgresisenough.dev (just put in a PR: https://github.com/agoodway/postgresisenough?tab=readme-ov-f...)
noleary 2026-02-23 18:19 UTC link
> If you build apps with a lot of traffic, you know the first thing to break is the database.

Just out of curiosity, what kinds of high-traffic apps have been most interested in using PgDog? I see you guys have Coinbase and Ramp logos on your homepage -- seems like fintech is a fit?

jackfischer 2026-02-23 18:37 UTC link
Congrats guys! Curious how the read write splitting is reliable in practice due to replication lag. Do you need to run the underlying cluster with synchronous replication?
I_am_tiberius 2026-02-23 18:42 UTC link
I really hope to use the sharding feature one day.
codegeek 2026-02-23 18:54 UTC link
Stupid question, but does this shard the database as well, or do we shard manually and then set up the configuration accordingly?
saisrirampur 2026-02-23 19:11 UTC link
Great progress, guys! It’s impressive to see all the enhancements - more types, more aggregate functions, cross-node DML, resharding, and reliability-focused connection pooling and more. Very cool! These were really hard problems and took multiple years to build at Citus. Kudos to the shipping velocity.
cuu508 2026-02-23 19:46 UTC link
Some HTTP proxies can do retries -- if a connection to one backend fails, it is retried on a different backend. Can PgDog (or PgBouncer, or any other tool) do something similar -- if there's a "database server shutting down" error or a connection reset, retry it on another backend?
mosselman 2026-02-23 20:41 UTC link
I see the word 'replication' mentioned quite a few times. Is this managed by pgdog? Would I be able to replace other logical replication setups with pgdog to create a High Availability cluster?

Do you have any write up on how to do this?

array_loader 2026-02-23 20:54 UTC link
(apologies for new account - NDA applies to the specifics)

Nice surprise to see this here today. I was working on a deployment just last week.

Unfortunately for me, I found that it crashed when doing a very specific bulk load (COPY FORMAT BINARY with array columns inside a transaction). The process loads around 200MB of array columns (in the region of 10K rows) into a variety of tables. Very early in the COPY process PgDog crashes with :

"pgdog router error: failed to fill whole buffer"

So it appears something is not quite right for my specific use case (COPY with array columns). I'm not familiar enough with Rust but the failed to fill whole buffer seemed to come from Rust (rather than PgDog) based on what little I could find with searches.

I was very disappointed as it looked much simpler to get set up and running than PgPool-II (which I have had to revert to as my backup plan - I'm finding it more difficult to configure, but it does cope with the COPY command without issues).

I would have preferred to stick with PgDog.

oulipo2 2026-02-23 23:02 UTC link
How do you know when/if it's justified to add additional complexity like PgDog?

Is there a number of simultaneous connection / req per sec that's a good threshold?

Is it easy on my postgres instance to get the number of simultaneous connections, for instance if I simulate traffic, to know if I would gain anything from a connection pooler?

lordofgibbons 2026-02-23 23:33 UTC link
This looks great! I have a couple of questions:

1) Is it possible to start off with plain Postgres and add pgdog without scheduled downtime down the road when scaling via sharding becomes necessary?

2) How are schema updates handled when using physical multi-tenancy? Does pgdog just loop over all the databases that it knows about and issue the schema update command to each?

farsa 2026-02-24 00:15 UTC link
Congrats on the progress! What is the behavior of PgDog if it receives some sort of query it can't currently handle properly? Is there a linter/static analysis tool I can use to evaluate if my query will work?
written-beyond 2026-02-24 00:27 UTC link
Can you elaborate a bit more on the challenges faced in making Postgres shard-able?

I remember that adding sharding to Postgres natively was an uphill battle. There were a few companies who had proprietary solutions for it. What you've been able to achieve is nothing less than a miracle.

ijustlovemath 2026-02-24 00:42 UTC link
How would this product compare to a PostgREST based approach (this is the cool tech behind the original supabase) with load balancing at the HTTP level?
gregw2 2026-02-24 02:55 UTC link
As someone who has worked on many-TB-sized "custom" sharded systems with 30-150 shards at multiple (ok, 2) employers, a key challenge to the overall sharding landscape is unsharding all the data back at the analytics layer.

This at a minimum often involved adding back a shard key to the physical data, or partitioning, and/or physical data sorting in the "OLAP" layer. And a surprising number of CDC and ETL toolkits don't make it easy to parameterize a single code/configuration base, nor handle situations like shards being down at different times for maintenance, or fetching data from each shard at a time of day specified by its end-of-day, or handling retransmissions, reconciliation, gaps, or data quality of a single shard when back in an unsharded landscape. SQL UNION ALL to reunite shards works, until it doesn't.

YMMV but would be curious if you have a story/solution/thoughts along these lines. It's easier if you shard with unified analytics/reporting in mind on day one of a sharded system design, but in the worlds I've lived in, nobody ever does. But maybe you could.

febed 2026-02-24 04:28 UTC link
Does it support extensions like PostGIS?
jacobsenscott 2026-02-24 06:58 UTC link
I've been watching PgDog for a while now. Great progress!
yilugurlu 2026-02-25 08:47 UTC link
Sorry if this is a weird question, but can I use this with TimescaleDB?
dujuku 2026-02-25 17:00 UTC link
Really exciting to see the progress on this project! I'm not sure I understand the update "we are in production." Is this referencing a particular release or a more general statement about adoption?
levkk 2026-02-23 17:47 UTC link
It feels better now, but we still need to add crash protection - in case PgDog itself crashes, we need to restore in-progress 2pc transaction records from a durable medium. We will add this very soon.
verdverm 2026-02-23 18:17 UTC link
Why don't you just do it yourself if you maintain a curated resource list?
aram99 2026-02-23 18:22 UTC link
.
nebezb 2026-02-23 18:26 UTC link
While the lift to add to your database is low, I don’t think you’re at a point you can outsource the work.

But all the better if they do!

levkk 2026-02-23 18:33 UTC link
We have all kinds, it's not specific to any particular sector. That's kind of the beauty for building for Postgres - everyone uses it in some capacity!

My general advice is, once you see more than 100 connections on your database, you should consider adding a connection pooler. If your primary load exceeds 30% (CPU util), consider adding read replicas. This also applies if you want some kind of workload isolation between databases, e.g. slow/expensive analytics queries can be pushed to a replica. Vertically scaling primaries is also a fine choice, just keep that vertical limit in mind.

Once you're a couple instance types away from the largest machine your cloud provider has, start thinking about sharding.
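These rules of thumb can be sketched as a small decision helper. This is illustrative, not part of PgDog; the function name is hypothetical, and the thresholds (100 connections, 30% CPU) are taken directly from the comment above. (On a live instance, the current connection count comes from `SELECT count(*) FROM pg_stat_activity;`.)

```python
# Hypothetical helper encoding levkk's rules of thumb. Thresholds are from
# the comment above, not from PgDog documentation.

def scaling_advice(connections: int, primary_cpu_util: float,
                   near_vertical_limit: bool = False) -> list[str]:
    """Return suggested next steps for a growing Postgres deployment."""
    advice = []
    if connections > 100:
        # More than ~100 client connections: pool them.
        advice.append("add a connection pooler")
    if primary_cpu_util > 0.30:
        # Primary CPU above 30%: offload reads to replicas.
        advice.append("add read replicas")
    if near_vertical_limit:
        # A couple instance types away from the biggest machine available.
        advice.append("start planning for sharding")
    return advice
```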

levkk 2026-02-23 18:41 UTC link
Not really, replication lag is generally an accepted trade-off. Sync replication is rarely worth it, since you take a 30% performance hit on commits and add more single points of failure.

We will add some replication lag-based routing soon. It will prioritize replicas with the lowest lag to maximize the chance of the query succeeding and remove replicas from the load balancer entirely if they have fallen far behind. Incidentally, removing query load helps them catch up, so this could be used as a "self-healing" mechanism.

levkk 2026-02-23 18:59 UTC link
It shards it as well. We handle schema sync, moving table data (in parallel), setting up logical replication, and application traffic cutover. The zero-downtime resharding is currently WIP, working on the PR as we speak: https://github.com/pgdogdev/pgdog/pull/784.
pbreit 2026-02-23 19:10 UTC link
How well does PG work with 10-20 million (financial) records per day? Basic stuff: a few writes per, some reads, generating some analytics, etc.
levkk 2026-02-23 20:22 UTC link
Not currently, but we can add this. One thing we have to be careful of is to not retry requests that are executing inside transactions, but otherwise this would be a great feature.
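The constraint mentioned above (never retry statements that are executing inside a transaction) can be sketched like this. Names are hypothetical; this is not PgDog code.

```python
# Illustrative failover logic for a pooler/proxy: retry a failed statement
# on another backend only when the client is not inside a transaction,
# because a new backend would not have seen the transaction's earlier
# statements.

def execute_with_retry(backends, query: str, in_transaction: bool):
    """Try each backend in order; surface errors immediately mid-transaction."""
    last_error = RuntimeError("no backends configured")
    for backend in backends:
        try:
            return backend(query)
        except ConnectionError as exc:
            if in_transaction:
                raise  # the client must roll back and restart the transaction
            last_error = exc
    raise last_error
```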
levkk 2026-02-23 20:53 UTC link
I'll need a bit more info about your use case to answer. We use logical replication to move data between shards, with the intention of creating new shards.

This is managed by PgDog. We are building a lot of tooling here, and a lot of it is configurable and can be used separately. For example, we have a CLI and admin database commands to setup replication streams between databases, irrespective of their sharded status, so it can be used for other purposes as well, like moving tables or entire databases to new hardware. If you keep the stream(s) running, you can effectively keep up-to-date logical replicas.

We don't currently manage DDL replication (CREATE/ALTER/DROP) for logically replicated databases - this is a known limitation that we will address shortly. After all, we don't want users to pause schema migrations during resharding. I think once that piece is in, you'll be able to run pretty much any kind of long-lived logical replicas for any purpose, including HA.

levkk 2026-02-23 20:57 UTC link
I think we may have fixed this 3 weeks ago: https://github.com/pgdogdev/pgdog/pull/744

Might be worth another try. If not, a GitHub issue with more specifics would be great, and we'll take a look. Also, if binary encoding isn't working out, try using text - it's more compatible between Postgres versions:

    [general]
    resharding_copy_format = "text"
maherbeg 2026-02-23 21:11 UTC link
The way we solved it is by checking the lsn on the primary, and then waiting for the replica to catch up to that lsn before doing reads on the replica in various scenarios.
levkk 2026-02-23 23:39 UTC link
1. Yup, we support online resharding, so you don't need to deploy this until you have to.

2. That's right, we broadcast the DDL to all shards in the configuration. If two-phase commit [1] is enabled, you have a strong guarantee that this operation will be atomic. The broadcast is done in parallel, so this is fast.

[1]: https://docs.pgdog.dev/features/sharding/2pc/
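The atomicity guarantee described above can be sketched as a classic two-phase broadcast: prepare the change on every shard, commit only if all prepared, otherwise roll back. The shard objects and method names below are hypothetical stand-ins (in Postgres terms, roughly PREPARE TRANSACTION / COMMIT PREPARED / ROLLBACK PREPARED); this is not PgDog's actual implementation.

```python
# Two-phase broadcast of a DDL statement across shards: the change takes
# effect on all shards or on none of them.

def broadcast_ddl(shards, ddl: str) -> bool:
    prepared = []
    # Phase 1: every shard must acknowledge the prepared transaction.
    for shard in shards:
        if not shard.prepare(ddl):
            # Any failure: undo the shards that already prepared.
            for p in prepared:
                p.rollback()
            return False
        prepared.append(shard)
    # Phase 2: all shards prepared successfully, so commit everywhere.
    for shard in prepared:
        shard.commit()
    return True
```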

levkk 2026-02-24 00:58 UTC link
PostgREST is a translation layer: you use HTTP methods, inputs and outputs, to interact with Postgres, the database. It's a replacement for SQL, the language, which happens to also have a load balancer.

Their load balancer is still at the Postgres layer though. You can think of it as just an application that happens to speak a specific API. Load balancing applications is a solved problem.

levkk 2026-02-24 01:04 UTC link
So many, where to begin.

1. People don't design schemas to be sharded, although many gravitate towards a common key, e.g. user_id or country_id or tenant_id or customer_id. Once that happens, sharding becomes easier.


2. Postgres provides a lot of guarantees that are tricky to maintain when sharded: atomic changes, referential integrity, check constraints, unique indexes (and constraints), to name a few. Those have to be built separately by a sharding layer (like PgDog) and have trade-offs, usually around performance. It's a lot more expensive to check a globally enforced constraint than a local one (network hops aren't free).

3. Online migrations from unsharded to sharded can be tricky: you have to redistribute terabytes of data while the DB continues to serve writes. You can't lose a single row - Postgres is used as a store of record and this can be a serious issue with business impact.

We're taking increasingly bigger bites at this apple. We started with basic query routing and are now doing query rewrites as well. We didn't handle data movements previously and now have almost fully automatic resharding. It takes time, elbow grease and most importantly, willing and courageous early adopters to whom we owe a huge debt of gratitude.
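Point 1 above (routing on a common key) reduces to a stable hash of that key. A minimal sketch, assuming SHA-256-based placement over a fixed shard count; PgDog's real hash function and shard layout are not shown here.

```python
import hashlib

# Deterministically map a sharding key (e.g. a user_id) to a shard index.
# Hashing the string form makes integer and string keys route identically.

def shard_for(key, num_shards: int) -> int:
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards
```

Note that a plain modulo scheme reshuffles most keys when num_shards changes, which is one reason online resharding (moving data between shards under logical replication) is hard.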

levkk 2026-02-24 01:13 UTC link
The current behavior unfortunately is to just let it through and return an incorrect result. We are adding more checks here and rely heavily on early adopters to have a decent test suite before launching their apps to prod.

That being said, we do have this [1]:

    [general]
    expanded_explain = true

This will modify the output of EXPLAIN queries to return routing decisions made by PgDog. If you see that your query is "direct-to-shard", i.e. goes to only one shard, you can be certain that it'll work as expected. These queries will talk to only one database and don't require us to manipulate the result or assemble results from multiple shards.

For cross-shard queries, you'll need your own integration tests, for now. We'll add checks here shortly. We have a decent CI suite as well, but it doesn't cover everything. Every time we look at that part of the code, we just end up adding more features, like the recent support for LIMIT x OFFSET y (PgDog rewrites it to LIMIT x + y and applies the offset calculation in memory).

We'll get there.

[1]: https://docs.pgdog.dev/features/sharding/explain/
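The LIMIT x OFFSET y rewrite described above can be sketched as follows: each shard is asked for LIMIT x + y rows (enough to cover any offset), then the router merges the already-sorted per-shard results and applies the offset in memory. Function and parameter names are illustrative, not PgDog's.

```python
import heapq

# Merge per-shard results for a cross-shard ORDER BY ... LIMIT x OFFSET y.
# Each inner list is assumed sorted ascending and already truncated to
# limit + offset rows by the rewritten per-shard query.

def merge_limit_offset(shard_rows, limit: int, offset: int):
    merged = list(heapq.merge(*shard_rows))  # restore global sort order
    return merged[offset:offset + limit]     # apply the offset in memory
```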

levkk 2026-02-24 01:20 UTC link
I would say, over 100 Postgres connections, consider getting a connection pooler. Requests per second is highly variable. Postgres can serve a lot of them, as long as you keep the number of server connections low - that's what the pooler is for.

You can use pgbench to benchmark this on local pretty easily. The TPS curve will be interesting. At first, the connection pooler will cause a decrease and as you add more and more clients (-c parameter), you should see increasing benefits.

Ultimately, you add connection poolers when you don't have any other option: you have hundreds of app containers with dozens of connections each and Postgres can't handle it anymore, so it's a necessity really.

Load balancing becomes useful when you start adding read replicas. Sharding is necessary when you're approaching the vertical limit of your cloud provider (on the biggest instance or close).

levkk 2026-02-24 03:36 UTC link
A couple options come to mind:

1. Replicate shards into one beefy database and use that. Replication is cheaper than individual statements, so this can work for a while. The sink can be Postgres or another database like Clickhouse. At Instacart, we used Snowflake, with an in-house CDC pipeline. It worked well, but Snowflake was only usable for offline analytics, like BI / batch ML, and quite expensive. We'll add support for this eventually; we're getting pretty good at managing logical replication, including DDL changes.

2. Use the shards themselves and build a decent query engine on top. This is the Citus way and we know it's possible. Some queries could be expensive, but that's expected and can be solved with more compute.

In our architecture, shards going down for maintenance is an incident-level event, so we expect those to be up at all times, and failover to a standby if there is an issue. These days, most maintenance tasks can be done online in-place, or with blue/green, which we'll support as well. Zero downtime is the name of the game.

levkk 2026-02-24 04:54 UTC link
Technically yes. We only support BIGINT (and all other integers), VARCHAR and UUID for sharding keys, but we'll happily pass through any other data. If we need to process it, we'll need to parse it. To be clear: you can include PostGIS data in all queries, as long as we don't need it for sharding.

It's not too difficult to add sharding on it if we wanted to. For example, we added support for pgvector a while back (L2/IVFlat-based sharding), so we can add any other data type, e.g., POLYGON for sharding on ST_Intersects, or for aggregates.

levkk 2026-02-25 19:00 UTC link
General statement about adoption. Last time we made a Show HN (9 months ago), it was a POC, running on my local. Now we're used in production by some pretty big companies, which is exciting!
levkk 2026-02-25 19:01 UTC link
You can I believe. We only support BIGINT, VARCHAR and UUID for sharding, but all other data types are completely fine for passthrough, i.e. to be included and used in your queries.
Editorial Channel
What the content says
+0.20
Article 27 Cultural Participation
High Advocacy Practice
Editorial
+0.20
SETL
+0.06

Repository is itself a contribution to shared scientific and technical knowledge; open publication exemplifies participation in advancing cultural and scientific progress.

+0.18
Article 26 Education
High Advocacy Practice
Editorial
+0.18
SETL
+0.07

Repository and open-source model explicitly support education and knowledge development; technical documentation, code comments, and shared implementation serve educational purposes.

+0.15
Article 19 Freedom of Expression
High Advocacy Practice
Editorial
+0.15
SETL
-0.10

Page demonstrates advocacy for open-source software and collaborative knowledge sharing, which implicitly champion freedom of expression and information access through code publication.

+0.12
Article 20 Assembly & Association
High Practice
Editorial
+0.12
SETL
-0.07

The public repository model implicitly supports association and assembly by enabling collaborative contribution and community participation around the project.

+0.10
Article 25 Standard of Living
High Practice
Editorial
+0.10
SETL
-0.09

Repository description and documentation implicitly support adequate standards of living by providing open-source tools that lower barriers to database infrastructure access.

ND
Preamble Preamble
Medium Practice

No explicit editorial content addressing the Preamble's dignity, equality, or freedom principles.

ND
Article 1 Freedom, Equality, Brotherhood
Medium Practice

No editorial content explicitly addressing equality and freedom of all humans.

ND
Article 2 Non-Discrimination
Medium Practice

No editorial content addressing freedom from discrimination.

ND
Article 3 Life, Liberty, Security

No content addresses right to life, liberty, or personal security.

ND
Article 4 No Slavery

No content addresses freedom from slavery or servitude.

ND
Article 5 No Torture

No content addresses torture or cruel treatment.

ND
Article 6 Legal Personhood

No content addresses right to recognition as a person.

ND
Article 7 Equality Before Law

No content addresses equal protection before law.

ND
Article 8 Right to Remedy

No content addresses right to effective remedy for rights violations.

ND
Article 9 No Arbitrary Detention

No content addresses arbitrary arrest or detention.

ND
Article 10 Fair Hearing

No content addresses fair and public hearing.

ND
Article 11 Presumption of Innocence

No content addresses presumption of innocence.

ND
Article 12 Privacy
Low Practice

No explicit editorial engagement with privacy rights.

ND
Article 13 Freedom of Movement
Medium Practice

No editorial content addressing freedom of movement.

ND
Article 14 Asylum

No content addresses right to asylum.

ND
Article 15 Nationality

No content addresses nationality.

ND
Article 16 Marriage & Family

No content addresses right to marriage and family.

ND
Article 17 Property
Medium Practice

No editorial content addressing property rights or intellectual ownership.

ND
Article 18 Freedom of Thought

No content addresses freedom of thought, conscience, or religion.

ND
Article 21 Political Participation

No content addresses political participation or voting.

ND
Article 22 Social Security

No content addresses social security or welfare rights.

ND
Article 23 Work & Equal Pay
Medium Practice

No explicit editorial content addressing labor rights or employment.

ND
Article 24 Rest & Leisure

No content addresses rest and leisure rights.

ND
Article 28 Social & International Order
Medium Practice

No explicit editorial content addressing social and international order.

ND
Article 29 Duties to Community
Medium Practice

No explicit editorial content addressing duties or limitations.

ND
Article 30 No Destruction of Rights
Low Practice

No explicit editorial content addressing protection from destruction of rights.

Structural Channel
What the site does
+0.20
Article 19 Freedom of Expression
High Advocacy Practice
Structural
+0.20
Context Modifier
+0.20
SETL
-0.10

GitHub's public repository model (cached DCP +0.12) and community guidelines (cached DCP +0.08) enable unrestricted speech within bounds of terms; repository is openly readable and searchable, supporting information access.

+0.18
Article 27 Cultural Participation
High Advocacy Practice
Structural
+0.18
Context Modifier
+0.20
SETL
+0.06

GitHub's access model (cached DCP +0.12) and community guidelines (cached DCP +0.08) enable open participation in cultural and scientific advancement; repository is freely shared and contributable.

+0.15
Article 20 Assembly & Association
High Practice
Structural
+0.15
Context Modifier
0.00
SETL
-0.07

GitHub's community infrastructure allows developers to associate freely, comment, create issues, and collaborate; public discussions visible in repository demonstrate freedom of association.

+0.15
Article 25 Standard of Living
High Practice
Structural
+0.15
Context Modifier
+0.15
SETL
-0.09

GitHub's accessibility features (cached DCP +0.15) enable equitable platform access; open-source distribution reduces economic barriers to obtaining database tools.

+0.15
Article 26 Education
High Advocacy Practice
Structural
+0.15
Context Modifier
+0.15
SETL
+0.07

GitHub's accessibility and public-discussion model (cached DCP +0.15) create conditions for learning; repository code and documentation are freely accessible for educational purposes.

ND
Preamble Preamble
Medium Practice

GitHub's public repository model and open access structure support the Preamble's aspirational framework of universal human dignity through knowledge sharing and collaborative practice.

ND
Article 1 Freedom, Equality, Brotherhood
Medium Practice

GitHub's ToS (cached DCP) establish baseline equal treatment without discrimination; public repository access model treats all viewers equally.

ND
Article 2 Non-Discrimination
Medium Practice

GitHub ToS prohibit discrimination; public repository model does not discriminate based on user characteristics.

ND
Article 3 Life, Liberty, Security

Repository page does not engage structural dimensions of life, liberty, or security.

ND
Article 4 No Slavery

Not applicable to this software repository context.

ND
Article 5 No Torture

Not applicable to this software repository context.

ND
Article 6 Legal Personhood

Not applicable to this software repository context.

ND
Article 7 Equality Before Law

Not applicable to this software repository context.

ND
Article 8 Right to Remedy

Not applicable to this software repository context.

ND
Article 9 No Arbitrary Detention

Not applicable to this software repository context.

ND
Article 10 Fair Hearing

Not applicable to this software repository context.

ND
Article 11 Presumption of Innocence

Not applicable to this software repository context.

ND
Article 12 Privacy
Low Practice

GitHub's privacy controls (cached DCP +0.1) and ad_tracking concerns (-0.08) create a slight net positive but constrained by behavioral data collection risks on feature flags and analytics.

ND
Article 13 Freedom of Movement
Medium Practice

Public repository platform enables users to move freely between repositories and communities without geographic restrictions.

ND
Article 14 Asylum

Not applicable to this software repository context.

ND
Article 15 Nationality

Not applicable to this software repository context.

ND
Article 16 Marriage & Family

Not applicable to this software repository context.

ND
Article 17 Property
Medium Practice

GitHub's platform control (cached DCP -0.05) means user-generated content ownership is conditional on platform terms rather than absolute; creators retain some rights but subject to GitHub's control.

ND
Article 18 Freedom of Thought

Not directly observable in repository context.

ND
Article 21 Political Participation

Not applicable to this software repository context.

ND
Article 22 Social Security

Not applicable to this software repository context.

ND
Article 23 Work & Equal Pay
Medium Practice

Open-source model enables free participation and choice of contribution; GitHub's platform does not mandate labor but facilitates voluntary work sharing and peer production.

ND
Article 24 Rest & Leisure

Not applicable to this software repository context.

ND
Article 28 Social & International Order
Medium Practice

GitHub's global platform structure and public accessibility create conditions for international cooperation and order based on UDHR principles.

ND
Article 29 Duties to Community
Medium Practice

GitHub's terms of service (cached DCP) and community guidelines establish duties and limitations; platform enforces community standards and legal obligations.

ND
Article 30 No Destruction of Rights
Low Practice

GitHub's platform maintains content preservation and user protections within its terms; however, platform control creates dependency risk.

Supplementary Signals
How this content communicates, beyond directional lean. Learn more
Epistemic Quality
How well-sourced and evidence-based is this content?
0.68 low claims
Sources
0.7
Evidence
0.7
Uncertainty
0.6
Purpose
0.8
Propaganda Flags
No manipulative rhetoric detected
0 techniques detected
Emotional Tone
Emotional character: positive/negative, intensity, authority
measured
Valence
+0.3
Arousal
0.3
Dominance
0.3
Transparency
Does the content identify its author and disclose interests?
0.50
✓ Author
More signals: context, framing & audience
Solution Orientation
Does this content offer solutions or only describe problems?
0.75 solution oriented
Reader Agency
0.8
Stakeholder Voice
Whose perspectives are represented in this content?
0.55 2 perspectives
Speaks: individuals, institution
Temporal Framing
Is this content looking backward, at the present, or forward?
present unspecified
Geographic Scope
What geographic area does this content cover?
global
Complexity
How accessible is this content to a general audience?
technical · high jargon · domain specific
Longitudinal 144 HN snapshots · 5 evals
+1 0 −1 HN
Audit Trail 25 entries
2026-02-28 14:29 eval_success Lite evaluated: Neutral (0.00) - -
2026-02-28 14:29 eval Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
PR tech content
2026-02-26 23:12 eval_success Light evaluated: Neutral (0.00) - -
2026-02-26 23:12 eval Evaluated by llama-4-scout-wai: 0.00 (Neutral)
2026-02-26 20:21 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 20:19 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 20:18 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 20:17 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 17:42 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 17:40 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 17:38 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 17:37 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 09:15 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 09:14 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 09:12 rate_limit OpenRouter rate limited (429) model=mistral-small-3.1 - -
2026-02-26 09:12 rate_limit OpenRouter rate limited (429) model=hermes-3-405b - -
2026-02-26 09:11 rate_limit OpenRouter rate limited (429) model=mistral-small-3.1 - -
2026-02-26 09:11 rate_limit OpenRouter rate limited (429) model=hermes-3-405b - -
2026-02-26 09:10 rate_limit OpenRouter rate limited (429) model=hermes-3-405b - -
2026-02-26 09:10 rate_limit OpenRouter rate limited (429) model=mistral-small-3.1 - -
2026-02-26 09:10 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 09:10 dlq Dead-lettered after 1 attempts: Show HN: PgDog – Scale Postgres without changing the app - -
2026-02-26 08:22 eval Evaluated by deepseek-v3.2: +0.17 (Mild positive) 9,950 tokens
2026-02-26 03:26 eval Evaluated by claude-haiku-4-5-20251001: +0.16 (Mild positive) 12,701 tokens +0.01
2026-02-26 03:09 eval Evaluated by claude-haiku-4-5-20251001: +0.16 (Mild positive) 14,194 tokens