1768 points by tptacek 4346 days ago | 528 comments on HN
| Mild positive
Contested
Editorial · v3.7 · 2026-02-28 07:33:04
Summary Privacy & Security Advocates
Heartbleed.com documents CVE-2014-0160, a critical OpenSSL vulnerability that compromises privacy and security across the internet. The site strongly advocates for privacy protection through technical remediation, transparent disclosure, public education, and coordinated international response, empowering individuals and institutions to understand threats and fulfill their recovery responsibilities.
What a great writeup. Comprehensive without being overly verbose, it answers "what does this mean?" and "does this affect me?", and gives clear calls to action.
While I'm not happy at having to spend my Monday patching a kajillion machines, I welcome more vulnerability writeups in this vein.
Does SSH (specifically sshd) on major OSes use affected versions of OpenSSL? [answer pulled up from replies below: since sshd doesn't use TLS protocol, it isn't affected by this bug, even if it does use affected OpenSSL versions]
What's the quickest check to see if sshd, or any other listening process, is vulnerable?
(For example, if "lsof | grep ssl" only shows 0.9.8-ish version numbers, is that a good sign?)
> Recovery from this bug could benefit if the new version of the OpenSSL would both fix the bug and disable heartbeat temporarily until some future version... If only vulnerable versions of OpenSSL would continue to respond to the heartbeat for next few months then large scale coordinated response to reach owners of vulnerable services would become more feasible.
This sounds risky to me. I'm afraid attackers would benefit more from this decision than coordinated do-gooders.
This thing has been in the wild for two years. What are the odds it hasn't been systematically abused? And what does this imply?
To me it sounds kind of like finding out the fence in your backyard was cut open two years ago. Except in this case the backyard is two thirds of the internet.
This doesn't sound like "responsible disclosure" to me - how can Codenomicon dump this news when none of the major Linux vendors have patches ready to go?
There was a discussion here a few years ago (https://news.ycombinator.com/item?id=2686580) about memory vulnerabilities in C. Some people tried to argue back then that various protections offered by modern OSs and runtimes, such as address space randomization, and the availability of tools like Valgrind for finding memory access bugs, mitigates this. I really recommend re-reading that discussion.
My opinion, then and now, is that C and other languages without memory checks are unsuitable for writing secure code. Plainly unsuitable. They need to be restricted to writing a small core system, preferably small enough that it can be checked using formal (proof-based) methods, and all the rest, including all application logic, should be written using managed code (such as C#, Java, or whatever - I have no preference).
This vulnerability is the result of yet another missing bound check. It wasn't discovered by Valgrind or some such tool, since it is not normally triggered - it needs to be triggered maliciously or by a testing protocol which is smart enough to look for it (a very difficult thing to do, as I explained on the original thread).
The fact is that no programmer is good enough to write code which is free from such vulnerabilities. Programmers are, after all, trained and skilled in following the logic of their program. But in languages without bounds checks, that logic can fall away as the computer starts reading or executing raw memory, which is no longer connected to specific variables or lines of code in your program. All non-bounds-checked languages expose multiple levels of the computer to the program, and you are kidding yourself if you think you can handle this better than the OpenSSL team.
We can't end all bugs in software, but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm. It has now cost us a two-year window in which 70% of our internet traffic was potentially exposed. It will cost us more before we manage to end it.
What worries me about this is that the commit that fixes it [0] doesn't include any tests. Is that normal in crypto? If I committed a fix to a show-stopper bug without any tests at my day job I'd feel very amateur.
What are the chances that the NSA is having a field day with this in the 24-48 hours that it will take everyone to respond? Also, is it possible that CA's have been compromised to the point where root certs should not be trusted?
Honestly, why aren't the formal verification people jumping on this? I keep hearing about automatic code generation from proof systems like Coq and Agda but it's always some toy example like iterative version of fibonacci from the recursive version or something else just as mundane. Wouldn't cryptography be a perfect playground for making new discoveries? At the end of the day all crypto is just number theory and number theory is as formal a system as it gets. Why don't we have formal proofs for correct functionality of OpenSSL? Instead of a thousand eyes looking at pointers and making sure they all point to the right places why don't we formally prove it? I don't mean me but maybe some grad student.
I think the summary is a bit too sensationalistic in terms of what the actual security implications are:
The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software.
Yes, while that's true, it's not a "read the whole process' memory" vulnerability which would definitely be cause for panic. The details are subtle:
Can attacker access only 64k of the memory? There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
The address space of a process is normally far bigger than 64KB, and while the bug does allow an arbitrary number of 64KB reads, it is important to note that the attacker cannot directly control where that 64KB will come from. If you're lucky, you'll get a whole bunch of keys. If you're unlucky, you might get unencrypted data you sent/received, which you would have anyway. If you're really unlucky, you get 64KB of zero bytes every time.
Then there's also the question of knowing exactly what/where the actual secrets are. Encryption keys (should) look like random data, and there's a lot of other random-looking stuff in crypto libraries' state. Even supposing you know that there is a key, of some type, somewhere in a 64KB block of random-looking data, you still need to find where inside that data the key is, what type of key it is, and more importantly, whose traffic it protects before you can do anything malicious.
Without using any privileged information or credentials we were able steal from ourselves the secret keys
It really helps when looking for keys, if you already know what the keys are.
In other words, while this is a cause for concern, it's not anywhere near "everything is wide open", and that is probably the reason why it has remained undiscovered for so long.
Node.js sort-of dodged a bullet here. It includes a version of openssl that it links against when building the crypto module (and, I would think, the tls module). Node.js v0.10.26 uses OpenSSL 1.0.1e 11 Feb 2013.
The bug is in the handling of the TLS protocol itself (actually, in a little-used extension of TLS, the TLS Record Layer Heartbeat Protocol), and isn't exposed in applications that just use TLS for crypto primitives.
I'm very curious to see the change that introduced the bug in the first place. According to the announcement it was introduced in 1.0.1. That's the version that added Heartbeat support, so maybe it was a bug from the beginning.
That is my concern as well. We are still running CentOS 6.4, which does not have the affected version of OpenSSL, but we terminate SSL at the ELB, so if the ELBs are affected then our keys are not safe.
Worse, it's retroactively unfixable: Even doing all this [revoking certs, new secret keys, new certificates] will still leave any traffic intercepted by the attacker in the past still vulnerable to decryption.
So it would be a good idea to change all your passwords to critical services like email and banks, once they have issued new certs and updated their openssl.
Agree. This needs a big fat "the world is coming to an end" style of warning.
I've just shut down the webservers running SSL that I can control.
If you're vulnerable, don't want to build OpenSSL from source, and can afford the outage, I'd recommend doing the same.
OTHERWISE BUILD FROM SOURCE IMMEDIATELY, PATCH, AND GET NEW KEYS!
Let's hope CAs don't get swamped by all the CSRs. Or rather, let's hope they do, so we can see that people are doing something...
For me right now these are just my hobby projects. So I don't care if they're down. But I imagine it will be fun tomorrow.
I believe one of their customers found it and reported it to them; they reported it to OpenSSL, and then it somehow leaked (either with the OpenSSL release, or via someone else), at which point they posted their now-public writeup.
From a quick reading of the TLS heartbeat RFC and the patched code, here's my understanding of the cause of the bug.
TLS heartbeat consists of a request packet including a payload; the other side reads and sends a response containing the same payload (plus some other padding).
In the code that handles TLS heartbeat requests, the payload size is read from the packet controlled by the attacker:
n2s(p, payload);
pl = p;
Here, p is a pointer to the request packet, and payload is the expected length of the payload (read as a 16-bit short integer: this is the origin of the 64K limit per request).
pl is the pointer to the actual payload in the request packet.
Then the response packet is constructed:
/* Enter response type, length and copy payload */
*bp++ = TLS1_HB_RESPONSE;
s2n(payload, bp);
memcpy(bp, pl, payload);
The payload length is stored into the destination packet, and then the payload is copied from the source packet pl to the destination packet bp.
The bug is that the payload length is never actually checked against the size of the request packet. Therefore, the memcpy() can read arbitrary data beyond the storage location of the request by sending an arbitrary payload length (up to 64K) and an undersized payload.
I find it hard to believe that the OpenSSL code does not have any better abstraction for handling streams of bytes; if the packets were represented as a (pointer, length) pair with simple wrapper functions to copy from one stream to another, this bug could have been avoided. C makes this sort of bug easy to write, but careful API design would make it much harder to do by accident.
That's why I have high hopes for Rust. We really need to move away from C for critical infrastructure. Perhaps C++ as well, though the latter does have more ways to mitigate certain memory issues.
Incidentally, someone on the mailing list brought up the issue of having a compiler flag to disable bounds checking. However, the Rust authors were strictly against it.
This sort of argument is becoming something of a fashion statement amongst some security people. It's not a strictly wrong argument: writing code in languages that make screwing up easy will invariably result in screwups.
But it's a disingenuous one. It ignores the realities of systems. The reality is that there is currently no widely available memory-safe language that is usable for something like OpenSSL. .NET and Java (and all the languages running on top of them) are not an option, as they are not everywhere and/or are not callable from other languages. Go could be a good candidate, but without proper dynamic linking it cannot serve as a library callable from other languages either. Rust has a lot of promise, but even now it keeps changing every other week, so it will be years before it can even be considered for something like this.
Additionally, although the parsing portions of OpenSSL need not deal with the hardware directly, the crypto portions do. So your memory-safe language needs some first-class escape hatch to unsafe code. A few of them do have this, others not so much.
It's fun to say C is inadequate, but the space it occupies does not have many competitors. That needs to change first.
> C and other languages without memory checks are unsuitable for writing secure code
I vehemently disagree. Well-written C is very easy to audit, much more so than languages like C# and Java, where something I could do with 200 lines in a single C source file requires 5 different classes in 5 different files. The problem with C is that a lot of people don't write it well.
Have you looked at the OpenSSL source? It's an ungodly f-cking disaster: it's very very difficult to understand and audit. THAT, I think, is the problem. BIND, the DNS server, used to have huge security issues all the time. They did a ground-up rewrite for version 9, and that by and large solved the problem: you don't read about BIND vulnerabilities that often anymore.
OpenSSL is the new BIND; and we desperately need it to be fixed.
(If I'm wrong about BIND, please correct me, but AFAICS the only non-DoS vulnerability they've had since version 9 is CVE-2008-0122)
> but we can plug this seemingly endless source of bugs which has been affecting the Internet since the Morris worm.
If we're playing the blame game, blame the x86 architecture, not the C language. If x86 stacks grew up in memory (that is, from lower to higher addresses), almost all "stack smashing" attacks would be impossible, and a whole lot of big security bugs over the last 20 years could never have happened.
(The SSL bug is not a stack-smashing attack, but several of the exploits leveraged by the Morris worm were)
What are the odds that the NSA didn't already know about it? Even if you don't think they would have deliberately monkeywrenched OpenSSL (as they are widely believed to have done with RSA's BSAFE), they certainly have qualified people poring over widely used crypto libraries, looking for missing bounds checks and all manner of other faults --- quite likely with automated tooling.
As to CAs, there have been enough compromises already from other causes that serious crypto geeks like Moxie Marlinspike are trying to change the trust model to minimize the consequences --- see http://tack.io
[this command generates a private key and server cert and outputs to pem's]
[Note also the key sizes are 4096, you may want 2048. AND I use -sha256, as sha1 is considered too weak nowadays. These certs are valid for 3650 days...10 years]
Since the command overwrites certs/keys in the current directory of the same name as the outfiles...that's it...you're done. Just restart nginx.
If you change a self-signed cert, like above, expect a new warning from the client on the next connection... this is just your new cert being encountered. Click "permanently accept"... blah blah.
Oh it's even worse, basically every secret you had in your server processes' RAM was potentially read in real-time by an attacker for the last 2 years.
It's not hard to screen what's returned for chunks that look like they could be keys (you know the private key's size by looking at the target's certificate, you know it's not all zeros, etc.) and then simply exhaustively check chunks against their public key.
I just looked at one of my running apache processes, it only has 3MB of heap mapped (looked at /proc/12345/maps). That's not a whole lot of space to hide the keys in.
I agree entirely with your post, and I can't quite understand the hysteria in this thread. The odds of getting a key using this technique are incredibly low to begin with, let alone being able to recognize you have one, and how to correlate it with any useful encrypted data.
Supposing you do hit the lottery and get a key somewhere in your packet, you now have to find the starting byte for it, which means having data to attempt to decrypt it with. However, now you get bit by the fact that you don't have any privileged information or credentials, so you have no idea where decryptable information lives.
Assuming you are even able to intercept some traffic that's encrypted, you now have to try every word-aligned 256B(?) string of data you collected from the server, and hope you can decrypt the data. The amount of storage and processing time for this is already ridiculous, since you have to manually check if the data looks "good" or not.
The odds of all of these things lining up are infinitesimal for anything worth being worried about (banks, credit cards, etc.), so the effort involved far outweighs the payoffs (you only get 1 person's information after all of that). This is especially true when compared with traditional means of collecting this data through more generic viruses and social engineering.
So, while I'll be updating my personal systems, I'm not going to jump on to the "the sky is falling" train just yet, until someone can give a good example of how this could be practically exploited.
The entire page is devoted to documenting privacy breaches and advocating for privacy protection. Privacy is framed as a fundamental right under threat.
FW Ratio: 57%
Observable Facts
The page explicitly states vulnerability 'compromises the secret keys...the names and passwords of the users and the actual content.'
Under 'What leaks in practice?' the site details stolen 'secret keys...user names and passwords, instant messages, emails and business critical documents and communication.'
Core threat identified: 'This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.'
The site's recovery guidance includes: 'All session keys and session cookies should be invalidated and considered compromised.'
Inferences
The detailed documentation of privacy breaches demonstrates strong advocacy for protecting individuals' privacy against eavesdropping and data theft.
The comprehensive coverage of email, messages, and password theft reflects deep commitment to protecting private communications.
The site's own minimal tracking reflects commitment to practicing privacy protection rather than merely advocating for it.
The page is explicitly educational, using Q&A format and accessible explanations to educate readers about a complex security vulnerability.
FW Ratio: 60%
Observable Facts
The page includes a comprehensive Q&A section with 15+ questions about technical aspects, scope, and mitigation.
Complex concepts like X.509 certificates, TLS handshake, heartbeat extension are explained in accessible language without assuming specialist knowledge.
The 'How to stop the leak?' and recovery sections provide educational guidance on preventing and recovering from vulnerability.
Inferences
The Q&A format and explanations of complex technical concepts demonstrate commitment to educating the public about security threats.
The translation of specialized terminology into accessible language reflects intent to enable informed decision-making.
The site explicitly addresses security threats and advocates for security restoration through detailed remediation procedures.
FW Ratio: 60%
Observable Facts
The page states: 'This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet.'
Recovery section explicitly addresses restoring security: 'Recovery...requires patching the vulnerability, revocation of the compromised keys and reissuing and redistributing new keys.'
The site emphasizes security urgency: 'We decided to take this very seriously' and details their own patching process.
Inferences
The detailed advocacy for security restoration demonstrates commitment to protecting individuals' security of person.
The explanation of security threats and remedies empowers individuals to restore their security through informed action.
The site provides detailed, categorized recovery procedures enabling affected individuals to pursue remedies for privacy breaches.
FW Ratio: 60%
Observable Facts
The page provides categorized recovery procedures for four types of leaked material: 'primary key material, secondary key material, protected content, and collateral.'
Service provider recovery: 'Recovery...requires patching the vulnerability, revocation of the compromised keys and reissuing and redistributing new keys.'
User recovery: 'After this users can start changing their passwords and possible encryption keys according to the instructions from the owners of the services.'
Inferences
The detailed and differentiated recovery procedures provide practical remedies for affected individuals and organizations.
The clear delineation of responsibilities enables individuals to pursue appropriate remedies through proper channels.
The site freely expresses detailed technical information about the vulnerability without withholding or obfuscating details, advocating for transparent reporting.
FW Ratio: 60%
Observable Facts
The site presents detailed technical information about the vulnerability, attack vectors, and exploitability without restriction.
The site acknowledges OpenSSL's scientific importance and explicitly calls for supporting its continued development.
FW Ratio: 50%
Observable Facts
The page identifies OpenSSL as 'the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet.'
The site explicitly appeals: 'Please support the development effort of software you trust your privacy to. Donate money to the OpenSSL project.'
Inferences
The recognition of OpenSSL's scientific importance demonstrates commitment to supporting scientific progress.
The donation appeal reflects advocacy for continued development of critical security infrastructure.
The site describes coordinated international response to the vulnerability, demonstrating commitment to social and international order.
FW Ratio: 50%
Observable Facts
The page describes international coordination: 'NCSC-FI took up the task of verifying it, analyzing it further and reaching out to the authors of OpenSSL, software, operating system and appliance vendors.'
Multiple international CERT organizations are listed: NCSC-FI (Finland), CERT.at (Austria), CIRCL (Luxembourg), CERT-FR (France), JPCERT (Japan), CERT-SE (Sweden), CNCERT (China), and others.
Inferences
The coordination of international response reflects commitment to building social and international order that protects rights.
The involvement of government security agencies and CERTs demonstrates coordination of public institutions for public good.
The site explicitly describes community duties of vendors, service providers, users, and security community.
FW Ratio: 60%
Observable Facts
Service vendor duties: 'Operating system vendors and distribution, appliance vendors, independent software vendors have to adopt the fix and notify their users.'
User duties: 'Service providers and users have to install the fix as it becomes available.'
Security community duty: 'The security community should deploy TLS/DTLS honeypots that entrap attackers and to alert about exploitation attempts.'
Inferences
The description of community duties reflects commitment to individuals fulfilling their obligations to protect others.
The clear delineation of vendor, user, and security community responsibilities reflects understanding of mutual obligations for common good.
The site discusses protection of intellectual property (encryption keys and certificates) as valuable assets requiring protection and recovery.
FW Ratio: 67%
Observable Facts
The page discusses compromise of 'secret keys used to identify the service providers and to encrypt the traffic,' which are valuable intellectual property.
Recovery includes 'revocation of the compromised keys and reissuing and redistributing new keys,' protecting property rights.
Inferences
The protection of intellectual property reflects respect for property rights of service providers and individuals.
The site documents a serious vulnerability threatening human dignity and security, treating the topic with seriousness and commitment to public awareness.
FW Ratio: 60%
Observable Facts
The page identifies Heartbleed as a 'serious vulnerability in the popular OpenSSL cryptographic software library.'
The page emphasizes the vulnerability 'allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software.'
The site was published 7th of April 2014 in coordination with NCSC-FI and OpenSSL team to inform the public about CVE-2014-0160.
Inferences
The serious framing reflects commitment to alerting the public about threats to security and dignity.
The comprehensive FAQ structure indicates intent to empower individuals with information to protect themselves.
Evaluated by claude-haiku-4-5-20251001: +0.21 (Mild positive)
2026-02-28 00:00
eval_success
Light evaluated: Neutral (0.00)
--
2026-02-28 00:00
eval
Evaluated by llama-3.3-70b-wai: 0.00 (Neutral)
reasoning
Technical explanation of bug
2026-02-27 23:34
eval_success
Evaluated: Strong positive (0.64)
--
2026-02-27 23:34
eval
Evaluated by deepseek-v3.2: +0.64 (Strong positive) 12,229 tokens
2026-02-27 23:16
rater_validation_fail
Parse failure for model deepseek-v3.2: Error: Failed to parse OpenRouter JSON: SyntaxError: Expected ',' or ']' after array element in JSON at position 16487 (line 326 column 6). Extracted text starts with: {
"schema_version": "3.7",
"
--
2026-02-27 23:16
eval_retry
OpenRouter output truncated at 4096 tokens
--
2026-02-27 22:59
dlq
Dead-lettered after 1 attempts: The Heartbleed Bug
--
2026-02-27 22:58
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-27 22:57
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-27 22:56
eval_success
Light evaluated: Neutral (0.00)
--
2026-02-27 22:56
eval
Evaluated by llama-4-scout-wai: 0.00 (Neutral)
reasoning
ED, neutral tech info on Heartbleed bug
2026-02-27 22:55
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-27 22:44
eval
Evaluated by claude-haiku-4-5: +0.48 (Moderate positive)
build 1ad9551+j7zs · deployed 2026-03-02 09:09 UTC · evaluated 2026-03-02 10:41:39 UTC