Exaggerated risk?
CloudFlare have found it impossible to exploit the bug to steal keys despite their efforts:
http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed
Robin Seggelmann, the man who accidentally introduced the password-leaking Heartbleed bug into OpenSSL, says not enough people are scrutinizing the crucial cryptographic library. The Heartbleed flaw, which was revealed on Monday and sent shockwaves through the IT world all week, allows attackers to reach across the internet …
"CloudFlare have found it impossible to exploit the bug to steal keys"
Well, steal keys from a specific Nginx setup, but I take your point - and the Cloudflare blog is linked to in the article. I note that the Cloudflare heartbleed challenge site has updated itself to "Has the challenge been solved yet? MAYBE? (verifying)". Stay tuned.
In general, it is very tricky to steal private SSL keys (going to Vegas to put everything on red 14 seems like a better chance of success), but that doesn't stop the leaking of passwords and whatnot.
Plus, it's a rather fun bug. Code safe, everyone.
C.
"CloudFlare have found it impossible to exploit the bug to steal keys"
Bad luck, ducky. It's utterly possible :(
"We confirmed that both of these individuals have the private key and that it was obtained through Heartbleed exploits. We rebooted the server at 3:08PST, which may have contributed to the key being available in memory, but we can’t be certain."
C.
Volunteers are not necessarily less (or more) competent than paid people and in any case, a lot of Open Source development is actually paid for by large companies. Open Source does not necessarily mean written by volunteers.
Regardless, flaws happen in proprietary code and in Open Source code. I've never been convinced there's an intrinsic bias either way. There is a rather exaggerated idea about "a thousand eyes" leading to fewer flaws in Open Source software; that's never been that supportable. The advantage of Open Source is not a magic lack of vulnerabilities. The advantages are that you can check it for deliberate subversion - e.g. government backdoors - and that, because it can be forked at any time, you're hopefully protected against lock-in and against projects being abandoned. (Though Google do their best on the former.)
I suppose some might say I've missed "free" off the list above, but to quote the Great Wookie himself: "Free as in speech, not free as in beer". Any serious customers are likely to go with whatever solution is best rather than cheapest.
But the essential point is that companies like RedHat, SuSE, et al. are not small companies and it's not a pile of volunteer code. Some of it is, but it's the testing side that matters more than the development side in matters like this.
Perhaps I'm in an atypically generous mood today, but I didn't read Grease Monkey's comment as denigrating volunteers. I read it as "do you twits now think it might be better to donate money/people/resources to this code branch since it is so critical to your business? Remember it's free as in 'speech' not free as in 'beer'."
We only use low-paid college interns to write our code, look:
Companies have built entire operations around Office, yet RTF parsing bugs keep turning up:
http://technet.microsoft.com/en-us/security/bulletin/ms12-079 (2012) - Critical
and, whoopsie daisy, we have déjà vu:
http://technet.microsoft.com/en-us/security/advisory/2953095 (2014) - Critical
Let's get things into perspective here: all software has bugs, and you should never assume code is 100% secure. It's what you do about them that matters.
"Also, outside of the Linux Kernel Mailing list, has anyone ever seen a code review actually catch a problem? I sure as hell haven't."
I take it you are unaware of how IBM Federal Systems wrote the code for the Shuttle.
Code review was the key to finding the bugs.
But what really lowered the bug rate was using that information to identify the pattern of that bug and proactively look for other instances of that pattern, and verify they did not have the fault as well.
That's why the software never failed in 30 years of use.
"But what really lowered the bug rate was using that information to identify the pattern of that bug and proactively look for other instances of that pattern, and verify they did not have the fault as well."
Static Code Analysis. The first time you run it on your code base, have spare undergarments to hand. It's not foolproof, but it is another tool that is inexpensive to slip into your build system and automate.
"Static Code Analysis. The first time you run it on your code base, have spare undergarments to hand. It's not foolproof, but it is another tool that is inexpensive to slip into your build system and automate."
I think in the late '70s, when they started writing the Shuttle code, it did not exist. It was all code reviews and clever grep scripts.
"That's why the software never failed in 30 years of use."
Never catastrophically failed sure, but *never* failed in any way that required a reset? You 100% sure about that? Even avionics software occasionally has the odd glitch. I'd be surprised if the software in the shuttle was any different. Certainly at least one well known rocket crash was down to faulty software: http://en.wikipedia.org/wiki/Cluster_(spacecraft)
And then there was the Mars mission that used a mix of metric and imperial...
"Never catastrophically failed sure, but *never* failed in any way that required a reset? "
Correct.
The team built both the OS and the "application" software. The system was 4-way redundant (unlike Ariane 5's master/slave system) and implemented cross-checking of IO and sync pulses.
"Certainly at least one well known rocket crash was down to faulty software: http://en.wikipedia.org/wiki/Cluster_(spacecraft)"
Firstly, the Ariane software was not built by IBM Federal Systems (who, BTW, were the role models for the CMU Capability Maturity Model Level 5 for how software should be developed), and secondly, the failure was a failure of change control. They reused the Ariane 4 software with a policy of leaving software modules in, and the module that crashed the processors was not even a core module. Ariane 5 was designed to allow much greater movements at some parts of the flight, so the software (which should not even have been running at that point in the flight - a failure of requirements management) thought the rocket was going haywire and crashed the master processor. The slave processor then crashed in a cascade failure.
BTW Ariane 5's software was AFAIK written in Ada.
In fact I'd say the Ariane5 CLUSTERf**k (as I like to think of it) was more a management than a software development failure.
Which IBM FS were also pretty good at.
You must not be very experienced.
Look, there are some very good people writing very good code; however, that's a very small subset of all of the code that is being tossed out into the public.
Then you have Apache, where depending on the project, its visibility, and the money tossed behind it... YMMV.
Having been using a code review scheme for the last year, it has caught many, many issues (not just bugs, but commenting mistakes, inefficient code, etc.).
It does depend on how good the reviewer is. I'm particularly crap at it, but others can find real niggly issues that didn't show up in testing.
Also, static analysis like Coverity, or even running in valgrind for some dynamic stuff digs up hundreds of issues, even on code that has been around for ages. Both well worth doing. I think Coverity may well have found heartbleed for example.
"The trick is to pick your reviewers carefully - those that hit the "Ship it" button within 5 minutes are not reviewing code."
The problem is that a lot of the time, the people chosen to review some code are presented with a lump of code whose function they have little idea of. So the best they can do is check for obvious syntax and logic errors, pass it, and then get on with their own work, which they're probably under pressure to finish. For proper code review you need it set down as an actual task with a specific time slot allocated, so people can get up to speed on what they're looking at - not something fitted in between other tasks when someone has a few minutes. Unfortunately, that's just not the way it's done in most companies.
1) Fund OpenSSL development
2) Buy your own island
3) Buy your own 767 and use it to reach some tropical island
4) Buy a castle somewhere in Europe
What do Oracle, Google, Facebook, etc. CEOs do?
Ah, and then some executive will tell managers "get our developers to use open source code, it's free...."
It does seem ridiculous that so many mission-critical systems throughout the world have unquestioningly adopted free software without at least double-checking the code. Free Open Source software is fantastic but shouldn't be put in critical systems without extra checks.
The business community can remedy this by jointly starting a free software security consortium whose mandate is to search and test for security holes in the free software they intend to implement. Pooling resources in an open multiparty project would save each business having to spend the money to do these tests themselves, and all would benefit as a result. A joint debugging effort would still be a lot cheaper than having to go back to writing or buying proprietary software for every component of their systems.
Oh. So I could have my mobile phone connect to a TLS-enabled SMTP server such as Gmail, and in the short period that that connection is open (read the Android developer docs about battery management) those dastardly people at Google could read up to 64k of core memory from my phone, and this represents a threat to me even 0.1% as serious as some geezer in China connecting to a Gmail server, never attempting SMTP authentication over that TLS connection, but snatching 64k out of that server, to which lots and lots of people have connected, and where in principle the private key might be visible to go with the public cert, facilitating impersonation?
Mmmm, I don't think so. Yes, the library implementing the protocol has a flaw and there is a vulnerability, but the consequences to humanity at large of unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate) are rather less serious than those from malicious clients connecting to unsuspecting servers.
> TLS-enabled SMTP server such as Gmail,
Alternatively you could just browse the internet with your phone.
Whether you like it or not, Android 4.1.1 is vulnerable.
It doesn't matter how probable it is that somebody will use the vulnerability to extract 64k from your phone, it is still vulnerable. For you it might only expose the cat videos you are watching, but others have more sensitive information.
"Yes, the library implementing the protocol has a flaw and there is a vulnerability, but the consequences to humanity at large of unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate) are rather less serious than those from malicious clients connecting to unsuspecting servers."
Ummm, I don't think anyone has said the problems for clients are just as serious, however you don't seem to understand the situation.
Are you saying you only ever visit google and your banks websites? Or maybe you use the lesser-known plugin "httpsNoWhere"?
Any site you visit could have malicious code - even a non-https site could have embedded https stuff (with a valid certificate too - that's not relevant)
So, you are basically trusting the honesty *and* security of every site you visit, and every third-party ad company/image broker/js-library provider they use.
unsuspecting clients connecting to malicious servers (servers which will still be expected to present a valid SSL certificate)
Not necessarily; from what I gather, the malicious server wouldn't have to present a valid certificate. Your point still stands: people are extracting useful info from servers by hammering them with malicious SSL requests, and I can't see that happening on a phone. Remember that in the 64k you can extract at a time, most is truncated or otherwise uninterpretable garbage. Moreover, on a client machine most if not all of that garbage would be data that the malicious server previously sent to begin with (or that was sent to the malicious server by this particular client). In Chrome and Firefox, tabs are run in separate processes, so even if the attacker managed to hammer your phone with malicious requests at the right instant (extremely unlikely to begin with) they couldn't snatch your bank credentials from a concurrently-open tab.
Not terribly scary then. Still needs patching.
Yeah, my thought too. If you're worried about this bug on your handset I have a personal meteorite deflecting shield you may be interested in. Heartbleed can leak some of the calling process' memory stack.
If memory serves, both Chrome and Firefox fork processes on connection, which means that a malicious website would have access to 64k of... its own prior data exchanges with you. In other words, an attacker could use your RAM as his own history. Oh noes, the end is nigh, etc.
This bug is really only a concern on massively multi-user servers, where the 64k of leaked memory could contain _someone else's_ data. A client machine typically has only "one-on-one" server-client connections, so attackers can mostly retrieve data they already have. And that is only if they can make use of the tiny time frame in which the connection is established (typically, client systems are not designed to accept out-of-the-blue SSL connections; they establish the connection for a particular need, say, to retrieve the list of emails in a distant mailbox, then shut it down).
A server is vulnerable because it is designed to be listening to random connection requests, and potentially has a huge number of users connected to it. Unless I missed something, neither is happening on a client system.
With great power comes great responsibility. If it is impossible to exercise responsibility then it is time to take away the power.
It should be recognised by now that we are not using tools which are fit for purpose, and that we are harming and putting everyone at risk as a result.
There is only one answer... ban C
"Not Z. Please, $deity, not Z."
I liked the idea of Z, but in practice I could never get away from the fact that I could achieve the same ends expressing the same constraints using some carefully written C++ unit tests. :)
Note: There are things Z can do which carefully written C++ can't, and of course it's possible to have bad tests fail to detect bugs in bad code... That said, it's pretty rare that people can write Z accurately either. :(
>>"Well there are 25 other letters in the alphabet...choose one."
Well in that case, I choose D. It's a lovely language - essentially a rebuild of C++ with an "if we knew then what we know now" approach. But it may not satisfy the OP's criteria. I'm interested to see if they have an answer, or only a criticism.
"... D. It's a lovely language - essentially a rebuild of C++ with an "if we knew then what we know now" approach."
A bit like C++11 then. Both would be perfectly reasonable replacements for the C that (inexplicably to my mind) appears to be the preferred choice for several rather important FOSS endeavours. Seriously guys, it has been a quarter of a century since we learned how to make C safer without any loss in performance (or one's ability to twiddle bits or map brain-dead structure layouts). Memory management in particular is a solved problem.
The real problem is an implementation design error, compounded by a coding practice error, combined with apparently inadequate source code review and prerelease testing. It appears that the packet was not expected to be inconsistent, so the protocol did not address the issue. In coding, the possibility of an inconsistent packet was overlooked and suitable action (e.g., discarding the packet) was not coded. That could have been caught by a code review, and it could have been caught by rudimentary - and automated - testing of the results of invalid conditions, like an internal length specifier implying a length greater than the total packet length.
Mistakes happen, and there is no reason I know of to think that they are either more or less common with open source than closed source software. They are the result of fallible humans doing demanding work, sometimes under time and money constraints, and sometimes coming up short.