Heartbleed exploit, inoculation, both released

As the Heartbleed fallout continues, the good news is that code to protect against similar attacks has been released. The bad news is that exploit code is also available. Let's start with the latter, released by a chap who took up Cloudflare's challenge to coders in the hope someone, somewhere, would be able to use …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    You left out

    That the successful exploit against the Cloudflare challenge took 2.5 million attempts, which any decent intrusion detection system normally running on a server should have noticed, considered a DoS attack, and reacted to.

    1. Anonymous Coward
      Anonymous Coward

      Re: You left out

      and you left out the fact that any decent IDS still wouldn't be able to detect a distributed attack. You might be able to react to it if you noticed a sudden surge in traffic, but again, if it was distributed and spread over a period, you're still ******.

    2. A Non e-mouse Silver badge

      Re: You left out

      the successful exploit against the Cloudflare challenge took 2.5 million attempts

      Fedor Indutny took 2.5 million requests. Ilkka Mattila took just 100K requests. It is suspected (but not proven) that rebooting the server helped Ilkka Mattila.

      blog.cloudflare.com/the-results-of-the-cloudflare-challenge

      I don't know how many requests it took others to extract the key.

    3. Anonymous Coward
      Anonymous Coward

      **facepalm**

      WAT .. You must be off your fucking head mate lol

      That'll be those IDS's that inspect encrypted traffic on the wire then?

      You are aware that those 2.5 million requests were made over the course of a day? Just to put that into perspective, Wikipedia gets 10m requests per hour.

      Going by El Reg's report that this could be exploited in just 4 bytes, that makes the full exploit come in at a whopping 1GB of data sent.

      http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed

      1. lansalot

        Re: **facepalm**

        You are aware that there are IDS rules to detect large-packet TLS responses specifically to spot Heartbleed then? No? Oh...

        The fact that it's encrypted doesn't come into it.

        1. Jamie Jones Silver badge
          Facepalm

          Re: **facepalm**

          "You are aware that there are IDS rules to detect large-packet TLS responses specifically to spot Heartbleed then? No? Oh..."

          Hmmmm, so you're saying the attack will be caught on those servers which have updated IDS rules, but not patched servers?

          In other words, any update made to explicitly stop/catch Heartbleed is irrelevant when talking about attacks against Heartbleed!

      2. Pookietoo

        Re: this could be exploited in just 4 bytes

        The 4 byte example was enough to show it would work, not enough to have any chance of stealing useful data.

        1. Jamie Jones Silver badge

          Re: this could be exploited in just 4 bytes

          "The 4 byte example was enough to show it would work, not enough to have any chance of stealing useful data."

          Nah... 4 bytes is all that is needed - in fact, any more would be less effective, as you'd be 'overwriting' the out-of-bounds data you'll be getting back!

          Note, this is the request data we are talking about. Many such small requests receiving 64KB replies may be detected, though.

    4. Tim99 Silver badge
      Trollface

      Re: You left out

      Err no. You do know that many servers out there aren't secure because the script-kiddy programmer left them running his cut-and-paste code and has moved on, and because RoR would not run his really cool stuff, he turned security off?

  2. Anonymous Coward
    Anonymous Coward

    "But the company says it will happily work with others who think it can be improved."

    They should work with the OpenSSL developers; it would only make the product better and more secure.

    1. Michael Wojcik Silver badge

      They should work with the OpenSSL developers

      They are. Rich Salz posted the patch to the openssl-users list, which all the OpenSSL developers follow. Why would you assume otherwise?

      1. Anonymous Coward
        Anonymous Coward

        "They are. Rich Salz posted the patch to the openssl-users list, which all the OpenSSL developers follow. Why would you assume otherwise?"

        Posting it is not the same as working with the developers. You must have missed this in the article as well:

        "This patch is a variant of what we've been using to help protect customer keys for a decade"

        So, Akamai made changes to OpenSSL a decade ago and kept it to themselves.

        So, Akamai has not worked with the developers in making OpenSSL even more secure. If they had a decade ago, this would be a non-issue right now.

  3. Anonymous Coward
    Anonymous Coward

    Is the akamai patch trustworthy?

    Repost from a well known news site...

    http://lekkertech.net/akamai.txt

    1. Anonymous Coward
      Anonymous Coward

      Akamai patch failed

      Akamai have confirmed that their patch failed to protect all critical data and are now revoking/reissuing all keys/certs.

      https://blogs.akamai.com/2014/04/heartbleed-update-v3.html

  4. jake Silver badge

    Patched nearly a week ago by OpenSSL, on April 7th.

    And by Slackware on the 8th. Half a week late, there, Akamai.

    http://en.wikipedia.org/wiki/OpenSSL

    ftp://ftp.osuosl.org/pub/slackware/slackware64-14.1/ChangeLog.txt

    1. storner

      No, early

      The patch is not to fix the Heartbleed vuln. It is a patch to improve OpenSSL so the crypto keys are made off-limits to a future attack.

      1. Anonymous Coward
        Anonymous Coward

        Re: No, early

        Whilst the crypto keys are of interest to someone who wants to spoof the site - and who also has the necessary access to do DNS cache poisoning or whatever - this patch would have done nothing about the *content* of the SSL session, including people's usernames and passwords, credit card numbers and such like, all of which would still be available for hoovering up even if it had been used.

        OTOH, the fact that OpenSSL had purposely *bypassed* memory protection mechanisms in malloc() is a bigger issue.

        1. Anonymous Coward
          Anonymous Coward

          Re: No, early

          > OTOH, the fact that OpenSSL had purposely *bypassed* memory protection mechanisms in malloc() is a bigger issue.

          No, they didn't. What they did was write their own wrapper around malloc so that they could cache memory instead of releasing it. The bug would still exist, since malloc() itself would more than likely return a segment of memory from the heap that had previously been allocated and used by the process. The exception would be when allocating and freeing a block of memory larger than the mmap threshold, which is 128K by default. All of this is, of course, implementation-dependent, but this is what Linux does.
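          The caching behaviour described above can be sketched with a toy single-slot freelist (hypothetical names; this is not OpenSSL's actual code): free() parks the block, and the next allocation of the same size hands it straight back, old contents and all.

          ```c
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Toy single-slot freelist: wrap_free() caches the block instead of
           * releasing it; wrap_malloc() reuses it without clearing. */
          static void  *cached_block = NULL;
          static size_t cached_size  = 0;

          void *wrap_malloc(size_t n) {
              if (cached_block && cached_size == n) {
                  void *p = cached_block;   /* reuse: stale data comes with it */
                  cached_block = NULL;
                  return p;
              }
              return malloc(n);
          }

          void wrap_free(void *p, size_t n) {
              if (!cached_block) { cached_block = p; cached_size = n; return; }
              free(p);
          }

          int main(void) {
              char *a = wrap_malloc(32);
              strcpy(a, "private key bits");
              wrap_free(a, 32);              /* cached, not returned to the OS */

              char *b = wrap_malloc(32);     /* same size: gets the old block */
              printf("stale contents: %s\n", b);
              free(b);
              return 0;
          }
          ```

          The point being: with a caching wrapper like this in front, even an allocator that did scrub freed memory would never get the chance.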

          1. Daniel B.
            Boffin

            Re: No, early

            No they didn't. What they did is write their own wrapper around malloc so that they could cache memory instead of releasing it.

            Hm… this could be interesting. If they have a wrapper around malloc(), they could theoretically zero out newly allocated memory before returning the pointer to the caller. That would render Heartbleed (and any similar attack) useless, as the whole allocated buffer would be full of NULs, wouldn't it? I'd fill it with 0xDEADBEEF, but that would probably be more costly to pull off...
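            A minimal sketch of that suggestion (hypothetical name, not a real OpenSSL API): a wrapper that scrubs every block before handing it to the caller, so reused heap memory never carries stale secrets.

            ```c
            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>

            /* Zero each allocation before returning it -- same effect as
             * calloc(1, n), just expressed as a malloc wrapper. */
            void *scrubbed_malloc(size_t n) {
                void *p = malloc(n);
                if (p)
                    memset(p, 0, n);
                return p;
            }

            int main(void) {
                unsigned char *buf = scrubbed_malloc(64);
                if (!buf) return 1;

                int dirty = 0;
                for (size_t i = 0; i < 64; i++)
                    if (buf[i] != 0) dirty = 1;

                printf("buffer is %s\n", dirty ? "dirty" : "all zeros");
                free(buf);
                return 0;
            }
            ```

            Filling with a pattern like 0xDEADBEEF instead would work the same way, just with a non-zero fill; some debugging allocators do exactly that to make use of stale memory obvious.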

  5. david 12 Silver badge

    Heartbleed exposes a generic problem

    Recovery of data from memory has been demonstrated many times by increasingly sophisticated malware. So the real question isn't "why wasn't this exploit detected by static analysis from Coverity?", but why on earth is Open Source/Linux/BSD software leaving vulnerable information in memory in the first place?

    1. Paul Crawford Silver badge

      Re: leaving vulnerable information in memory in the first place?

      ALL computers leave essential information in memory - they need to in order to work!

      The issue here, as is so often the case, is poor use of malloc()/free() and the opportunity for such memory to be re-used without sanitisation.

      I'm not an expert, but I use calloc() in all but uber-time-critical steps partly to stop this sort of thing, and partly so when I do make a boo-boo at least I get consistent borking as it always starts with zero'd memory before I go on to abuse it.

      The patch is about keeping the keys in memory that is not easily re-used, which is good, but as already reported the OpenSSL project really needs some proper support and a bit more code review. Hey NSA/GCHQ could you do something useful for us for a change?
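      The calloc() habit described above can be shown in a few lines: the unused tail of a zero-initialised buffer reads back as zeros rather than whatever the heap held before, so an over-read of that tail leaks nothing.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          /* calloc hands back zeroed memory, so any part of the buffer we
           * never wrote to reads as 0, not leftover heap contents. */
          unsigned char *buf = calloc(64, 1);
          if (!buf) return 1;

          buf[0] = 'x';                 /* only the first byte is used */

          int stale = 0;
          for (int i = 1; i < 64; i++)  /* inspect the unused tail */
              if (buf[i] != 0) stale = 1;

          printf("tail is %s\n", stale ? "dirty" : "clean");
          free(buf);
          return 0;
      }
      ```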

      1. This post has been deleted by its author

        1. Sir Runcible Spoon

          Re: leaving vulnerable information in memory in the first place?

          Are the keys stored in memory in a single block?

          What about distributing the keys into different segments and using pointers to the various locations to stitch them back together when required? Inefficient, I suppose, but then they could always use HSMs.

          Or indeed, as has already been pointed out, wipe the fucking memory block after using such uber-sensitive data.

        2. Paul Crawford Silver badge

          Re: leaving vulnerable information in memory in the first place?

          "You using calloc doesn't solve a damn thing."

          Except in this bug it would have, as the padding beyond the heartbeat request that was returned when the request length was longer would always be zero'd. Thus no leaks.

          Where you are correct is that it won't stop other heap-walking mischief where something else gets hold of a freed block with sensitive data. Though others using calloc() by default would minimise that risk as well.

          What would be nice would be a built-in cfree() equivalent that already knew the allocated buffer size, so it could zero it; then you could use "#define free(x) cfree(x)" (or some compile flag) to apply it generically without having to rewrite code to pass the size as well.
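          A minimal sketch of that cfree() idea (hypothetical names, standard C only): stash the block size in a small header at allocation time, so the matching free can zero the whole buffer without the caller passing the size again.

          ```c
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Allocate n usable bytes, with the size recorded in a header
           * just before the pointer we hand out. */
          void *salloc(size_t n) {
              size_t *p = malloc(sizeof(size_t) + n);
              if (!p) return NULL;
              *p = n;             /* remember the usable size */
              return p + 1;       /* caller sees only the area after the header */
          }

          /* Scrub the buffer using the recorded size, then release it. */
          void cfree(void *ptr) {
              if (!ptr) return;
              size_t *p = (size_t *)ptr - 1;
              memset(ptr, 0, *p);
              free(p);
          }

          int main(void) {
              char *key = salloc(16);
              if (!key) return 1;
              strcpy(key, "hunter2");
              cfree(key);          /* "hunter2" is gone before the block is reused */
              printf("scrubbed\n");
              return 0;
          }
          ```

          A real version would use explicit_bzero() or memset_s() so the compiler can't optimise the scrub away, and would align the header to max_align_t for arbitrary payload types.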

          1. Michael Wojcik Silver badge

            Re: leaving vulnerable information in memory in the first place?

            Except in this bug it would have, as the padding beyond the heartbeat request that was returned when the request length was longer would always be zero'd.

            I don't believe that's true. First, OpenSSL's malloc wrapper would also have to clear the allocated memory if it took a buffer from its freelist. Second, the packet buffer would always have to be allocated for at least 64KB, regardless of message size; I'm not sure OpenSSL does that in all cases.

            In fact, I don't think it ever allocates a packet buffer of that size, at least in 1.0.1c. Look at ssl3_setup_read_buffer in s3_both.c (which also allocates the receive buffer used for TLS). It computes the buffer length using SSL3_RT_MAX_PLAIN_LENGTH, which is ~16KB.

            1. Paul Crawford Silver badge

              Re: @Michael Wojcik

              I'm not sure, but usually if you overrun a buffer then standard tools like the "electric fence" library or the valgrind tool will find the problem.

              Of course, if you write obscure code and use a not-very-well-thought-through alternative version of malloc() then things might not go so well...

              1. Jamie Jones Silver badge

                Re: @Michael Wojcik

                2 errors in the comments in this thread:

                "They use their own malloc"

                No. If you follow the spaghetti trail that is the source code, you'll see that their "malloc wrapper" is simply a call to the system malloc.

                "This wouldn't have happened if they used calloc"

                Yes it would. Try it yourself!

                This bug has nothing to do with memory allocation. It seems many people think that the buffer is malloced to the 64k by virtue of the attacking packet, but only the much smaller payload is copied into the buffer, exposing the rest of the buffer as malloced but stale data.

                THIS ISN'T THE CASE!

                Besides, any sane malloc on a multi-user system would clear/randomize the returned buffer.

                What is happening is that 64KB of data is being copied into a 64KB buffer, from a char * buffer that contains the much smaller data sent by the attacker, hence filling the reply with whatever other variable data lies beyond the source buffer on the stack.

                It can be simplified to:

                char retbuf[65535];

                char sentbuf[1];

                memcpy (retbuf, sentbuf, 65535);

                I.e. it's read-overflow (or 'buffer overflow' by reading rather than writing) - nothing to do with the memory allocation!
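                The over-read described above can be demonstrated safely (all accesses stay inside one array here, so this is well-defined C, unlike the real bug): copying the attacker-claimed length instead of the real payload length drags neighbouring bytes into the reply.

                ```c
                #include <stdio.h>
                #include <string.h>

                int main(void) {
                    /* Simulated process memory: a 5-byte payload sits next to
                     * data the peer was never meant to see. */
                    char memory[32] = {0};
                    memcpy(memory, "hello", 5);            /* the actual payload */
                    memcpy(memory + 8, "SECRETKEY", 9);    /* adjacent sensitive data */

                    char reply[32] = {0};
                    size_t claimed_len = 20;               /* attacker claims 20, sent only 5 */
                    memcpy(reply, memory, claimed_len);    /* the flaw: trust the claim */

                    printf("leaked: %s\n", reply + 8);     /* adjacent data in the reply */
                    return 0;
                }
                ```

                No allocator behaviour is involved at all: the copy length comes from the attacker, so whatever happens to sit past the genuine payload goes back over the wire.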

                1. Paul Crawford Silver badge

                  Re: @Jamie Jones

                  Thanks for the feedback, I stand corrected.

                  "If you follow the spaghetti trail that is the source code"

                  I think you have identified a significant problem just there.

                  "I.e. it's read-overflow (or 'buffer overflow' by reading rather than writing) - nothing to do with the memory allocation!"

                  If they are really using a stack-based source then electric fence would not have caught it, but I would have hoped some of the code profiling tools would have thrown up a warning about the copy size being potentially bigger than the buffer.

                  1. Jamie Jones Silver badge
                    Pint

                    Re: @Paul

                    After reading your posts, I spent a few hours going over the code again, and google, before replying.

                    I'm no C expert - definitely no crypto expert - but I have to say it shows that the code is written by mathematicians rather than programmers! - loads of labels and pointers to pointers to functions and bleugh!

                    They even comment out code using #if 0. Ugh.

                    "If they are really using a stack-based source then electric fence would not have caught it, but I would have hoped some of the code profiling tools would have thrown up a warning about the copy size being potentially bigger than the buffer."

                    There was an interesting post (http://security.coverity.com/blog/2014/Apr/on-detecting-heartbleed-with-static-analysis.html) from one of the Coverity people on why they missed it, and a linked follow-up post on how they've now altered their product to find such errors in future. To me, though, their solution looks like a bit of a kludge, potentially producing false positives: it seems they are keying on a very weird scenario, not necessarily an illegal one - though I'm probably wrong, or maybe that's just how these programs generally work... I don't know!

                    Anyway, I agree with all your comments in general, but am curious - is there really any 'live' malloc that doesn't return a pointer to cleared/scrubbed memory? I know the spec says the contents are undefined, but surely it would be a security risk. (I suppose a malloc optimised not to bother scrubbing memory returned to the same UID, or even just the same process, wouldn't be a hole in itself, but even that would make it easier to exploit buggy software, especially servers.)

                    Anyway, it's a lovely day, so I'm going outside. Have a cold beer on me!

          2. This post has been deleted by its author

      2. Anonymous Coward
        Anonymous Coward

        Re: leaving vulnerable information in memory in the first place?

        >ALL computers leave essential information in memory - they need to in order to work!

        >The issue here, as is so often the case, is poor use of malloc()/free()

        I understand that the closest Linux equivalent to "CryptProtectMemory" is "gcry_malloc_secure", not "malloc".

        So the real question is, why on earth is Open Source/Linux/BSD software leaving vulnerable information in memory in the first place?

  6. All names Taken
    Paris Hilton

    Hmmm

    Hey NSA/GCHQ could you do something useful for us for a change?

    Hmmm seems sensible.

    You'd think with a business model generating 2 trillion gazillion billion of trade on a worldwide basis in hardware, software and content that someone, somewhere might say something like "yeah - but we need it to be sanitized?"

    Maybe the open-open view of internet is okay but maybe a closed-closed internet has a commercial basis too?

    1. Anonymous Coward
      Anonymous Coward

      Re: Hmmm

      maybe a closed-closed internet has a commercial basis too?

      Not if it is to create interoperability between diverse parties and organisations. Trusting the carrier is simply not good security practice. I can see where you're coming from, but the moment you require any kind of scale and scalability, the issue becomes one of too far distributed trust, and you end up with the old "hard shell, soft centre" risk where one breach exposes all.

      The "trust the network" approach is typically used inside one single company, and even there you ought to have segregation - a sizeable company whose HR and financial systems are not separated from the main office LAN (and even WAN - don't laugh, I've seen it) is begging for trouble, also from a compliance perspective.

      1. All names Taken

        Re: Hmmm

        Upvoted

        have a beer dude!
