Oracle's Larry Ellison claims his Sparc M7 chip is hacker-proof – Errr...

Oracle insists it really is going to sell computers powered by Sparc M7 processors – the same chips it started talking about in 2014. On Monday, Big Red breathlessly unveiled hardware powered by the beefy microprocessor, and on Tuesday, its supremo Larry Ellison lauded the 64-bit CPU's security defenses. One of these defenses …


  1. Mage Silver badge

    4 bits one in 16

    So you try 16 times?

    Or what have I misunderstood?

    1. diodesign (Written by Reg staff) Silver badge

      Re: 4 bits one in 16

      That's basically it. Try it at least 16 times and one attempt is bound to work. Or bit-flip the most significant bits in a pointer to change its color. Just make sure you don't set off any alarms if you repeatedly crash the app.

      C.
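
      For the curious, here is a minimal sketch of the two tricks described above, assuming (as Oracle's ADI material describes) that the 4-bit colour rides in the top four bits of a 64-bit pointer and that the hardware traps when it disagrees with the memory's tag. The victim address below is made up for illustration.

        #include <cstdint>
        #include <cstdio>

        // Rewrite bits 63..60 of a pointer value with a chosen 4-bit colour.
        constexpr uint64_t with_colour(uint64_t ptr, uint64_t colour) {
            return (ptr & 0x0FFFFFFFFFFFFFFFULL) | (colour << 60);
        }

        int main() {
            const uint64_t victim = 0xA000000012345678ULL;   // made-up address, colour 0xA
            // Brute force: walk all 16 colours; at most 15 attempts trap, one matches.
            for (uint64_t colour = 0; colour < 16; ++colour)
                std::printf("try %llu -> %016llx\n",
                            static_cast<unsigned long long>(colour),
                            static_cast<unsigned long long>(with_colour(victim, colour)));
        }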

      1. DropBear
        Facepalm

        Re: 4 bits one in 16

        Thankfully, crashes these days are so exceedingly rare they would also immediately raise a red flag of nefarious stuff going on.

        ...oh, wait... (want an instant BSOD STOP error on your PC, in Anno Domini 2015? Upgrade to the latest Steam client! True story!)

      2. brain_flakes

        Re: 4 bits one in 16

        If you're in a position to flip bits in someone else's pointer aren't you already in control of the application?

        1. Michael Wojcik Silver badge

          Re: 4 bits one in 16

          If you're in a position to flip bits in someone else's pointer aren't you already in control of the application?

          Generally not. The typical use-after-free attack, like most stack-smashing attacks, integer-overflow attacks, etc., must leverage the initial violation into a full exploit. Generally that's a process of some complexity - how complex depends on the vulnerability and the application in which it exists. Sometimes it's straightforward, as with many return-into-library exploits. Sometimes it isn't; Ormandy's #GP Trap exploit for Windows is a good example of a complicated one.

          So it's quite plausible that you'd have a vulnerability that let you flip bits in a pointer but did not in itself give you much more than that.

      3. Anonymous Coward
        Anonymous Coward

        Re: 4 bits one in 16

        Oh my, those guys who design silicon chips for a living must be real dimwits not to think of just trying 16 times to get past this protection.

        Fancy that: they can design a chip that has billions of logic gates and holds 20 world records, but never think of such an easy workaround that defeats a five-year design process.

        OTOH, perhaps your assumptions taken from an exec keynote intended for a non-technical audience may not be entirely correct.

        1. Roland6 Silver badge

          Re: 4 bits one in 16

          "Oh my, those guys who design silicon chips for a living must be real dumwits to not think of just trying 16 times to get past this protection."

          An interesting programming challenge, as every time your probe guesses wrong the chip issues an exception and halts your probe. I look forward to seeing what Joanna Rutkowska (Invisible Things Lab) has to say about the M7 security features...

          1. Anonymous Coward
            Anonymous Coward

            Re: 4 bits one in 16

            "An interesting programming challenge, as every time your probe guesses wrong the chip issues an exception and halts your probe."

            Not really: your main executable would keep track of the address and colours it's tried and spawn a new process to do the actual trying. If the new process gets terminated, move on to the next colour.
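
            A minimal sketch of that spawn-and-retry idea, as a simulation only: the risky dereference and the hardware trap are stood in for by a made-up try_colour() that aborts on a mismatch, so a wrong guess only kills the child process.

              #include <sys/types.h>
              #include <sys/wait.h>
              #include <unistd.h>
              #include <cstdio>
              #include <cstdlib>

              static const int secret_colour = 11;    // stand-in for the allocation's real tag

              static void try_colour(int colour) {
                  if (colour != secret_colour)
                      std::abort();                   // stands in for the ADI mismatch trap
              }

              int main() {
                  for (int colour = 0; colour < 16; ++colour) {
                      pid_t child = fork();
                      if (child == 0) {               // the child does the risky access
                          try_colour(colour);
                          _exit(0);                   // clean exit: the colour matched
                      }
                      int status = 0;
                      waitpid(child, &status, 0);
                      if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
                          std::printf("colour %d worked\n", colour);
                          return 0;
                      }
                      // child was killed: note the failure and move on to the next colour
                  }
                  return 1;
              }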

            1. Anonymous Coward
              Anonymous Coward

              Re: 4 bits one in 16

              your main executable would keep track of the address and colours

              So you're assuming that you get to run your own executable on the server? Well, that already limits the potential exploits, and requires a valid login to the server plus a way to upload or create an executable. This is not the vector used by most exploits, such as Heartbleed.

              We're talking here about preventing the malicious exploitation of software bugs that cause buffer overflows or double-free memory errors. These are exploits of software bugs, not malicious executables.

              Also, you're assuming an executable can 'see' the pointer colour. It cannot, and neither can it manipulate the pointer colour.

              Remember that memory is allocated (and coloured) per malloc call, so even contiguous memory belonging to a single application cannot be read sequentially if it was malloc'd in chunks, because different mallocs will have been given different colours. Both free memory and pointer metadata have dedicated colours and are automatic fails.

              The protection here is not between applications (that has been tried before) but between different malloc calls.
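
              To make the per-malloc colouring concrete: the snippet below is an ordinary heap overflow, with nothing ADI-specific in it. The point is that a and b come from separate malloc calls, so (per the comment above) they carry different colours, and on an M7 with an ADI-aware malloc the overrun would trap at the first line tagged with the neighbouring colour instead of silently corrupting b.

                #include <cstdlib>
                #include <cstring>

                int main() {
                    char *a = static_cast<char *>(std::malloc(64));
                    char *b = static_cast<char *>(std::malloc(64));   // separate malloc: different colour
                    if (!a || !b) return 1;
                    std::memset(b, 'B', 64);
                    std::memset(a, 'A', 96);   // classic overflow: writes 32 bytes past the end of a
                    std::free(a);
                    std::free(b);
                    return 0;
                }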

    2. chris 48

      Re: 4 bits one in 16

      This isn't a particularly useful security feature once your badly written code is out in the wild, but it does stop you writing that bad code in the first place: your application will fail 15 times out of 16 in normal use, and hopefully you'll look into that.

    3. brain_flakes

      Re: 4 bits one in 16

      If allocation is done cyclically rather than randomly, it may not be possible to set up a situation where your pointer colour matches a new memory block, due to the order of memory allocation in the application.

    4. Anonymous Coward
      Anonymous Coward

      Re: 4 bits one in 16

      Seeing as the only people likely to be running these are probably stuck with large legacy estates running lots of Java crap, Oracle are trying anything they can to stuff some hardware rags in the Swiss Cheese holes in their software...

  2. Neil Barnes Silver badge

    I'm having a vague flashback here...

    Didn't early 68000 Macs have an issue with people storing metadata in the top bits of memory pointers - causing much brokenness when larger memory models came along?

    Yes, I know a 64 bit address is a *big* chunk of memory, but sooner or later...

    1. This post has been deleted by its author

    2. jzl

      Re: I'm having a vague flashback here...

      A 64-bit address allows 18.4 million terabytes to be addressed. That's not just "big" :-)

      By the time we need more than that, code will be writing itself and human beings will be obsolete sacks of meat.

      To give you an idea, getting 18.4 million terabytes using 32GBit desktop DRAM sticks would require a surface area of around 16 square kilometres by some back-of-the-envelope calculations I found on the web.
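
      The 18.4 million terabytes figure is easy to sanity-check: 2^64 bytes divided by 10^12 bytes per (decimal) terabyte, as in the throwaway snippet below.

        #include <cstdio>

        int main() {
            const long double bytes = 18446744073709551616.0L;        // 2^64
            std::printf("%.1Lf million TB\n", bytes / 1e12L / 1e6L);   // prints ~18.4
        }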

      1. andy67

        Re: I'm having a vague flashback here...

        Remember that only 30 or so years ago, it was inconceivable that anyone would ever need more than 640KB of memory... :-) You'll have that much in a mobile phone in 30 years' time!

      2. Crisp

        Re: 18.4 million terabytes?

        That should be enough for anybody...

        1. Preston Munchensonton
          Coat

          Re: 18.4 million terabytes?

          That should be enough for anybody...

          Only if these movie studio retards keep making movies like Sharknado and Saw.

          1. Anonymous Coward
            Anonymous Coward

            Re: 18.4 million terabytes?

            Hollywood has already come up with a number of fiendishly clever compression algorithms to save space on future movie projects. For example, the recent "DH" algorithm compresses an entire movie, which is typically tens of GB, into an array of around five bytes using an initial key known as a "willis".

            The as-yet-unpublished RFC describes the reference implementation using the following example.

            Key ("willis") = Die Hard

            Input = movie to be compressed

            Output: (Key) + "on a " + {bus, plane, car, office, ...}

            Initial benchmarks have proven extremely effective at compressing the input in a lossless manner that allows full data recovery. For example, audiences were able to predict the content of a sample of movies including Speed, Passenger 57 and Air Force One using only the inputs of the key name and the strings "bus" and "plane".

      3. ckm5

        Re: I'm having a vague flashback here...

        Given that there are estimates that each person will be producing 5Tb of data/year by 2020, 18 million Tb doesn't sound like that much.

        1. HCV

          Re: I'm having a vague flashback here...

          It would mean that in 2020, the amount of permanent data one person would generate in a year would fit in 3 millionths of the temporary working area of a single 64-bit application.

        2. Michael Wojcik Silver badge

          Re: I'm having a vague flashback here...

          Given that there are estimates that each person will be producing 5Tb of data/year by 2020

          People make all sorts of claims (sometimes in passive voice, to avoid attributing them). I suspect anyone promulgating such an "estimate" is a tad hard of thinking.

          What is it even supposed to mean? What constitutes "producing data" in this context? How much of that data is said person going to want to store? Will this same amount be true of both some technophilic wannabe-transhumanist lifeblogger and a subsistence farmer in Bangladesh, or is "each person" the sort of handwaving rubbish thrown around by pundits who can't be troubled with the details?

          And why 5Tb? Because it sounds better than 0.6TB?

          1. Vic
            Black Helicopters

            Re: I'm having a vague flashback here...

            What is it even supposed to mean? What constitutes "producing data" in this context?

            Amount of CCTV video lying around with your face on it?

            Vic.

      4. Stevie

        Re: I'm having a vague flashback here...

        Me too.

        Trainee me, circa 1979: "Shouldn't we be storing the century part of the date here?"

        Chief programmer, struggling to be heard over the laughter: "We won't be running these Cobol programs at the end of the century, silly boy!"

        1. Michael Wojcik Silver badge

          Re: I'm having a vague flashback here...

          Trainee me, circa 1979: "Shouldn't we be storing the century part of the date here?"

          Chief programmer, struggling to be heard over the laughter: "We won't be running these Cobol programs at the end of the century, silly boy!"

          Chief programmer was right, at least from a Bayesian reasoner's point of view. There was a high probability that any given piece of code wouldn't still be in use 20 years later. And for the code that was, the savings from using 2-digit years were very likely greater than the cost of remediation (once adjusted for inflation, etc.). Factor in the widespread economic benefits of the capital spending and productivity boost that Y2K remediation is broadly credited with (among people who've actually studied it with some degree of rigor), and it was with high probability the better decision.

  3. jzl

    Still vulnerable

    "Hi Doris, this is Charlie from tech support. Can you tell me your password please?"

    1. german_se

      Re: Still vulnerable

      If that were the only weakness of this technique, then it would be a massive improvement. Organisations can control root access to all devices. The big problem is an undetected cybernetic intrusion, lasting for years if properly executed.

      1. Naselus

        Re: Still vulnerable

        "Organisations can control root access to all devices. The big problem is an undetected cybernetic intrusion, lasting for years if properly executed."

        Most hacks rely on social engineering at the top of the kill chain; that's usually how the cybernetic intrusion happens in the first place.

        Also, I wouldn't overstate the ability of any corporation of significant size to control Root. The moment your IT department is big enough to include senior staff with non-techy backgrounds, you might as well publish the domain admin password in the local paper.

        1. jzl

          Re: Still vulnerable

          Large organisations are made up of thousands of people. Even the high tech ones have relatively untrained, uneducated people in some positions with systems access. And when you start looking at thousands of people, you're playing the odds.

          No corporation can successfully secure its borders if it employs more than, say, one hundred humans.

      2. Michael Wojcik Silver badge

        Re: Still vulnerable

        The big problem is an undetected cybernetic intrusion

        The big problem is an intrusion into a system that controls its own behavior using feedback? Because that's what "cybernetic" means.

        Social engineering is a type of "cybernetic intrusion".

    2. Amorous Cowherder
      Facepalm

      Re: Still vulnerable

      "Hi DBA Duncan, this is developer Dave. We've written this app on the dev system and it needs to go live this afternoon. We need the system password on live to make it work."

      If refused, this usually follows from the project manager...

      "Do you know who asked for this app to be coded? Board member Brian, and he's going to be mighty pissed and looking to make heads roll if anyone stands in the way of this app going live!"

      1. Dadmin
        FAIL

        Re: Still vulnerable

        It's 2015 and you're letting idiots access root instead of making them go through sudo, and you're not running an auditing system with remote logging? Is that you TalkTalk Root_Admin_42069? Never mind. Carry on. Good work!

        1. Vic

          Re: Still vulnerable

          It's 2015 and you're letting idiots access root instead of making them go through sudo

          On public-facing systems? Yes.

          To achieve root access, an attacker needs to compromise two separate passwords...

          Vic.
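
          The setup Vic and Dadmin describe comes down to a few lines of sudoers plus an audit trail. A rough sketch, assuming stock sudo; the log path and the wheel group are illustrative, not prescriptive.

            # /etc/sudoers fragment (edit with visudo) -- illustrative only
            Defaults    logfile="/var/log/sudo.log"   # local audit trail of every sudo command
            Defaults    log_input, log_output         # record session I/O for later review
            Defaults    rootpw                        # ask for root's password, not the caller's,
                                                      # so an attacker needs two separate credentials
            %wheel      ALL=(ALL) ALL                 # admins get root only via sudo, never a root shell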

  4. Anonymous Coward
    Anonymous Coward

    The advantage of security in hardware...

    ...is that it can't be changed.

    The disadvantage of security in hardware is... that it can't be changed.

    When the security is found wanting, you either have to replace the hardware, or go back to software/OS measures.

    Or am I missing something?

    1. jzl

      Re: The advantage of security in hardware...

      The strength of security in hardware isn't so much that it can't be changed as that it can't be avoided by poor development practices and/or software quality control.

      The danger - in my opinion - comes from the potential complacency it induces.

      Does the danger outweigh the benefit? I don't think so. But this stuff should only be used in conjunction with defensive software techniques including, in my book, keeping the use of unmanaged languages (C etc.) to a minimum. Buffer overruns and pointer exploits are the gift that keeps on giving.

      1. german_se

        No - a Weakness

        The most powerful security measures are implemented at the level of the operating system (sandboxing) and of memory-safe programming languages.

        And surely we *have to* improve security or users will go back to physical filing systems and microfiche. The experts in the Kremlin apparently already do this. They should know, as they rule the largest landmass. That is only possible with proper security, including the security of administrative information.

        1. jzl

          Re: No - a Weakness

          With respect, using the Kremlin as an example of good practice in any area is a bad choice. They're mostly a bunch of low-paid, thoroughly corrupt, lifelong bureaucrats.

          Russia, as a country, doesn't work particularly well. There's a sort of mystique that's sprung up around Putin, but the reality of the Russian government is very different.

          1. german_se

            Re: No - a Weakness

            "Russia, as a country, doesn't work particularly well"

            In terms of *security* I do think they know what they're doing, as opposed to countries that have their parliament totally pwned and still do not allow their own security services to defend the network. Yeah, Berlin.

            1. jzl

              Re: No - a Weakness

              Nah. I'd be very surprised if the NSA didn't have the Duma and much of the Kremlin's systems completely pwned.

      2. Roland6 Silver badge

        Re: The advantage of security in hardware...

        "The strength of security in hardware isn't so much that it can't be changed as that it can't be avoided by poor development practices and/or software quality control."

        Well that depends on whether they are on by default or have to be enabled by software...

        I suggest you take a look at the Intel 286! It included some rather nice security features which (if memory serves) allowed you to do something similar to what Oracle are claiming for the M7. I expect these features have been carried forward into the x86 and x64 chip families, yet neither Windows nor Linux makes use of them.

        Whilst there was a significant performance hit when processors ran at 10MHz, now that we have multiple cores running at 2+GHz the overhead is much reduced.

        I wonder if any of the VM hypervisors implement these security features...
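
        For reference, the 286 mechanism being recalled here hung everything off segment descriptors, checked in hardware on every access. A rough sketch of the descriptor layout from memory; treat the field details as illustrative rather than gospel.

          #include <cstdint>
          #include <cstdio>

          // 80286 protected-mode segment descriptor, roughly: every access through a
          // segment is bounds-checked against 'limit' and privilege-checked against the
          // access byte -- conceptually the same "metadata checked on every access" idea
          // as the M7's memory colouring, a generation earlier.
          struct Descriptor286 {
              uint16_t limit;      // segment size; any offset beyond it raises #GP
              uint16_t base_low;   // low 16 bits of the 24-bit segment base address
              uint8_t  base_high;  // high 8 bits of the base
              uint8_t  access;     // present bit, descriptor privilege level, segment type
              uint16_t reserved;   // must be zero on the 286 (later reused by the 386)
          };
          static_assert(sizeof(Descriptor286) == 8, "descriptors are 8 bytes");

          int main() {
              std::printf("descriptor size: %zu bytes\n", sizeof(Descriptor286));
          }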

    2. Michael Habel

      Re: The advantage of security in hardware...

      Certainly was the case with S0NY's PlayStation 3. The dirty little secret is that every console built before Jan. 2011 is de facto "open", since all the keys up to that point are now in the wild. Consequently, anything built after Jan. 2011 will in all likelihood never get hacked. Of course, pissing off a bunch of nerdy geeks by swiping their Linux install function, and leaving traces (or as good as) of the private key in the firmware, was also a bit of luck for those who like the idea of having full access to their hardware. But the fact remains that the immediate exploit (OFW 3.55) has since been closed, making such a hack less trivial to pull off than just installing some custom firmware over S0NY's latest and greatest. Still, the door will never fully close on those consoles, and they remain highly exploitable.

    3. Stevie

      Re: The advantage of security in hardware...

      Four words:

      Field

      Programmable

      Gate

      Array

  5. german_se

    More Bandaids from America

    ASLR, DEP and this thingy won't fix the core problem: the C language. ALGOL was already in a much better state of security during the heyday of ICL, Burroughs and Elbrus mainframes.

    What we need as an industry is to migrate everything to memory-safe languages: Swift, Rust, Sappeur, Vala, Java, C#.

    The first three languages demonstrate that garbage collection is not necessary for memory safety.

    If we do not fix the root cause, the cyber war chaos will only grow bigger and bigger.

    1. jzl

      Re: More Bandaids from America

      Hear hear. Couldn't agree more.

    2. Bronek Kozicki

      Re: More Bandaids from America

      We don't need to migrate to so-called memory-safe languages, because that wouldn't change much. It would still be possible to introduce an SQL injection bug, skip certificate validation, use a poor seed for a random number generator in a security-sensitive context, mix up physical units and put in many other bugs (human stupidity knows no limits) in any of these languages. Removing certain language features is just another band-aid.
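
      A minimal illustration of that point: the query-building below is just as injectable in Java, C#, Rust or anything else, because the hole is in the logic rather than the memory model. The table name and input are made up.

        #include <iostream>
        #include <string>

        int main() {
            std::string user_input = "'; DROP TABLE accounts; --";   // attacker-controlled
            // Building SQL by concatenation is the bug; no garbage collector fixes it.
            std::string query = "SELECT * FROM accounts WHERE name = '" + user_input + "'";
            std::cout << query << "\n";   // the cure is parameterised queries, whatever the language
        }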

      We just need competent programmers, whatever language and API they happen to be using at the moment. I have not written a single "free()" or "delete" in the past 5 years of programming in C++, because I know how to use smart pointers. The root cause is not the language; it is that some people who don't know the basics are deemed to be "competent programmers" and allowed to write security-sensitive code (which is most code, if you follow the principle of multiple layers of defense).
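
      For readers who haven't met it, the smart-pointer style being described looks roughly like this; Widget is a made-up type for illustration.

        #include <memory>
        #include <string>
        #include <vector>

        struct Widget {
            std::string name;
        };

        int main() {
            auto owned  = std::make_unique<Widget>();   // sole owner, freed automatically
            auto shared = std::make_shared<Widget>();   // reference-counted, freed with the last owner
            std::vector<std::shared_ptr<Widget>> registry;
            registry.push_back(shared);                 // copying the handle bumps the refcount
            owned->name  = "unique";
            shared->name = "shared";
            return 0;
        }   // everything is released here in the right order -- no free(), no delete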

      1. Naselus

        Re: More Bandaids from America

        " it is that some people who don't know the basics are deemed to be "competent programmers""

        This. And a general lack of security awareness throughout the whole enterprise, tbh. Most staff should be given a few hours of security basics as a matter of course before they're even allowed to touch a PC - how to spot a phishing scam, the importance of passwords, why encryption is your friend, why USB sticks aren't. Too many businesses think that having one guy who met a CISSP once covers their security needs completely, and that everyone else can get on with their job without needing to know anything about it. Programmers are just the most obvious example of this, but the problem is endemic at all levels and in all business areas, from the janitor up to the C-suite.

        1. A Non e-mouse Silver badge

          @Naselus Re: More Bandaids from America

          And a general lack of security awareness throughout the whole enterprise, tbh

          Is it the lack of security awareness, or the low priority managers put on security? Security isn't a simple tick box in Visual Studio or Eclipse! Often a manager will want code that works "well enough" to ship out the door. Time is money, and security done right takes time.

      2. german_se

        Really ?

        We had a shedload of C-style exploits in the Windows, HP-UX and Linux kernels. The type of infallible software engineer you call for DOES NOT EXIST.

        And because we can contract cancer (SQL injection), we should not implement countermeasures against the plague? How rational of you!

      3. jzl

        Re: More Bandaids from America

        Essentially, you're saying "there's nothing wrong with the system, fix the humans".

        You can't fix the humans. They're too variable. The system, however, can be fixed.

        C / C++ are not safe for routine enterprise development.

        1. Bronek Kozicki

          Re: More Bandaids from America

          "C / C++ are not safe for routine enterprise development."

          Strongly disagree. Yes, for certain purposes neither C nor C++ is safe; that's where you would use Ada or something else again. However, in the hands of competent programmers, with the correct toolset (compiler warnings turned up, plus static and dynamic analysis tools) and a sane design, there is nothing wrong with these languages in the enterprise.

          On the other hand, leaning too heavily on a large monolithic sandboxed VM runtime (e.g. JVM or CLR) creates a monoculture which, as we know, brings its own security risks. Unless you really believe that these environments are bug-free, in which case I want to remind you that they are written in C++ and that they are very, very complex internally - more complex than many of the enterprise applications you would want to run on top.
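
          As a concrete example of the toolset point: the use-after-free below (the very class of bug the M7's colouring targets) is caught at run time by AddressSanitizer when built with something like g++ -g -fsanitize=address uaf.cpp.

            #include <cstdlib>
            #include <cstring>

            int main() {
                char *p = static_cast<char *>(std::malloc(32));
                if (!p) return 1;
                std::strcpy(p, "hello");
                std::free(p);
                return p[0];   // use-after-free: ASan aborts here with a full report
            }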
