4 bits = one in 16
So you try 16 times?
Or what have I misunderstood?
Oracle insists it really is going to sell computers powered by Sparc M7 processors – the same chips it started talking about in 2014. On Monday, Big Red breathlessly unveiled hardware powered by the beefy microprocessor, and on Tuesday, its supremo Larry Ellison lauded the 64-bit CPU's security defenses. One of these defenses …
Thankfully, crashes these days are so exceedingly rare that they would immediately raise a red flag that nefarious stuff was going on.
...oh, wait... (want an instant BSOD STOP error on your PC, in Anno Domini 2015? Upgrade to the latest Steam client! True story!)
If you're in a position to flip bits in someone else's pointer aren't you already in control of the application?
Generally not. The typical use-after-free attack, like most stack-smashing attacks, integer-overflow attacks and so on, must leverage the initial violation into a full exploit. That's generally a process of some complexity - how complex depends on the vulnerability and the application in which it exists. Sometimes it's straightforward, as with many return-into-library exploits. Sometimes it isn't; Ormandy's #GP Trap exploit for Windows is a good example of a complicated one.
So it's quite plausible that you'd have a vulnerability that let you flip bits in a pointer but did not in itself give you much more than that.
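For the avoidance of doubt, a minimal sketch of the kind of bug in question (hypothetical C++, not modelled on any particular CVE) - the free hands the attacker a dangling pointer, not a shell:

    #include <cstring>
    #include <iostream>

    struct Session {
        void (*on_close)();   // a function pointer an attacker would love to own
        char name[56];
    };

    static void safe_handler() { std::cout << "bye\n"; }

    int main() {
        Session* s = new Session{safe_handler, "alice"};
        delete s;                             // bug: object freed...
        // ...but the stale pointer s is still used later on.
        char* attacker_data = new char[64];   // allocator *may* reuse the chunk
        std::memset(attacker_data, 0x41, 64); // attacker bytes land where s was
        s->on_close();                        // use-after-free: now a hijacked call
    }

Whether the new allocation actually reuses the freed chunk depends on the allocator; turning that toehold into code execution is the "process of some complexity" above.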
Oh my, those guys who design silicon chips for a living must be real dimwits to not think of just trying 16 times to get past this protection.
Fancy that: they can design a chip that has billions of logic gates and takes 20 world records, yet never think of such an easy get-around that defeats a five-year design process.
OTOH, perhaps your assumptions taken from an exec keynote intended for a non-technical audience may not be entirely correct.
"Oh my, those guys who design silicon chips for a living must be real dumwits to not think of just trying 16 times to get past this protection."
An interesting programming challenge, as every time your probe guesses wrong the chip issues an exception and halts your probe. I look forward to seeing what Joanna Rutkowska (Invisible Things Lab) has to say about the M7 security features...
"An interesting programming challenge, as everytime your probe guesses wrong the chip issues an exception and halts your probe."
Not really: your main executable would keep track of the address and the colours it's tried, and spawn a new process to do the actual trying. If the new process gets terminated, next colour.
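Something like this sketch, perhaps (POSIX fork/waitpid; the tag-bit layout and the with_colour() helper are pure assumptions of mine - no such user-space primitive is documented, which is rather the point of the reply below):

    #include <cstdint>
    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    // Hypothetical: stuff a 4-bit colour into a pointer's tag bits.
    // Assumes the tag lives in VA bits 63..60; purely illustrative.
    char* with_colour(char* p, unsigned colour) {
        return (char*)(((uintptr_t)p & ~(0xfULL << 60)) |
                       ((uintptr_t)colour << 60));
    }

    int main() {
        char* target = nullptr;  // placeholder: a real attack would get
                                 // this address via some separate bug
        for (unsigned colour = 0; colour < 16; ++colour) {
            pid_t child = fork();
            if (child == 0) {
                volatile char c = *with_colour(target, colour); // traps on mismatch
                (void)c;
                _exit(0);                                       // colour was right
            }
            int status = 0;
            waitpid(child, &status, 0);
            if (WIFEXITED(status)) {
                printf("colour %u works\n", colour);
                break;
            }
            // child died on a signal: wrong colour, try the next one
        }
    }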
your main executable would keep track of the address and colours
So you're assuming that you get to run your own executable on the server? Well, that already limits the potential exploits, and it requires a valid login to the server plus a way to upload or create an executable. This is not the vector used by most exploits, such as Heartbleed.
We're talking here about preventing the malicious exploit of software bugs causing buffer overflows or double free memory errors. These are exploits of software bugs, not malicious executables.
Also, you're assuming an executable can 'see' the pointer colour. It cannot, nor can it manipulate it.
Remember that memory is allocated (and coloured) on a per-malloc basis, so even contiguous memory within a single application, where the memory has been malloc'd in chunks, becomes unreadable sequentially, because different mallocs will have been given different colours. Both free memory and pointer metadata have dedicated colours and are automatic fails.
The protection here is not between applications (that has been tried before) but between different malloc calls.
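If it helps, here's a toy software model of that per-malloc colouring, purely for illustration - the Heap type and its bookkeeping are invented here, and on the M7 the tag lives in spare memory bits and is checked by the hardware pipeline, not by code like this:

    #include <cstdint>
    #include <stdexcept>
    #include <utility>
    #include <vector>

    struct Heap {
        std::vector<char>    byte;
        std::vector<uint8_t> tag;        // one colour per byte; 0 = free
        uint8_t next_colour = 1;

        // "malloc": carve out n bytes and colour them all the same.
        std::pair<size_t, uint8_t> alloc(size_t n) {
            size_t base = byte.size();
            uint8_t c = next_colour;
            next_colour = (next_colour % 15) + 1;   // cycle 1..15, never 0
            byte.resize(base + n, 0);
            tag.resize(base + n, c);
            return {base, c};
        }

        // Every access presents the colour baked into the pointer.
        char load(size_t addr, uint8_t ptr_colour) {
            if (tag.at(addr) != ptr_colour)
                throw std::runtime_error("trap: colour mismatch");
            return byte[addr];
        }
    };

    int main() {
        Heap h;
        auto [a, ca] = h.alloc(16);
        auto [b, cb] = h.alloc(16);   // contiguous, but a different colour
        h.load(a + 15, ca);           // fine: last byte of the first block
        h.load(b, cb);                // fine: second block, matching colour
        h.load(a + 16, ca);           // throws: walked off the end into b
    }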
A 64 bit address allows 18.4 million terabytes to be addressed. That's not just "big" :-)
By the time we need more than that, code will be writing itself and human beings will be obsolete sacks of meat.
To give you an idea, getting 18.4 million terabytes using 32GBit desktop DRAM sticks would require a surface area of around 16 square kilometres by some back-of-the-envelope calculations I found on the web.
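For the curious, the envelope works out roughly like this (assuming a 32Gbit = 4GB stick with a footprint of about 133mm x 30mm):

    2^64 B ≈ 1.84 x 10^19 B = 1.84 x 10^7 TB   (the 18.4 million TB above)
    1.84 x 10^19 B / 4 x 10^9 B per stick ≈ 4.6 x 10^9 sticks
    4.6 x 10^9 sticks x ~0.004 m^2 each ≈ 1.8 x 10^7 m^2 ≈ 18 km^2

which lands in the same ballpark as the 16 square kilometres figure.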
Hollywood has already come up with a number of fiendishly clever compression algorithms to save space on future movie projects. For example, the recent "DH" algorithm compresses an entire movie, which is typically tens of GB, into an array of around five bytes using an initial key known as a "willis".
The as-yet-unpublished RFC describes the reference implementation using the following example.
Key ("willis") = Die Hard
Input = movie to be compressed
Output: (Key) + "on a " + {bus, plane, car, office, ...}
In initial benchmarks the algorithm has proven extremely effective at compressing the input in a lossless manner that allows full data recovery. For example, audiences were able to predict the content of a sample of movies including Speed, Passenger 57 and Air Force One using only the inputs of the key name and the strings "bus" and "plane".
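In the spirit of the as-yet-unpublished RFC, a reference decompressor might look something like this (entirely hypothetical, naturally):

    #include <iostream>
    #include <string>

    // Reference "DH" decompressor: reconstructs an entire movie from a
    // five-byte-ish payload. Lossless in the sense that test audiences
    // report full recovery of the plot.
    std::string dh_decompress(const std::string& willis,
                              const std::string& payload) {
        return willis + " on a " + payload;
    }

    int main() {
        const std::string key = "Die Hard";            // the "willis"
        for (const std::string& p : {"bus", "plane", "boat"})
            std::cout << dh_decompress(key, p) << '\n';
    }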
Given that there are estimates that each person will be producing 5Tb of data/year by 2020
People make all sorts of claims (sometimes in passive voice, to avoid attributing them). I suspect anyone promulgating such an "estimate" is a tad hard of thinking.
What is it even supposed to mean? What constitutes "producing data" in this context? How much of that data is said person going to want to store? Will this same amount be true of both some technophilic wannabe-transhumanist lifeblogger and a subsistence farmer in Bangladesh, or is "each person" the sort of handwaving rubbish thrown around by pundits who can't be troubled with the details?
And why 5Tb? Because it sounds better than 0.6TB?
Trainee me, circa 1979: "Shouldn't we be storing the century part of the date here?"
Chief programmer, struggling to be heard over the laughter: "We won't be running these Cobol programs at the end of the century, silly boy!"
Chief programmer was right, at least from a Bayesian reasoner's point of view. There was a high probability that any given piece of code wouldn't still be in use in 20 years. And for what was, the savings of using 2-digit years were very likely greater than the cost of remediation (once adjusted for inflation, etc). Factor in the widespread economic benefits of the capital spending and productivity boost that Y2K remediation is broadly credited with (among people who've actually studied it with some degree of rigor), and it was with high probability the better decision.
"Organisations can control root access to all devices. The big problem is an undetected cybernetic intrusion, lasting for years if properly executed."
Most hacks rely on social engineering at the top of the kill chain; that's usually how the cybernetic intrusion happens in the first place.
Also, I wouldn't overstate the ability of any corporation of significant size to control Root. The moment your IT department is big enough to include senior staff with non-techy backgrounds, you might as well publish the domain admin password in the local paper.
Large organisations are made up of thousands of people. Even the high tech ones have relatively untrained, uneducated people in some positions with systems access. And when you start looking at thousands of people, you're playing the odds.
No corporation can successfully secure its borders if it employs more than, say, one hundred humans.
"Hi DBA Duncan this developer Dave. We've written this app on the dev system that needs to go live this afternoon. We need system password on live to make it work."
If that's refused, this usually comes next, from the project manager...
"Do you know who asked for this app to coded? Board member Brian and he's going to be mighty pissed and looking to make heads roll if anyone stands in the way of this app going live!"
The strength of security in hardware isn't so much that it can't be changed as that it can't be avoided by poor development practices and/or software quality control.
The danger - in my opinion - comes from the potential complacency it induces.
Does the danger outweigh the benefit? I don't think so. But this stuff should only be used in conjunction with defensive software techniques including, in my book, keeping the use of unmanaged languages (C etc) to a minimum. Buffer overruns and pointer exploits are the gift that keeps on giving.
The most powerful security measures are those applied at the level of the operating system (sandboxing) and of memory-safe programming languages.
And surely we *have to* improve security or users will go back to physical filing systems and microfiche. The experts in the Kremlin apparently already do this. They should know, as they rule the largest landmass. That is only possible with proper security, including the security of administrative information.
With respect, using the Kremlin as an example of good practice in any area is a bad choice. They're mostly a bunch of low-paid, thoroughly corrupt, lifelong bureaucrats.
Russia, as a country, doesn't work particularly well. There's a sort of mystique that's sprung up around Putin, but the reality of the Russian government is very different.
"The strength of security in hardware isn't so much that it can't be changed as that it can't be avoided by poor development practices and/or software quality control."
Well that depends on whether they are on by default or have to be enabled by software...
I suggest you take a look at the Intel 286! It included some rather nice security features which (if memory serves) allowed you to do something similar to what Oracle is claiming for the M7. I expect these features have been carried forward into the x86 and x64 chip families, yet neither Windows nor Linux makes use of them.
Whilst there was a significant performance hit when processors ran at 10MHz, now that we have multi-core parts running at 2+GHz the overhead is significantly reduced.
I wonder if any of the VM hypervisors implement these security features...
Certainly was the case with S0NY's PlayStation 3. The dirty little secret is that every console built before Jan 2011 is de facto open, since all the keys up to that point are now in the wild. Consequently, anything post-Jan 2011 will in all likelihood never get hacked. Of course, pissing off a bunch of nerdy geeks by swiping their Linux install feature, and leaving (or as good as leaving) traces of the private key in the firmware, was also a bit of luck for those who like the idea of having as full an access to their hardware as possible. But the fact remains that the immediate exploit, OFW 3.55, has since been closed, making such a hack less trivial to pull off than just installing some custom firmware over S0NY's latest and greatest. The door will never fully close on those earlier consoles, though, and they remain highly exploitable.
ASLR, DEP and this thingy won't fix the problem at the core: The C language. ALGOL was already in a much better state of security during the heyday of ICL, Burroughs and Elbrus mainframes.
What we need as an industry is to migrate everything to memory-safe languages: Swift, Rust, Sappeur, Vala, Java, C#.
The first three languages demonstrate that garbage collection is not necessary for memory safety.
If we do not fix the root cause, the cyber war chaos will only grow bigger and bigger.
We don't need to migrate to so-called memory-safe languages, because that wouldn't change much. It would still be possible to introduce an SQL injection bug, skip certificate validation, use a poor seed for a random number generator in a security-sensitive context, mix up physical units, and put many other bugs (human stupidity knows no limits) into any of these languages. Removing certain language features is just another band-aid.
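To illustrate, a toy C++ sketch with an invented Db stand-in for a real client library - the language's memory safety never enters into it:

    #include <iostream>
    #include <string>

    // Toy stand-in for a database client; real APIs differ, the point doesn't.
    struct Db {
        void exec(const std::string& sql) { std::cout << "EXEC: " << sql << '\n'; }
    };

    int main() {
        Db db;
        std::string name = "x' OR '1'='1";   // attacker-supplied input
        // String-pasted SQL is injectable in any language, memory-safe or not:
        db.exec("SELECT * FROM users WHERE name = '" + name + "'");
        // The fix is parameterised queries, not a different language.
    }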
We just need competent programmers, writing in whatever language and API they happen to use at the moment. I have not written a single "free()" or "delete" in the past 5 years of programming in C++, because I know how to use smart pointers. The root cause is not the language; it is that some people who don't know the basics are deemed to be "competent programmers" and allowed to write security-sensitive code (which is most code, if you follow the principle of multiple layers of defense).
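For instance, a trivial sketch of what "no free(), no delete" looks like in practice (Connection is just an invented example type):

    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Connection {
        explicit Connection(std::string host) : host(std::move(host)) {}
        std::string host;
    };

    int main() {
        // Sole owner: destroyed automatically at the end of the scope.
        auto conn = std::make_unique<Connection>("db1.example.com");
        std::cout << conn->host << '\n';

        // Shared ownership: destroyed when the last shared_ptr lets go.
        auto shared = std::make_shared<Connection>("db2.example.com");
        std::vector<std::shared_ptr<Connection>> pool{shared, shared};
        std::cout << pool.size() << " refs to " << shared->host << '\n';
        // No free(), no delete - and no double-free or use-after-free here.
    }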
" it is that some people who don't know the basics are deemed to be "competent programmers""
This. And a general lack of security awareness throughout the whole enterprise, tbh. Most staff should now be given a few hours of security basics automatically before they're even allowed to touch a PC - how to spot a phishing scam, the importance of passwords, why encryption is your friend, why USB sticks aren't. Too many businesses think that having one guy who met a CISSP once covers their security needs completely, and so everyone else can get on with their job without needing to know anything about it. Programmers are just the most obvious example of this, but the problem is endemic at all levels and in all business areas, from the janitor up to the C-suite.
And a general lack of security awareness throughout the whole enterprise, tbh
Is it the lack of security awareness, or the low priority managers put on security? Security isn't a simple tick box in Visual Studio or Eclipse! Often a manager will want code that works "well enough" to ship out the door. Time is money, and security done right takes time.
We had a shedload of C-style exploits in the Windows, HPUX and Linux kernels. The type of infallible software engineer you call for DOES NOT EXIST.
And because we can contract cancer (SQL injections), we should not implement countermeasures against the plague? How rational of you!
"C / C++ are not safe for routine enterprise development."
Strongly disagree. Yes, for certain purposes neither C nor C++ is safe; that's where you would use Ada or something else again. However, in the hands of competent programmers, using the correct toolset (including compiler warnings, plus static and dynamic analysis tools) and with a sane design, there is nothing wrong with these languages in the enterprise.
On the other hand, leaning too heavily on a large monolithic sandboxed VM runtime (e.g. the JVM or CLR) creates a monoculture which, as we know, brings its own security risks. Unless you really believe that these environments are bug-free, in which case I want to remind you that they are written in C++ and that they are very, very complex internally - more complex than many of the enterprise applications you would want to run on top.