Marketing BS...
Be mindful of it, SalesFarce are full of it.
Botched network maintenance has been blamed for a huge crash at Salesforce's data centres, taking out customers' CRM and data services across the US and Europe. Seven of Salesforce's 17 North American instances and two of four in EMEA went down. Servers started dropping out just before 2am UTC on 15 November, with outages …
I like it how Twitter is nowadays immediately used as a safety valve to let off steam and bad karma.
This has probably prevented a few cases of cow-orkers "going postal" and transforming their colleagues and middle management into peppered steak and filleted fish before they can be stopped by the boys in blue.
Twitter is just as likely to cause such incidents when a postal worker/writer/politician/policeman/what-have-you is forced to deal with the braying masses.
The internet is full of piranhas, most of them utterly irrational and without any semblance of clue.
The outrage is because of scale. If one business selling tat to people has its POS system go down, it inconveniences a few dozen folks that day, and maybe causes a knock-on B2B issue for a couple of other businesses.
If VISA goes down, the world stops. Now the same is true of Salesforce, Amazon and, increasingly, Microsoft. That's hundreds of millions of consumers inconvenienced and millions of B2B issues created.
How many single points of failure does your economy need?
Or perhaps because 'the cloud' was touted as invulnerable to outages (in fact, that was one of its major selling points). I don't think anybody is suggesting that in-house systems fare much better, but when the marketing coke hounds sell something based strongly on a strength that is then repeatedly shown to be absolute ballcocks in practice, it is only fair to point and giggle.
The reality is that the concept of cloud computing probably is a lot more resilient; however, it is left in the hands of human cloud wranglers who are, as always, a major source of all chaos in the universe. The next obvious step would be a distributed network that controls itself and removes humans from the equation. Perhaps a suitable name would be Skynet?
Fail for inability to recognize security and availability as separate concerns, bringing in security here and completely ignoring economics like a juvenile do-it-yourselfer.
Meanwhile "The Internet is only vaporware and it never will get any better ... everyone needs a leased line from the incumbent operator, yadda, yadda, herpers derpers."
Truly secure systems are designed (or should be) so they CANNOT delete your data during routine maintenance. That's not the same thing as "uptime" or "availability", where the data is temporarily offline but still there. I had understood that data was deleted, not just unavailable.
Nobody said anything about leased lines etc etc.
The fact is that your data is not "secure" if one must worry whether it can be inadvertently deleted by the very system you trust to keep it safe. Only YOU should be able to delete it.
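The design principle above can be sketched in a few lines: a store whose maintenance path simply has no delete primitive, so removal requires an explicit owner credential. This is a minimal illustration, not how any real cloud vendor works; all names here are hypothetical.

```python
# Minimal sketch of "only the owner can delete": routine maintenance
# has no code path that removes records, and delete() demands a token
# that only the data owner holds. All names are illustrative.
class AppendOnlyStore:
    def __init__(self, owner_token: str):
        self._owner_token = owner_token
        self._records: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        # First write wins; nothing is silently overwritten or dropped.
        self._records.setdefault(key, value)

    def get(self, key: str) -> str:
        return self._records[key]

    def compact(self) -> int:
        """Routine maintenance: reorganise freely, delete nothing."""
        # No deletion logic exists on this path by construction.
        return len(self._records)

    def delete(self, key: str, token: str) -> None:
        # Destruction is only possible with the owner's credential.
        if token != self._owner_token:
            raise PermissionError("only the data owner may delete")
        del self._records[key]
```

The point of the sketch is structural: an operator running `compact()` during maintenance cannot lose data by accident, because the capability to delete was never given to that code path in the first place.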
Wonderful news...
... if I have a disaster recovery company. What better advert can you have than making sure you have a second copy?
You may have the best company with the best people in the world, but put all your data in one basket and someone drops the basket -- your business is toast. WILL PEOPLE NEVER LEARN?
I am not an immediate fan of the cloud (that is, use it when it makes sense, but sometimes it doesn't).
But let me answer you with a bit of history. I remember a HUGE data warehousing project at a large bank in Boston more than a few years ago. During the build of this totally secure, non-cloud system, the lead DBA confused his environments and ended up deleting the entire LIVE database of master dev customer data, rather than a test environment. He is someone who used to work with me; this story is not fiction. Gone. The back-ups were out of sync and didn't reload properly. WEEKS of this data warehouse being down, development staff stopped, hundreds of thousands of dollars lost to development time... and that was a very simple mistake, made by a usually very skilled individual.
If you don't know more stories like that, then you haven't worked in IT very long. Cloud, in-house, mainframe, SOA, or client-server... these are all just technologies. But the fundamental fuck-ups are usually human in nature, and they will always happen. And I have seen them happen about equally on all of those platforms, even to good staff.
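The "confused his environments" failure mode above is exactly what destructive-operation guards are for: forcing a deliberate confirmation before anything irreversible runs against a protected environment. A minimal sketch, with entirely hypothetical names (no real database tooling works exactly like this):

```python
# Hypothetical guard against the classic "wrong environment" mistake:
# destructive operations in protected environments require the caller
# to type the target's name back as confirmation.
PROTECTED_ENVS = {"prod", "live"}

def drop_database(env: str, db_name: str, confirm: str = "") -> str:
    """Simulate a destructive operation with a production safety check."""
    if env in PROTECTED_ENVS and confirm != db_name:
        raise PermissionError(
            f"Refusing to drop '{db_name}' in protected env '{env}': "
            f"pass confirm='{db_name}' to proceed deliberately."
        )
    return f"dropped {db_name} in {env}"

# Dropping a test database works without ceremony:
drop_database("test", "warehouse")

# The same call against prod raises unless the name is confirmed:
try:
    drop_database("prod", "warehouse")
except PermissionError as e:
    print("blocked:", e)
```

It wouldn't have made the Boston DBA infallible, but it turns a one-keystroke slip into something that requires conscious intent.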
Hell, I've been there: hit a typo and took out a phone system for 4 hours on Christmas Eve, just me and a security guard left on site...
Back up? This was the NT4 days... the 4 hours to manually rebuild would be far less painful than using a backup in those days.
Fail for inability to recognize security and availability as separate concerns
Keep an open mind. It is entirely possible that the author used the word security in one of its other meanings, namely "freedom from risk".
>In a candid keynote speech at the Ricon West distributed systems conference on Tuesday, Salesforce architect and former Amazon infrastructure brain Pat Helland talked up Salesforce's internal "Keystone" system: ...
>"The ideal design approach is 'web scale and I want to build it out of shit'." ...
>"Salesforce has a preference for buying "the shittiest SSDs money can buy," he said"
http://www.theregister.co.uk/2013/10/29/salesforce_infrastructure_reveal/
> "The ideal design approach is 'web scale and I want to build it out of shit'." ...
> "Salesforce has a preference for buying "the shittiest SSDs money can buy," he said"

Maybe he thought it was cool to describe commodity hardware as "shit", but then it's not uncommon for geeks from old-style high-availability backgrounds (where they usually worked with kit such as mainframes, and the only supported option was a 6' serial cable that cost $2000) to describe commodity as "cheap shit". Indeed, there seem to be some masochistic architects who take glee in informing all what clever designers they are, in that they can use "shit" and make wonderful systems (they hang out on Linux forums mostly).

The truth is that if you design a truly highly-redundant system then you can indeed build it out of "shit"; it's just that the MTBF is shorter, so you're much more likely to be using the failover and recovery functions on a regular basis. Of course, if you screw up the design, then you tend to get those seven-hour outages.....
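The arithmetic behind "build it out of shit" is worth making explicit: replicating an unreliable component multiplies its failure probability away, under the (big, and often violated) assumption that failures are independent. Correlated failures, like a botched network-maintenance change hitting every replica at once, are exactly what this model ignores. A quick sketch:

```python
# Combined availability of n independent replicas, each available with
# probability p: the system is down only if ALL replicas are down, so
# availability = 1 - (1 - p)**n. Independence is a strong assumption;
# correlated failures (e.g. a bad config push) break it completely.
def combined_availability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# One "shitty" 99% node vs. three of them:
print(combined_availability(0.99, 1))  # 0.99  (about 3.7 days/year down)
print(combined_availability(0.99, 3))  # ~0.999999 (about 30 sec/year)
```

Which is the whole trade: cheap parts fail often, so the failover machinery runs constantly, and any design flaw in that machinery gets exercised constantly too.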
Well said, and exactly what I meant. The issue is not shit hardware but that Salesforce's design seems to also be shit. As you imply, good expensive hardware can give more leeway for design faults. Still, this architect came off as a real wanker, and if you brag like this and then, just a few months later, I read an article about your site going tits up for hours, I will remember. Plus, as pointed out in the comments on that previous article, this is not the first massive collapse of Salesforce in recent memory.