* Posts by virtualgeek

33 publicly visible posts • joined 1 Jul 2010

Dell bleeds converged infrastructure unit into existing divisions

virtualgeek

Thanks for the comments - my quick 2 cents...

Disclosure - I'm Chad :-)

To all - thanks for the comments, they are appreciated.

To some of the @anonymouscowards - I'm insanely flattered that you may think that I would even be remotely qualified to be in the big CEO seat.

The commenter who disagreed is right. To be qualified for that role, I would need a lot of development: 1) on scope (my largest span was several thousand employees, several billion in revenue, and several hundred million in annual OPEX - still tiny relative to the company as a whole); 2) on age/experience (I have learnt a lot over the last few years, particularly on the operational demands of a scaled business - but have a LOT to learn); 3) Wall Street face/relationships (most of my relationships are with the technical side of our customers, not the CEOs, and certainly not with Wall Street finance). Frankly, I've been working to continue to learn and develop - but I'm a long way from ready for that :-) One day, it would be an honour to be the person where the buck stops - someday, someplace.

@Zen____ It's true that I can be a clown, and I'm pretty (overly?) enthusiastic. I don't take myself seriously, and I think it's important to have a little fun (including poking fun at oneself). A clown? Yes.

A "marketing clown"? Nope. No one should misunderstand - one can be silly and clown-like and ALSO deadly serious about the business, the people, and our customers.

You've got to be who you are - authentic at all times, and like all people - we are all multi-faceted. I'm glad you think I'm human and vaguely entertaining :-) Your characterization of my colleagues is not reflective of my experience.

Now - re: your VxRail comments. I agree, we were late to the game, but the team came out swinging. Yes, our great salesforce, channel and customer base plays a part in where we stand. However, that's not the whole story.

The strategic realization that the majority of the market for HCI - and, moreover, for cloud (IaaS, PaaS, CaaS) stacks - would LOVE an HCI which is unabashedly aligned with their standard (for the majority of the market, that's vSphere) was an obvious face-palm moment. When we leaned in on a turnkey experience, and as vSAN matured, we moved the market fast. Our Q4 results will come out in the future, and they will show the formula is working, and working furiously. I get skepticism, or being a fanboi of others - but denying results or trying to marginalize them, well, that's like being a science denier.

The team in their new home (not "embedded" in the Server business, but aligned with the server business - Ashley and team know that HCI is not the same as a server) will keep the pedal to the metal - across VxRail, VxRack, and our other HCI offers. There is a strong roadmap, and there is a ton to improve - and I know they will do that and more.

@angardner - thank you! I think it's important to bring core values and be your whole self in everything you do. I don't think that those are uniquely "Canadian" values - I think they are humanist values: values grounded in the freedom for individuals to be themselves, to be empathetic to all (including those that disagree with you), and to be able to disagree without being disagreeable.

@AnonymousCoward who commented with #karmaisabitch - I'm sorry if I've hurt you in some way. I'm an easy dude to find online. Send me a DM on Twitter, let me know where you feel I erred, and let me know if I can help you in some way. Minimally, if I know, I can reflect on whether I acted the right way.

@vFidel - thank you! The company is being super-supportive and I've got the luxury of time to determine my next role, and do it right. Dell Technologies is an amazing and broad place - lots to see, learn, and do. In the meantime, I'm spending some quality time with my wife and kids, and getting into a little better shape (!).

All, thanks for the comments (of all types) - and we'll talk again soon!

-Chad

Dell EMC man: Hyperconverged is love, hyperconverged is life, but won't kill SAN yet

virtualgeek

Re: Just for the record

@Anonymous - thanks for the comment. You're right, but missed one other option: that the person was oversimplifying. Yup - in a 1000-word comment (count 'em up - I needed to cut a lot to fit into the El Reg comment maximums) I'm completely guilty of OVER-simplifying, not over-complicating.

All traditional app stacks (and EPIC has been around for a LOOOONG time) have multiple components, and trying to say "SAP runs on" or "Oracle runs on" or "EPIC runs on" is an over-simplification. It's also an oversimplification to say that "mission critical workloads don't run on SDS/HCI".

@anon is right - EPIC is a complex EMR system with many components (Cache/Clarity, but also all the services around them). BTW - to nurses and doctors, the EMR stack as a whole is mission critical - the end-user portion of the stack being down means "EMR is down".

In fact, that's one of the reasons the vast majority of EPIC deployments are deployed on VMAX and HDS systems - specifically for the reason I pointed out that "SANs aren't going away anytime soon".

Even if SDS/HCI is ready for the majority of workloads - there are workloads that are low in COUNT but very important. I'm also intimately aware of the number of healthcare customers on CI - a ton use VxBlocks. That's also a reason why the "HCI vs. CI" debate is a silly one from where I sit.

Those workloads depend on resilience, availability, serviceability - and data services (including complex compound replication engines that can manage multiple related devices in consistency groups). SDS/HCI models can be very performant now, very resilient now - but those data services are something I don't anticipate SDS/HCI replicating any time soon - not because it's not possible, but because it's really hard, and it's not the majority of workloads, or where there is the most workload growth.

I'll find out if any of the deployments are public, and provide further detail about exactly what is used where.

@Anon - thank you! It's a first that I've been correctly tagged for not being long-winded enough or explicit enough :-)

virtualgeek

Thanks for the comments Anonymous and rnr!

1) I appreciate that you think my comments were impressive, but they aren't a function of a huge paycheck :-) They are a function of passion for this topic and this area, and frustration in anonymous, ignorant comments that are demonstrably false.

You're right - the acronyms are irrelevant, but the facts matter.

For a small customer that needs easy - that market is moving most quickly to SaaS (not generally to IaaS public clouds), and the "easy button" that HCI (or native functions in the OS/hypervisor) represents is very compelling for the things that stay on premises.

2) Storage isn't alchemy - I AGREE. Like all things - it is science. I can't state it enough - I'm not "defending" the SAN domain. Actually, if you think about it, it's fascinating to have the market leader in this category stating that SDS/HCI models are ready for the majority of workloads. I'm just pointing out the intrinsic places where it's unlikely that SDS/HCI models will displace things that look like "external storage", and that those are important for many, many people.

3) You're right that for vendors that don't disrupt themselves (in the tech, but also in the economic/consumption models), things are going to go from "hard" to "impossible" to "death". As a leader in Dell EMC - I have a duty to our customers and our employees to make sure that doesn't happen.

4) rnr - I believe you're right - new in-memory models (particularly as we hit the next wave of HW platforms and the early days of the Next-Generation Non-Volatile Memory (NGNVM) wave) can have a huge effect. An interesting observation is that the highest-performance transactional systems that power hyper-scaled applications already use distributed memory caches (things like memcached, GemFire, and others) that front-end a persistence layer. This has been the case well before the SAP HANA wave - but that is one of the things bringing in-memory approaches into the mainstream consciousness. This will move, over time, from the weird fringe to a more broadly applied approach.

For perspective though - some of the coolest, most scaled SaaS platforms millions use every day still sit on an oldy-timey RDBMS that in turn sits on massive SANs.

Those customers would LOVE to move to a distributed in-memory front end sitting in front of a distributed SDS/HCI back-end supporting a modern NoSQL data layer. In that process, they would replace the RDBMS and the SAN - and make a huge quantum leap. But they cannot do it in one step. We're working to help them - but it's not a light switch. They need to modernize what they have, while they look at how to rearchitect their whole application stack.

**AS A GENERAL AXIOM** if you have a dollar to spend, the BEST place to spend that is at the application layer. If it's not a generic app (in which case - go SaaS) - the best dollar is to rebuild the app stack around modern 12-factor app principles. If you do that - infrastructure dependencies go away, you have great workload portability. That axiom - while true, isn't the end. That axiom has a corollary: if you can't change the app - make the infrastructure as simple as possible.

It's fascinating to me, the power of "brand association" you can see coming through this comment thread - it's a reflection of the unconscious mental image that brands carry.

"EMC" is brand associated with "big iron storage", "Dell" is brand associated with "laptops and client computing". "VMware" is brand associated with server virtualization/consolidation.

Yes, we are those things - and sometimes that's all we are, the only way we engage with customers. When we do that, it's us at our worst.

Each part of the business is so much more than that - and when both come to the surface, that's us at our best.

We power Spring Framework (in Pivotal) - the most popular developer framework for how people transition apps, downloaded tens of millions of times per month.

We (in Dell EMC) are leading the transition from component level assembly into CI, to HCI, and to the things that are the evolution of HCI.

We are leading the open networking (in Dell EMC) and SDN (in VMware) transition.

We are pushing the Kubernetes efforts forward with Kubo (in Pivotal), with core persistence contributions like Rex-Ray (in Dell EMC) in Kubernetes 1.6 (along with other container/cluster managers).

We have the best Developer-Ready Infrastructure stack (Pivotal/VMware/Dell EMC).

We are partnered with Red Hat on their cloud stack, and also with Microsoft on Azure Stack.

And... yes, we are also leading in traditional servers and storage for the huge masses for whom all that stuff doesn't apply.

All true - just not "sound bite simple" :-)

virtualgeek

Disclosure - Dell EMC employee here – namely the interviewee.

I suppose it's inevitable that with an El Reg article a pile of snarky anonymous commenters would pile on. If you’re confident in your point of view – put your name on it.

These demand some form of response - and I hope to add to the dialog (though I'm sure I'll get a whole bunch of snark in return).

I'm on an airplane, so have time - and I've watched all the movies, so here goes. WARNING - if you want a trite soundbite, I'm not the place to go, so stop reading now, and fling the poop you want. If you want deeper dialog be willing to invest minutes + brain cycles.

---

HCI and CI are being used in managed service providers, yes (and I gave examples) - but Telco NFV / SaaS / hyperscale clouds tend NOT to use HCI or CI.

Those customers tend to build their own stack via commodity HW components and SDS stacks (commercial or open source, or proprietary internal stacks they develop/sustain) because there's value in it, they can sustain it, and it's their business.

Conversely, anyone in an enterprise trying to build/maintain/sustain their own stack is wasting their time.

People who play with IT build a small test environment and say "look, it can be done"... It's so cute :-) Try running that for 3 years, in a full production environment, through multiple updates and waves of your staff turning over - then please come back and comment, at that point your feedback will be intelligent.

This is why HCI is seeing the growth that it does. It represents an easy button, so they can get on to the business of doing something that matters.

Then there is the reaction to my comments about the place for external storage.

My main point was simple: we're already at the point where SDS stacks can support the majority of workloads. Period. The transition will take time (IT has inertia) - but it is happening. But there are clear places where this won't happen.

When I say "latency jitter" - it's nigh impossible (with current state of the art) to have a distributed storage stack over an Ethernet fabric that can deliver consistently sub millisecond response times, with latency jitter less than 200 microseconds. Most workloads are perfectly fine with that – but some aren’t – that’s my “yes and no” answer. 10GbE port to port forwarding times are on the order of 500ns - 1us + a few more microseconds for SFP+/10Gbase-T interfaces and cables. Doesn't sound like much - except that all SDS stacks are distributed in some fashion, and the software stacks add their own distributed latencies (a single IO can and will hop between multi nodes, needing multiple ACKs. This isn't magic – it is the nature of persistence. There are even computer science theorems that govern this. Learn something - look up the CAP and PACELC theorems.

I laugh right out loud at the ignorant comments in this thread that lump together object stores like S3 and transactional persistence models - you're flashing your ignorance.

We have customers running 100's of GBps of bandwidth, and tens of millions of IOps on ScaleIO systems deployed at 85PB+ - and that's at a **single customer**. We have mission critical SAP and EPIC use cases on vSAN. And, this isn't about our tech - but rather a point: "this architectural model is ready for customers".

I don't suspect the SDS stacks will duplicate SRDF-like data services. Want thousands of consistency groups? Want multi-site sync or async replication, with thousands of consistency groups? Need multi-initiator, multi-target active-active? Need all of that and more? Those workloads will be with us for decades to come.

Yes, the hyper-scale players build their own "bit-bucket" object stores - but just like my point for the on-premises world, they don't run on generalized servers; they have dedicated, very proprietary dense enclosures, and don't even use off-the-shelf media.

Other comments were along the lines of "HCI linear scaling = bad" and "public cloud for everything" - those are just silly.

Do you know how many different compute/memory/persistence ratios there are in any HCI worth its salt? Thousands.

VxRack FLEX (which has "flexible" right in the name) can literally have any combo of compute/memory/storage in a cluster - and we have customers with literally hundreds of nodes (closing in on a thousand) in a cluster.

Saying “HCI linear scaling is bad” = advertising ignorance.

Re “public cloud” - of **course** more workloads will be on public cloud tomorrow than today.

Workload and data growth measured as CAGR in the public cloud currently outstrips on-premises stacks by a huge factor (100% vs. largely flat). Furthermore, it shows no sign of stopping.

But, anyone that thinks that means that all workloads belong in public cloud is ignorant, and doesn't talk to a lot of customers - apparently spending time posting anonymously on El Reg :-)

On-premises and off-premises choices have multiple decision criteria:

1) Economics: highly variable or unknown workload vs. steady state + compute/data ratios + nature of ingress/egress - and other factors

2) Data gravity: compute **tends** to co-locate with the data against which it is computing - and moving data is hard - note that this doesn't apply to workloads that are not latency sensitive, or have no long term persistence needs - like a lot of recursive risk/ML algorithms;

3) Governance: data sovereignty and attestation needs are real. Note this is NOT the same as “security”. on/off-premises has no bearing on security - and all the clouds are getting long lists of certifications.

The market is speaking on this topic, much more than any single voice (certainly mine included) – the answer is hybrid.

None of this is a "punchy sound bite" - but is the intellectual, data driven, scientific reality.

If you've made it this far, thanks for reading!

Feedback welcome of course – including anonymous snark ... but thought-provoking public dialog = better.

VMware VSAN has six dot six appeal

virtualgeek

Re: "...and the VMware-supporting version of Nutanix’s HCI."

Disclosure - Chad Sakac @ Dell EMC here.

Anonymous Coward is right - when you use Nutanix, it's always on their SDS stack, and I don't anticipate this changing. If you use VMware on Nutanix, it's really using ESX/vCenter - and that's about it (not vSAN, not NSX, not vRealize). This isn't the right place to be discussing the pros and cons of the various SDS stacks, approaches, data services and IO paths - and I suspect that if I did, the thread would spiral out of control :-)

Congrats to the VMware vSAN team - a huge release (amazing to consider it a "dot" release). Looking forward to bringing it to a lot of customers in all 3 forms (software-only, software on our Dell EMC PowerEdge vSAN-Ready Nodes, and the turnkey HCI form in VxRail and VxRack SDDC).

EMC will hawk Dell's Nutanix-powered XC kit

virtualgeek

No need to translate Canadian :-)

All thanks for your comments, and Chris, thanks for your interest, and for pinging me on Twitter.

I think there's parts of this you got right (the success we're seeing with VxRail), and there's parts of this you got wrong - it's an OEM deal, not a resell. Furthermore, this is a validation of customers, not of anyone's tech.

There's also nothing here that suggests how we are going to go to market, or how - for which customers and use cases - we would use the elements of the portfolio. A few of the "anonymous cowards" have questions that are pretty easily answered.

This is something that I've been saying publicly for the last couple of weeks - so it's not news/exclusive, and interpretation is not needed.

In case people haven't heard me say it, and if any reader wants to hear it directly from me - I did a post here: http://virtualgeek.typepad.com/virtual_geek/2016/07/media-headlines-will-emc-resell-nutanix.html

Thanks!

EMC mulled a Pure Storage buyout amid patent sueball tennis

virtualgeek

Disclosure - EMCer here (Chad, specifically).

While I'm sure we've all said things in email we wish we could retract, I think I need to hold myself to a higher standard - and I failed to live up to a personal principle of "never go negative".

Context is missing here from everything - the circumstance, as I recall it at the time, was that internally people were equating Vaughn and me, and while we are similar, we are also very different. We are different people, in very different roles. The one thing that has always been common is passion, and being very public personas.

We had just finished a joint event where we were on stage debating in Chicago, and people who were there are the best judge of the outcome of the debate. Also, people were debating whether we should continue to be on stage at the same time (at events like VMworld) - and I was beating my chest a little so people would let us continue to do public head-to-head debates.

That doesn't matter - Vaughn and I are friends, and in fact have both referred to each other as "frienemies" (friends who are on opposite sides of a debate/battle) for years, from his NetApp days, to Pure.

As a friend, I should have picked my words more wisely - internal or not, emotional or not.

I called him yesterday and apologized in person. Vaughn - I'm sorry.

The most important battlefield is the marketplace (where we are doing quite well), but even in the heat of battle - there can, and should be respect (even when there are very real questions about IP rights and ownership - though I leave that to people who are best equipped to judge - people with all the context and judges!)

Nutanix digs itself into a hole ... and refuses to drop the shovel

virtualgeek

Re: The shady truth of the storage industry

Disclosure - EMCer here.

The comments here are my own opinion. I'm sure they are influenced by where I work, but I don't speak for EMC.

Nick, this has also been a point of frustration for me. It's surprising that no really good, really comprehensive storage benchmarking suite ever has emerged. Well - surprising isn't the right word. A shame perhaps.

Why is it not surprising? I think the reality is that when you dig really, really deeply, it is perhaps the hardest domain of low-level infrastructure to create good direct comparisons and comprehensive value assessments.

Unlike compute and network - the persistence layer has a very complex set of completely unrelated parameters.

- IO sizes have an effect

- protocol access has an effect

- bandwidth, latency, IO per second - these are all "metrics of performance"

- the variability of data services (and implementation of those data services) are all over the map.

- persistence media wildly varies

- and, unlike compute and network (which don't persist), system-level behaviour is non-linear over time (whether it's literally time, or behaviour variability as system parameters like utilization vary).

If this sounds like "wah wah - storage is hard", maybe it is :-) But consider the following:

Read Anandtech's comprehensive Skylake review here: http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/13

Now, look at HOW much benchmarking, with various tools, was required to build a comprehensive picture of how the CPU (an insanely complex system) performed. Now - imagine that every time the test was run, the results varied (non-linearity in system response). OOPH. Hard.

And, BTW, that is FAR from an exhaustive list of the things that are determinants of system response, and are a function of system non-linear behaviour.

Even the best tools require the independent tester to be fairly knowledgeable - and many aren't (storage review track records kind of speak for themselves).

And of course - those statements are all true whether it's a hardware tightly-coupled, loosely-coupled, or non-coupled software stack. (Hyper-converged implementations invariably use a non-coupled software stack.)

For eons we found that since we are the leader (hate us or love us, we are the biggest player) - benchmarking has always been a losing game.

BTW - the IDC doc you reference? They wrote that on their own. We (and Pure, and I'm sure everyone else) gave input, and IDC can choose to ignore or incorporate it - that's purely up to them (and in my experience, IDC has high integrity, and is very data-driven).

We re-entered the SPC-1 and SPC-2 game - frankly because we realized we were tilting at windmills. Benchmarks are a fact of life - perfect or not.

My personal perspective on this has not changed though. I think that good products stand on their own. I think that sunshine and total transparency are the best policy. In this modern era where social media kinda is a mechanism for self-correction, people catch games, people catch bad logic. The best protection is to be open and transparent. I know the industry as a whole (us included) hasn't always acted that way.

I'm going to continue to strive to remove the EULA language that exists in a lot of our stuff and VMware's stuff that talks about "running it by us" before publishing.

BTW - if people want to download and benchmark, and post their findings using our freely (no limits) downloadable software (http://www.emc.com/getscaleio; http://www.emc.com/getecs, and many others) I will fight for their right to test, play, post. I can't speak for VMware, but I know that similar dialogs are occurring there.

... And I will fight to get the EULA terms changed.

P.S. Trevor please don't kill me in a rage-filled response :-)

Let's kill off the meaningless concept of SW-defined storage

virtualgeek

Disclosure EMCer here.

Chris - I agree that SDS is an over-used term (I think we're more than guilty of "software-defined *" labelling ourselves - so this isn't the pot calling the kettle black; rather, maybe I can add to the dialog).

I hate to say it, but I beat you to a rant on this topic :-)

It's worth a read "Is the dress white and gold, or blue and black?":

http://virtualgeek.typepad.com/virtual_geek/2015/03/is-the-dress-white-and-gold-or-blue-and-black-sds-server-or-appliance.html

The fascinating thing for me (coming from talking to customers every day all around the globe) - the "illogical circle" I put in that post is UNIVERSAL (silly humans! :-)

We have 3 data planes at EMC/VMware that are absolutely, definitively SDS (sold without hardware): transactional = ScaleIO/VSAN; object/HDFS (and soon light NFS) = ECS Software. The "illogical circle" (though I get it) after a ton of conversations is that while those are ABSOLUTELY available in a software-only version, the customer desire for integrated consumption + support drives them to appliance (commodity hardware packaged with the software) consumption. Examples of this form of "packaging": ScaleIO = VxRack with open persona, VSAN = VSPEX Blue/VSAN Ready Nodes, ECS = ECS Appliance.

We have 1 control plane at EMC that is absolutely, definitively SDS: the ViPR Controller (and the open-source trunk is CoprHD).

Netting out this first point - there is a real SDS, but to your point, that's not a sufficient descriptor - you need to pull out the clarifying points you raise. Netting out the second point - most customers in my experience like to play with SDS in a "bring your own hardware" model - but when they move forward for real, they tend to prefer appliance consumption (I wonder if Nutanix would be willing to share the ratio of appliance vs. software-only - I would suspect it would match my observations for our stuff).

There's a 3rd point.

NOW, we ALSO have "virtualized versions of our appliances" (vVNX - analogous to ONTAP Edge), but I wouldn't call those SDS, as they're software versions of something built around an appliance - and they're then hobbled by the fact that hardware dependencies (in the vVNX case - and I believe the ONTAP Edge case - the hardware necessary for HA clustering with any modicum of performance) limit their use cases. Likewise - there is a virtual Isilon node, but the current version again has a dependency (in this case, the NVRAM that exists in a physical Isilon node). There are also software-only XtremIO and VMAX - but those are not available in the wild.

In each of those cases, YES, it's commodity hardware, but the reason it's really only available in physical appliance form (or hobbled software-only form) is an esoteric hardware dependency. This means that I think calling them (or other examples like that) "SDS" is a huge stretch.

**IF** we were to solve for the Isilon NVRAM dependency and make an Isilon true software only stack (vs the virtual Isilon), then I think it could be called SDS.

BTW - all the SDS stacks and virtual appliances are available for download in a free and frictionless way (just Google them).

Anyone calling something that you can ONLY get with strict hardware dependencies "SDS" - well, that's just silly marketing driven off the ledge :-)

So - to net out my POV:

1) - True SDS data plane and control plane stacks have NO hardware dependency, but WHAT that SDS stack does is wildly variant - calling it "SDS" and thinking that's enough is stupid.

2) - That with the exception of the ULTRA large enterprises (think 100's of thousands of VMs, 100's of PB) and the hyper-scale folks - customers in **PRACTICE** don't have (and don't desire) the ability to manage bare-metal hardware/firmware/support with SDS stacks - and the SDS consumption model that tends to be the most popular is packaged with hardware.

3) - That with virtual appliance forms of hardware appliances (usually hobbled) - it's a stretch to call those SDS.

4) - That trying to "SDS wash" a hardware appliance is just stupid :-)

Thanks as always for the dialog!

Flashy upstarts facing IPO pressure. Get on with it then

virtualgeek

Outcomes talk. Positioning is, well, marketing.

Disclosure - EMCer here (that means I'm sure I'm biased, but I always speak based on what I personally see and experience - I'm no corporate mouthpiece).

Chris, I've been noting that there will be an inevitable "AFA Armageddon" for a while (so hey, at least I'm consistent).

See my Dec 2013 post here: http://virtualgeek.typepad.com/virtual_geek/2013/12/happy-holidays-merry-christmas-and-2014-top-10-predictions.html (it was prediction #3 - and I would encourage people to take a look and see if I was right or wrong).

1) Startups thrive when the giants are "asleep at the wheel" (there are some that I think are, which you note in some of your other pieces - EMC is certainly not). Startups also thrive when the giants are "unwilling to disrupt themselves" (EMC certainly is disrupting itself with XtremIO - and I don't think it's disruptive enough to "all-flashify" yourself by making all-flash variations of architectures built in eras that pre-date NAND and SSDs - customers dig VNX and VMAX, including in flash-dense configs, but AFAs they ain't). The window for AFA startups is closed (or at least closing).

2) Startup funding is a lot more complex than people think. By the time the startup is in year 5 (and in round D, E, or other creative funding rounds), and burning cash like crazy, the VCs are looking for an out. They will push HARD to create an IPO (if not an acquisition) - and then cash out (and implosion often occurs right after - because employees start to bail). Remember that VCs need to get a 20x-ish return to make their model work.

The question for a startup isn't "is your revenue growing" (that's easy) or "are you well funded" (answer will be yes) or "have you won some deals against incumbents" (sometimes we create an opportunity by not being responsive to a given customer).

Rather, the right questions are:

Q: "what is your burn rate?" (how quickly will you burn your latest round);

Q: "how is your burn rate closing" (accelerating burn rate = bad in late stages);

Q: "is your cost of sales growing or shrinking?" (if it's getting harder to sell your stuff = bad);

Q: "is your margin shrinking?" (competitors are making life hard for you even when you win);

Q: "what is your growth rate - not in percentages, but in absolute terms?" (when you have $5M in revenues, getting to $10M is easy - but if in that same time, others grew from 150M to $300M = things aren't going to end well).

These are all the things that are cues to how a startup is REALLY working.

This isn't to say "startups bad" - my goodness, they are an innovation and disruption engine (one that we leverage a lot through our own venture funding and acquisitions). But - that it's a hard battle out there.

I personally welcome the competition, it makes us all better. But in the end, the customer speaks.

I love that people (including an earlier commenter) note that EMC has a broad, overlapping portfolio. That's one of the reasons we're doing so well - and as a public company, our results speak for themselves. We're growing in almost every segment (except the enterprise high-end, where we are doing well in terms of share, but that whole market is in the process of being disrupted).

I would be concerned as an AFA startup that thinks we're asleep at the wheel, or not willing to disrupt ourselves - because we are well north of a $1B run rate in 2014, and accelerating. When the disruptors are #2 (or worse) in a market segment, and #1 is accelerating and growing faster than they are - it means "tomorrow will be even harder than today".

That said - I'm sure we'll see some IPOs, and perhaps some acquisitions - but this AFA space is NOT a good place to build a startup right now. There are many other great places for startups - I would argue infrastructure isn't the best segment as a whole for startups.

I have a healthy respect for all our competitors. Bring on the competition! :-)

EMC pulls the ViPR’s fangs, eases internal competition

virtualgeek

Chris - disclosure EMCer here.

Chris - your penchant (and skill) for an eyeball-catching title leaves me breathless :-)

The reality is far less dramatic, and highlights that the "Occam's Razor" principle on "understanding what's up in the industry" (vs. the dramatic speculation) sometimes works :-)

- ViPR Controller is doing great. Customers like the idea of an open abstractor of storage services for a broad range of data services, and a broad range of vendors. They are deploying. What they want is: a) even more openness; b) broader platform support. What they didn't particularly dig: c) embedding additional data services in the controller itself - it makes the controller heavier and bigger than it needs to be.

- ViPR Data Services are doing great - but people didn't get the naming. They understand that ViPR Object and HDFS are software that runs on all sorts of hardware. They get that Elastic Cloud Storage is an appliance that bundles that software with EMC-provided industry-standard hardware (servers and switches) - which delivers AWS S3-like functions, but better, and at lower cost. Naming is an SOB. Get naming right - yeah! Get naming wrong - ugh. So - the ViPR Data Services are now Elastic Cloud Storage (ECS) Software (bring your own hardware), and you can also get it as the ECS Appliance (we support the whole thing as an appliance).

ViPR Controller will continue to be able to manage ECS Software for those who want to bring their own hardware, and to have a single abstraction to manage their traditional LUNs and POSIX filesystems (EMC and non-EMC) and their Elastic Cloud Storage Object and HDFS software + commodity storage (in either the "bring your own hardware" or "as an appliance" variation).

That's it - nothing more, nothing less.

Stay tuned for more on what we're doing to keep listening and adapting to what customers tell us :-)

Storage BLOG-OFF: HP's Johnson squares up to EMC's Chad Sakac

virtualgeek

Disclosure - EMCer here.

@NateAmsen: I'm sure I speak for HP (in a sense!) when I say: "I'm very glad you're happy with your choice!". This is why I think it's good that, at the same time as "net new" AFA designs emerge, the older, more mature hybrids are getting tweaked for all-flash configurations.

For the EXACT reason you picked your 3PAR 7450 (maturity, familiarity, specific data services), we do all-flash VMAX3 and VNX2 configurations, and they are VERY, very popular, and have lots of happy customers - just like you are with your 3PAR hybrid, configured with all flash.

But - all sorts of things, including the IO path and the RAID configurations you describe (which would be similar on a VMAX3/VNX2), are examples of things that a developer assuming a 100% NAND persistence layer would do differently (and almost all AFAs do differently).

Not saying "100% flash configured hybrid/cached array = bad!". They have data services like the ones I mentioned and are the right answer for some customers.

I am saying that true AFAs (designed to NEVER have a magnetic persistence layer) are the fastest growing category in SPITE of none of them having VMAX3- or 3PAR-like data services.

virtualgeek

A little more dialog

Disclosure - EMCer here (Chad)

@ChrisEvans: Chris you are dead right. I absolutely disclose who I am, and my employer - always - exactly for that reason. Regardless what people may suspect, I'm not a mouthpiece, I always say what I believe, what I can defend - and periodically, I say things critical about EMC and EMC products.

**BUT** it's inevitable that based on my day to day work, my exposure to what EMC and our tech partners are doing - it will colour my view. I absolutely have a sample bias (I think everyone does). Everything I say should be viewed through that lens.

---------

Now, on to the post itself (and I will comment on Chris Johnson's blog to make sure the dialog continues in both places):

I fundamentally put architectures that were designed in a period that pre-dates flash (true of VMAX, VNX, 3PAR, HDS USP/VSP/HUS) - all of which have designs (for better and worse) that presume a high-latency destage of IO (and therefore lots of cache, and other things) in a different architectural bucket than architectures designed presuming that every single IO will land on NAND for persistence.

YES, all of those "legacy" arrays have been tweaked and tuned for higher and higher NAND mix and support 100% flash configurations. VMAX3 for example was designed for almost 10x the IOPs mix of VMAX, and the caching algorithms needed huge re-writes for cache bypass and other behaviours when the IO is to NAND.

YES, all of those "legacy" arrays have many data services (like the ones I noted that DON'T exist in the "designed from scratch AFA" group).

NO, I don't think a hybrid, originally designed in the 1990s (and updated furiously since then) can be called an AFA simply with tweaks and 100% flash configurations. I think that's true of EMC. I think it's true of the rest of the industry (many of which have 100% flash configurations of hybrids - like HP).

Frankly, I think Chris might be making my point :-)

EMC pulls out VSPEX: I'll hyper-converge you in a minute

virtualgeek

Disclosure - EMCer here (Chad Sakac).

Chris, thank you for noticing the launch and writing about it. I suppose there's no surprise that as we enter the hyper-converged market, the existing players (Nutanix and Simplivity as quoted) would have nothing nice to say :-)

I'm disappointed (but not surprised I suppose) by Dheeraj's poop flinging.

Put simply: having a portfolio is about trying to solve a broader set of use cases, a broader set of challenges.

ScaleIO is doing GREAT. The growth rate we expect in 2015 would shock the system for a startup. Just like XtremIO, which now is north of $1B, when we enter a market, we think about it, we plan, we design, we acquire - and we go in with full force.

Today, with ScaleIO, we have customers with PBs in use. Its super-power is that it can scale to infinity and beyond (hundreds, thousands of nodes), and has OPEN support (not pinned to vSphere). Lots of OpenStack customers and use cases with KVM as an example. Lots of interest in CoreOS support. There is already a Docker container-based deployment model. Oh - and when it comes to vSphere use, there are ENORMOUS customers (think 100,000+ VMs) using vSphere that have elected ScaleIO to have a software storage layer that can support their massive vSphere environment, and ALSO their non-vSphere environments, because they want heterogeneity in their kernel-mode virtualization layer.

That's CHOICE.

The fact of the matter is that the enormous set of customers (generally smaller in size, but enormous in count) who are looking for turnkey hyper-converged solutions use vSphere (so that matches the "vSphere-only" VSAN design center).

Those customers bias to "simplicity above all else" (and there is NOTHING simpler than VSPEX Blue - and happy to show that to anyone who would like to try). They are also happy with a more contained amount of hardware variation. That's the design point: Simple, start small + grow, hyper-integrated with vSphere. This is the formula for an "appliance".

Let me point out a couple examples that illustrate the fault in the facile argument of "everything is a nail" that is so common from single-product companies:

1) We have a lot of at-scale customers who want CI, including the full integrated support model. Increasingly they want things like NSX to be part of the integrated stack solution. Ultimately, it will be part of VSPEX Blue also, but that means building it into the full appliance model, including management and support (unless one would consider managing the appliance and the vSphere layer separately an "appliance" - I wouldn't).

For customers who want that "built for YOU" experience (engineered CI systems) - VCE now supports that more sophisticated deployment model, now: http://www.vce.com/about/media/news?id=tcm:20-27188

2) Conversely, if we wanted to create an engineered (again - think "built for YOU") hyper-converged design vs. an appliance (think the "here is the fixed list of what can be constructed, and you can order it and be running in minutes" model that you see from VSPEX Blue, Nutanix, Simplivity), the VCE engineered-system approach would of course use ScaleIO, because it can be used in a broad set of use cases.

Perhaps there might (?) be a little concern with what we've launched (which will undoubtedly make life a little harder)... coupled with the fact that we might not be done :-)

I've got a lot of respect for the team at Nutanix that really got the hyper-converged ball rolling, and for other competitors like Simplivity. I respect their founders, their technology, and their employees.

I'm glad to be competing with them for the customer - and invite all customers to put us all through our paces. EMC and our EMC partners are here to compete to win their business.

Our CI portfolio is broad today - and we certainly don't think that everything looks like a nail. Our CI portfolio will continue to expand. ScaleIO will be a big part of that, regardless of what others may suggest :-)

For better or worse (and I think healthy competition is almost always good) - it's on like Donkey Kong!

Chad

No biggie: EMC's XtremIO firmware upgrade 'will wipe data'

virtualgeek

Re: virtualgeek What did you just say about vSAN 2.0?

Disclosure - EMCer here (Chad)

Matt - I absolutely agree that first and foremost is what are WE doing.

Our 100% focus has been the customer through this. Again, I'm not going to convince any haters, but this is the essence:

a) We have a rapidly growing user base of XtremIO.

b) We had features we knew would require more metadata than we had in the current generation of hardware (compression, and performance improvements).

c) The internal debate was long, and hard. Should we deliver the new capabilities to the existing install base, or only to customers who buy future hardware with more RAM?

XtremIO's architecture of always storing metadata in DRAM (vs. paging to disk or storing on SSDs) is an important part of its always-linear behavior. Conversely, it does mean that total X-Brick capacity and features are directly related to the DRAM capacity and the on-disk structure (which relates to the amount of metadata).
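To see why that coupling is so direct, here's some toy arithmetic - every number is an invented assumption purely to show the relationship, not an XtremIO figure:

```python
# Toy model: if metadata for every logical block must live in DRAM,
# usable capacity is bounded by DRAM size. All values are invented
# assumptions to illustrate the coupling, not product figures.

BLOCK_BYTES = 4096          # assumed logical block size
META_PER_BLOCK_BYTES = 64   # assumed in-DRAM metadata per block
DRAM_BYTES = 256 * 2**30    # assumed DRAM available for metadata (256 GiB)

max_blocks = DRAM_BYTES // META_PER_BLOCK_BYTES
max_capacity_tb = max_blocks * BLOCK_BYTES / 1e12
print(f"~{max_capacity_tb:.0f} TB addressable")  # ~18 TB in this toy model
```

Add a feature that needs more metadata per block (say, compression), and either the DRAM must grow or the on-disk structure must change - which is exactly the tradeoff being described here.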

People can (and are absolutely entitled to!) second-guess our decision. We decided the right call was to:

1) make the capability available to all (existing customers and those in the future) - which requires a persistence layout change.

2) to do it quickly, as this is a very (!) rapidly growing installed base.

3) to ensure that this change would carry us through all upcoming roadmapped releases.

4) to build into the plan (seriously - we have) budget for field swing units (capacity to be deployed to assist on migrations), as well as for EMC to absorb the services cost, and wherever possible help with non-disruptive svmotion at scale (where the workloads are vSphere VMs).

5) to commit to support the happy 2.4 customers (which are legion) for years to come if they want to stay there.

This is the first disruptive upgrade (in spite of some of the earlier comments) of GA code. I agree, we should have changed the on-disk structures prior to releasing the GA code - that's 100% on us.

That all said - I'm proud of how the company that I work for is dealing with this: actively, quickly, and with the customer front and center in the thinking.

Now - on to the "never go negative" point, what I was saying Matt was this: anyone who has been around the block has seen difficult moments in every piece of the tech stack, from every vendor. As important as anything else is: *how the vendor deals with it*. This is true in the storage domain, the networking domain, the PaaS domain, the cloud domain - whatever.

If any vendor feels they are immune to issues, and makes their primary argument about "the other guy" - customers generally (in my view) have a negative reaction - because they know better. That's it.

virtualgeek

Re: What did you just say about vSAN 2.0?

Disclosure - EMCer here (Chad)

No - that's not what I said (and people can go read the blog for themselves to verify).

VSAN can make an upgrade to the underlying persistence layer non-disruptive BECAUSE you can use svmotion and vmotion to vacate workloads non-disruptively. Aka - a small amount of "swing capacity": vacate workloads, clear the data, lay down the new structure, swing workloads back.

BTW - the "hyper converged players" (Nutanix, Simplivity, EVO:RAIL partners) do this as well. It's handy (and frankly an approach that can be used broadly to avoid what would otherwise be disruptive).

Why can this always be used in those models? Well - because all their workloads are VMs.

You **CAN** version metadata (this raises other engineering tradeoffs), but when you change the format of on-disk structures, it involves vacating data. VSAN 2.0 will have some on-disk structure changes, but I would wager (I'll defer to VMware) it will use this "rolling workload move" to make it non-disruptive (although data is getting vacated through the process).
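For illustration, here's a minimal sketch of that rolling "vacate, reformat, rejoin" pattern. The Node class and every step are hypothetical placeholders to show the sequencing, not any vendor's actual mechanism:

```python
# Sketch of a rolling on-disk format change: one node at a time is
# emptied, reformatted, and rejoined. Data IS vacated along the way -
# "non-disruptive" to workloads, not a no-op.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    disk_format: str = "v1"
    workloads: list = field(default_factory=list)

def rolling_upgrade(cluster, new_format="v2"):
    for node in cluster:
        peers = [n for n in cluster if n is not node]
        # 1) Vacate: move workloads (think svmotion/vmotion) to peers.
        peers[0].workloads.extend(node.workloads)
        node.workloads.clear()
        # 2) Reformat: lay down the new on-disk structure on the empty node.
        node.disk_format = new_format
        # 3) Rejoin: the node returns with the new format; rebalance, repeat.

cluster = [Node("n1", workloads=["vm1", "vm2"]), Node("n2"), Node("n3")]
rolling_upgrade(cluster)
print([(n.name, n.disk_format) for n in cluster])  # every node now on "v2"
```

This only works when every workload can be moved live - which is why the "all workloads are VMs" property matters so much here.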

To anyone on here claiming their stuff is beautiful and flawless - that's a joke (spend 30 seconds and google "disruptive" + "vendor name" - I'm not going to wade into the muck by posting links like that for every one of the vendors piling on here) - so I'd encourage all vendors to have a little humility. Customers don't like people who go negative.

The trade off for us was: "do we save the new features for customers with new hardware" (aka more RAM for more metadata), or "do we give the features to all". We chose the latter. Hence why we continue to support 2.4 for years to come. AND we also chose to plan for swing hardware and services to help customers in the migrations. Frankly, I'm pretty proud of how EMC is approaching this difficult decision, and thinking of the customer first.

I'm sure the haters out there and competitors will disagree - but hey - so be it.

Don't be a hater :-)

virtualgeek

Disclosure, EMCer here.

Anonymous Coward,

We are absolutely helping customers (along with our partners) through exactly that (in addition to svmotion). All noted in my public blog, which I'd encourage you to read (and comments welcome - though I'd suggest disclosure).

The commitment to support people on 2.4 you may not agree with, but some customers are electing to stay there for the foreseeable future. Our commitment to support them is certainly not bogus in their eyes.

virtualgeek

Re: Customer success stories - Due diligence

Disclosure EMCer here.

I guess I'm tilting at windmills on the "anonymous posting" topic, and hey - it's a free world. I think the strength of an argument is proportional to someone's willingness to personally stand with it (nothing to do with who you are, or degrees as someone suggested). I just think an argument doesn't make as much sense without context (does the person making it have an agenda?) That's why personally - I think disclosure is right.

Re this comment on benchmarking, I personally completely agree. In fact, Vaughn Stewart (Pure) and I did a joint session on this set of topics (trying to be vendor-neutral) at VMworld (and will repeat it in Barcelona in Oct), and in essence outlined the same points:

1) Don't trust any vendor claims. Benchmark.

2) Don't let any vendor steer you in benchmarking. Even if their bias is non-malicious, they will have a bias.

3) We warned the audience - good benchmarking is NOT EASY. Sadly, most people take a single host/VM, load up IOmeter with a small number of workers, and just run for a few hours. While that's data - that ain't a benchmark.

Some of the steps needed to benchmark properly (there's a sketch after this list):

a) run for a long time (all storage targets have some non-linearity in their behaviors). As in days, not hours.

b) a broad set of workloads, at all sorts of IO profiles - aiming for the IO blender. Ideally you don't use a workload generator, but can actually use your data and workloads in some semi-real capacity.

c) you need to drive the array to a moderate/large utilization factor - not a tiny bit of the capacity you are targeting, and all AFAs should be loaded up, and then tested. Garbage collection in flash (done at the system level or the drive level) is a real consideration.

d) you need to do the benchmark while pressing on the data services you'll use in practice.

e) ... and frankly, doing it at a scale that actually discovers the "knee" in a system is pretty hard in the flash era (whether it's AFAs or software stacks on all-SSD configs). It's hard to drive a single workload generator reliably past around 20K IOps. That means a fair number of workload generators and a reliable network.
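For illustration, here's roughly what points (a) through (c) look like with a common open tool like fio, driven from Python. Every parameter (the device path, sizes, durations, mix) is an assumption to adapt - and a single generator host still doesn't cover (d) or (e):

```python
# Illustrative fio run: long duration, mixed IO sizes and read/write mix,
# real parallelism, meaningful capacity pressure. Values are assumptions
# to tune for your environment, not a recommended spec.
import subprocess

fio_cmd = [
    "fio",
    "--name=io-blender",
    "--ioengine=libaio", "--direct=1",   # raw IO - no page-cache flattery
    "--rw=randrw", "--rwmixread=70",     # mixed profile, not pure reads (b)
    "--bsrange=4k-64k",                  # a spread of IO sizes (b)
    "--iodepth=32", "--numjobs=8",       # enough outstanding IO to find the knee
    "--size=500G",                       # press on real capacity (c)
    "--time_based", "--runtime=259200",  # 72 hours - days, not hours (a)
    "--group_reporting",
    "--filename=/dev/sdX",               # hypothetical target device
]
subprocess.run(fio_cmd, check=True)
```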

Now - I feel confident (not arrogant) saying this, and have been through enough customer cases of all shape and size to willingly invite that opportunity.

... but, I'll note that very, very few customers have the capacity or the time to benchmark right. Some partners do. The feedback Vaughn and I gave the audience was "if you can't do the above right, you're better off talking to a trusted partner, or talking to other customers like you at things like a VMUG".

Now changing gears - one thing in this comment put a huge smile on my face :-)

I can tell you for a *FACT* that EMC SEs are NOT all running around with a "fake workload generator" trying to deviously get customers to test to our spec... LOL! While there are ~3500 EMC SEs, only a tiny fraction (55) are set up to support a customer that wants to do a PoC. Most PoCs are supported by our partners. I can tell you that we are not organized enough, or well-led enough, to have all 3500 able to set the proverbial table and execute PoCs with a fake workload generator. And frankly, those 3500 have a hard (but fun!) job. They need to cover the whole EMC portfolio (and be knowledgeable on the EMC Federation of VMware and Pivotal at the same time), as well as know the customer... Phew!

Wow - if that's what our success in the marketplace is being ascribed to - well, go right ahead and think that :-)

...if 3500 people sounds like a big number - when you overlay the world with it, it's not. Heck, the 55 people able to support a PoC barely covers one per US state - and we cover countries all over the world! Thank goodness for our partners!

I'll say it again - the EMC SEs are AWESOME humans, but they are not organized enough, or well led enough to be that devious - ** and that's coming from the person singularly responsible to organize them and lead them ** :-)

What **IS** true is that we found people really struggling to benchmark. We wanted to OPENLY, TRANSPARENTLY share how we do it, and welcome feedback (crowdsourcing it) if there was input, or a better way. This tool (which aligns pretty well with the IDC AFA testing framework) is here: https://community.emc.com/docs/DOC-35014

If people can point to a better benchmark, I'm all ears!

virtualgeek

Re: Scary

Disclosure - EMCer here.

That's exactly what we (and our partners together) are doing. See the followup to the original post in Australia here: http://www.itnews.com.au/News/392118,extreme-upgrade-pain-for-xtremio-customers.aspx

Customers and partners first, always!

virtualgeek

Disclosure - EMCer here (Chad). Chris, thanks for the article (though seriously, that headline!). FWIW - it's nice to have my post linked. I'm a big believer in transparency and disclosure.

Commenters (and readers)... man - I wouldn't put much personal trust in people without enough confidence to share their identity - and if they share their name but don't have the confidence to disclose any affiliations, I think the same applies.

I have no such reservation. I have the data on the XtremIO customer base. XtremIO customers are happy. Our customers and partners are giving us great feedback (including healthy critiques). Availability stats across the population (far more than 1000 X-Bricks out there, and growing unbelievably fast) are amazingly high. We are far from perfect, and always looking to improve - so feedback welcome. I guess for some there's no way to compete without being a hater.

Yes, this is a disruptive upgrade (certainly not a Data Loss scenario as one commenter notes), but I've detailed the "why", and the "how" on my blog post that is linked to in the article. If you want to see disclosure, and transparency, there you have it.

It's notable how much commentary is one vendor going at the other here in the comments, and how little there is of the customer voice. Seems like in the industry we all like to navel gaze - but I suppose that's the way. At least we're passionate :-)

To the NetApp commenters in here - I'm frankly a little shocked. Is the 7-mode to C-mode migration a "data vacate and migrate event"? It is. You are in the middle of a huge migration of your user base - with a fraction of the FAS user base on the clear future target, and everyone else navigating a disruptive upgrade which is exactly analogous (and triggered by the same cause that I note in my blog post). Further, when you point to ONTAP for this story, which is about AFAs... I have to presume (assume - making an ass out of me) that customers looking at AFA solutions from NetApp are directed to the E-Series (with the Engenio storage stack) until FlashRay GAs - neither of which is an NDU from ONTAP. I have a great deal of respect for NetApp as a company and for their IP - this isn't about "NetApp sux" (they don't) - rather, I'm scratching my head at how you could make those comments without your heads exploding from cognitive dissonance. But hey - that's just my opinion :-)

And, to one of the many "anonymous cowards" - in particular the one that commented on my blog post being "misdirection"... that's not misdirection, I'm **REALLY THAT VERBOSE AND LONG WINDED** - that's just me :-)

Best comment IMHO was from MB, likely a customer - good backups are ALWAYS a good idea, and should be done before any upgrade, NDU or not.

EMC's DSSD rack flashers snub Fibre Channel for ... PCIe

virtualgeek

Disclosure - EMCer here:

No, it isn't. "Real-time" refers to "very fast analytics" and "historical" refers to "very large datasets accumulated over time".

Put it together, and DSSD has, as one of its targets, applications that need to do extremely fast analytics over a very large dataset (much larger than you can fit on locally attached PCIe).

virtualgeek

Re: Linux Controllers don't add any latency?

... Disclosure, EMCer here - while Chris does his usual work of ferreting out good deets, there are some errors in here (which is fine), and one of the errors is the data path for IOs.

Post acquisition, we disclosed that this was an early-stage startup, similar to when we acquired XtremIO (small number of customers, but pre-GA product). Just like with XtremIO, it was an extremely compelling technology - ahead of where we saw the market (and our organic work down similar paths - there was an organic project similar to DSSD codenamed "Project Thunder" - google it).

Re: organic (internal) vs. inorganic (acquisition/venture funding) innovation, it's almost an exact 50/50 split.

My own opinion (surely biased), thank goodness EMC does a lot on BOTH sides of the equation.

Time has shown again and again that without healthy internal innovation (C4, ViPR control/data services, MCx, Isilon over the last 2 years) **AND** inorganic innovation (VPLEX, DSSD, etc) - all high-tech companies ultimately struggle.

My opinion? Thinking anyone can out-innovate all the startups, all the people in schools, and the entire venture ecosystem is arrogant. This is why it's such a head-scratcher to me when people say it's a "bad thing" to acquire - frankly, customers like that we have good products and that they know we will continue to bring good ones to market (both organically and inorganically). IMO, it's a smarter move to play in the whole innovation ecosystem in parallel to organic internal-only activity.

Don't bother competing with ViPR, NetApp - it's not actually that relevant

virtualgeek

Respectfully, disagree.

Disclosure, EMCer here.

Chris - you probably would expect this from me, but I disagree. Let me make my argument, and let's see what people think. I ask for some patience from the reader, and an open mind. I'm verbose, and like to explore ideas completely - so this won't be short, but just because something isn't brief doesn't make it less accurate.

Read on and consider!

The choice of "multiple architectures to reflect workload diversity" vs. "try to serve as many workloads as you can with one core architecture" is playing out in the market. Ultimately, while we all have views - the customers/marketplace decides what is the right trade off.

a) EMC is clearly in one camp.

We have a platform which is designed to "serve many workloads well - but none with the pure awesomeness of a platform designed for a specific purpose". That's a VNX. VNX and NetApp compete in this space furiously.

BUT we came to the conclusion a long time ago that if you tried to make VNX fit the space that VMAX serves (maniacal focus on reliability, performance, and availability DURING failure events), you'd end up with a bad VMAX. Likewise, if we tried to have VNX fit the space Isilon fits (petabyte-level scale-out NAS which is growing like wildfire in genomics, media, web 2.0 and more), you'd end up with a bad Isilon. Why? Because AT THE CORE, you would still have a clustered head. Because AT THE CORE, file/data objects would be behind ONE head, on ONE volume. Because AT THE CORE, you would still have RAID constructs. Are those intrinsically bad? Nope - but when a customer wants scale-out NAS, and THOSE ARE THE REQUIREMENTS, that's why Isilon wins almost overwhelmingly over NetApp cluster mode.

b) NetApp (a respected competitor, with a strong architecture, happy customers and partners) seems to me to be in the other camp. They are trying to stretch their single product architecture as far as it can go.

They finally seem to be "over the hump" of core Spinnaker integration with ONTAP 8.2. Their approach of federating a namespace over a series of clustered FAS platforms has some arguments to be sure. The code path means they can serve a transactional IO in a clustered model at lower latency than Isilon (but not as fast as simple scale-up or VNX, and certainly not the next-generation VNX). They can have multiple "heads" for a "scale-out" block proposal to try to compete with HDS and VMAX. In my experience (again, MY EXPERIENCE, surely biased) - the gotchas are profound. Consider:

- With a scale-out NAS workload: under the federation layer (vServers, "Infinite Volumes"), there are still aggregates, flexvols, and a clustered architecture. This means that when a customer wants scale-out NAS, those constructs manifest - a file is ultimately behind one head. Performance is non-linear (if the IO follows the indirect path). Balancing capacity and performance means moving data and vServers around. Yup, NetApp in cluster mode will have lower latency than Isilon, but that's not the primary design center for that workload - simplicity and the core scaling model are.

- Look at the high-end Reliability/Serviceability/Availability workload: in the end, for better or worse, NetApp cluster mode is not a symmetric model with shared memory space across all nodes (the way all the platforms that compete in that space have been architected). That is at the core of why 3PAR, HDS, and VMAX all have linear performance during a broad set of failure behaviours. Yup, NetApp can have a device appear across different pairs of brains (i.e. across a cluster), but it's non-linear from port to port, and failure behaviour is also non-linear. Is that OK? Perhaps, but that's a core design center for those use cases.

- And when it comes to the largest swath of the market - the "thing that does lots of things really well" - I would argue that the rate of innovation in VNX has been faster over the last 3 years (due to focus, and not getting distracted by trying to be things it is not, and was never fundamentally designed to do). We have extended the places where we were ahead (FAST VP, FAST Cache, SMB 3.0, active/active behaviours, overall system envelope), we have filled in the places where we were behind (snapshot behaviours, thin device performance, block-level dedupe, NAS failover, virtualized NAS servers - VDM in EMC-speak, MultiStore/vServers in NetApp-speak), and we are accelerating where there are still places to run (the extreme low-end VNXe vs. FAS 2000, larger filesystem support).

Look - whether you agree with me or not as readers - it DOES come down to the market and customers. IDC is generally regarded as the trusted cross-vendor slice of the market - and the Q2 2013 results are in, and public, here: http://www.idc.com/getdoc.jsp?containerId=prUS24302513

Can a single architecture serve a broad set of use cases? Sure. That's the NetApp and EMC VNX sweet spot. NetApp has chosen to try to expand it differently than EMC. EMC's view is that you can only stretch a core architecture so far before you get into strange, strange places.

This is fundamentally reflected in NetApp's business strategy over the last few years. They themselves recognize that a single architecture cannot serve all use cases. Like EMC, they are trying to branch out organically and inorganically. That's why EMC and NetApp fought so furiously for Data Domain (the B2D and cold storage use case does best with that architecture). I suspect that's why NetApp acquired Engenio (to expand into the high-bandwidth use cases - like behind HDFS, or some of the video editing work that DDN, VNX, and others compete in). The acquisition of Bycast to push into the exa-scale object store space (which biases towards simple no-resiliency COTS hardware) is another example.

On the organic front, while I have ZERO insight into NetApp's R&D - I would suggest that their architecture for entering the all-flash array space (FlashRay?) would really be best served by the "clean sheet of paper" approach of the startups (EMC XtremIO, Pure Storage, etc.) rather than trying to jam it into the "single architecture" mould. If they choose to stick with a single architecture for this new "built for purpose" space - well - we'll see - but I would expect a pretty mediocre solution relative to the competition.

Closing my argument....

It is accurate to say that EMC needs ViPR more than NetApp does. Our portfolio is already broader. Our revenue base, and more importantly our customer base, is broader.

NetApp and NetApp customers can also benefit now - and we appreciate their support in developing ViPR's southbound integration into the ONTAP APIs (and I think their customers will appreciate it too). NetApp is already more than a single-stack company. Should they continue to grow and expand into other use cases, they too will need to continue to broaden their IP stacks.

Lastly - ViPR is less about EMC or NetApp, and more a recognition that customers need abstraction and decoupling of the storage control plane and policy REGARDLESS of whom they choose - and that many customers whose needs are greater than the "mixed workload" sweet spot (VNX and NetApp) have diverse workloads, and diverse architectures supporting them (often multi-vendor).

This is why ViPR is adjacent to, not competitive with, SVC (array in front of array), NetApp V-Series (array in front of array), HDS (array in front of array), and EMC VPLEX and VMAX FTS (array in front of array). These are all valid - but very different - traditional storage virtualization approaches, where they: a) turn the disk from the old thing into just raw storage (which you format before using); b) re-present it out for use. All of these end up changing (for worse or for better) the characteristics of the architecture in the back into the characteristics of the architecture in the front. ViPR DOES NOT DO THAT.
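To illustrate the difference in plain code - here's a toy Python sketch (every name in it is hypothetical; this is emphatically NOT the ViPR API, just the shape of the idea). A control-plane-only broker applies policy, provisions on whichever array the policy selects, and then gets out of the way entirely - IO flows directly between host and the native array, so the array's own characteristics are preserved rather than replaced:

```python
# Toy sketch of the control-plane idea (all names hypothetical - NOT the
# ViPR API). The broker provisions and exports storage, but never sits
# in the data path the way in-band virtualizers do.

class ArrayDriver:
    """Southbound adapter: one per array family (e.g. a VNX or ONTAP driver)."""
    def create_lun(self, size_gb):
        raise NotImplementedError

    def export_lun(self, lun_id, host):
        raise NotImplementedError

class ControlPlaneBroker:
    """Applies policy, picks an array, provisions - then gets out of the way."""
    def __init__(self, drivers):
        self.drivers = drivers  # e.g. {"block-tier1": SomeVnxDriver(), ...}

    def provision(self, policy, size_gb, host):
        driver = self.drivers[policy]        # policy maps to a concrete array
        lun_id = driver.create_lun(size_gb)  # control-plane call only
        driver.export_lun(lun_id, host)      # host then does IO natively
        return lun_id
```

Contrast that with "array in front of array", where the front device owns the data path, and every IO inherits the front device's characteristics.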

Remember - ultimately the market decides. I could be completely wrong, but hey - innovation and competition is good for all!

THANKS for investing the time to read and consider my argument!

EMC goes virtual with in-house Oracle apps

virtualgeek

Oracle's support position is actually clear, and positive.

@ICS - Disclosure - EMCer here.

I know there's a lot of FUD out there (often from Oracle) re: their support and licensing stances when it comes to virtualization.

The formal policy is actually pretty darn reasonable - but isn't what people are TOLD it is.

I did a detailed post (including a screenshot of the authoritative Metalink article) here: http://virtualgeek.typepad.com/virtual_geek/2010/11/oracle-and-vmware-a-major-milestone.html

There's also a lot of confusion re: licensing (the claims that you need to license any core the VM could run on, and that Oracle won't honour things like DRS Host Affinity settings). Done right, there is absolutely no "virtualization tax" when virtualizing Oracle on VMware, and we're finding people are saving boatloads of money and getting BETTER performance.
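To make that concrete, here's a minimal pyVmomi sketch of the kind of mandatory "must run on" DRS VM-Host rule people use to pin Oracle VMs to a licensed subset of hosts. The group and rule names are made up, and `cluster`, `oracle_vms`, and `licensed_hosts` are assumed to have already been looked up from an authenticated vCenter session:

```python
# Hypothetical sketch - assumes `cluster`, `oracle_vms` and `licensed_hosts`
# were already fetched from an authenticated vCenter session via pyVmomi.
from pyVmomi import vim

spec = vim.cluster.ConfigSpecEx()
spec.groupSpec = [
    vim.cluster.GroupSpec(
        operation='add',
        info=vim.cluster.VmGroup(name='OracleVMs', vm=oracle_vms)),
    vim.cluster.GroupSpec(
        operation='add',
        info=vim.cluster.HostGroup(name='LicensedHosts', host=licensed_hosts)),
]
spec.rulesSpec = [
    vim.cluster.RuleSpec(
        operation='add',
        info=vim.cluster.VmHostRuleInfo(
            name='oracle-must-run-on-licensed-hosts',
            enabled=True,
            mandatory=True,   # "must run on", not "should run on"
            vmGroupName='OracleVMs',
            affineHostGroupName='LicensedHosts')),
]
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```

Whether Oracle's auditors accept host affinity as a licensing boundary is exactly the debate the Metalink article speaks to - read the formal policy, not the FUD.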

Again, I don't want this to seem like an ad, but I also did a video at OOW where we discuss the things that are used to scare customers away from doing the right thing: Performance, Support, Licensing - and of course "are other people doing it" (answer = YES, about 50% of Oracle users according to the Oracle User Groups). That video is on YouTube here: http://youtu.be/gHyIA454YbQ

EMC exec flames El Reg

virtualgeek

Resolving support issues.

Disclosure, EMCer here.

@Alain - to double-up on J.T.'s comment - please escalate.

Actually, let me apologize first - you shouldn't be having support issues, and I'm sincerely sorry to hear that.

If you don't know how, or where, to escalate with your account team - you can contact me. The easiest way to do this while remaining anonymous is to post a comment on my blog (http://virtualgeek.typepad.com). I won't post your comment, but will direct you internally post-haste. If you can also get me the SR (service request) numbers, I can follow up with the people who gave you unsatisfactory service.

BTW guys - most CLARiiONs are now 3-5 years old, and are pretty aged. And JT has obviously been around the block (not saying there is any issue with any specific component), but as with anything mass-manufactured, when a part fails, it tends to fail in many places/customers at once (when there was a manufacturing issue).

virtualgeek

LOL - no spelling mistakes = edit?

Disclosure - EMCer here.

@VeeMan - trust me - that's just how I write and speak :-) Had it come through any approval by marketing - all the technical info would have been cut. And YES, I'm **THAT** verbose (much to my wife's chagrin). If you want evidence of the type of person I am - just read my blog (easy - google "virtual geek"). I've been out there for a while, so who I am is no secret.

I'd quit before being censored in the way you suggest. Coaching, guidance - man, I need that constantly. But changing what I say? Never. My comments are my own - my blog is my own - for better or for worse....

Also - FWIW - there's a lot of "old tapes" in your response. We're pretty active in the benchmarking game - and have been through 2010 (and will continue to be). We've learned and adapted. True - almost all benchmarks (at least in storage land) don't reflect the bizarro-world that is the real world: shared storage subsystems very rarely support a single type of workload at a given time. That said - the lesson was learnt. People like "record breakers", so - we're doing it constantly now.

virtualgeek

Thank you Chris!

Disclosure - EMCer here.

Chris - thank you for posting the comment; it was honourable of you to post it, in my view.

FWIW - while I disagree with the original article, I do think Nexenta did well in their initial participation in the HoL. As per my first comment - these sorts of live mass events are full of danger and problems, and are a real test of tech and people.

With that said, back to the marketplace battlefield - where there is enough room for broad competition, and broad choice.

(the author of the response) - Chad Sakac (aka virtual geek - http://virtualgeek.typepad.com)

PS, if it seems erudite, overly polite, low on swear count - that's purely because I'm Canadian. Trust me - where I come from, that was a full out furious flame :-)

Off-the-shelf servers spar with million-dollar storage arrays

virtualgeek

Disclosure - EMCer here.

@frunkis - indeed, that's your choice - and every customer does indeed make a choice. I don't dispute the validity of a broad set of solutions on the market.

Every customer makes a choice. In the last month alone, here's a short set of customers who have publicly shared their choice of EMC (many of them including which competitors they evaluated):

- English Premier League Club: http://www.emc.com/about/news/press/2011/20110914-01.htm

- Columbia Sportswear: http://www.emc.com/about/news/press/2011/20110830-03.htm

- KPIT: http://www.ciol.com/Storage/Cloud-and-Virtualization/News-Reports/VMware-Cisco-EMC-deploy-Vblock-at-KPIT/153897/0/

- Northrop Grumman, Lone Star College, Northern Hospital of Surrey County: http://www.emc.com/about/news/press/2011/20110831-01.htm

- Heritage Auctions: http://www.emc.com/about/news/press/2011/20110825-01.htm

- Washington Trust: http://www.emc.com/about/news/press/2011/20110823-02.htm

- Texas School District: http://www.emc.com/about/news/press/2011/20110817-02.htm

- Curtin University of Technology, SPAR Group, Elliot Health Systems: http://www.emc.com/about/news/press/2011/20110824-01.htm

- Columbia University: http://www.emc.com/about/news/press/2011/20110816-01.htm

Every customer is unique - so the reasons for every choice are almost as unique.

Look - the point here (at least my point :-) is not that Nexenta is bad, NetApp is bad (though they seem to have been that way in your view), and EMC is good. I'm clearly biased. That choice is for every customer to make, and I respect their choices (how can you not?). I'm purely disputing the facts in the article that are incorrect.

As you note - they have a nice UI for OpenSolaris ZFS. And they have ported parts of it to Ubuntu to deal with the outstanding ZFS legal issues and Oracle basically killing OpenSolaris - which is sad, because ZFS (like many things formerly Sun) is, IMO, good technology.

Competition = good. Good for customers, good for everyone.

If there is ever an opportunity for your business, I hope that you'll consider EMC (at least give us a chance to win your former NetApp infrastructure, now Nexenta infrastructure). There's no harm in looking at options, right?

virtualgeek

1PB in a rack - good but not great.

Disclosure - EMCer here.

Also - I missed this in my earlier comment.

@LarryRAguilar - your point is a good one, and it highlights mine. 1PB in a standard 42U rack is good, and hey - congrats to Aberdeen.

EMC's current shipping dense config is 1.8PB in a standard 42U rack.

And, as per my earlier comment - I'd encourage customers to get multiple quotes on configs - we're all subject to the same market forces :-)

Oh, and BTW, we're not stopping there. While our stuff is based on the same commodity components as the other guys' - customers demand infrastructure with certain capabilities.

When that need stops, we won't need to engineer our own storage processors and enclosures (all built from commodity components). Today, the integrated value (far more in the software than in the hardware, but some still in the hardware) is what drives customer choice - customers value it, and the market votes.

VMware 'to work with just five storage companies'

virtualgeek

Disclosure - EMCer here.

Chris - the VM Volume "advanced prototype" (shown in VSP3205 at VMworld) was a technology preview of this idea, and yeah, it's an important idea, and a disruptive idea.

Anyone who has managed a moderate-to-large deployment of virtualization knows that the "datastore" construct (on block or NAS storage) is not ideal - the properties of that datastore tend to be shared by ALL the things in it. It would be better if the level of granularity was a VM, but WITHOUT the management scale problem. That's what was shown.

Today, the storage industry (and of course, I personally think that EMC does this more than anyone, and can prove it) is doing all sorts of things to be more integrated (vCenter plugins, making the arrays "aware" of VM objects through bottom-up vCenter API integration, VASA, VAAI, etc.) - but unless something changes, we're stuck with this core problem: VMs are the target object, but LUNs and filesystems kind of "get in the way".
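If it helps, here's a toy Python illustration of the granularity problem (not any real API - the names are purely illustrative): datastore-level properties are inherited by every VM placed there, whereas the previewed VM Volume model attaches the policy to the VM itself:

```python
# Toy illustration (not any real API) of datastore-granular vs. VM-granular
# storage policy. Everything here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Datastore:
    replication: bool        # shared by ALL VMs placed on this datastore
    tier: str
    vms: list = field(default_factory=list)

@dataclass
class VmVolume:
    name: str
    replication: bool        # per-VM policy - no shared-datastore side effects
    tier: str

# Today: to give one VM gold-tier replication, you either carve a dedicated
# datastore for it (management sprawl) or make every neighbour pay for gold.
gold = Datastore(replication=True, tier="gold")
gold.vms += ["critical-db", "dev-scratch"]    # dev-scratch inherits gold too

# The previewed model: each VM carries its own storage policy.
volumes = [VmVolume("critical-db", replication=True, tier="gold"),
           VmVolume("dev-scratch", replication=False, tier="bronze")]
```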

I'm sure that VMware will run it like all the storage programs they have run. The APIs are open, and available to all - but of course, the early work tends to focus on the technology partners supporting the largest number of customers.

More customers use EMC storage with VMware than any other type, and EMC invests more resources and R&D (both by a long shot) - so it's no surprise that the demonstration in the session featured EMC storage so prominently. Pulling off something like that is NOT easy, and a lot of people put a lot of work into it.

For what it's worth - VMware is simply CHANGING what is important to customers, and what is valuable from storage. Certain data services are moving up (policy-driven placement of VMs), certain ones are pushing down (offload of core data movement), and "intelligent pool" models (auto-tiering, dedupe) become more valuable as they map to simpler policy-driven storage use models.

While this was just a technology preview - if it comes to pass - vendors who are able to deliver strong VM Volume implementations, with VM-level policy and automation will become even more valuable.

Just my 2 cents.

Storage vendors are in VMware beauty contest

virtualgeek

EMC - supports SIOC

Disclosure - EMCer here.

Chris, FYI - EMC supports SIOC (in fact, any block storage target on VMware's HCL supports it, as it's a VMware feature, not an array feature - hence our focus on VAAI, which actually requires the array vendor to do something to support it).

SIOC is aligned with auto-tiering (from us and others) - with SIOC resolving instantaneous contention through prioritized throttling, and auto-tiering resolving the ongoing issue of VMs with different IO loads sharing the same datastore.
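For anyone who hasn't played with it - SIOC is enabled per datastore, and its throttling acts on per-virtual-disk shares and limits. A minimal pyVmomi sketch (assuming `vm` has already been retrieved from an authenticated vCenter session, and SIOC is on for the datastore) of raising one disk's shares so it wins under contention:

```python
# Minimal sketch - assumes `vm` was already retrieved from an authenticated
# vCenter session, and SIOC is enabled on the datastore.
from pyVmomi import vim

# Find the VM's first virtual disk.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

# Give this disk more shares, so it wins when SIOC detects contention.
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level='custom', shares=2000))

change = vim.vm.device.VirtualDeviceSpec(operation='edit', device=disk)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```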

EMC cans Atmos Online service

virtualgeek

A bit more commentary

Disclosure - EMCer here...

Chris - thanks for the article.

I did a followup post here: http://virtualgeek.typepad.com/virtual_geek/2010/07/understanding-what-were-doing-with-atmos.html

(and also an update to the original post you linked to).

Hope it clarifies what we're thinking. Everyone makes boo-boos - I think this is a move in the right direction.