* Posts by Florian Hwigl

22 publicly visible posts • joined 15 Apr 2008

More than one way to build a cloud (and there always will be)

Florian Hwigl
WTF?

OpenNebula left in the haze

I know which one of them is really open source.

The one that doesn't get all the marketing-budget-funded raves.

Why do we need SANs?

Florian Hwigl

IO bottlenecks

Look for "stub IO domains".

The research folks looked into the IO performance issues some years ago; with InfiniBand in domUs, for example, it was proven that you can get native IO rates (so multiple GB/s).

Unfortunately nobody bothered to do any implementations in the real world, so this is well-researched, solved, and still failing in practice.
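
The plumbing for this does exist in Xen itself, mind you; a rough, untested sketch of a guest whose IO backends live in dedicated driver domains instead of dom0 (domain names invented, exact syntax varies between Xen versions):

    # guest config sketch - not a tested production config
    name   = "guest1"
    memory = 2048
    # disk backend served by a dedicated storage/IO domain
    disk = [ 'phy:/dev/vg0/guest1,xvda,w,backend=storagedom' ]
    # network backend served by a dedicated network driver domain
    vif  = [ 'bridge=br0,backend=netdom' ]

Nobody packages and tunes that for you, though, which is the point.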

On ESXi it takes secret sauce to tune a VM to 300-400MB/s IO rates, and beyond that you can forget it.

On KVM?

hahahaahahahahahahaha

Florian Hwigl

1000s of drives connected to all the hosts

Behold the power of an FC loop setup

Which was found to be really stupid somewhere around 2000.

Oh yes, you can do LUN masking in SAS

Which is another thing that has been basic FC functionality but proved to be too tedious for real usage.

So you could kinda add zoning now

And VSANs

And SNIA APIs

Then it would be easy to manage. And if you also add better routing and redundancy and QoS then you'd end up with FC. Ah, no. Forgot NPIV.

FC is not complicated for storage admins, it's easy and reliable.
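
For a taste of how easy: a complete zoning change on e.g. a Cisco MDS class switch is a handful of lines (VSAN number, zone names and WWPNs invented for the example):

    zone name host42_array1 vsan 10
      member pwwn 21:00:00:e0:8b:05:05:04
      member pwwn 50:06:01:60:41:e0:16:22
    zoneset name prod vsan 10
      member host42_array1
    zoneset activate name prod vsan 10

That's the whole change. Now go and rebuild that on top of SAS.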

And you're trying really hard to believe you're not just creating, then solving, the same problems all over again.

How will scalability work for you if you move beyond a smallish setup?

160 storage controllers? 320?

How much metadata do they have to send around?

I'm really open to new stuff, but it should not be old bullshit.

Florian Hwigl

I really hope you're not working for Pure Storage

or that you're joking.

Because I had been convinced they got some clue.

Florian Hwigl

Cheapo storage over IP over 10GbE will always be faster than 2x8Gbit FC? RRiiiiight.

So wait, a 10gigE port shared between protocols and with FAT IP overhead will solve the bottleneck issues of 2x8gig FC?

Yes I've seen slow SANs - but that's a design question, and why you should have designers.

If you think you can solve any big problems with a little SSD caching or - let's call it distributed and localized - storage tiers, then this might scale to serve up all your servers. But for the ones needing performance, it will suck.

Much like virtualized servers, which do fine for 98% of the Windows boxes; but once you migrate your big apps onto them, you suddenly see users ordering the largest non-virtual iron they can get.

Did I mention uplink and backbone bandwidth issues?

I think you guys need to get some reality before you drift up into the clouds.

I'm building distributed storage over DDR InfiniBand, and I'm also a big fan of e.g. Amplidata, who also have a nice take on local SSD caching. And when Pure Storage does some more selling, we will see a whole new level.

It's just that this stuff is all new, unproven commodity solutions.

Yes, your performance will degrade if you've got a by-the-book SAN and consolidate 150 servers onto one EMC. But that's not an FC issue. That was you asking for a bottleneck.

A SAN is a *NETWORK* and so it might be helpful to use it like one, with more storage systems, and not sending every unrelated bit of data through the core.

Just as it seemed so natural to use distributed commodity crap storage with SSD cache, why don't you consider avoiding bottlenecks in your SAN?

Bonus: The SAN is really fast when you need it. Always, not just when you've got the data cached and coherency is not needed.

Red Hat: 'Yes, we undercut Oracle with hidden Linux patches'

Florian Hwigl

So Oracle always used a different kernel but stole it

And the "crap" is 99% stability patches that will pop up in RHEL a year later, after "QA".

RHEL6 / OEL6 will seem fresh for 1-2 years now; after that, their users will again go from bug report to bug report, looking at things that were patched years ago in mainline while their RHEL boxes still crash from them.

Backup: It really should be easy

Florian Hwigl

USB drive scalability

OK. Now please describe how you do that when replacing, say, 5000 LTO-4s in a jukebox.

I guess daisy chained USB hubs would be really clever?

I promise we won't go looking at the disk failure rate or the higher operations costs.

Oh, and yes, at home I do just that: an external USB drive containing the backup server + backup data, which boots as a VM as long as it's attached to the Xen host, and will boot as a normal Fedora system when the Xen host happens to die.

It's just that I can grasp the difference between risking a whole company and risking my pr0n archives.
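
For anyone wanting to copy the idea: the trick is simply handing the domU the whole USB disk, so the same install boots either way (device path invented, xm-era syntax, sketch only):

    # /etc/xen/backupvm.cfg
    name       = "backupvm"
    memory     = 512
    # hand over the entire external disk, so the domU sees exactly
    # the disk a bare-metal boot would see
    disk       = [ 'phy:/dev/disk/by-id/usb-Ext_Backup_Disk-0:0,xvda,w' ]
    bootloader = "pygrub"

pygrub reads the guest's own grub config off the disk, which is what makes it boot identically as a VM or on real hardware.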

Florian Hwigl
Stop

9 times out of 10

And the truth is that's why 9 IT guys out of 10 don't get a job in any real non-cloudy datacenter.

Oracle beefs chips and I/O on x64 blades

Florian Hwigl

The sockets are for SODIMM flash modules?

And probably some ram, too.

It would be really nice if you could mix and match flash + disk like this.

Simplest Ethernet storage validated

Florian Hwigl

Plan9 finally en route to victory?

CoRaid is the one cool company that largely develops on Plan9

All the best to them. In a way, using AoE on a small LAN might still be less fail than an enterprise falling for the FCoE hoax: pulling out a working setup only to find that "convergence" means deploying a second LAN on expensive, high-specced 10GbE switches and still never getting full ISL speeds :)
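
For comparison, a whole AoE deployment is roughly this (device names invented; vblade and the aoetools are the standard pieces):

    # storage box: export /dev/sdb as shelf 0, slot 1 on eth0
    vblade 0 1 eth0 /dev/sdb

    # client:
    modprobe aoe
    aoe-discover
    aoe-stat                      # should list e0.1
    mount /dev/etherd/e0.1 /mnt

No fabric services, no zoning, no routing - which is also exactly why it only fits a small LAN.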

Oracle hates discs, loves tape

Florian Hwigl

Really, and how does this relate

to the enterprise world of 60-200 drives and 10,000 slots, where an 800GB tape runs about $30 and there will always be some "many hundred TB" cache in front of the jukebox?
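
(Back of the envelope: 10,000 slots x 800GB is 8PB native, and 10,000 x $30 is about $300k in media, i.e. under 4 cents per GB before compression. Find me disk that does that.)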

Two guys will go there to stuff in new tapes and take out the ones for offsite storage every other week, and that's it.

We've got backups of about 1200 systems going to tape (via a VTL, yes, who cares; the PC servers can't feed data fast enough for the tapes anyway). 8-10 hours per month are spent thinking about the tape machinery, and 4x2 hours are reserved for moving tapes in and out.

So sorry, not sure if you can blame the tapes.

Sony Walkman NWZ-A845 media player

Florian Hwigl

Bought it a few days ago

I was looking for an MP3/video player with a very good battery, no iTunes-ish software and basic control buttons. I ended up with this one as time was short, it was recommended, and it looked okay.

con:

- no wireless at all

- no support for most of my videos!!!!! Not just FLAC, but almost everything that I'm used to. I can live with crap players like WMP or QT not playing a movie, but a portable media player really should be able to play MEDIA?!

- earphones are nice but still not "good"

- picture viewer doesn't have even the most basic features

- practically, it will only give you a small fraction of its features without the Sony software

- my WALKMAN back in the 90s had a remote in the headphone cable. Now, in 2010, that is too complicated for Sony's top-of-the-line player?

- the Sony GUI lacks some options even my W880 mobile had (e.g. arrow left/right while navigating)

- only play/pause, hold and volume are easy to handle; the other buttons on that fake iPod wheel are not really useful without looking at the display

- can't play audio while attached to my laptop (maybe it works when attached to Windows with the software installed)

pro:

- good usb speed

- nice display

- battery lifetime

85% is still a _very_ good rating given the lack of functionality the current firmware has.

Can't really recommend it even for iTunes-wary people.

Is dynamic IT management necessary for mid-sized firms?

Florian Hwigl
Stop

is walking through the mud good enough?

I think we should strive to drive, in cars, on highways instead.

Why didn't you set out to quantify the savings (or losses) to be had from better management software? Yes, no surprise it wasn't bought after that; but it is a surprise that you use this as proof of the uncertainty of the savings. It only proves you didn't really look into them.

Second, while you state that breaches/failures and so on make investments easier, you ignore the relation between missing management tools and the very cause of some (by the numbers: many) of these failures.

If we ask for good policies to be put in place at the start, there is a certain need for a better classification of failures into causes like hardware that turns into smoke (technical), lack of knowledge (people), mistakes (people), lack of skill (people) and lack of planning (people).

Bad management tools are in the "lack of planning" bucket for me. You can't set up a good policy that won't include tools.

Nobody says they need to be costly; e.g. having a wiki + database with hourly switchport:ip<->mac mappings or DNS zone backups is in the two-digit price range (see the sketch below), but most companies live without such basics. What they do instead is run around hunting problems. Sure, that's good enough, but it will take longer => cost the company money.

"Good enough" is where you have neither costly tools that do it by themselves nor staff that can do it by themselves.

It'll cost you dearly with every outage, with every client PC that takes more than 30 minutes to replace, and so on. If your helpdesk staff reinitializes the account because they don't know enough details of Windows offline sync's suckage, or if the Exchange admin wasn't forced by any policy to set up weekly consistency checks and you need a full restore where a service restart should have sufficed.
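
To show how two-digit-cheap the sketch mentioned above is, here's a cron-able version (community string, hostnames and paths invented; snmpwalk, arp and dig are the stock tools):

    #!/bin/sh
    # hourly: snapshot switch forwarding table, ARP cache and DNS zones
    DATE=$(date +%Y%m%d-%H%M)
    # mac -> switchport mappings, pulled from the switch via SNMP
    snmpwalk -v2c -c public switch1 BRIDGE-MIB::dot1dTpFdbPort \
        > /srv/inventory/switch1-fdb-$DATE.txt
    # the ip <-> mac half, from the local ARP cache
    arp -an > /srv/inventory/arp-$DATE.txt
    # DNS zone backup via zone transfer
    dig axfr example.com @ns1.example.com > /srv/inventory/example.com-$DATE.zone

Feed that into the wiki/database of your choice and you've got the tool.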

Summing up the cost of "good enough IT" DOES need a business perspective.

If you don't take the time to look for and notice the small daily fuckups, and don't see where they turn into the smaller disasters, everything will look OK.

But I don't think this kind of attitude can be acceptable, unless the company's wages are just enough to hire chimps. Still, we don't need to lie to ourselves and think everything is great.

Last: dynamic IT is possible, and there are shops running far better than the rest.

The problem is that if you call up any of the big vendors and ask to "buy dynamic IT", you'll get the shiny marketing materials, the stacks of new hardware, the expensive outdated management tools, the lot of newbie consultants, and the huge bill that will actually prove how important your choice was and get you the big raise.

But you won't get the last missing piece: the dynamic stuff you tried to get.

I suggest you call up Steve Traugott some day and hear his side of "good enough IT" (as in: no compromises made) management.

Who actually needs virtualisation and collaboration tools

Florian Hwigl
FAIL

Glad you like VMs

But ranting about the poster above you, and coming up with experience most people already made five years ago... well, your text was even more useless than his.

But yeah, very much agreeable: servers need to be backed up.

^^

Florian Hwigl

All these services need quite a lot of RAM + IO

If you end up virtualizing services like that, re-run your numbers. 8/16GB+ low-end systems in a cluster will provide much more power and run cheaper than buying the needed RAM upgrades for your host boxes (8GB DIMMs, ouch).

And NOBODY stops you from having the failover nodes virtualized, etc. (let the cluster shut down some dev boxes, etc. etc.)

NPIV gives you all the added freedom you could need to P2V or failover into the VM side of things within minutes.
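
For illustration: on Linux, creating such a virtual HBA port is a one-liner against the FC host's sysfs node (the WWPN:WWNN pair is invented, and the host number depends on your HBA):

    # spawn an NPIV vport on physical FC host5
    echo "2101001b32a90001:2001001b32a90001" > /sys/class/fc_host/host5/vport_create

The new SCSI host shows up with its own WWPN; zone and mask the LUNs to that, and the storage identity moves with the VM wherever it fails over.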

WD VelociRaptor 300GB HDD vs SSD

Florian Hwigl
Unhappy

Useless, thanks

So why, while you gave proper MB/s numbers, don't you give anything that tests heavy random IO, which is, after all, where the SSD might show interesting benefits? Or, for example, tests using two IO threads (for the gamers: think leeching your new game off a P2P net while recording something with Fraps).

This is an interesting test, but oh so inconclusive, you stopped halfway!
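
For the record, the missing test is a one-liner with e.g. fio (device picked arbitrarily - don't aim it at a disk holding data you like):

    # 4k random reads, queue depth 32, two concurrent jobs
    fio --name=randread --filename=/dev/sdb --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
        --numjobs=2 --runtime=60 --group_reporting

One IOPS table from that would say more than all the MB/s graphs.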

Low-end EMC NSX is on its way

Florian Hwigl

EMC^2 Appliances...

Last year, I got an old EMC^2 Celerra (the super-low-end version) 1U rackmount and played around with it for a little while; it was quite interesting to find that the $$$$$ NAS appliance had PCI DMA bus mastering disabled in its BIOS...

Download al Qaeda manuals from the DoJ, go to prison?

Florian Hwigl
Flame

Re: Notts Uni Viewpoints

Now that's great, what a clear path of thought they had; just a pity they're a bit challenged with the thinking:

- the student sends it to a non-student to print out, who then becomes the culprit

- the student explains to the police what the situation is, and gets detained not for sending around the manual but for obstructing their investigation (by trying to explain to them what they couldn't figure out within a six-day investigation)

- the uni doesn't even find anything wrong with taking them to jail to establish that they're innocent; they're just a bit unhappy it took so long (maybe because the clerk didn't show up for work, being halfway to Gitmo)

- the uni doesn't even remotely care about the threats to their student

- the uni is too clueless to even figure out that someone has a research project on this

And then here comes Mister I-studied-there, who doesn't even GET what shocks people about this. I dare say this is proof that the uni isn't best suited for a thorough education.

Of course, yes, a student studying radical Islamic groups plus a uni clerk could be just another camouflage for a terrorist cell.

But, being as braindead as the police acted in this encounter, they could just as well take the printout of the manual, look inside, and see if it's covered in there.

The final question remains: why would the terrorists not know what's in that manual, why would they read a version seized years ago, and why do the idiots who set up the scanning of every damn email NOT involve some brain to think of situations like this, especially with a (back then) widely published piece of evidence?

IMHO the funniest thing about it is that the DoJ publishes it, and most of the info comes from CIA agents training the Afghan resistance. Yet again, you'd think they knew what's in it...

(And yes, I read that thing back then, too. And it was interesting; e.g., if you're in an area with such idiots trying to make a revolution, never keep guns + ammo in the same place, so they can't restock off the people they've killed. I can't see how it's bad to know what training they act by; it probably works better for stopping them than, well, sending researchers to jail.)

Inflammable, because it's bad to see a whole school administration being just as dumb as a bunch of AK-47-waving idiots...

IBM rejects deduplication for new VTL

Florian Hwigl

different classes

VTLs of that size usually see constant input at rates far above what a few Thumpers can handle - maybe this is just too much for current deduplication implementations - otherwise it shouldn't be hard to add a "few" nodes to the VTL dedicated to deduplication. I don't know if FalconStor also uses dedicated nodes for compression, like the "competitor" system we've got at our site.

(Of course they didn't use a hardware gzip engine... no no... never use anything fancy if you can bill a lot anyway.)

Also, I could well imagine that the customers in question don't want to rely on some magic black box, but rather have a more easily recoverable state on the tapes, in case the datacenter and VTL blow up.

Sysadmins get Quake tools

Florian Hwigl
Thumb Up

noclip?

Can you see the advantages!

Citrix goes storage light with XenDesktop

Florian Hwigl
IT Angle

Oh my god, they invented diskless terminals that use a full-blown OS image

Thank you for the happy buzzword blurb, and welcome to retrying what failed in 1995.

"...and fail miserably"

An old saying, and oh so valid.

Florian Hwigl
Stop

First things first

Citrix should make its "XenServer Enterprise" enterprise-ready before building products on it.

I'd hate to see all my "virtual" desktops go belly-up just because they still didn't hire anyone with enterprise storage experience or HA-clustering experience.

"What's a failover..."

Go see their forums... they barely know what the difference between active/passive and active/active storage is...

Not to mention that a normal Citrix terminal server + any cheap "thin" client would appear to save many times more money than this virtual vapourware.