Fibre Channel's looking a bit flat. Bad news for these three firms

The Fibre Channel trio, Brocade, Emulex and QLogic, are seeing declining popularity in their SAN storage networking businesses and need to find growth elsewhere. As we have seen with Emulex's and QLogic's results, Brocade has found growth through better business efficiencies, but the basic Fibre Channel market is widely seen as a …

  1. Anonymous Coward

    FCoE didn't emerge as it didn't offer anything new to the storage world.

    Storage admins are a conservative lot and unless you're going to increase performance, reduce complexity or cut cost, they aren't going to be interested.

    It was a standard largely being pushed by one vendor: Cisco. Other vendors supported it, but did not promote or lead with it.

    It wasn't any 'faster' than 10GbE iSCSI or Fibre-Channel.

    It wasn't a widely adopted standard, unless you were Cisco.

    It still required significant qualification: specific switch models, and the correct hardware and firmware.

    It turned out to be just as expensive as Fibre-Channel, or more expensive, once you took into account the cost of the full Cisco Nexus switch hardware.

    iSCSI has reduced complexity, reduced cost and become widely adopted as a result.

    I expect FC to be around for a long time - there are places where it still makes more sense than iSCSI - but I don't expect FCoE to increase adoption. It will wither and die.

    1. Lusty

      FCoE does, however, offer advantages in the cloudy automated data centre world, since adapters can be easily repurposed without cabling or hardware changes. It also offers the various niceties of FC, which are fairly ingrained in larger enterprise environments. I certainly agree that for most it offers no advantages, but there are some places where it can add value.

      What concerned me more was the assertion that iSCSI is taking some share and virtual SAN some more - what protocol are these virtual SANs using, then?! All the ones I've seen take DAS and present it out using iSCSI so that more than one system can see it. I realise from their performance claims that all of the virtual SAN vendors believe in magic, but surely the SAN protocol isn't magic too, so they must be using FC or iSCSI of some sort.

      1. Anonymous Coward

        Virtual SANs

        VMware VSAN doesn't present iSCSI (well, not yet - they could always add it later). It uses its own lightweight protocol that, in our testing, seems somewhat less vulnerable to low-grade switches and buffers than iSCSI. Configuration consists of setting up VMkernel ports and checking a box to enable it, so I'm not sure how much more basic a setup you can get.

  2. Stephen McLaughlin

    I've yet to see much real interest in FCoE, whereas iSCSI does have a lot of adopters. One thing that will hurt sales of Fibre Channel HBAs is VM clusters: unlike stand-alone servers, they share ports and so require fewer cards. Also, with more edge switch designs, they free up quite a bit of real estate in the core enterprise switches. From my experience, FC overall still seems to be the default choice for enterprise datacenters.

  3. A Non e-mouse

    FCoE

    Ironically, FCoE, once seen as a great threat to Fibre Channel, has not materialised.

    FCoE was touted as lowering the entry cost for Fibre Channel as there was no need for fibre, dedicated switches or NICs: "It's all just Ethernet".

    But when I looked at FCoE, you needed specialist switches & NICs to run it, which weren't much cheaper than real Fibre Channel gear - and you were then on the bleeding edge/uncharted seas of technology.

    I think iSCSI becoming more mature (and standard & interoperable) has sealed the fate of Fibre Channel.

    1. Smoking Gun

      Re: FCoE

      I agree with FCoE being largely a Cisco move. The only time I came across it was with FlexPods, and from angry customers who soon found they had no way to easily integrate their legacy FC kit, like tape libraries etc.

      1. Lusty

        Re: FCoE

        They could just plug the legacy tape library into an FC port as they always have done. It's easy enough to hook one up to a Nexus if designed properly. I would question why tape was still in the solution for someone going UCS, though; the solution itself would indicate a forward-thinking customer who would likely have also worked out how to eliminate tape from the environment. There are always exceptions, though.

        1. Tom Maddox

          Eliminating tape

          Tape is easy to eliminate, as long as you don't have any regulatory obligations or other needs to archive data over a long period. Otherwise, tape is still king for archival storage.

          1. Big Ed

            Re: Eliminating tape

            Tape is barely acceptable for only some of the long-term retention use cases. I tried in vain this past summer to find a DLT-based library with new drives to scan a couple of million archive tapes, only to be severely disappointed by the lack of a hardware market for these obsolete drives. Resellers offered used and re-manufactured drives but told me that I needed to get a large supply of spares, and be prepared for downtime between drive misbehavior, diagnosis, replacement and verification. And oh yeah, your replacement might have gone bad too, so start the cycle all over again.

            And some of the US regs in the health and drug industries are going to require lifetime-plus-10 retention periods. Think of the impact of 300 million citizens with hundreds of healthcare records each; factor in population growth and there you go... a problem ripe for a better solution...

            So here are some of the issues with tape:

            1. Tape vendors will not guarantee supply or even 15-year backwards compatibility, let alone be able to cope with projected 100+ year retention requirements on the horizon.

            2. DB vendors do a nice job of saying they can process old releases of DBs. But if your programming bits, hardware and OS bits are different, can you absolutely guarantee the same result today compared to the day when the data was created?

            3. Tape is linear and stored offline; if you need to access the last block on many tapes, it could take days, weeks, or even months to re-assemble a file system.

            4. And when you have hundreds, thousands or millions of tapes, there really are no effective solutions to regularly read in, re-validate bits, delete and re-write the data files. Some software solutions say they do logical deletes, but don't have an efficient way to do physical deletes.

            5. Application reality... you may start off with good intentions with a tape solution, but people change, priorities change, and it gets too easy to get lazy: the tape solution grows, gets stale, and backup/archive metadata gets lost or goes obsolete. And then you're stuck with a bunch of tape that you are paying a fortune to store.

            6. Tape bits degrade over time and are not periodically scanned or rebuilt.

            7. Tape doesn't dedupe.

            8. In a world of cheap spinning rust, putting anything, ANYTHING, on tape is just plain STUPID.

            And any IT Professional that puts anything on tape should be drawn and quartered. Horses make nice pets and should be around for a long time.

            1. toughluck

              Re: Eliminating tape

              Disclosure: I work for Oracle Tape.

              You didn't search well enough, or you'd be aware that what you actually needed was a library supporting SDLT or SDLT320 drives (assuming the user had DLTape IV; God help you if they had DLTape III or earlier, but still doable).

              In Oracle's case, it would be an SL500 or L180/700 -- both are end of life, but recent enough that you can actually find drives for them in good condition. Those first- and second-generation SDLT drives are usually in very good working order, and assuming you're migrating the data to disk or to new media, even tens of thousands of tapes isn't a scary prospect, since DLTape IV supported at most 40 GB natively per cartridge.

              I don't know which DLT generation your user had, but even if you had found a DLT-based library, you'd probably have problems finding HVD SCSI HBAs to attach to the drives. The real reason you could not find a library sporting DLT drives is that it's been end of life for so long that it's obsolete by all modern standards, and 99% of customers have moved on.

              Even if it was a problem getting a StorageTek library for your case, I'm fairly sure Quantum would jump at the chance.

              About the retention periods -- you seriously think that using disk drives is going to solve this? Suppose you put it on a MAID array today using state-of-the-art 16 Gbps Fibre Channel, 40 Gbps Ethernet or third-generation SAS. Are you sure you're going to be able to access that array in 15 years?

              - It's impossible to access first-generation 1 Gbps Fibre Channel arrays with 8 Gbps HBAs and switches. That obsolescence came in just 12 years, and it was impossible to find new disk drives to replace failing ones within 7-10 years of those arrays' introduction.

              - It's not possible to connect 10 Mbps Ethernet to some 1 Gbps switches, and not to any 10 Gbps switch at all. Not to even mention coax standards. It's probably easier to find legacy consumer kit and step down through switches supporting lower speeds, but if you said that was your solution for future access to that array, you'd be laughed out of the data centre.

              - Like Fibre Channel, SAS only supports negotiating a link down to two generations back. The next SAS generation will not negotiate a link with first-generation SAS.
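
              That negotiation rule is simple enough to sketch. A toy model in Python (the generation labels are purely illustrative):

              ```python
              # Toy version of the rule above: a link only forms between devices
              # at most two generations apart. Labels are illustrative only.
              SAS_GENERATIONS = ["SAS-1 (3G)", "SAS-2 (6G)", "SAS-3 (12G)", "SAS-4 (22.5G)"]

              def can_link(gen_a: int, gen_b: int, max_gap: int = 2) -> bool:
                  """Indices into SAS_GENERATIONS; links negotiate only within max_gap."""
                  return abs(gen_a - gen_b) <= max_gap

              print(can_link(0, 2))  # True:  a SAS-1 device still links to a SAS-3 controller
              print(can_link(0, 3))  # False: first-generation SAS won't negotiate with SAS-4
              ```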

              And now let me go over your points:

              1. We still support 9840 tape drives in new tape libraries (SL8500 and SL3000), originally introduced in 1998. Heck, we still support 9490 tape drives, introduced 20 years ago (although the libraries in which they are used are end of life). New T10000D drives still support reading from cartridges written by the T10000A drives introduced in 2006.

              2. That's completely irrelevant. How is that an issue with tape? It's exactly the same regardless of whether you use tape, disk, flash or anything else today.

              3. That hasn't been a problem for ages. With 9840, you can access over 50% of the blocks on tape within 8 seconds of mount, and any block on tape within 20 seconds. If you know which file mark you're looking for, it's stored in the media information region. The same applies to all modern tape formats, which take at most 90 seconds to spool the whole tape if it turns out that the data you're looking for is at the end of the media. Serpentine writing means that the data is spread more evenly across the tape.

              With LTFS it's even easier, since the tape is effectively presented to the OS as a file system -- there are two partitions: one holds the file layout, the other the actual data.

              True, it's still impossible to read data backwards, so if a file is stored over the entire length of the tape but starts at the end of it, reading it will still carry a 90-second overhead.
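
              Putting those figures into a crude time-to-data model (a Python sketch; the numbers are the rough ones quoted above, not a drive spec):

              ```python
              import random

              # Crude time-to-first-byte model for a serpentine tape, using the rough
              # figures above: ~20 s to mount and position, and at most ~90 s to
              # spool to the far end of the tape. Illustrative only.
              MOUNT_SECONDS = 20
              MAX_SPOOL_SECONDS = 90

              def time_to_first_byte(position: float) -> float:
                  """position is the fraction along the tape (0.0 = start, 1.0 = far end)."""
                  return MOUNT_SECONDS + position * MAX_SPOOL_SECONDS

              samples = [time_to_first_byte(random.random()) for _ in range(10_000)]
              print(f"average time to first byte: {sum(samples) / len(samples):.1f} s")
              # Averages around 65 s -- slow next to disk, but nothing like "days or weeks".
              ```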

              4. It's called StorageTek Tape Analytics, and it's meant to do exactly what you say here -- mount a tape at preset intervals, read the media information region and either do a full tape read or read random bits to verify that it's not degrading too much.

              Re-writing will occur if the margins are getting too thin.
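
              The scrubbing idea itself is generic and easy to sketch -- a minimal checksum-verification loop (this is the general idea, not the actual STA implementation):

              ```python
              import hashlib
              from pathlib import Path

              # Minimal media-scrub sketch: record a checksum manifest once, then
              # periodically re-read everything and flag whatever no longer matches,
              # so it can be re-written from a good copy before it becomes unreadable.
              def build_manifest(root: Path) -> dict[str, str]:
                  return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                          for p in root.rglob("*") if p.is_file()}

              def scrub(manifest: dict[str, str]) -> list[str]:
                  """Return paths whose current contents no longer match the manifest."""
                  return [path for path, digest in manifest.items()
                          if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest]

              # Usage sketch:
              #   manifest = build_manifest(Path("/archive"))
              #   ... months later ...
              #   for bad in scrub(manifest): print("degraded:", bad)
              ```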

              And there's now Xcopy to seamlessly move data from one cartridge to another without host involvement. There's a lot of exciting stuff happening that you're completely unaware of.

              How about efficient physical delete on a disk drive? Oh, not possible? Again, how is that a tape problem specifically?

              Efficient physical delete on tape? A few seconds in a degausser does the trick: the tape is left completely blank and unreadable, including the servo tracks, making it impossible to read from.

              And with hardware-based encryption, there's really no reason you should worry about logical deletes.

              5. Again, it's not a problem specific to tape. If employee attrition, changing priorities and laziness allow anything in your organisation to get out of control and processes to be ignored, you have much bigger issues at hand than tape obsolescence.

              6. So disk drives don't deteriorate, huh? They do, and much faster than tape, since their magnetic domains are much smaller. Seriously, if you only write to a tape once (as should happen in a proper archive), the retention period is way more than the guaranteed 30 years.

              7. Disk drives don't dedupe, either. So what? There are three approaches to deduplication on tape:

              - Don't dedupe. Retain integrity in every object/file you store. That prevents any problems with being unable to read from tape in the future.

              - Write raw data from your deduplicating arrays to tape. It's the most efficient method, but only if your array supports that and you're sure the manufacturer will be around when you need to restore the data. It probably makes sense for short-term backups when you don't lose track of data and would need to restore specific portions of your storage, but definitely not for long-term archives.

              - If you have a lot of similar files (ones that dedupe well), offload them to tape in a single compressed image -- or in multiple images, where the deduped blocks are stored in line with the rest of the files. It's a compromise and it requires some capability to read the data back in the future, but it can work if your archive assumes you would only ever restore most of the files at once, or when it's done well and you never reference more than one tape.

              Anyway, deduplication is a foolish solution for a long-term archive. If you did dedupe, you'd quickly have a situation where restoring a single file from the archive involved reading bits and pieces from a number of tapes, ranging from one per file to one per deduped block. And if you somehow lost the unique copy of some particular block common to all files in your storage system (as happens in improperly configured deduplicating solutions), you'd lose all the data.
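
              That failure mode is easy to demonstrate with a toy content-addressed store (illustrative Python, not any particular product):

              ```python
              import hashlib

              # Toy dedupe store: a file is a list of block hashes, and each unique
              # block is physically stored exactly once.
              store: dict[str, bytes] = {}

              def write_file(data: bytes, block_size: int = 4) -> list[str]:
                  refs = []
                  for i in range(0, len(data), block_size):
                      block = data[i:i + block_size]
                      digest = hashlib.sha256(block).hexdigest()
                      store.setdefault(digest, block)  # shared blocks stored only once
                      refs.append(digest)
                  return refs

              f1 = write_file(b"AAAABBBB")
              f2 = write_file(b"AAAACCCC")  # shares the "AAAA" block with f1

              del store[f1[0]]  # lose the single physical copy of the shared block...

              for name, refs in (("f1", f1), ("f2", f2)):
                  print(name, "restorable:", all(h in store for h in refs))
              # ...and every file that referenced it becomes unrestorable at once.
              ```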

              8. Here's a news flash: disk drives are not cheap. Enterprise drives are still over 10 times more expensive than tape per byte, and for enterprise tape products (like Oracle's T10000D) the cost of storage per byte is lower than that of the cheapest consumer hard drives today. An 8.5 TB cartridge costs about the same as a 1 TB disk drive.
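
              As a back-of-the-envelope check (the prices here are assumptions for illustration, not quotes):

              ```python
              # Rough cost-per-TB comparison. Capacities as above; the unit prices
              # are assumed equal purely to show the arithmetic.
              tape_capacity_tb, tape_price = 8.5, 100.0  # e.g. one T10000D cartridge, native
              disk_capacity_tb, disk_price = 1.0, 100.0  # a consumer 1 TB hard drive

              print(f"tape: {tape_price / tape_capacity_tb:.2f} per TB")
              print(f"disk: {disk_price / disk_capacity_tb:.2f} per TB")
              # At equal unit prices the cartridge comes out ~8.5x cheaper per TB,
              # before counting drives, libraries and robotics on the tape side.
              ```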

              Let me rephrase what you said: In a world of very cheap tape, putting EVERYTHING on disk is just plain STUPID.

              And to rephrase your last paragraph: Any IT professional that doesn't examine the virtues of every available solution should be tarred and feathered. Horses are definitely nice animals, and they shouldn't be used to execute anyone.

            2. Federal

              Re: Eliminating tape

              I don't see how a spinning disk is going to be as reliable, much less more reliable, than a tape locked up in a vault. I will speculate that you meant to propose that archived files be moved from disk to disk over time, so that the archives are always on a fairly fresh set of disks. But that has some issues, too. Generally, if you need to archive it, there's at least some sensitivity associated with it, so it has to be encrypted, tracked, and limits placed on its duplication. So now you have a key, algorithm and location problem for 20-year-old files to deal with (which I agree is probably more easily solved than trying to read a 20-year-old tape). But it's not a non-issue, either. You also have 20 years of a bunch of files being online, copied, copied again, etc., all the while being exposed to the various intentional and accidental gremlins that seem to go after neglected files over time.

              It's never easy, and I don't think there's a single solution that best fits all problems. I wouldn't be quite so quick to dismiss tape, Blu-ray, or non-spinning hard disk cartridge archives. Like just leaving it online, the specialised archive solutions have their place. And you can always store a couple of tape drives alongside your tapes in the data vault - that way there's no need to go looking for one when it comes time to do a restore 20 years later. Your encryption keys may not be as easily found.

              ...now, whatever did I do with the combination to that old tape vault we filled up 15 years ago and haven't accessed since?

          2. Lusty

            Re: Eliminating tape

            Tape doesn't in itself meet any regulatory obligations for archival - disk would achieve this just as easily, and for all but the largest data sets disk will be comparable in cost too. Old backups are not an archive, either. Very few companies actually have a true archive; most I've seen simply keep years' worth of backups and pretend it's an archive.

            In the hundreds of companies I've worked with, large and small, exactly two had a genuine requirement for tape archival. One works with media files and wants to keep everything just in case they need to reuse it. The other keeps detailed pictures of the whole of the planet. Everyone else just didn't realise there was a good alternative.

        2. Smoking Gun

          Re: FCoE

          I'm not sure; you'd have to speak to the FlexPod partner and ask them why they had to pull a pair of five-year-old Brocades out of service to get the filers talking to the libraries. There remains a lot of tape in the NHS.

          1. Anonymous Coward

            Re: FCoE

            There remains a lot of everything in the NHS. That doesn't mean they need it or use it, just that they are over budget and under-designed.

    2. Lusty

      Re: FCoE

      "FCoE was touted as lowering the entry cost for fibre channel as there was no need for fibre, dedicated switches or NICs: "It's all just ethernet"."

      If that was how it was sold to you then it was mis-sold, which happens a lot. Realistically, you'd have at the very least needed the extra NICs to maintain bandwidth. Extra switches have nothing to do with performance and everything to do with operational management and preventing accidental network changes from making disks disappear. You don't want your database going offline because some end user created a loop, or because the network team made a routing mistake, a VLAN mistake, a port ID mistake, etc.

      As for simplicity because it's "just Ethernet", I'd say FC is way harder for newbies to get wrong. Just plug it all in and it's normally good enough. iSCSI, on the other hand, needs all sorts of fettling to get it performing correctly.

      1. Mayday

        Re: FCoE

        FCoE is not "just Ethernet"

        *Full disclosure - I'm a CCIE, and that does not necessarily mean Cisco to the death, by any stretch.

        FCoE, pushed by Cisco to the largest degree, requires storage licences for all the (Nexus) switches in use, a dedicated storage VDC if using 7Ks (which can't be used for anything else) and appropriate QoS policies in place. This all facilitates "lossless Ethernet", which is different from "just Ethernet". If it were "just Ethernet" you could run whatever the hell you liked over it, like you can with iSCSI, and just have your QoS in order. FCoE only has layer 2 in common with boring old Ethernet, and as such is much more of a PITA than iSCSI.

      2. Tom Maddox

        Re: FCoE

        Also:

        'As for simplicity because it's "just Ethernet", I'd say FC is way harder for newbies to get wrong. Just plug it all in and it's normally good enough. iSCSI, on the other hand, needs all sorts of fettling to get it performing correctly.'

        Pull the other one, mate, it's got bells on.

        1. Lusty

          Re: FCoE

          "Pull the other one, mate, it's got bells on."

          No, I was quite serious. Buy a couple of FC switches and plug in a bunch of computers, and generally things will work once you install the drivers. I have yet to see a customer break an FC setup badly enough to reduce throughput.

          iSCSI (my preferred protocol, for various reasons), on the other hand, I have had to fix for numerous customers after they set it up. MTU, multipathing, duplex, speed, load-balancing algorithm, LACP (believe it or not), insufficiently powerful switches, flow control and various other things have been incorrectly set up on iSCSI networks. There are also the various occasions where people have made changes or cockups on the core network which have knocked all storage offline, because the core network was the storage network.
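
          It's exactly the sort of thing you end up writing a sanity-checker for. A hypothetical sketch (the per-host dicts stand in for whatever your inventory tooling actually reports):

          ```python
          # Hypothetical checker for the usual iSCSI network misconfigurations.
          EXPECTED = {"mtu": 9000, "flow_control": "on", "duplex": "full", "speed_mbps": 10000}

          # Stand-in data: in practice you'd collect this from the switches and hosts.
          hosts = {
              "esx01": {"mtu": 9000, "flow_control": "on",  "duplex": "full", "speed_mbps": 10000},
              "esx02": {"mtu": 1500, "flow_control": "off", "duplex": "full", "speed_mbps": 10000},
          }

          for host, actual in hosts.items():
              for key, want in EXPECTED.items():
                  if actual.get(key) != want:
                      print(f"{host}: {key} = {actual.get(key)!r}, expected {want!r}")
          # One mismatched MTU in the path is enough to cripple throughput while
          # everything still appears to "work".
          ```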

          Trust me, I have enough experience of all of the protocols, and especially of other people's configs, to know that FC is easier to get working acceptably. It doesn't help that most people never do proper load testing to make sure it's working, and even when they do, most don't seem to know the difference between throughput and capacity measurement. Almost nobody at all knows the difference between GiB and GB, leading to an assumption of imaginary "formatting losses" or of some kind of magical overhead in the link making up the difference.
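
          The GiB/GB point is pure arithmetic (a quick illustration):

          ```python
          # The "missing" capacity is decimal vs binary units, not formatting loss.
          TB = 10**12   # what the drive vendor sells: 1 TB = 10^12 bytes
          TiB = 2**40   # what most OS tools report: 1 TiB = 2^40 bytes

          print(f"a 1 TB drive shows up as {TB / TiB:.3f} TiB")  # ~0.909 TiB
          # That ~9% gap is the imaginary "formatting loss" blamed on the link.
          ```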

      3. P. Lee

        Re: FCoE

        >Extra switches have nothing to do with performance and everything to do with operational management

        Herein lies a major problem with consolidation in general. It isn't just about making mistakes, either. Change control becomes a major issue, as approvals have to be obtained from everyone and their dog.

        If you bring down a network for a few seconds, there's a chance not too many people will get upset. Application protocols generally tolerate small outages. Bring down disk links with high I/O throughput and low latency requirements, though, and you risk corruption and all sorts of long-term badness. It is the different risk profiles, the distinct admin teams and the general lack of any requirement to bundle the two things together that imply separate hardware.

  4. Etherealmind

    It took Ethernet to make Fibre Channel cheap enough to use

    It was just a few years ago that Fibre Channel ports cost upwards of £3,000 per 2 Gbps port.

    FCoE came along and threatened the market, and now FC ports cost the same as Ethernet ports because, physically, they are Ethernet ports. The FC encoding etc. is different, but the switch silicon and interface hardware are identical to Ethernet's, and many switches can do Ethernet, FC or FCoE on the same device.

    FC customers should be saying thanks to Ethernet for keeping it cheap enough to use in 2015.
