* Posts by Magellan

75 publicly visible posts • joined 12 Sep 2007

Virtualization juggernaut VMware hits the CPU turbo button for licensing costs

Magellan

Nutanix should be happy

This will make their AHV more compelling.

Suunto settles scary scuba screwup for $50m: 'Faulty' dive computer hardware and software put explorers in peril

Magellan

Happened to me

I had a Suunto Vyper Air that had a series of issues communicating with its wireless air pressure sensor. It spent weeks in the shop, and I think it may have been sent back to Suunto. In any case, the air pressure sensor was replaced, and I never had a problem with it again. I also became completely paranoid about remaining air.

The Vyper Air was the second computer I bought (I originally bought a Suunto Zoop), but I kept the Zoop and always dove with two computers. I also had a regular pressure gauge on my regulator. So I would dive with two pressure gauges (one regular, one on the Vyper Air), three depth gauges (two computers and a regular depth gauge), and two compasses (one digital compass in the Vyper Air and one analog compass).

At a minimum, one should always have a manual pressure gauge and a manual depth gauge, and always cross-check an air-integrated dive computer with the manual air gauge every 5 minutes.

However, I haven't dived in several years.

NAND so it begins: Micron mounts head-on attack against 10K disks

Magellan

Enterprise SATA?

Most enterprise storage systems use SAS drive connections. The first 7,200 RPM SATA capacity drives used additional electronic connectors in the drive sled to connect a single-connected SATA drive to a dual connected SAS drive enclosure backplane.

There are some servers, "storage servers", and HCI appliances which use SATA SSDs. But given that the use case for these drives will not be transactional workloads (based on the low DWPD), and the likelihood they would need to be in an all-flash array with inline efficiencies to reduce drive writes, this market would have been better served with a SAS interface.

I think there may still be a place for SAS connected SSDs in all-flash arrays for QLC SSDs and high drive count use cases, where cost is more important than maximum performance. It is far less costly to build and scale out hundreds of SAS drives compared to NVMe, which requires more costly electronics and cabling.

IBM's storage biz wakes up, smells dedupe coffee

Magellan

Re: Um, the "N-series"?

IBM stopped OEMing the NetApp FAS/Data ONTAP systems back in 2014.

NYSE fed up of Violin's bum notes, threatens stock market ejection

Magellan

Reverse stock split

If they are going to do a reverse split, they should go 4-1 or 5-1 at a minimum.

Industry vet Khosla kicks Dell-EMC in the innovation crotch

Magellan

The law of big numbers

Dell/EMC will have very large revenue dollar numbers, and very large gross profit dollar numbers. They can take a small slice of those dollars and direct them at R&D, and that small slice will dwarf any start-up's venture funding.

The question is not IF Dell/EMC will spend on R&D, it is HOW it will spend. Spending on maintaining existing code and hardware, spending on iterating existing code and hardware, spending on generating new code and hardware, all must be done, but in what mix?

EMC has some good technology, a mix of home-grown and acquired. EMC seems to do best by purchasing a nascent product and adding considerable product development to it. XtremIO today is a very different product from the original version EMC acquired. YottaYotta's technology found its way into VPLEX.

Yes, there is fair cause for concern looking at Dell, especially Dell's enterprise offerings. But if all of Dell's data center business is put under EMC's management, it might actually improve the Dell side of the line card.

Tegile: Tesla, T4000 and playing the long game on cheap data storage

Magellan

Why stripe LUNs across controllers today?

The idea of striping LUNs across controllers made sense when everything was hard disk based (no hybrid, no flash caching), FC and iSCSI wire speeds were slow, and controllers had small, embedded CPUs and small amounts of RAM.

Today, all-flash arrays with 16Gb FC and 10Gb iSCSI, configured with beefy controllers sporting dozens of cores and lots of RAM, eliminate most of the benefit of striping LUNs across controllers.

Why NetApp shouldn’t buy Solidfire

Magellan

Boulder Colorado

The Denver-Boulder corridor has historically been an area with strong storage engineering talent. Both StorageTek and McDATA were headquartered there until they were acquired.

Clean-cut code-warriors behind Cohesity's hyper-converged tech

Magellan

Dell?

Cohesity looks a lot like Scale-IO. For that reason I doubt any Dell relationship will be lasting.

Cray hoists Docker containers into supercomputers

Magellan

Anyone remember SGI's "Cellular IRIX"?

Cellular IRIX's "cells" were one of the first proposed UNIX container concepts.

Cloud will kill tech sales jobs

Magellan

In theory, this happens with on-premises vendor hardware and software sales as well. Every time vendor A wins, vendors B, C, and D lose an opportunity for some period of time, and that could impact jobs.

Certainly software is stickier than hardware, so a software vendor's win will have a more significant impact on software competitors than a hardware vendor's win will have on hardware vendor competitors.

In the case of cloud, one would have to assume it is the last IT sale ever to conclude the cloud eliminates jobs. Amazon Web Services has been hiring like crazy lately, and Azure has as well. As the most important thing a salesperson brings to an employer is his or her relationships with customers, it is likely many will move into sales positions with the cloud vendors.

So, was it really the Commies that caused the early 20th Century inequality collapse?

Magellan

Careful with timing

Events and actions a decade or two before can cause outcomes only measurable much later.

For example, I have heard it observed that, prior to the collapse of the Bretton Woods system, increases in consumer spending correlated with increases in wages. After the collapse of Bretton Woods, increases in consumer spending correlate with increases in consumer debt.

It is highly possible the collapse of the Bretton Woods System in the early 1970s contributed to increasing wealth and income inequality in the late 1980s.

Similarly, the deregulation of transportation and communications in the late 1970s had certain impacts.

The changes to the income tax system were actually fairly small until the late 1980s. It is hard to comprehend how changes in income taxation on the wealthiest have a significant impact on wealth and income inequality, since they only impact one end of the wealth and income spectrum. However, it is possible there is a correlation between changes to capital gains taxation and executive compensation.

By far the biggest economic impacts in the 1980s were those due to what John Naisbitt called "Megatrends", and other business futurists predicted. The movement of low skilled industrial labor offshore, the automation of low-skilled white collar work, the decline of skilled blue-collar work, along with the rise in the dual-professional household all had impacts.

Feeding the XPoint cuckoo and finding it a place in the storage nest

Magellan

3D XPoint is a new form of RAM, not SSD

Back on June 11th, seven weeks before the Intel/Micron announcement, Stephen Breezy at Seeking Alpha pulled back the covers on the purpose of the then-unnamed 3D XPoint memory. Breezy had been watching Intel patent applications, and saw one for a hybrid DRAM/PCM memory architecture which looks exactly like the "Apache Pass" memory architecture for Intel's Skylake CPU/Purley platform, the details of which leaked in late May.

Server storage slips on robes, grabs scythe, stalks legacy SANs

Magellan

For Server SAN to displace traditional SAN, it must develop traditional SAN attributes

I see four key attributes Server SAN software must develop for it to become the primary storage archetype for the data center:

1. Server SAN software must support multiple hypervisors, as well as no hypervisor at all (e.g., containers, Hadoop, Oracle RAC, etc.).

2. Server SAN software must be or become flash aware (Write Amplification Factor = 1.0, etc.; see the sketch after this list).

3. Server SAN software must move to parity/erasure coding data protection and move away from RF2/FTT1 and RF3/FTT2.

4. Server SAN software must support storage only nodes and compute only nodes for asymmetric scaling.
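
On point 2, a rough sketch of what "flash aware" means in terms of the Write Amplification Factor (a Python sketch with hypothetical counters, purely to illustrate the metric):

    # Write Amplification Factor = bytes actually programmed to NAND / bytes the host wrote.
    # The counters below are hypothetical, purely to illustrate the metric.
    host_bytes_written = 1.0e12   # ~1 TB written by the guest VMs / applications
    nand_bytes_written = 1.3e12   # ~1.3 TB programmed after garbage collection, read-modify-write, metadata
    waf = nand_bytes_written / host_bytes_written
    print("WAF = %.2f" % waf)     # flash-aware data layouts aim to keep this near 1.0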

Pure's mass disk drive killer lays out plans for flash hegemony

Magellan

Non-Deduped Flash is as Cheap as SAS

Right now, the raw $/GB of desktop, consumer-grade 960GB MLC SSDs is about $0.50/GB, which is about the same as OEM prices for 900GB 10K RPM SAS drives.

These SSDs are not enterprise grade; they target the home PC gamer market. They probably have failure rates too high for use in storage arrays. But by next year, enterprise-grade versions of these will be readily available, and likely at a similar $/GB to this year's PC SSDs.

3D MLC NAND is starting to come out now, putting price pressure on standard MLC. Also, standard MLC has benefited from die shrinks and the manufacturing learning curve. The result is all MLC flash is cheaper this year. By next year, it will be cheaper still.

Dedupe will still matter, because it will provide competitive advantage. But it will not be needed for NAND flash to match SAS HDD.
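
To put rough numbers on that (a back-of-the-envelope Python sketch; the $0.50/GB figure is from above, the reduction ratios are hypothetical):

    # Effective $/GB after data reduction = raw $/GB / reduction ratio.
    raw_flash = 0.50   # $/GB for consumer 960GB MLC SSDs (figure from above)
    raw_sas   = 0.50   # $/GB for 900GB 10K SAS HDDs (roughly the same today)
    for ratio in (1.0, 2.0, 4.0):          # hypothetical dedupe/compression ratios
        print("%.0f:1 reduction -> flash $%.2f/GB vs SAS $%.2f/GB"
              % (ratio, raw_flash / ratio, raw_sas))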

Wikibon takes Fusion-io founder's FaME to logical conclusion

Magellan

Not Trivial, but Possible

This creates a new non-uniform memory access (NUMA) problem. Before, NUMA's non-uniformity was an issue of memory being close to some processors, and far from others--that is, a cluster of local CPU/memory groups vs. a remote cluster of CPU and memory. No memory was remote to every processor; all memory was local to at least one processor.

The way OS designers addressed this was to note the locality of data memory to the CPU addressing the data, and migrate either the data closer to the processor, or the process (or threads) closer to the physical memory containing the data. The problem was, it took a long time for operating systems and applications to catch up to this architectural change. The early NUMA systems from each particular vendor were plagued with performance and scalability problems until the operating systems and compilers were made truly NUMA-aware, and memory-intensive apps such as databases were updated to take advantage of NUMA architectures. The first SGI Origins, HP Superdomes, and Sun Fire 15Ks had NUMA-related performance problems. Also, earlier versions of VMware had issues on AMD Opteron and the IBM multi-chassis NUMA x-Series, which required careful consideration of vCPU alignment.

With the FAME concept, there are varying levels of memory access for a given processor, even after memory/processor locality is applied. It becomes purely a data migration problem, similar to prefetching into a CPU cache. The migration of data between slower and faster memory will be very much like a caching problem to be solved, with scanning, evicting/demoting of cold data and promoting of hot data, but done at an OS virtual memory/page management level.
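
A minimal Python sketch of the kind of promotion/demotion pass I mean (hypothetical structures and thresholds; real OS page management is far more involved):

    # Two tiers of "main memory": fast (DRAM-like) and slow (flash/SCM-like).
    # Each scan interval, promote hot pages and demote cold pages -- in effect a
    # caching/paging policy run by the OS virtual memory manager.
    HOT_THRESHOLD  = 8
    COLD_THRESHOLD = 1

    def rebalance(pages):
        """pages: dict of page_id -> {"tier": "fast" or "slow", "accesses": int}"""
        for page in pages.values():
            if page["tier"] == "slow" and page["accesses"] >= HOT_THRESHOLD:
                page["tier"] = "fast"        # promote hot data closer to the CPU
            elif page["tier"] == "fast" and page["accesses"] <= COLD_THRESHOLD:
                page["tier"] = "slow"        # evict/demote cold data
            page["accesses"] = 0             # reset counters for the next interval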

The closest thing to this concept which has already been released is IBM's MAX5 memory only expansion blades and chassis, which introduced two latency levels of main memory access to an individual processor.

One can expect similar performance issues early on such an architecture, but eventually operating systems, hypervisors, and database software will be adapted to take advantage of the new architectures.

Given the considerable work already done in NUMA, this should happen faster than the first phase, even though it is a tougher problem to solve.

VMware and Nutanix in vSphere support spat

Magellan

No, not quite true.

When I worked for a server vendor (who also was a VMware OEM), most of the customers I sold servers to for running ESX had VMware ELAs and rarely, if ever, purchased individual ESX per-socket licenses from me. This was also true for Microsoft Windows licenses when I worked at another server vendor (who was a Microsoft OEM).

What Chuck Hollis is saying is, when a server vendor OEMs VMware, there apparently is a cooperative support agreement that comes with being an OEM, so there is cooperative support between the server vendor and VMware regardless of from whom the customer purchased the VMware licenses (and software support contract).

This is a fair criticism, but Hollis did not communicate it well at all.

To Nutanix's point, I believe Nutanix should--like the Dells, HPs, IBMs ... er .. Lenovos, and Ciscos of the world--OEM vSphere, Hyper-V, RHEV, XenServer, OVM, and any other commercial hypervisors it supports (assuming the hypervisor vendor is willing), even if it does not sell a single license, to maximize its support for those hypervisors. It can always plan for its channel partners who are also VMware, Microsoft, etc. partners to sell the licenses separately if it makes the most sense for Nutanix, the partner, and the customer.

One has to ask: Why would Nutanix not OEM VMware? I can only imagine the reason Nutanix has not done this is because VMware's OEM agreement must be onerous. It may require the prospective OEM partner to offer VSAN and EVO:RAIL.

If anyone is a student of history, they will recall Microsoft had an onerous desktop OEM agreement which led to anti-competitive complaints.

At its core, Nutanix is a software company that sells a VMware plug-in. Yes, they also sell commodity hardware, and yes, they have broadened their portfolio to include a Hyper-V plug-in. But again, think of the ISVs who sold Windows desktop utilities, and ISVs who sold Windows desktop applications, who were destroyed as Microsoft put more functionality into Windows and Office. VMware has decided to follow that path.

Chief architect Beepy ready to take Pure’s flash somewhere new

Magellan

Maybe Scale-Out?

The limitation of Pure's design is that it builds off of the midrange, modular, scale-up, failover controller design which has dominated midrange storage systems for two decades.

This "vertical" design makes sense when the bottleneck is the disk drives, controllers represent a significant part of the cost, and scaling over time is accomplished by adding disk trays. But will flash, the controller is the bottleneck, and the flash media represents the more significant part of the cost.

Scale out, or "horizontal" designs make more sense with all-flash. XtremIO has an interesting scale-out design, where essentially each disk tray has a dedicated pair of controllers. Solidfire also has an interesting scale-out design, but does not have parity RAID, which, given the high per-GB cost of flash media, creates challenges. Kaminario may have the most interesting scalability story.

If Pure can build a vertical and horizontal scaling design similar to Kaminario, that would significantly enhance their product.

Tegile to shove flash into DIMMs, it would seem

Magellan

Sure NVDIMM is new, but PCI Flash?

The use of NVDIMMs in storage controllers is a new idea. However, PCI flash in storage controllers has been around for years in NetApp systems.

Server SANs: Don't throw the baby out with the bathwater

Magellan

Data Protection Requirements Add Up

The only option for data protection in VSAN and most other "Server SAN"/"Distributed DAS" approaches is mirroring, preferably a triple mirror. This creates a much higher raw-to-usable ratio compared to parity RAID on external arrays.

This will also drive up power, space, and cooling requirements per usable GB.
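
The overhead arithmetic, roughly (a simple Python sketch with hypothetical capacities):

    # Raw capacity needed for 100 TB usable under different protection schemes.
    usable_tb = 100
    print("2-way mirror (FTT=1/RF2):", usable_tb * 2, "TB raw")
    print("3-way mirror (FTT=2/RF3):", usable_tb * 3, "TB raw")
    print("14+2 parity/erasure coding:", round(usable_tb * 16 / 14, 1), "TB raw")
    # Mirroring burns 2-3x raw per usable TB; wide parity RAID is closer to 1.15x.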

The advantage of Server SAN/Distributed DAS is cost. EVO:RAIL requires the use of higher-$/GB SAS disks instead of cheap SATA disks, and Nutanix seems to be moving towards SAS solutions as well. However, the $/GB of server SAS HDDs is competitive with the $/GB of disk array vendors' SATA options. The same effect will likely happen with eMLC SSDs at some point. But all-flash array vendors might have the upper hand with their parity RAID, data reduction, and consumer-grade flash enabled by aggressive write management, so the era of Server SAN might be limited as most online storage moves to all-flash over the next few years.

As for the idea of a local copy of data on the server, that was the idea behind EMC's Project Lightning/VFCache/XtremCache. But with other in-memory and server flash solutions (i.e., Flash DIMMs) gaining traction, big memory servers may become the norm, so local storage may become less relevant from a performance aspect.

The optic NERVE of it: Intel declares WAR on InfiniBand

Magellan

Re: Omni Path is Infiniband

Interesting. RDMA over Converged Ethernet (RoCE) is also "InfiniBand" over a different physical interconnect.

EMC, HP blockbuster Borg: Big potential or total CATASTROPHE?

Magellan

What drove this was an activist investor who wanted EMC to divest its investment in VMware. It seems part two of this is to sell off EMC's storage business to an acquirer.

I think if a server vendor owned VMware, it would not be good for the broader market. An independent VMware is more valuable to its stockholders.

So this likely was really about EMC's storage business, post VMware divestment.

That said, while there is overlap, it really is only the overlap between 3PAR and VNX. The rest of EMC's storage portfolio would find no real overlap, especially given HP's support of its HDS OEM relationship is lukewarm.

If HP purchased EMC, it could drive an acquisition of NetApp, most likely by Cisco.

If Cisco purchased EMC's storage business, it may drive an acquisition of NetApp, but is less likely.

Oracle reveals 32-core, 10 BEEELLION-transistor SPARC M7

Magellan

Re: Nice! -- NOT!

Rock was much more than sixteen cores in four-core clusters. Originally, Sun did not call the cores in the core clusters "cores"; it referred to the core cluster itself as a core. The core cluster in Rock was four integer pipelines and one shared floating point pipeline. The four integer pipelines shared an instruction fetch unit and L1 caches. There was a dislike of calling the cluster multiple cores because, at that time, all CPU cores contained an instruction fetch unit, a dedicated L1 cache, and an FPU. It was only later, when marketing decided a high core count suggested advanced engineering, that the individual integer pipelines were called cores. This was consistent with the marketing of the various UltraSPARC T processors, which did not have a one-to-one ratio of IUs to FPUs.

Rock's advanced features included hidden hardware helper threads to prefetch data (the "Hardware Scout"), the ability to simultaneously run both branches of a code branch ("Execute Ahead"), "reverse hyperthreading" ("Scalable Simultaneous Threading", which turned the four integer pipelines and their paired floating point pipeline into a single virtual core for HPC workloads), and transactional memory.

Rock's four core clusters shared an L2 cache. There was no on-chip L3 cache.

Rock had an in-order pipeline, but could execute out of order via Execute Ahead. If I recall, each Rock integer pipeline had four hardware threads, two for executing code (allowing Execute Ahead), and two for the Hardware Scout (to feed the two execution threads). Only two threads were visible to the operating system. Rock was interesting because it used threading to gain ILP.

It appears that, of the advanced Rock features, M7 only has transactional memory, although Solaris has used a software-based version of scout threading (called Dynamic Helper Threading) since the UltraSPARC IV+ days, and this was expanded in the various UltraSPARC T series.

Ignore the pie-in-the-sky storage roadmaps. This is what's REALLY afoot

Magellan

Conspiracy Theories?

1. FCOE exists because Cisco wants you to buy a boatload of switches.

10Gb iSCSI does the same thing.

2. FCOE exists because EMC is tired of customers reusing their existing FC switches and wants you to rip and replace the entire stack

10Gb iSCSI does the same thing.

3. FCOE exists because iSCSI is significantly lower cost and storage vendors would like to keep customers locked into thinking that storage is special and thus has to be expensive, when it's not and it doesn't have to be.

10Gb iSCSI is not significantly lower cost than 10Gb FCoE. FCoE-capable NICs are standard on most blade servers today, and there is not a significant difference in the cost of a 10Gb NIC and a 10Gb CNA for rack servers. Most 10Gb switches available today are FCoE capable, and 10Gb ports on legacy switches are very expensive. That said, FCoE suffers from the same qualification matrix issues that FC suffers from, while iSCSI generally requires fewer compatibility layers.

4. FCOE exists so that NetApp can finally do a decent block-access from their filers.

Since FCoE is basically the FC protocol over 10Gb Ethernet, if NetApp FCoE block access is "decent", then so must be NetApp FC block access. And NetApp iSCSI is widely used.

Mutant array upstarts feast on EMC, NetApp's leavings

Magellan

NetApp has Deduplication

"Neither EMC nor NetApp can retrofit efficient primary data deduplication to their legacy storage. Score one to Tegile."

As the AC above notes, yes, NetApp did "retrofit" efficient primary data deduplication to its Data ONTAP storage operating system with Data ONTAP release 7.2 in 2007.

Also, because NetApp deduplication leverages block checksums which were implemented in Data ONTAP's WAFL filesystem long ago, there was no "retrofit" required. All that was needed was to leverage the existing checksums with a deduplication software module.
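
Conceptually, a post-process dedupe pass is simple; here is a rough Python sketch (illustrative only, not NetApp's actual implementation):

    import hashlib

    def find_duplicates(blocks):
        """blocks: dict of block_id -> bytes. Group blocks by checksum, then byte-compare
        matches before collapsing pointers -- a sketch only, not NetApp's implementation."""
        seen = {}           # checksum -> first block_id seen with that checksum
        duplicates = []     # (duplicate_block_id, original_block_id) pairs
        for block_id, data in blocks.items():
            key = hashlib.sha256(data).digest()   # stand-in for the existing block checksum
            if key in seen and blocks[seen[key]] == data:   # verify; never trust the hash alone
                duplicates.append((block_id, seen[key]))
            else:
                seen[key] = block_id
        return duplicates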

IBM skips BladeCenter chassis with Power7+ rollout

Magellan

BladeCenter chassis are out of gas

The BladeCenter chassis, both the H and E, are tapped out on power, cooling, and space. The current Intel blades do not support the high-wattage CPUs, and offer limited memory.

EMC flies in Azure bods for TOP SECRET 'data plane' project

Magellan

Sounds like the return of Invista

Out-of-band storage virtualization was Invista's domain. It lost out to IBM's in-band SVC storage virtualization.

However, VMware ESX 6 vVols will provide a new storage abstraction. Perhaps Bourne will have hooks into ESX 6 to provide more automation.

Is EMC really jealous of these nubile storage upstarts?

Magellan

Is XtremIO all it was cracked up to be?

Based on Nimbus Data's blog post on the XtremIO architecture, it appears XtremIO is very similar at a hardware level to Isilon. That is, nodes containing considerable compute and disk drives, interconnected with InfiniBand.

My guess is EMC saw efficiencies of logistics and product development at a hardware level, with the prospect of integration at a software level. This likely encouraged EMC to go for the less expensive XtremIO over the more expensive Violin Memory, or some of the other less expensive all-flash players.

Given XtremIO is little more than an Isilon-like hardware design with SSDs and block-only access, any delays are either due to changes in the hardware platform for the sake of logistics efficiencies, or, if EMC has kept XtremIO's original SuperMicro hardware, the delays are in the software, not the hardware.

If the latter, perhaps XtremIO's software was not far enough along, and it is taking EMC more time to get it enterprise ready.

That said, the XtremIO solution seems to throw an awful lot of hardware at the problem--four Intel Xeon CPUs and two InfiniBand HCAs per 16 SSDs--resulting in a lot of space, power, and cooling per GB.

Fox turns LightSquared political

Magellan

Register shilling for Media Matters?

For crying out loud. This story was being pushed by Tina Brown's "Daily Beast", a left-wing web news site, and by Arianna Huffington's "Huffington Post", another left-wing web site.

Somehow, I doubt Media Matters took notice of America's political left's disgust at the political favors being bought in Washington D.C.

EMC morphs storage arrays into servers

Magellan

EMC Symmetrix = World's most expensive x86 server

I think the idea of dedicated servers with large locally attached storage makes sense for certain applications. But putting a general purpose app on the world's most expensive RAID controller probably does not make sense.

Poulson Itaniums hit 'Replay' for reliability

Magellan

Intel: Please bring Instruction Replay to Xeon EX

That is all.

Thanks.

Oracle's Sparc T4 chip: Will you pay Larry's premium?

Magellan

Looks promising

The stronger single-threaded performance should help decision support style workloads, as well as heavier duty jobs in the middleware space, such as supply chain calculations.

Also, with legacy Solaris container support on Solaris 11, those old Solaris applications should run well.

Directing Fibre Channel storage traffic over Ethernet

Magellan

FCoE Standards

T11's FCoE standard, FC-BB-5, had its technical draft completed in October 2008, was submitted for publication in June 2009, and was published in May 2010. FC-BB-5 includes the FCoE Ethernet frame protocol and the FIP protocol (required for multi-hop). Multi-hop FCoE has been a published standard for over one year, and has been available in final form for two years.

The only IEEE Ethernet standards required to support FCoE traffic are IEEE 802.3-2008 PAUSE and IEEE 802.1Qbb Priority Flow Control. IEEE 802.3-2008 is a published standard; IEEE 802.1Qbb is complete and was submitted for publication a year ago (July 2010).

One other Data Center Bridging standard, IEEE 802.1Qaz, which includes Enhanced Transmission Selection (ETS) and Data Center Bridging eXchange (DCBX), is complete and was submitted for publication in November 2010.

IETF TRILL is not part of the IEEE, is not part of the IEEE's Data Center Bridging Task Group, and is not required for FCoE or multi-hop FCoE.

Oracle Solaris 11 to abandon elderly servers

Magellan

US-IV+ and T1 are similar age

Odd they would not at least support US-IV+ (Panther). It was released in late 2005 like the original UltraSPARC T1 (Niagara).

HP dubs Oracle 'bitter antagonist' in Itanic spat

Magellan

HP should port NSK and VMS to x86

HP-UX used to run on PA-RISC, and was ported to Itanium.

NSK used to run on MIPS, and was ported to Itanium. And before that NSK ran on multiple proprietary CPUs, and was ported to MIPS.

VMS used to run on Alpha, and was ported to Itanium. Before that, VMS ran on VAX, and was ported to Alpha.

There is no reason these operating systems cannot be ported to x86, especially the Westmere-EX processor.

Also, there is considerable hardware design sharing in HP. The HP C7000 Blade System was based on technology developed by Compaq/Tandem. The Superdome 2 is a hybrid of the C7000 and the original HP/Convex NUMA interconnect. With the Itanium 9300 and Intel 7500/E7 sharing Intel QPI technology, the hardware would not be a barrier. Notice how SGI was able to design the Altix UV to use Nehalem-EX processors.

I don't see much reason to port HP-UX to x86, but certainly NSK and VMS could be ported. Then HP could work with Red Hat to qualify RHEL 6.x on a Westmere based Superdome 2.

Cisco refuses to deny it will sell off Linksys

Magellan

WebEx sale would make little sense

Cisco had a web conferencing solution, called MeetingPlace, prior to the WebEx acquisition. MeetingPlace was more limited compared to the emerging hosted solutions at that time (Microsoft LiveMeeting, WebEx, GoToMeeting, and WebDialogs). It seemed Cisco likely acquired WebEx so it could keep its MeetingPlace customers from going to hosted solutions.

If Cisco gave up its many WebEx customers to someone else, they could lose some control of their VoIP installed base.

Dear US gov: Stay the hell out of Silicon Valley

Magellan

Asay is right and wrong

Right on our need to keep Federal meddling out of start-ups. Yes, the Feds have been involved in Silicon Valley inventions, but for Federal benefit, not just for the sake of doing it. The best thing the Feds did for Silicon Valley in the 1970s was deregulate the telecommunications industry, which helped drive the innovation that provided a large consumer market for the Internet and mobile communications.

Where Matt is wrong is in quoting Morozov's article. I just read this yesterday, having picked up the FP magazine in an airport. I laughed out loud at how wrong Morozov got it. Granted, this article was written in late 2010, and published in early January. But reading it on 24 January was quite entertaining. Morozov completely jumped the shark on this. Egypt was the Facebook revolution.

I am amazed Asay would use this article as an example. It appears the U.S. government's engagement with youth group leaders (started under the Bush administration), and the drive for freedom of the Internet accomplished in Egypt what took an army to accomplish in Iraq.

What's Oracle got up its sleeve?

Magellan

SAM-ZFS?

Sun bought the company LSC in 2001. LSC had two products, Storage and Archive Manager File System (SAM-FS) and the parallel SAN filesystem Quick File System (QFS).

SAM-FS was interesting because it was a hierarchical storage manager which functioned as a local filesystem, not as a separate product. Since SAM could integrate with other filesystems (SAM-QFS was also offered to the HPC market), there was talk of integrating Sun's ZFS with SAM to create SAM-ZFS.

Combining ZFS' ability to leverage flash nearline storage, and also clone and snapshot midline disk storage, with SAM's archiving to SATA and tape, and integrating this into a storage appliance, one could create a storage system which combines four tiers of storage (flash, nearline disk, midline disk, and tape) and do it in an intelligent and automated manner.

This, combined with Exadata for processing the data could be very useful for large data archives, but it would not solve the disaster recovery problem of off-siting data.

Why is IBM declaring war on Cisco?

Magellan

This is not about Cisco or Juniper

This is IBM's response to a trend of the server access network edge becoming integrated with the compute platform. This trend is best demonstrated by HP with Virtual Connect. HP has taken much IBM blade server share by integrating networking into the chassis in a way which adds unique, differentiated value compared to third-party switches.

This has become a bigger issue with Cisco releasing UCS, which also provides a compute platform with integrated server access networking and unique differentiation.

BNT makes sense, because BNT already provides products for BladeCenter and iDataPlex and BNT is acquirable.

This will allow IBM to build an integrated blade solution which is more competitive with HP C-Class and Cisco UCS.

Xsigo rejiggers virtual I/O director for Ethernet

Magellan

What are the protocols?

The InfiniBand version of Xsigo used proprietary protocols for server IP and storage access (Xsigo vNIC and Xsigo vHBA). They submitted these protocols to the Open Fabrics organization, but I do not recall any mention of the Xsigo protocols in the latest Open Fabrics Sonoma Conference.

This does not appear to be an FCoE solution. It appears to be an Ethernet iSCSI front-end kludge to an InfiniBand bridge. So I assume storage goes from GigE iSCSI, to an encapsulated proprietary Xsigo InfiniBand storage protocol, back to native Fibre Channel. This could be done via a standardized iSER (iSCSI over RDMA) protocol. IP traffic could either go from native Ethernet to an encapsulated proprietary Xsigo InfiniBand IP protocol, or via a standardized IPoIB (IP over InfiniBand) protocol.

I assume Xsigo would use its proprietary protocols, as its central management of vNICs and vHBAs are what differentiates Xsigo's solution.

Either way, if indeed it is iSCSI to the host, this is little more than an iSCSI to FC router, with some added management intelligence. But the iSCSI router industry died a decade ago, once native iSCSI storage came along.

Oracle dumping HPC: Genius or foolhardy?

Magellan

If HPC was really a good investment ...

SGI would be the world's leader in computing, Convex would have bought HP, not the other way around, and DEC's Alpha would have ruled the processor world.

While it is correct to see the HPC influence on current Oracle products (Exadata is basically an InfiniBand cluster of x86 computers, combined with optimized processing nodes), Oracle does not need to be in HPC to do that R&D (they will do it anyway), and it is stupid to try to monetize R&D over the money losing HPC business (see SGI).

AIX 7.1 moves forward to Power7 iron

Magellan

Solaris' binary compatibility was the Gold Standard

Sun went through so much pain with the Motorola to SPARC transition, and then even more so with the SunOS 4 BSD to Solaris 2 SVR4 transition, there was a "Never Again" declared within Sun's Solaris group. This is what led first to Sun's famous Solaris Application Guarantee, which was actually imitated (the sincerest form of flattery) by IBM a few years ago.

Solaris binary compatibility was so good that even drivers worked across versions. Customers got so conditioned to things just working that some were quite upset when they found their 32-bit Solaris 2.2 Adaptec SCSI card driver would not work with the 64-bit Solaris 7 kernel. That was a driver bit-width incompatibility, as the same driver worked fine if one booted the 32-bit Solaris 7 kernel.

IBM had a big problem when they went from the 64-bit version of AIX 4.3 to AIX 5.1. These two versions of AIX were completely incompatible, and it meant, for example, you could not move your Oracle 8i database running on a p680 on AIX 4.3 over to a p690 on AIX 5.1 without also moving from Oracle 8i to Oracle 9i. That fiasco is what caused IBM to get its act together on compatibility.

IBM has clearly innovated tremendously in AIX over the last decade, but they have also copied madly from Sun. Those less transparent Solaris commands? Now available on AIX. Containers ... er, Workload Partitions? Now available on AIX. Branded Containers (i.e., Solaris 8 and Solaris 9 compatible containers in Solaris 10)? Now available on AIX.

IBM's Power 780 pushes the value envelope

Magellan

Availability dates on TPC reports

The October 13th date is exactly six months after the benchmark report date of April 13th. This is standard procedure for most vendors. Six months is the limit on hardware/software availability. Even if IBM projected all components to be available in four months, it will put a date six months out on the report; that way, if product availability slips, they do not have to pull the benchmark.

I know vendors who would challenge competitor's benchmark reports when a part did not ship by the availability date.

Oracle tunes Solaris for Intel's big Xeons

Magellan

History of Hot Add and Remove with Solaris

This SPARC/Solaris capability goes back to 1993 with Cray's Business System Division (BSD) Cray Superserver 6400 (or CS6400), the SuperSPARC predecessor of the E10000, and Cray's OEM version of Solaris 2.3. The CS6400 had the ability to hot-remove and hot-add CPU/memory boards from a running system. This was not a mature capability, as it required briefly suspending the operating system during the actual physical removal, and some applications could not tolerate the OS suspension. The E10K resolved that, removing the need for a pause. The pause-based hot swap was brought to the UltraSPARC-II E3500-E6500 with Solaris 7; pauseless hot swap on midrange servers came out with the UltraSPARC-III F3800-F6800.

AMD draws x64 battle lines with 'Magny-Cours'

Magellan

The real comparison will be to Westmere

I wonder how a six-core, twelve thread Westmere at a similar price and/or TDP will compare to a native 12-core Magny Cours.

SGI punts baby Xeon blade box

Magellan

OEM of Intel Modular Server

This is just Intel's OEM blade design (Intel Modular Server) with an SGI logo.

SMB does not seem to be one of SGI's target markets, but there could be a play in small scale HPC.

Full-body scanner blind to bomb parts

Magellan

"Puffer" Machines much better option

A Trace Portal Machine (aka "Puffer Machine") probably would have easily detected the powdered PETN in Umar Farouk Abdulmutallab's underpants. These puffer machines are where aviation security and safety efforts should be focused.

Scale-out SVC on the way from IBM?

Magellan

Not much value to putting FCoE on SVC

SVC creates virtual volumes on Fibre Channel attached storage arrays. SVC does not function as a protocol converter or storage router. It typically connects to the Fibre Channel SAN fabric close to the storage arrays, often on the same Fibre Channel director-class switches the storage arrays are connected to.

Unless SVC is virtualizing native FCoE arrays, there is no need to put FCoE attachment on the SVC.

US Navy aims to make jetfuel from seawater uranium

Magellan

Methane is not a problem

Methane can be Fischer-Tropsched into jet fuel.
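
Roughly, the chemistry (textbook reactions, simplified): reform the methane to syngas, then polymerize the syngas into longer alkanes, of which the jet-fuel range is roughly C9-C16:

    CH4 + H2O  ->  CO + 3 H2                      (steam reforming of methane to syngas)
    n CO + (2n+1) H2  ->  CnH(2n+2) + n H2O       (Fischer-Tropsch synthesis to alkanes)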
