Re: Those 10" runs
I'm still sceptical; all the other consumer-level standards seem to have stalled at 10 GT/s or less (USB: 5 GT/s, 10 GT/s in version 3.1; SATA: 6 GT/s; GDDR5: 7 GT/s; DisplayPort: 5.4 GT/s, 8.1 GT/s in the upcoming version 1.3; HDMI: 6 GT/s; Thunderbolt: 10 GT/s). Across the board they are fighting physics, and physics seems to be winning.
SATA now has a standard where a single cable fits into two plugs, doubling the data rate without having to push the transfer rate any higher. They have also partially outsourced the electrical side of the specification to PCIe, upgrading from 6 GT/s to 8 GT/s without having to admit that they couldn't make it faster on their own.
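To put numbers on that doubling trick, here's a quick back-of-the-envelope sketch (my own arithmetic; the encoding overheads are the well-known 8b/10b for SATA 3 and 128b/130b for PCIe 3.0):

```python
def payload_gbps(gt_per_s, data_bits, total_bits):
    """Usable payload rate after line-encoding overhead is removed."""
    return gt_per_s * data_bits / total_bits

# SATA 3: 6 GT/s with 8b/10b encoding -> 4.8 Gbit/s of payload.
sata3 = payload_gbps(6, 8, 10)

# Two PCIe 3.0 lanes: 8 GT/s each with 128b/130b encoding.
sata_express = 2 * payload_gbps(8, 128, 130)

print(f"single SATA 3 link: {sata3:.2f} Gbit/s")
print(f"two PCIe 3.0 lanes: {sata_express:.2f} Gbit/s")
```

So two borrowed PCIe lanes more than triple the payload rate while each individual lane only goes from 6 to 8 GT/s.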
USB used to increase the transfer rate roughly tenfold with each release. Realising there was no way to hit 50 GT/s, they opted for merely doubling last time around, to 10 GT/s. A year after the standard was released there are still no USB 3.1 products for sale, so it remains to be seen whether it will actually be as cheap as promised.
DisplayPort shifted from doubling the transfer rate in the previous version to providing only a 50% increase in version 1.3; 10.8 GT/s seemingly wasn't deemed a realistic target.
Thunderbolt, the only 10 GT/s product actually available, turned out too expensive to be used much by anyone but Apple. As far as I understand, that 10 GT/s rate is a big part of the reason.
GDDR5, while standardised for up to 7 GT/s, has seen most use in the 4 to 6 GT/s range. High-speed memory controllers are so expensive that there is often better economy in more, slower lanes, despite the markedly increased number of chip pins and corresponding board complexity.
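The lane trade-off is easy to illustrate with rough numbers (the bus widths and rates below are my own illustrative picks, not taken from any specific card):

```python
def bus_bandwidth_gbps(width_bits, gt_per_s):
    """Aggregate raw bandwidth of a memory bus: width x per-pin rate."""
    return width_bits * gt_per_s

# A 384-bit bus at a relaxed 5 GT/s delivers the same raw bandwidth
# as a 256-bit bus pushed to 7.5 GT/s -- but with far cheaper per-pin
# signalling, at the cost of 128 extra data pins and more board routing.
wide_slow = bus_bandwidth_gbps(384, 5.0)
narrow_fast = bus_bandwidth_gbps(256, 7.5)

print(wide_slow, narrow_fast)  # identical bandwidth, different cost split
```

Whether the extra pins or the faster controller wins depends on volume and packaging, which is presumably why most shipping parts sit in the middle of the range.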
All this combined leads to the question: how the **** can anyone think they can just jump to 16 GT/s without incurring a prohibitively high cost?