Samsung has worked out a way to stack memory chips vertically, increasing memory density and decreasing power needs. It says next-generation servers are going to have fewer memory sockets – 30 per cent fewer – implying that unless you can stick more memory in a socket, you're going to have less of the stuff. What it's done is …
Surely the main advantage of copper interconnects is that they're cheaper than doubling the number of wire bonds. The chips are bussed on the DIMM anyway, so I can't see how shortening the signal path by a few mm is going to make a 40% power difference all by itself. There's going to be a fraction of a pF less capacitance per bus line.
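A quick back-of-envelope sketch of the commenter's point, using the standard dynamic switching power formula P = α·C·V²·f. Every figure here (voltage, toggle rate, line capacitance, capacitance saved) is an assumed, illustrative value, not anything from the article:

```python
# Dynamic switching power of one bus line: P = alpha * C * V^2 * f,
# where alpha is the activity factor (fraction of cycles the line toggles).
def switching_power(c_farads, v_volts, f_hz, alpha=0.5):
    """Average dynamic power (watts) of one signal line."""
    return alpha * c_farads * v_volts**2 * f_hz

V = 1.2            # DDR4-class I/O voltage (assumed)
F = 1.2e9          # toggle rate in Hz (assumed)
C_LINE = 5e-12     # total capacitance of one bus line, ~5 pF (assumed)
C_SAVED = 0.5e-12  # capacitance removed by a few mm shorter path (assumed)

p_before = switching_power(C_LINE, V, F)
p_after = switching_power(C_LINE - C_SAVED, V, F)
saving = 1 - p_after / p_before
print(f"I/O power saving per line: {saving:.0%}")
```

Since P scales linearly with C, the saving is just C_SAVED/C_LINE, about 10% of the per-line I/O power under these assumptions, and I/O is only one slice of total module power, which is the commenter's point that wire length alone can't explain a 40% drop.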
Are modern SDRAMs even wire-bonded any more, or are they more likely to be flip-chip? I guess if you're stacking them, the top one would have to have wires.
Okay, I'm not well versed in this, but increasing the amount of memory on a DIMM will increase the bandwidth needed. Is there enough bandwidth available to allow this?
There are plenty of applications that would benefit from keeping more of their dataset in RAM and that aren't bandwidth-limited. Also, if you replace two chips with one, you need shorter wires on the motherboard to keep them all joined to the processor. That lets you clock the bus faster, and there's your extra bandwidth.
But if we're only looking at stacking two units of memory then the gains are admittedly marginal. It will be interesting to see if they can scale the technique.