Samsung has worked out a way to stack memory chips vertically, increasing memory density and decreasing power needs. It says next-generation servers are going to have fewer memory sockets – 30 per cent fewer – implying that unless you can stick more memory in a socket, you're going to have less of the stuff. What it's done is …
Surely the main advantage of copper interconnects is that they're cheaper than doubling the number of wire bonds. The chips are bussed on the DIMM anyway, so I can't see how shortening the signal path by a few mm is going to make a 40% power difference all by itself. That's only fractions of a puff (pF) less per bus line.
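For what it's worth, the sums back this up. Dynamic power per line is roughly P = C·V²·f, so you can put an upper bound on the saving from a shorter trace. Every figure below is an illustrative assumption (trace capacitance per mm, toggle rate, length saved), not anything from Samsung:

```python
# Back-of-envelope check: how much dynamic power does shaving a few mm
# off a bus trace actually save? All figures are assumed for illustration.

C_PER_MM = 0.1e-12    # ~0.1 pF of trace capacitance per mm (assumed)
LENGTH_SAVED_MM = 3   # a few mm shorter signal path (assumed)
V = 1.5               # DDR3 signalling voltage, volts
F_TOGGLE = 800e6      # effective toggle rate per line, Hz (assumed)
BUS_LINES = 64        # data lines on the bus

# P = C * V^2 * f per line, summed over the bus
c_saved = C_PER_MM * LENGTH_SAVED_MM
p_saved = c_saved * V**2 * F_TOGGLE * BUS_LINES

print(f"capacitance saved per line: {c_saved * 1e12:.2f} pF")
print(f"total dynamic power saved:  {p_saved * 1e3:.1f} mW")
```

That works out to a few tens of milliwatts across the whole bus, against a DIMM that draws watts, so shorter bond paths alone can't plausibly account for a 40% drop.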
Are modern SDRAMs even wire bonded any more, or are they more likely to be flip-chip? I guess if you're stacking them, the top one would still have to have wires.
Okay, I'm not well versed in this, but increasing the amount of memory on a DIMM will increase the bandwidth needed. Is there enough bandwidth available to allow this?
There are plenty of applications that would benefit from keeping more of their dataset in RAM and that aren't bandwidth-limited. Also, if you replace two chips with one, you need shorter wires on the motherboard to keep them all joined to the processor. That lets you clock the bus faster, and there's your extra bandwidth.
But if we're only looking at stacking two units of memory then the gains are admittedly marginal. It will be interesting to see if they can scale the technique.
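The clock-for-bandwidth trade above is linear: on a fixed 64-bit bus, peak bandwidth scales directly with transfer rate. A quick sketch (the DDR3 speed grades are real; treating the whole gain as available headroom is the assumption):

```python
# Sketch of the bandwidth argument: a 64-bit DDR bus's peak bandwidth
# scales directly with its transfer rate, so a faster bus from shorter
# traces buys proportionally more bandwidth.

BUS_WIDTH_BITS = 64

def bandwidth_gbs(transfer_rate_mtps):
    """Peak bandwidth in GB/s for a 64-bit bus at the given MT/s."""
    return transfer_rate_mtps * 1e6 * BUS_WIDTH_BITS / 8 / 1e9

# DDR3-1333 vs DDR3-1600: a ~20% rate bump gives ~20% more bandwidth
print(f"DDR3-1333: {bandwidth_gbs(1333):.1f} GB/s")  # ~10.7 GB/s
print(f"DDR3-1600: {bandwidth_gbs(1600):.1f} GB/s")  # 12.8 GB/s
```

Whether shaving a stack's worth of trace length actually lets you move up a speed grade is another question, but the direction of the argument holds.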