Bite-size number crunching
This actually makes a lot of sense.
A dual-CPU, quad-core Xeon workstation is perfectly capable of running serious parallel simulations, and usually does so (at least in my department) running a flavour of Linux and OpenMPI. Even on such a small system, CFD programs (for example) can spit out terabytes of data within a matter of days. Moreover, most scientific calculations involve the processors going flat out at near-100% usage for days or weeks on end, something many workstations aren't designed for, so heat issues are not uncommon.
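For context, the sort of workload I mean looks, in miniature, like the sketch below: a domain-decomposed calculation where each rank does its slice and the results are reduced at the end. This is a hedged illustration using mpi4py on top of OpenMPI (the problem, sample count, and script name are made up for the example):

```python
# Minimal MPI sketch: each rank integrates its slice of a domain,
# then partial results are reduced to rank 0.
# Assumes mpi4py is installed against the system OpenMPI.
# Run with e.g.: mpirun -np 8 python integrate.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000        # total sample points (illustrative)
chunk = N // size     # each rank takes an equal slice

# Midpoint-rule integration of f(x) = 4/(1+x^2) over [0,1] -> pi
x = (np.arange(rank * chunk, (rank + 1) * chunk) + 0.5) / N
local = np.sum(4.0 / (1.0 + x * x)) / N

total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {total:.10f} on {size} ranks")
```

Scale the work per rank up by a few orders of magnitude and add file output every timestep, and you have the terabytes-in-days situation described above.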
Going up a step usually involves purchasing CPU time on a mainframe, which means giving up control of the system. Data transfer also becomes a real problem, even across national academic networks.
A 32-core server sitting in the corner, if reasonably priced, would pique my interest. If I can transfer my code to it and have it all running smoothly within a day - i.e., no fiddling with custom builds of BLAS, LAPACK, OpenMPI etc. under Ubuntu - then it might just be worth it.
I just hope to God they haven't shipped the reference BLAS as the default....
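(A quick way to sanity-check that, assuming NumPy is linked against the system BLAS - the timing thresholds here are rough guesses, not vendor numbers:

```python
# Quick-and-dirty check for reference BLAS: time a large matmul.
# A tuned BLAS (OpenBLAS, MKL, BLIS) should finish well under a
# second on modern hardware; the reference BLAS takes far longer.
import time
import numpy as np

np.show_config()  # prints which BLAS/LAPACK NumPy was built against

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b  # dispatched to the linked BLAS's dgemm
dt = time.perf_counter() - t0

gflops = 2 * n**3 / dt / 1e9
print(f"{n}x{n} matmul: {dt:.2f} s (~{gflops:.1f} GFLOP/s)")
```

If the matmul takes tens of seconds, someone left the reference BLAS in.)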