Clustering is Painful
As of today, VMware ESX has made "high availability" on commodity hardware a no-brainer in terms of configuration, compatibility, supportability and cost: set up the hardware for HA (multiple hosts, shared storage, redundant components) and every VM you deploy becomes instantly "highly available", with VMs automatically restarting on another host in the event of a hardware failure.
If you wanted even higher availability, you had to resort to clustering at the application level with solutions like Microsoft Cluster Service or Symantec's Veritas Cluster Server. Those in the know are well aware of how painfully complex these solutions are to configure and run successfully, yet for certain applications and requirements there was no alternative. For workloads that aren't cluster-aware, there wasn't even a solution.
The introduction of "Fault Tolerance" in ESX 4 essentially bridges that gap, delivering that extra level of availability without having to jump through hoops or fork out a tonne of $$$ to get it. The marketing suggests it's a mere matter of ticking a checkbox to make a VM "fault tolerant". Woohoo!
I can appreciate that the initial support is for single-vCPU virtual machines only; many of VMware's "v1.0 features" are somewhat "limited" or "experimental" while the kinks get ironed out as more and more customers deploy them in their environments. You can certainly imagine the extra network bandwidth required to give two VMs a communication channel to run in lock-step continuously. As 10GigE proliferates and becomes commonplace, I reckon the barriers will start to fall. In the meantime, it makes sense to go after the "low hanging fruit" before enabling such functionality for your mission-critical stack.