It's not the size but how you use it that matters
The argument that there is no long-term need for 2.5-inch drives is interesting, but one could just as easily argue that there is no long-term need for 3.5-inch drives.
There is no inherent technical difference between a 3.5-inch and a 2.5-inch disk drive: they use the same media and mechanical design, and this shows in the fact that performance numbers are broadly comparable between the two form factors at similar rotational speeds. The biggest difference is power. A 2.5-inch platter has less surface area and therefore less physical mass that needs to be moved, and the smaller form factor generally holds fewer platters, so it quickly becomes obvious why a 2.5-inch drive consumes about half the power of a 3.5-inch one.
Right now there is a 3x to 4x capacity gap between a 3.5-inch disk drive and a 2.5-inch drive (2TB vs 600GB). But that ignores the fact that the smaller form factor lets you fit twice the number of 2.5-inch drives in the same space as a 3.5-inch drive. That brings the comparison closer to 1.2TB versus 2TB in the same amount of rack space, with an overall power and cooling footprint that is similar, but with idle and active power/cooling ratings per drive that are lower for 2.5-inch than for 3.5-inch.
And with 2x spindles you get 2x interface connections, 2x queues, 2x read/write mechanisms, and potentially 2x the IOPS and MB/s.
Smaller drives also mean that when a disk drive does fail there is less data to recover, so rebuild times are shorter. These relationships should hold roughly in proportion with this year's release of 900GB 2.5-inch drives and 3TB 3.5-inch drives.
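To make the rack-space arithmetic concrete, here is a rough back-of-the-envelope sketch comparing one 3.5-inch drive against the two 2.5-inch drives that would occupy roughly the same space. The capacity figures come from the comparison above; the power, IOPS and rebuild-rate numbers are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope comparison: one 3.5-inch drive vs the two 2.5-inch
# drives that fit in roughly the same rack space. All figures are assumed
# for illustration only.

drive_35 = {"capacity_gb": 2000, "power_w": 10.0, "iops": 150}  # assumed 3.5-inch drive
drive_25 = {"capacity_gb": 600,  "power_w": 5.0,  "iops": 150}  # assumed 2.5-inch drive

slots_per_35_bay = 2  # two 2.5-inch drives occupy about one 3.5-inch bay

# Aggregate the two 2.5-inch drives so they can be compared per bay.
per_bay_25 = {
    "capacity_gb": drive_25["capacity_gb"] * slots_per_35_bay,  # 1.2TB vs 2TB
    "power_w":     drive_25["power_w"] * slots_per_35_bay,      # similar power envelope
    "iops":        drive_25["iops"] * slots_per_35_bay,         # 2x spindles -> ~2x IOPS
}

for key in ("capacity_gb", "power_w", "iops"):
    print(f"{key:12s} one 3.5-inch: {drive_35[key]:8.1f}   two 2.5-inch: {per_bay_25[key]:8.1f}")

# Smaller drives also rebuild faster, assuming a similar sustained rebuild rate.
rebuild_rate_gb_per_hr = 200  # assumed rebuild rate
print("rebuild hours, 3.5-inch:", drive_35["capacity_gb"] / rebuild_rate_gb_per_hr)
print("rebuild hours, 2.5-inch:", drive_25["capacity_gb"] / rebuild_rate_gb_per_hr)
```

Under these assumptions the two 2.5-inch drives give up some raw capacity per bay but double the spindle count and roughly halve the per-drive rebuild window, which is the trade-off the argument above turns on.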
One could reasonably argue that, outside of large media repositories and virtual tape libraries, the larger 3.5-inch disk drives may no longer be relevant.
An interesting comment is that HDDs fail more often than SSDs because they are mechanical in nature; I would say this has not yet been definitively shown. A recent story posted here on The Register reported that analysis of one reseller's return rates showed SSD return rates were about the same as those of HDDs. Part of this is of course attributable to the relative maturity of the two technologies, and SSD reliability is bound to improve.
My personal experience with HDD failures, especially in enterprise-level storage arrays, is that the disk the array has failed is frequently still quite serviceable, and I have redeployed many of them to other, less demanding situations without any issues. I suspect the main reason for the high failure rate in storage arrays is that, in a RAID stripe or volume group, one slow drive can affect the performance of the other drives, and storage vendors will fail these drives for performance-balancing reasons.
Another point to keep in mind with SSDs is that, due to their considerably higher cost, there is currently a tendency either to deploy them as single disks with no RAID level assigned, making them effectively RAID-0, or to reserve them for the most demanding, business-critical environments. This means that when an SSD does fail it is likely to have a higher-than-average impact on operations, so proper expectations and recovery plans need to be in place when using SSDs.
This is actually a pretty good time to be a storage administrator, as there are plenty of choices and options to meet all kinds of requirements. The hard part is determining the capacity, performance and cost requirements so as to make an optimal decision about what to purchase and where to deploy it.