> (in over 11,500 branches, each of which might have two, three even four active terminals at any one time)
Based on a datacentre system I designed and installed circa 2004, that's:
A top-end Z-Series fronted by four mid-range "Unix"(*) boxes; double that for failover, and add a third set as a spare/tertiary failover configuration in a different data centre.
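As a rough sanity check on that sizing (my illustrative numbers, not anything from the quoted figures beyond the branch/terminal counts):

```python
# Back-of-envelope load estimate for the branch estate quoted above.
# Everything except the branch and terminal counts is an assumption.

branches = 11_500
terminals_per_branch = 3           # "two, three, even four" -> take the middle
active_terminals = branches * terminals_per_branch

peak_utilisation = 0.6             # assume ~60% of terminals busy at peak
seconds_per_txn = 30               # assume a counter transaction every ~30s

peak_tps = active_terminals * peak_utilisation / seconds_per_txn
print(f"Active terminals: {active_terminals:,}")        # 34,500
print(f"Estimated peak load: {peak_tps:,.0f} txn/s")    # ~690 txn/s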
Although, with the number of back-end integrations required, I would probably replace/supplement that Z-Series with a two-tier configuration of high-end Unix servers, making it easier to add/remove bespoke servers for specific product/service lines.
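A minimal sketch of what I mean by that two-tier arrangement: a middle tier that just routes by product/service line, so bespoke back ends can be added or retired without touching the branch-facing tier (the service names and hosts are invented for illustration):

```python
# Hypothetical middle-tier routing table: product/service line -> back-end pool.
# Adding or retiring a bespoke back end is a table change, not a front-end change.
BACKENDS = {
    "banking":  ["bank-a1", "bank-a2"],      # invented host names
    "mails":    ["mails-b1"],
    "bill-pay": ["billpay-c1", "billpay-c2"],
}

def route(service_line: str, txn_id: int) -> str:
    """Pick a back-end host for a transaction, spreading load round-robin."""
    pool = BACKENDS[service_line]
    return pool[txn_id % len(pool)]

print(route("banking", 42))   # -> bank-a1
```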
Obviously the in-branch server will also be a reasonable "Unix"(*) box handling the terminals locally, so most of the traffic back to the data centre will be transactions rather than terminal sessions.
Alternatively, you could go the web server approach, which would require a larger central server infrastructure. In that case the in-branch server is minimised, but with the limitation that a loss of service means the counter closing, rather than just being unable to process particular services.
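The trade-off in that last paragraph is essentially who holds state when the WAN link is down. A sketch of the fat-branch-server fallback I'm assuming (the thin web-client design has no equivalent, so the counter just closes):

```python
import json
import queue

# Hypothetical in-branch store-and-forward: if the data-centre link is down,
# transactions queue locally and the counter keeps serving customers.
pending: "queue.Queue[str]" = queue.Queue()

def submit(txn: dict, link_up: bool) -> str:
    payload = json.dumps(txn)
    if link_up:
        # send_to_datacentre(payload)   # real transport omitted in this sketch
        return "processed"
    pending.put(payload)                # replay later when the link returns
    return "queued-offline"

print(submit({"service": "bill-pay", "amount": 42.50}, link_up=False))
# -> queued-offline
```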
I suspect cloud providers would push this load onto their rebranded version of the IBM cloud.
The issue in development and testing is that, at this scale, you don't use the apps and dev tools out-of-the-box to build the production system, so many of the developers will have had no experience of this style of development.
I remember from that project the main DB application provider saying stuff could be done overnight, until we confirmed they had only used their toolset on DBs up to circa 400GB; we were going to be using them on a 4TB DB…
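To make the 400GB-vs-4TB point concrete (illustrative numbers, assuming the batch work scales roughly linearly with data volume, which is optimistic for DB jobs):

```python
# If a nightly job was only ever proven at ~400 GB, scaling to 4 TB is a 10x
# jump. Even assuming perfectly linear scaling with data volume:
proven_gb, target_gb = 400, 4_000
proven_runtime_hours = 0.75           # assumed: 45 minutes at 400 GB

scale = target_gb / proven_gb         # 10x the data
linear_estimate = proven_runtime_hours * scale
print(f"Linear estimate at 4 TB: {linear_estimate:.1f} h")   # 7.5 h

# A typical overnight batch window is ~6 hours; 7.5 h already blows it, and
# anything superlinear (sorts, index rebuilds) makes it far worse.
```

Hence "it can be done overnight" from a vendor whose toolset had never seen a tenth of the volume wasn't a promise worth much.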
(*) By "Unix" I mean a Unix/Linux box actually designed to be a server supporting high levels of I/O, rather than a beefed-up PC running Windows/Linux.