Friday 27 October 2006
I sometimes hear the strangest figures when availability is discussed.
"The system must be available 95% of the time" or "The system shall never fail" or "We will only accept 99.999% uptime (5 nines)".
Usually these figures are not based on any calculation, and people have no idea what it costs to reach them.
To make things clear: all hardware will break. The question is not if something breaks, but when.
There are 24*365 = 8760 hours in one year; 1% of this is 87.6 hours. A system with an availability of 95% can therefore be unavailable for 438 hours per year, which is more than 18 full days!
At the other end of the spectrum is the 99.999% demand. Here a system may be unavailable for only about 5 minutes per year, including any repair time! The 99.999% ("five nines") is a popular number these days.
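The arithmetic above can be sketched in a few lines of Python:

```python
# Allowed downtime per year for a given availability percentage.
HOURS_PER_YEAR = 24 * 365  # 8760

def downtime_hours(availability_pct):
    """Hours of allowed downtime per year at the given availability."""
    return HOURS_PER_YEAR * (100 - availability_pct) / 100

print(downtime_hours(95))           # 438.0 hours, more than 18 days
print(downtime_hours(99.999) * 60)  # about 5.3 minutes
```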
Availability can be calculated from the MTBF and the MTTR: availability = MTBF / (MTBF + MTTR).
For hardware usually an MTBF is stated (Mean Time Between Failures). A Seagate Cheetah hard disk, for instance, has an MTBF of 1,200,000 hours. This means that on average the hard disk will fail every 136 years. A system is built with many components, each with its own MTBF. Imagine a disk cabinet with 64 disks (not unusual in a SAN). In such a setup, on average one of these disks will fail every 2 years, even with the large MTBF of the Seagate disks.
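The effect of combining many disks is easy to check: with n identical, independent disks, failures arrive n times as often, so the fleet's mean time between failures is MTBF / n.

```python
# Fleet MTBF for 64 disks, using the Seagate figure from the text.
disk_mtbf_hours = 1_200_000
n_disks = 64
hours_per_year = 24 * 365  # 8760

fleet_mtbf_years = disk_mtbf_hours / n_disks / hours_per_year
print(fleet_mtbf_years)  # roughly 2.1 years between disk failures
```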
While disks are the components that fail most often (because they contain moving parts), other components of a system also have an MTBF: servers (mainly the fans in the power supplies), routers, switches, and even cabling.
The MTBF figure is mainly a marketing instrument. How can Seagate prove that their disks will actually, on average, fail every 136 years? Usually this is done using simulations and tests under stress conditions.
Apart from MTBF, there is MTTR: Mean Time To Repair. This is the time needed to fix or replace a broken system (or part of it). Usually the MTTR is kept low through a service contract with the supplier of the part. Sometimes spare parts are kept on-site for the same reason.
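With both MTBF and MTTR in hand, availability follows as MTBF / (MTBF + MTTR). A short sketch with hypothetical numbers shows why driving down the MTTR matters:

```python
# Availability from MTBF and MTTR: A = MTBF / (MTBF + MTTR).
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical component that fails on average every 5000 hours.
# Compare a 24-hour repair contract with an on-site spare swapped in 1 hour:
print(availability(5000, 24))  # about 0.9952
print(availability(5000, 1))   # about 0.9998
```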
Besides hardware, systems contain software. Usually the MTBF and MTTR of software components cannot be calculated easily. No programmer will state the MTBF of the software she wrote. Who knows the MTBF of Windows? Of Linux? Of SAP? Of your in-house developed software?
The human aspect
Usually only 20% of failures are caused by the technology itself; in 80% of the cases, human error is the reason. For instance, a system administrator accidentally pulls the wrong cable or enters an incorrect command. Users sometimes delete important (system) files.
Of course it helps to have highly qualified and trained personnel with a healthy sense of responsibility. To err is human, however, and there is no MTBF to be calculated here.
As stated above, availability figures for a system are very hard to guarantee: MTBF and MTTR are either unknown, cannot be calculated, or are exaggerated.
Availability can only be reported on afterwards, once a system has run for some years. With that hindsight, new systems can be designed that will probably have a higher availability.
Of course, in recent years much knowledge has been gained on how to design highly available systems, for instance by using clustering, failover, redundancy, structured programming, avoiding Single Points of Failure (SPOFs), and implementing proper systems management.
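The power of redundancy is easy to illustrate. Assuming two independent components in parallel, the pair is down only when both are down, so its availability is 1 - (1 - A)^2 (the numbers below are hypothetical):

```python
# Availability of n independent, redundant components in parallel:
# the group fails only when ALL of them are down.
def parallel(a, n=2):
    return 1 - (1 - a) ** n

single = 0.99  # one component: 99% available, roughly 3.7 days down per year
print(parallel(single))       # about 0.9999: two nines become four
print(parallel(single, n=3))  # about 0.999999 with triple redundancy
```

This is exactly why clustering and failover pay off: modest components, combined so that no single failure takes the service down, can reach availability levels no single component could.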
IT architects (or security architects, for that matter) are responsible for giving availability the attention it deserves. Because the cost of unavailability can be very high, a good match between IT and the business is crucial.