There is sufficient research into the causes of failure to assert that any system with a human interface will eventually fail. In the data centre, as in other industries, human error is believed to account for as much as 80% of downtime. Limiting these interfaces, limiting design complexity, and continually training the people who operate them are therefore imperative for resilient data centres.

The single biggest barrier to risk reduction is poor knowledge sharing and a lack of risk awareness. Many sites document their risk analyses, but these are often not shared with all the operators, so their impact is limited.

The accumulated experience of a company and the depth of experience of the individual interact on the universal learning curve, and both are important in reducing risk and addressing energy wastage. Knowledge sharing becomes more important as the complexity of systems increases, particularly where operators lack experience with the installed system.

Figure: Universal Learning Curve

Knowledge Sharing

The educational theorist David Kolb argues that learning is best achieved when we move through all four quadrants of the Kolb Learning Cycle: reflection, theory, practice and experience, as shown below.

Figure: Kolb Learning Cycle and Construction Industry

It is interesting to compare this process with how technical information is transferred on a construction project. Each quadrant is inhabited by a different role, and the contractual boundaries between those roles make knowledge transfer difficult. Of particular interest is the handover from the installation team to the operations team. Much of the knowledge embedded in the project is lost, and the operations team is left to look after a live, critical facility with only a few hours of training and a set of record documents to support them.

Integrated systems testing (IST), used contractually to ensure systems work as designed, is now common on data centre projects, but it generally includes only limited involvement from the operations team, and therefore limited knowledge transfer. Furthermore, many facilities have little or no communication with the original designer or installation contractor, again limiting opportunities for knowledge transfer.

Consequently, operators are not engaged and do not feel sufficiently informed to make changes that optimise system performance or improve the energy performance of the facility, for fear of introducing risk. This lack of awareness can lead to operational errors, leaving the facility particularly vulnerable at times of reduced resilience, for example during maintenance.

As the complexity of a facility increases, so too does the risk of operational error; indeed, most failures in the data centre are due to human error.

The Human Element

Site-specific, facility-based training is therefore paramount in reducing the risk of failure from human error. In addition, it is important that teams are trained on more than just the area of the facility they operate, and at every level of the team, from manager to site operative. This approach helps them to operate the facility holistically, understanding how each system interacts, and promotes communication between different levels and teams. However, this approach is rarely adopted across the industry.

In addition, a learning environment that promotes continuous improvement is recommended, allowing teams to learn from the failures and near misses that do occur. This increases knowledge and awareness of possible failure scenarios.

Complexity

A 2N topology is the minimum requirement for a system free of single points of failure (SPOFs): two or more simultaneous events are needed to cause a failure. Traditional risk analyses, such as fault tree analysis (FTA), are not applicable to human error, where the data are subjective and the variables effectively infinite.
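
As a rough illustration of why a 2N topology resists single events, consider two sides that fail independently of one another. The figures in the sketch below are hypothetical, and the independence assumption is precisely what added complexity tends to erode.

```python
# Minimal sketch: chance of total failure for an idealised 2N system.
# Assumes the two sides fail independently; probabilities are illustrative.
p_side = 0.01           # assumed chance one side is down in a given window
p_total = p_side ** 2   # both sides must be down simultaneously
print(f"one side down:   {p_side:.2%}")    # 1.00%
print(f"both sides down: {p_total:.4%}")   # 0.0100%
```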

In a 2N scenario, the two discrete systems can be designed to have no interaction, creating a simple design with limited complexity. In practice, however, facilities are rarely designed this way; for example, BMS-controlled automatic disaster recovery changeovers are often used rather than simple mechanical interlocks.

Although the design remains 2N, the number of variables, and with it the complexity, has increased exponentially. The training and knowledge required to run the systems increase accordingly.
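
One hedged way to picture this growth: if each control point a BMS introduces (a sensor input, a changeover output, an interlock) is treated as a simple binary variable, the number of possible system states doubles with every point added. The counts below are purely illustrative.

```python
# Sketch: state-space growth as binary control points are added.
# Each point (valve, breaker, BMS changeover output) doubles the
# number of possible system states the operator must reason about.
for points in (4, 8, 16, 32):
    print(f"{points:2d} control points -> {2 ** points:,} possible states")
```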

Research has also shown that failures are often due to an unforeseen sequence of events; until such a sequence has occurred, there is no knowledge of its potential.

The Austrian physicist Ludwig Boltzmann developed an equation for entropy that has since been applied to statistics and, in particular, to missing information. The theory can be used to determine the number of questions needed to establish in which box of a defined grid a coin has been placed. If we substitute system components for the boxes, and unknown failure events for the coins, we can consider how system availability is compromised by complexity. With fewer unknown failure events, the number of ways in which a system can fail is reduced. Increasing our detailed knowledge of systems, and so discovering previously unknown events, therefore reduces the combinations in which the system can fail, thereby reducing risk.
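
A minimal sketch of the coin-in-a-grid analogy follows; the grid sizes are arbitrary, and mapping boxes to components is the article's analogy rather than a formal availability model. With N equally likely boxes, log2(N) yes/no questions locate the coin, so shrinking the space of unknown failure locations directly shrinks the missing information.

```python
import math

# Sketch: missing information in the coin-in-a-grid analogy.
# With N equally likely boxes, log2(N) yes/no questions find the coin;
# fewer credible hiding places means fewer questions, i.e. less
# missing information about how the system might fail.
for boxes in (64, 32, 16, 8):
    print(f"{boxes:2d} possible locations -> {math.log2(boxes):.0f} questions")
```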

Conclusion

Human error is indisputably the largest contributor to data centre downtime. Continual, site-specific training is therefore of paramount importance in reducing facility failures. Furthermore, reducing complexity not only reduces the number of unknown sequences of events that can cause a failure, it also reduces the amount of training required. Finally, particular attention must be paid to the processes used when handing over a live site to the operations team, to ensure knowledge is not left behind with the installation and commissioning contractors.