Monday Morning Quarterback Review of the 2011 Indianapolis Colts and Risk Management

While a debate regarding who is the best quarterback in the National Football League would certainly include multiple viewpoints, there is little disagreement among those close to the sport about which team can least do without its top passer. Over the last several years it has become generally accepted that the team least capable of succeeding without its starting quarterback is the Indianapolis Colts. The Colts have enjoyed tremendous success since they drafted quarterback Peyton Manning first overall in the 1998 NFL draft. Manning started at quarterback immediately, and the team has failed to qualify for the NFL playoffs only once since his rookie year, averaging 11.5 wins per season since 1998. Over that period Manning has amassed four Most Valuable Player awards, and he has led the Colts to two Super Bowl appearances, winning against the Chicago Bears in 2007. Having come to expect such a high level of play, fans must find the 2011 version of the Colts difficult to bear. Manning has been sidelined all year due to offseason surgery. In his absence the team has posted a record of 0-8 while being outscored by an average of 32-15.

So what went wrong? The Colts, like many organizations both in and outside professional sports, were not adequately measuring and thus mitigating risk. Risk is calculated by multiplying the probability that an event may occur by the expected loss that would result (Probability × Expected Loss). The Colts had very little experience without Manning in the lineup. He had started every game of his career and was a threat to break the record set by Brett Favre for most consecutive games played. On the other hand, experts and casual fans alike were aware of the impact that Manning's absence would have on the team, and of the fact that he had offseason surgery in May with aggressive estimates for recovery at two to three months. In addition, it can be argued that with Manning now thirty-five years old, the team should have been planning for a successor regardless of his health. Since the expected loss was clearly extreme, the Colts likely made their mistake in judging the probability of Manning missing significant time. In calculating that probability, the team needed to factor in Manning's consecutive games played streak along with his surgery and the fact that he is at an advanced age for a professional football player.
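The risk formula above can be made concrete with a short sketch. All dollar figures and probabilities below are invented for illustration, not actual Colts data:

```python
# Hypothetical illustration of the risk formula quoted above:
# risk = probability the event occurs x expected loss if it does.
# All figures are invented for the example, not actual Colts data.

def risk_exposure(probability: float, expected_loss: float) -> float:
    """Return risk as probability times expected loss."""
    return probability * expected_loss

# A team that leans on a 13-year streak of starts might judge the chance
# of losing its quarterback for the season to be very low.
underestimated = risk_exposure(0.05, 50_000_000)

# Factoring in age, offseason surgery, and a two-to-three month recovery
# estimate should raise that probability substantially, and the risk with it.
reassessed = risk_exposure(0.40, 50_000_000)

print(f"naive risk:      ${underestimated:,.0f}")
print(f"reassessed risk: ${reassessed:,.0f}")
```

The point of the exercise is that the expected loss was never in doubt; it was the probability input that the Colts appear to have misjudged.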

Having miscalculated the risk that their All-Pro starting quarterback would miss significant time, the Colts failed to adequately mitigate that risk. An easy way to remember mitigation options is to categorize them into the Four Ts: Terminate, Transfer, Treat, and Tolerate. Terminating risk involves not taking a specific course of action or eliminating the activity causing the risk. For most organizations this might mean passing on an investment or expansion opportunity, or ceasing to operate in an area that is prone to a natural disaster like a flood. In the Colts' case, termination was not an option. The team couldn't refuse to play until Manning returned, nor could they simply stop the regular season from beginning on the scheduled date.

The Colts were also largely unable to transfer their risk. Transferring risk involves shifting the responsibility for loss to a third party. A transfer of risk can be executed by arranging coverage through an insurance provider or through other types of contractual agreements. The nature of professional sports does not allow for the transfer of risk related to player availability: teams carry multiple players at each position, and the expectation is that an injured player will be replaced by another player from the team's bench. The Colts did, however, have the ability to transfer their risk of financial loss related to Peyton Manning's health. Teams will often take out insurance on highly paid players to cover losses in the event that a player is injured and unable to play out the term of his contract. The Colts have a large investment in Peyton Manning and would be liable for the guaranteed portion of his salary regardless of whether he ever plays another game for the team. Teams may also include injury payout terms in player contracts. These clauses define the amount to be paid to the player in the event of an injury. Injury payments are usually much lower than the value of the contract, providing significant financial protection for the organization.

Treating the risk that Peyton Manning would not be able to play was the Colts' best option, yet the Colts let several opportunities to treat their risk slip away. Their first chance was the NFL Draft. The draft allows NFL teams to select new players from college or elsewhere while providing the team with exclusive negotiating rights to each player. The Colts had five selections in the 2011 draft and did not use any of their picks on a quarterback. How much the team knew about Manning's health at the time of the draft is unknown, and the draft took place in April, ahead of Manning's first surgery in May. Regardless of the timing of the draft and the surgery, the Colts should have been planning for a successor to Manning by drafting a quarterback in this or a previous year's draft. None of the available players would have been a true replacement for Manning, but the team would have been in a better position to compete in 2011 had they selected a young quarterback.

The Colts had an additional opportunity to treat their risk during the NFL free agent signing period. Free agents are players who have played out their contracts and are free to negotiate with any team. In the Colts' defense, there are complications with signing free agents. Depending on a player's current status, signing him may require that future draft picks be provided to his former team as compensation. In addition, all player contracts have implications for the league's salary cap: better players command higher salaries, requiring teams to use more of their cap space. Still, the Colts were mostly inactive during free agency. The free-agent signing period began on July 29th, and the Colts did not sign Kerry Collins until August 25th. Collins was signed too late to participate in a full training camp and had just over two weeks to prepare for the regular season. He had been a very good player in the league, but he was 38 years old when the season started, was headed for retirement, and had not played a full NFL season since 2008. Despite the challenges, Collins entered the regular season as the starter. He has since been injured and has not played since the third week of the season.

Signing Collins was a desperate attempt to address the risk that the Colts had failed to accurately assess and treat prior to the start of the 2011 NFL season. Properly treating risk involves taking definitive steps to minimize the likelihood and/or impact of the risk. For most organizations this can mean re-engineering the processes or activities where risk is identified, or hedging the organization against financial losses. Regardless of the steps taken to treat the risk, a plan for monitoring and measuring risk needs to be established, as conditions will change, altering the likelihood and impact of risks and thus the implications for the organization.

In some cases the options for treating a risk are more costly to implement than the potential impact of the risk itself. The impact of a risk may also be deemed to be at an acceptable level, while in certain circumstances a risk may be considered unavoidable. In these cases the risk will be tolerated. For the Colts, tolerating the risk of Peyton Manning's injury is all that remains. Having miscalculated their risk, failed to treat it, and been left without the option to transfer or terminate it, the Colts are left with Curtis Painter as their starting quarterback. Painter was a sixth-round draft pick in 2009. Prior to this season, he had thrown only 28 passes in the NFL. The Colts' management may publicly state that they have confidence in Painter; however, the late move to sign and subsequently name Kerry Collins as their starter indicates otherwise. So it would seem that for 2011, both Colts fans and football fans in general are left to tolerate the team's performance on the field. We hope that your organization doesn't fall into a similar situation.
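The tolerate decision comes down to a simple comparison: if every treatment option costs more than the exposure it removes, the risk is tolerated. A minimal sketch, with invented option names and dollar figures:

```python
# Minimal sketch of the tolerate decision: compare the cost of each
# treatment option to the risk exposure (probability x expected loss).
# Option names and figures are invented for illustration.

def choose_response(exposure: float, treatment_costs: dict) -> str:
    """Return the cheapest treatment, or 'tolerate' if none beats the exposure."""
    option, cost = min(treatment_costs.items(), key=lambda kv: kv[1])
    return option if cost < exposure else "tolerate"

options = {
    "draft a backup quarterback": 8_000_000,
    "sign a veteran free agent": 12_000_000,
}

# A large exposure justifies taking the cheapest treatment...
print(choose_response(20_000_000, options))

# ...but a small exposure is cheaper to tolerate than to treat.
print(choose_response(500_000, options))
```

The sketch also shows why misjudging probability is so damaging: understate the exposure and "tolerate" looks like the rational answer right up until the risk materializes.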

Should Your Organization Use Business Continuity Software?

The debate over the use of software for business continuity planning is typically focused on the perceived value of the system functionality. Software vendors champion their automation features while critics cite the licensing cost and the complexity of implementation and administration. Most organizations hinge their final determination on whether the system capabilities are viewed as worthy of the resources required to use the tool properly. This analysis is often flawed because it evaluates only current software capabilities and organizational requirements against the present state of business continuity. The advantages of properly implemented business continuity software only expand as the analysis matures to include the long-term goals of the organization and the direction of business continuity as a whole.

The functional benefits of business continuity software are numerous:

Business continuity software facilitates global data updates by cascading individual changes throughout the system. This marks a direct return on investment that increases as the system is configured to import from or link directly to external systems of record.

Software improves standardization across the enterprise. While a document template will facilitate standardization to a certain degree, business continuity software typically allows administrators to enforce planning requirements using security and planning wizards/assistants/navigators. Planners must work within the framework designed for them. Many software packages allow for plan completion tracking and reporting of completion rates across the enterprise.

Most software packages allow end users to map recovery dependencies illuminating relationships and enabling the remediation of exposures. When plans are developed in silos, the risk that recovery time objectives are not supported by predecessor business processes and/or information technology systems is magnified.

Software allows data integration across modules. Many software systems have evolved to include modules for business impact analysis, emergency notification, and incident management. Sharing the same database allows these software systems to support data sharing between plans, BIAs, emergency notification systems, and incident management tools.

The latest versions of business continuity software have dramatically increased their level of continuity intelligence. Some vendors have developed planning tools that incorporate guidance based on current industry standards such as BS25999. The standard plan wizards/assistants/navigators include industry-specific methodology and allow for the further customization of end-user guidance.

Business continuity software facilitates responses to organizational changes. As organizations restructure, the storage functionality in most software packages enables plans to be relocated to reflect changes in business structure or geographical footprint. Plans can target the response and recovery of locations, business processes, applications, or network nodes. More importantly, if a current plan's scope is to be divided across multiple plans, some software offers the ability to move a central component with all of its recovery details between plans. This type of change in word processing tools or spreadsheets is manual and cumbersome.

Evolving planning initiatives are accommodated more freely through software. The risks highlighted by events over just the last few years have renewed the industry focus on exposures associated with pandemics, nuclear energy production, and supply-chain resilience. Planning wizards/assistants/navigators can be updated to address these new initiatives and assigned to all or specific plans quickly. These planning tools can be enhanced to deliver instructional details for meeting new organizational guidelines and standards and to assist planners as they work to capture steps for addressing new threats. In the ever-changing business continuity landscape, this is critical.

Software supports the creation of business continuity metrics. A relational database allows the creation of complex reports that summarize business continuity information across all plans. Management increasingly requires an enterprise-level view of the current state of preparedness in order to determine program direction. Manually gathering data from documents for the creation of metrics is a monumental task, and few organizations are staffed at levels that allow for the consistent and continual collection of the required information. In the absence of a database, the generation of metrics will be too infrequent to provide value. Additionally, if metrics data must be compiled manually, there is a much greater risk of error. Strategy development is hindered if there is a lack of confidence in the accuracy of data and its ability to be representative of the entire organization. Management may be reluctant or unwilling to act on the information. As a result planners will view their work as less meaningful to the organization.

Implementing BC software will drive program commitment, innovation, and advancement:

Implementing software for business continuity planning improves the individual sense of plan ownership. Recent business continuity standards speak to the need to move beyond plan creation to the creation of an organizational culture of resilience. The goal is an embedded sense of risk awareness. Planners must be conscious of threats to safety and normal organizational activities, and they need to view their continuity plans as integrated components of normal processes. Creating that elevated sense of ownership is easier if planners recognize a significant investment of resources in support of resilience. Ironically, key aspects of the argument against business continuity software (cost and the challenge of implementation) become psychological allies in creating a resilient culture. The investment in business continuity software sends several impactful signals to the planning community. The first is that the program is not only approved but directly supported by senior management. Planners will view the dedication of financial and human resources as a tangible measure of the importance of the initiative to the overall organization. Planners will also expect that their use of the tool and the output of their work will be evaluated. This expectation is reinforced as key stakeholders are involved at critical points in the system development life cycle and in the governance and change control processes.

As business continuity tools facilitate summary reporting, senior management can further mold a culture change by acting on the data and addressing exposures. If the data collected by and reported from the system is acted upon and creates change, planners will see the direct value of their work and view the effort to create a resilient culture as sustained. This is not to say that an organization cannot create and sustain a resilient culture without software. The challenge is much more significant, though, if the end users cannot identify a direct connection between the communications regarding the importance of the initiative and the resources dedicated in support of it.

The determination of whether to implement business continuity software should incorporate future organizational needs and the direction of continuity as an industry. The means of creating plans needs not only to support the planning requirements of today; it should also be flexible in adapting to the changing needs of the organization. The content currently mandated for plans will evolve as the organization changes, and as new threats emerge, the vehicle in which plan data is captured will need to allow for that evolution. The question to ask is: does the current mode of planning provide the agility necessary to change the criteria for what is considered a comprehensive and actionable plan? In the case of isolated, unrelated documents created using a template based on the organizational needs of the moment, the answer is no. Planning tools must be capable of supporting the regular revision of requirements and the distribution of new guidelines as the organization changes, new threats emerge, and new compliance standards are applied. Organizations using business continuity software will find it easier to revise planning requirements and implement them across the enterprise than organizations using templates for word processing or spreadsheet programs.

Trends in business continuity further the argument for the use of software:

Continuity programs are increasingly finding themselves reorganized within the realm of risk management. It is a logical change. Business continuity bridges a gap for risk management by protecting the organization from prolonged outages caused by random events and from the cumulative related effects of an event that are difficult to identify through typical risk analysis. As a discipline of risk management, business continuity will be increasingly required to quantify resilience capability. One way software has begun to address this need is through the concept of a continuous business impact analysis. For most organizations a business impact analysis is a yearly or less frequent endeavor. There is software available today that facilitates a continual BIA update capability in conjunction with the traditional plan update capability. These tools allow organizations to continually review current impact information rather than cycling BIA efforts on a yearly or less frequent basis. The focus of these tools on impact allows them to align business continuity more closely with risk management. If a more frequent or continual analysis of business impact is needed in the future, data must be captured so that it can easily be revised, collected, and summarized. Continuity software provides a decided advantage in this regard.

An increasingly close alignment with governance and compliance standards is also emerging in the field. Business continuity governance and compliance is not new; however, the standards are more refined, the number of industries held to stringent guidelines is increasing, and the standards are revised more frequently than in the past. There are several software systems currently available that incorporate the more recognized standards and provide a means of measuring compliance. Until recently these capabilities were limited if available at all. Some of the more robust systems not only guide users toward the creation of compliant plans but also allow for the measurement of plan compliance. Administrators can select the applicable standard and generate data to determine the current level of compliance. This is a major step for business continuity software, as earlier generations of these programs provided only the means for creating plans while assuming the user was well versed in continuity.


The recent software advances highlighted here point to a final trend for the industry. The number of business continuity software vendors has grown rapidly over the last few years, and their success will depend upon their ability to outperform their many competitors. The consumer is clearly the beneficiary of this increased competition. The result will be valuable gains in functionality, ease of use, and business continuity intelligence, along with more competitive pricing. Increased competition will also mean more rapid responses to changes in the industry and improved responsiveness to client needs. The maturity gap between organizations utilizing continuity software and those that are not will only widen as software capabilities become more robust.

Organizations that implement business continuity software will derive functional and non-functional benefits, providing them with a competitive advantage that will only widen as business continuity moves into the future. The evolving demands on continuity programs are too great to be managed with tools that were not designed specifically for business continuity.

The Barbarino Test for Actionable Plans

Vinny Barbarino was a likeable character on a 1970s sitcom called Welcome Back, Kotter. The show featured Gabe Kaplan as a Brooklyn school teacher charged with the education of some unique students who had little interest in their studies. Barbarino, played by John Travolta, was the ringleader of the crew. He was famous for attempting to extricate himself from sticky situations by feigning complete ignorance of the subject matter. With a confused look, Barbarino would pose the following questions to Mr. Kotter: Who? What? Where? When? Barbarino's ploy never fooled Mr. Kotter, but it can be a useful means of establishing how actionable recovery plans will be in the event of a disruption. Try the Barbarino Test to determine how actionable your plans are. Does your plan answer who, what, where, and when?

Who? Your plan should answer who with contact information. Call trees are part of the answer, but ensure you have multiple contact methods for any organization or person with whom you would need to communicate during a recovery. Think about employees, vendors, customers, emergency management personnel/organizations, health care providers, and government organizations. Think about key skill sets and who possesses them. Include backup personnel with the same capabilities. Document alternates, and account for the chain of command or line of succession for your organization.

What? Think about exactly what needs to be done to recover. The heart of an actionable plan is a detailed list of the procedures required for recovery. Leverage standard operating procedures and adjust them as needed assuming that your normal workplace is unavailable. Think about the level of detail required so that someone less familiar with the procedures can still execute them. There is no guarantee that key personnel will be able to work. Include workarounds for unavailable IT systems and data.

Where? The plan should include where people will perform their recovery responsibilities. The normal workplace is not available, so where will you go? Include directions for people traveling to recovery sites. Include contact information in the who of your plan for the people who provide access to the sites you need. Account for the space available and the number of people planning to work at each location. Ensure that the people who will work remotely have been provided with the right equipment and training to connect to organizational networks/data/systems.

When? Account for when things need to occur in order to recover. If you are responsible for business processes, rank them in order of criticality. Document all recovery prerequisites and dependencies. Create a sequence for the necessary actions to be executed. The proper recovery of IT systems is often tied to successfully sequencing the order in which things are brought back online.

Do you think you have it all in place? Prove it: exercise the plan. The Barbarino Test is a decent guideline, but plan exercises/tests are the only way to know if your plan is truly actionable. My son continues to be frustrated when he doesn't hit the ball over the heads of all the other kids at his tee ball games. The vast majority of organizations are satisfied with having untested plans. What do these things have in common? I continue to tell my son, and all the organizations that I work with, that it doesn't make any sense to think you will be great at something you have never done before. Your organization shouldn't act like a five-year-old. Put the Barbarino Test and your plan into action. Learn where the plan gaps are and address them; then test again. If you follow these simple steps, your recovery won't look like a 1970s sitcom.


When I was learning to drive, my Mom told me I had an advantage. It would be easier for me than it was for her because I could focus on learning to control the car. When she learned to drive, it was more difficult because in addition to learning to handle the car, she also was learning how to shift gears manually. I couldn’t argue that. I was holding the wheel tightly with two hands and had a hard time imagining using one hand to try to shift at the same time. ‘Automatics’ were easier, she said.

Large organizations have an inherent advantage as it pertains to resilience. Shifting workload from an affected site to an unaffected site with similarly skilled staff is a strategic option only large, dispersed organizations can consider, and it is a common strategy for both big private and public entities. The plans we see frequently call out workload shifting in the recovery procedures. Look deeper into this strategy, however, and you will find that many organizations have done little else besides capture it in their plans.

Bigger does not mean better. Not when the size of the organization leads planners to believe that workload shifting is automatic. The likelihood that plan builders have made unsupported assumptions in their recovery plans is much higher for a large organization than it is for a smaller one. Planners at smaller organizations are much more aware of limitations. At large organizations planners commonly assume that “there is someone who does that”, and that this unnamed individual or group will “do that” when something goes wrong. The problem is that the people who are being tasked with these specific responsibilities often have no idea what has been assumed of them.

Assumptions made in regard to what other individuals and groups are capable of and plan to do in an event are a common gap to be aware of in recovery planning. This is a key area of concern whenever all or part of the recovery strategy is workload shifting. Ensure that the receiving entity is well aware of the strategy and can manage the added workload within expected time frames. Make sure the IT requirements to shift the workload have been detailed and are part of the IT recovery plan. In short, as with any strategy, exercise the procedure often. Exercising the ‘manual’ requirements for workload shifting is the only thing that will make it feel automatic when it is necessary to implement the strategy.

Can We Talk Here?

The more I work with different organizations around the world, the more I realize that we can't talk here. "Here" meaning the industry of business continuity. Joan Rivers' catchphrase was effective for decades at eliciting laughter. Our dysfunction, the still-challenging effort to adopt a formalized language for our industry, is effective only at eliciting confusion, frustration, and uncertainty.

This should not be an issue, not if you compare business continuity to fields such as medicine, technology, or law. Each has had quite a head start on us; however, the vastness of those fields dwarfs business continuity, and the rate of change in their language is much higher. Despite those challenges, medicine, technology, and law, as well as many other fields, have standardized their languages better than business continuity.

It is not for a lack of effort. BSI published BS25999 in 2006 and continues to make efforts to educate on Organizational Resilience. ISO 22301 has since become the standard of choice for compliance. Both standards include sections on terminology. Industry leading organizations such as the Business Continuity Institute (BCI) and the Disaster Recovery Journal offer online glossaries for Business Continuity.

What has also become apparent is that the lack of standardization is predominantly a matter of choice. We don't have an issue with the socialization of a standard language so much as a refusal to accept it on the part of a measurable percentage of industry practitioners. The usual explanation I receive for the conscious choice to avoid proper industry terminology is that the organization has used certain terms for a long time and changing them now would be very difficult.

So how do we fix this? Tactfully addressing inaccuracies wherever we see or hear them is a start. This can mean engaging in uncomfortable exchanges, but if we don't, who will? Continuing to speak and write in proper terms and directing people to accepted sources of information is less daunting and may be more effective. Eventually, we will get to a point where we can all talk "here". We have to, or we will die not trying.