
Tripwires: When we might learn and where we don't

Success depends on how fast, how often, how deeply, and how broadly you learn. How so? Your team and mine enter a situation. You succeed and I fail; you win and I lose. Why?

Because you arrived better prepared: knowing what to do and how to do it, backed by skills to act on that knowledge, and armed with capabilities to put those skills to good use.

That combination of knowledge, skills, and capabilities didn't descend from nowhere, created ex nihilo. It was developed, discovered, invented. That yours were better means that your team outlearned mine; your learning was better, faster, broader, deeper, and more sustained. So you were better prepared and did better in the situation in which we were both tested.

The point of what follows is to dig into what we have to do to learn better and faster, and what gets in our way. For what it's worth, getting better at this core skill is how we convert fear into hope. After all, what is a common fear? That we'll fail in what we're challenged to do. And what is a common hope? That we succeed in just such situations.

Those situations may include: responsibility for high-risk, high-hazard systems where perfection is necessary and failure is hugely costly; running "operations" where we have to regularly delight stakeholders while avoiding disappointment and letdown; or aspiring to "giant leap" accomplishments where we can't fall short and still maintain our social license to do our work.

The following examples identify three phases when we can learn but might not: during planning, when self-correcting is fast, cheap, and impactful; during preparation, when we have a chance to add multiple pages to our playbook; and during operation, when we often get early indications that a course correction is necessary. The reader will see the extrapolations from these examples to software development, construction, and myriad other situations.

High velocity learning by everyone, about everything, all of the time

Chief of Naval Operations ADM John Richardson started talks with maps of worldwide internet activity and of worldwide shipping lanes to highlight several points. First, information, ideas, and finances can move from anywhere to anywhere nearly instantaneously. Second, the people and materials to put those ideas into effect also move from anywhere to anywhere fast. While the Navy has to protect these networks, adversaries can use those same networks to their own advantage.

There's more. The standard of success is uneven. A layperson might think "asymmetrical warfare" means a difference in technology and tactics — an armed and armored warship matched against an explosives-packed speedboat. But the asymmetry is also in the standard of success. The Navy has to protect these networks everywhere, all the time, against everything. Adversaries just have to disrupt them somewhere, sometime, somehow, in some way to score.

So, you have situations in which sailors and civilians are responsible for the effective and safe operation of systems — ships, logistics, training and equipping — that are complex and dynamic. Crews have to adapt to changes in those systems while operating in environments, sometimes hostile, that are themselves complex and dynamic. Thus, no amount of planning can ever be perfectly aligned with what actually is going to occur. Therefore, there has to be relentlessness in figuring out what is happening and what to do about it.3

Given the myriad problems that arise, high-speed problem solving, improvement, and innovation have to be a dynamic of everybody, everywhere, about everything, all of the time. If problem-solving capacity is not broadly distributed, then the few who are the designated problem-solvers will be overwhelmed by all that has to be resolved.4

Learning leading to capability growth as the root cause for success

Success for individuals and groups depends on knowing what to do and being able to do it well in all the situations you'll face. Being more successful more often means having more and better knowhow and capabilities that span more situations. Being successful in contests with rivals and adversaries means having more and better knowhow and capabilities than they possess and can deploy. Failure is the converse of success: the result of not knowing enough or lacking the capabilities to act on that knowhow. In adversarial contests, not knowing and not being able to act show up as accumulating losses. From where does a consistently self-replenishing reservoir of knowledge and capability originate? It comes from learning, since none of what we know and what we can do is innate.

(Diagram: Learning → Knowledge)

Time is a critical dimension. Our surroundings and counterparties are changing quickly, so we have to adjust faster, better, and more broadly to be resilient and agile. Some static level of knowledge and capability is not enough. We have to build skill and knowledge quickly and relentlessly, because what was once valuable rapidly decreases in utility.

Conversely, if success depends on replenishing capability, which depends on great learning dynamics, then it follows that failure comes from learning that wasn't occurring powerfully enough. As we'll see, there are distinct "tripwires," booby traps that impede learning and adaptation and thereby presage failure.

How we learn

Learning depends on recognizing that there are gaps between what we know and can do and what we need to know and do. Gaps should trigger reflection, investigation, and action to build better, more relevant knowhow and capabilities, which can be incorporated into updated approaches. This dynamic has to be non-stop given that the environment is constantly changing — whether intentionally (like adversaries) or not (like the weather). Ideally, we would (a minimal sketch of this loop in code follows the list):

  • Plan: Capture clearly what you expect of the conditions to be faced, the actions to be taken, and the results to be gained.
  • Do/Act according to the plan; it is the best-known approach for achieving success.
  • Check continuously whether what was predicted matches what is actually happening.
  • Correct what you know and what you do if there are discrepancies, with those improved understandings and capabilities incorporated into Plan v2, and so on.
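
To make the loop concrete, here is a minimal sketch in Python (illustrative only; the class names, fields, and sample data are invented, not taken from any system described in this note) of plan, do, check, and correct treated as a repeating cycle in which discrepancies feed the next version of the plan:

```python
# Minimal sketch of the Plan -> Do -> Check -> Correct loop described above.
# Everything here (names, fields, sample data) is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Plan:
    version: int
    expected: dict              # conditions anticipated, actions to take, results predicted
    corrections: list = field(default_factory=list)

def check(expected: dict, actual: dict) -> dict:
    """Compare what the plan predicted against what actually happened."""
    return {k: (expected.get(k), actual.get(k))
            for k in expected
            if expected.get(k) != actual.get(k)}

def correct(plan: Plan, gaps: dict) -> Plan:
    """Fold what the gaps revealed into the next version of the plan."""
    updated = {**plan.expected, **{k: observed for k, (_, observed) in gaps.items()}}
    return Plan(version=plan.version + 1,
                expected=updated,
                corrections=plan.corrections + list(gaps))

# One pass through the loop: do according to the plan, then check, then correct.
plan = Plan(version=1, expected={"weather": "clear", "daily_throughput": 100})
actual = {"weather": "storm", "daily_throughput": 60}   # what execution revealed
gaps = check(plan.expected, actual)
if gaps:                                                # discrepancies trigger learning
    plan = correct(plan, gaps)                          # Plan v2 carries the lessons
```

The design point is simply that checking and correcting are not optional extras; the loop only produces learning if discrepancies are surfaced and folded back into the next plan.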

When we learn (or not)

From the earliest conception of an idea or anticipation of a situation until the moment when expectation meets reality, there are three junctures where knowledge and skills can be built:

  • Planning: when ideas are being formulated.

  • Practicing: when skills are being built.

  • Executing: when skills and knowledge are put to the test.

There are social-psychological impediments to learning in each of these phases. These impediments are the "tripwires" that sabotage our best intentions.

Tripwire 1: In Planning—Not enlisting and believing an aggressive ‘Red Team’ adversary

A first chance to learn is at the earliest stages of planning. Ideas exist only on paper or "in silico," commitments on their behalf are relatively few, and changes are far less disruptive and costly than if made later. "Red teaming," stress testing (physical and conceptual), "war gaming," trialing, and other synonymous approaches exist to push hard against designs and the thinking underpinning them. The point is to identify oversights and flaws before important (and often irreversible) commitments of time and resources have been made.6 7

Of course, we can cause failure in planning phases by not declaring with sufficient detail what is expected to be done when, by whom, and how. Not budgeting time for clarity in planning leaves too much ambiguous. This forces improvisation8 during execution, which becomes a drain on time, focus, and coordination when they are most needed.

That said, even the most compulsive attention to a design is not sufficient. No matter its detail, there are inevitably flaws in it. These can be exposed through aggressive challenge. The social impediments to doing so are a tripwire that creates failure well in advance of action.

First, let's look at this on an individual scale. A friend who works for an investment firm bemoaned that his junior analysts will develop an investment strategy and then "pitch" it to their colleagues, emphasizing all that is good about it and evading or otherwise diminishing concerns and criticisms. They could take a different approach: explaining the proposal as the product of their best thinking while also soliciting challenge to find flaws in their reasoning. The former approach ensures that flaws go unrevealed and uncorrected. The latter makes improvement possible while there is still time to fix flaws with little disruption. (In contrast, imagine changing direction once funds have already been committed to the market.)

Why don't they do that? In truth, what's important is the investment plan once it's put to use and is operating in the market. However, attention has been focused on, and emotion committed to, the plan itself. Like anything else to which we've made a commitment, we assign it far greater value than perhaps we should.9

This emotional investment in plans, rather than in the consequences of those plans in use, occurs on a large scale too. Parshall and Tully ask, in Shattered Sword, how the better-equipped, better-manned, and more experienced Japanese Navy could lose at the Battle of Midway in June 1942.

Their answer is not that the battle was lost early in the day, or in the few days or weeks beforehand. Rather, the Japanese Admiralty ensured defeat by 1929!

Why? They had adopted certain assumptions, upon which they built all their planning and preparation, but whose validity they never aggressively challenged.

In particular, the Imperial Japanese Navy ("IJN") drew lessons from its victory over Russia in the early 1900s: (a) Navies would face each other en masse, (b) one would deal the other a devastating blow, and (c) the devastating blow would destroy the loser's will to fight. Those underlying assumptions motivated the objective of delivering a massive, fast, first strike. In turn, they informed doctrine, strategy, tactics, training, and equipping.

What the IJN didn't consider was that the US wouldn't fight by the IJN's plan. This meant that, for all its resources and equipment, the IJN was ill prepared for battle despite the demonstrated skill of its pilots, the capability of its crews, and the quality of its technology.

Parshall and Tully's provocative conclusion is that the IJN should have known its plan was built on flawed assumptions had it simply believed the results of tabletop war games conducted a month before the actual battle. When junior officers playing the American fleet fought in ways contrary to what the IJN's plan assumed, the US proxies won and the Admiralty lost. But those tabletop refutations were rejected. The Admiralty dismissed the results as a sign that the junior officers on the "red team" didn't understand the plan, rather than as a sign that the Admiralty didn't understand how an adversary might react to its battle plan. This left the IJN unprepared when the Americans fought as the red teamers predicted. A combination of emotional investment in the plan and hierarchical social norms made the leadership closed to the possibility that it was wrong.

Of course, the IJN's leadership was not unique. It's not uncommon to invest time, energy, emotional stake, and status in developing a plan. That creates the risk of losing sight of what's important: not the plan itself, but how it will work when put to use. So, rather than improve it, we defend it.10

A similar question of explaining a paradoxical outcome applies to Germany's battle successes at the start of World War II. In 1939, Germany was out-equipped and out-manned by the British and French.11 Yet, once the German Army turned its attention from Poland to the West, it drove the Allies to Dunkirk with blazing speed.

Both the Allies and the Axis built militaries in the 1920s and 30s based on the same events of WWI — trench-based infantry warfare. But the winners and losers drew different lessons. The winners assumed that future wars would also be trench-based attrition battles of infantry. So, they directed technological innovation accordingly, such as tanks that wouldn't outpace troops with packs and rifles and wouldn't weigh too much to cross where men on foot could go.12 13 The Maginot Line was the über-trench you would build with plenty of lead time.

(Figures: The Maginot Line, details of design;14 the Maginot Line, location15)

Germany interpreted the same events through its losing experience, concluding that trench warfare would be disastrous again. So, during the 1920s and 30s, the Germans worked out the tactics and training leading to "combined arms," even though they couldn't yet re-equip. They used wooden mockups for tanks and balloons for airplanes, so when Germany did rearm, its equipment slotted into already-practiced uses. They also rejected the French assumption that the more lightly defended Belgian border would not be contested.

(Figures: Early, bicycle-powered version of the Reichswehr mock tank, early 1920s; air officers sent up balloons to represent aircraft during maneuvers of the Sixth Infantry and Third Cavalry divisions, circa 1924-1927.)16

(Figures: Combined arms; Blitzkrieg routes17)

Tripwire 2: Failure to Prepare Across a Sufficient Range of Scenarios

A second tripwire that converts success into failure by depriving us of learning opportunities is the gradual narrowing of the situations for which we prepare, focusing more and more on what is "most likely." So, when low-likelihood scenarios actually occur, we're left to figure out what to do while we're actually doing it. This is a losing proposition.

Figuring out what to do "on the fly" is most often a poor approach. Our brains are wondrous at creative thinking, problem solving, innovation, and invention. The problem is that they're wicked slow at all that stuff, and more often than not, the situations for which they have to prepare happen wicked fast. That's why it's so important to prepare and rehearse across a broad range of possible scenarios, not just those most likely to occur but also those less probable yet nevertheless consequential should they happen.18
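
To make that point concrete, here is a minimal, hypothetical sketch in Python (the scenarios, probabilities, and severity numbers are invented for illustration) showing why building a rehearsal list from likelihood alone narrows the playbook, while adding a consequence threshold keeps the rare-but-catastrophic cases in it:

```python
# Illustrative only: invented scenarios with made-up probabilities and severities.
scenarios = [
    # (name, probability, consequence on an arbitrary 0-10 severity scale)
    ("routine operation, minor hiccup", 0.70, 2),
    ("key supplier late",               0.25, 4),
    ("sensor failure at high altitude", 0.04, 9),
    ("total loss of hydraulics",        0.01, 10),
]

# A rehearsal list built from likelihood alone drops the rare, catastrophic cases.
most_likely = [name for name, p, sev in sorted(scenarios, key=lambda s: s[1], reverse=True)[:2]]

# Adding a consequence threshold keeps improbable-but-consequential cases in the playbook.
must_also_rehearse = [name for name, p, sev in scenarios if sev >= 8]

print(most_likely)          # ['routine operation, minor hiccup', 'key supplier late']
print(must_also_rehearse)   # ['sensor failure at high altitude', 'total loss of hydraulics']
```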

Of course, if we first 'red team' and stress test our plans, we're more likely to envision a broader range of scenarios for which we need to plan and prepare. Conversely, if organizations don't subject their plans to aggressive 'red teaming,' they create the risk of drawing too narrow a boundary around the types of situations for which they do prepare. They leave themselves without well-developed and well-rehearsed routines for addressing conditions they haven't anticipated or which they've dismissed as unlikely.19

Such a failure to prepare or rehearse for a broad range of situations is described in an account of the 2009 crash of Air France flight 447, lost en route from Rio to Paris.20 According to researchers, "Transient icing of the speed sensors on the Airbus A330 caused inconsistent airspeed readings, which in turn led the flight computer to disconnect the autopilot and withdraw flight envelope protection, as it was programmed to do when faced with unreliable data. The startled pilots now had to fly the plane manually."

The autopilot's switching off meant that one pilot had to fly manually at an altitude normally handled by the computer, a second pilot had to wrestle with what the data being generated actually meant, and the third pilot was also confused, trying to understand their situation. Not having rehearsed how to act in such a situation, they were stuck with collective sense-making that was too slow for the speed with which events evolved. All on board were lost. And as one final insult, pilot error became the excuse.

After the fact, according to the researchers, simulations showed that the situation would have been recoverable had the pilots rehearsed routines into which they could tap. However, these possibilities apparently weren't considered ("The possibility that an aircraft could be in a stall without the crew realizing it was also apparently beyond what the aircraft system designers imagined."), so ways to deal with them weren't drafted or drilled.

The assertion that simulations would have better prepared the pilots is supported by another aviation example. On July 19, 1989, United Airlines flight 232 had a mechanical failure in its tail-mounted engine that cost the DC-10 its hydraulic controls. Other DC-10s that had lost hydraulic controls had crashed horribly. What was different for UA 232 was that onboard was flight instructor Dennis Fitch, who had rehearsed how to fly a plane only by controlling the throttles. He was able to guide the crew to a controlled crash landing. In the end, more than one hundred of the nearly 300 on board died, many from smoke inhalation after the crash. Had Fitch not been able to tap into what had already been thought through, the consequences would have been much worse.

The authors of the Air France paper on which my summary above is based do a terrific job of constructing the narrative of what occurred. But they arrive at a poor solution. From their perspective, the 'real issue' is overdependence on automated systems that creates complacency and leaves us unprepared should they fail. The corrective action is to use automation less. But where does that logic end? No automatic headlights on cars, no anti-lock braking systems? Imagine the yield losses in food production, manufacturing, and the like, or the medical care that would otherwise be impossible. The issue is not to 'turn the machines off.' It is to design them so that even if they are not perfectly reliable, they are still safe (a distinction made by Nancy Leveson in Engineering a Safer World) and to prepare for what we do when they fail on us.
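
To illustrate that distinction with a deliberately simplified sketch (this is not the A330's actual logic, and none of the names or values below come from any real avionics system), automation can be designed to degrade to a conservative, rehearsed fallback and announce the degradation, rather than simply handing a confusing, unprotected aircraft back to the crew when its inputs become unreliable:

```python
# Hypothetical sketch: "reliable" vs. "safe" automation behavior on bad sensor data.

def airspeed_readings_consistent(readings: list[float], tolerance: float = 10.0) -> bool:
    """Treat the sensor set as trustworthy only if all readings roughly agree."""
    return max(readings) - min(readings) <= tolerance

def control_step(readings: list[float], pitch_cmd: float) -> tuple[str, float]:
    """Return (mode, commanded pitch). On sensor disagreement, don't just disconnect:
    fall back to a conservative, pre-rehearsed setting and flag the degradation."""
    if airspeed_readings_consistent(readings):
        return "NORMAL", pitch_cmd
    SAFE_PITCH_DEG = 5.0   # placeholder for a rehearsed "pitch and power" fallback
    return "DEGRADED: FLY PITCH AND POWER", SAFE_PITCH_DEG

mode, pitch = control_step([262.0, 90.0, 258.0], pitch_cmd=2.5)
print(mode, pitch)   # -> DEGRADED: FLY PITCH AND POWER 5.0
```

The point of the sketch is the design stance, not the numbers: the system stays safe when it is no longer reliable, and the crew's part of the fallback is something that has been drafted and drilled in advance.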

The US Army, in a review done by its Center for Army Lessons Learned of Operation Eagle Strike, the effort to expel ISIS from Mosul, found issues of ‘qualifying’ on too few situations. According to the Wall Street Journal, “The Army’s standard training for urban warfare . . . didn’t adequately replicate the difficulty in maneuvering through Mosul’s narrow streets against a dug-in and well armed enemy. ‘Urban training scenarios are too limited and sterile to replicate conditions such as those experienced in Mosul,’ it states.” For example, “The intense fighting inside Mosul also posed a challenge for medical care. It was difficult for Medevac helicopters to land safely in the city and the rubbled streets sometimes made it hard to transport the wounded by vehicle. As a result, surgical hospitals may need to be closer to the front lines.”21

 
Tripwire 3: Failure in Execution

Ideally, we learn by subjecting our plans to aggressive stress tests, and we learn by practicing our skills and improving our plans across a range of prepared and rehearsed scenarios. We should also learn in actual operation, as the reality of testing our plans in use reveals even more flaws in our thinking and gaps in our ability to do. Too often, this chance to learn by recognizing abnormalities during execution is missed. Being obtuse to what the system is trying to tell us about its vulnerabilities is a third tripwire that forces failure.
 
Instead, individuals and teams execute a procedure or carry out a routine. When things don't go as expected, it would be great if they called out the aberration, recognizing that what is occurring is abnormal and merits deeper consideration. More typically, people cope or work around. In the moment, the "job gets done," but underlying vulnerabilities are not rectified. Even worse, these deviances get normalized, meaning there is almost no chance of future aberrations triggering corrective action. After that, the problem is sure to recur, sometimes at much larger scale. NASA lost the space shuttle Challenger by pushing ahead with its flight schedule despite compelling evidence that O-rings in the solid fuel rockets were prone to cracking in cold weather. The same "normalized deviance"22 meant that even though foam insulating the external fuel tank was certain to fall off, collide with, and damage the thermal protection tiles, flights would continue without correction, leading to the loss of Columbia.
 
Myriad misadventures in healthcare delivery—wrong-side surgery, mis-medication, avoidable infection, slips and falls, missed consultations and referrals, etc.—are foreshadowed by errors, mistakes, and close calls that threaten but don't actually result in patient harm. However, those near misses don't trigger remediation, mitigation, or a change in approach. Therefore, vulnerabilities and threats remain resident in the system, conspiring to cause harm to someone, some time.
 
For instance, a nurse injected several doses of insulin into an IV rather than an anticoagulant. Nothing was unusual about the situation—it was the end of the night shift, so the room light was dim and the nurse was tired, and the insulin and heparin vials were similar in feel and stored in close proximity on the meds cart—all conditions that existed persistently. This time, however, they aligned in just the 'right' way, so that rather than catching himself before mis-administering the medication, the nurse gave a crippling dose.23
 
Leadership Implications

When things go wrong, multiple assertions are made about their causes and what should be done to prevent recurrence. 
 
Individuals: Individuals get blamed for being deficient, particularly those in charge in the moment—commanders, managers, supervisors, and their immediate deputies. Were this the cause, it would merit tighter evaluations so those ill prepared for their roles would be filtered out before being placed in positions of technical or command responsibility.
 
Training: Blame is also put on "failures" to develop professional skills and norms well enough. That would suggest changing the 'school house' curricula and the on-the-job training and mentoring by which people become qualified in roles. However, it does raise the question: if systemically inadequate professional skills are the cause of calamities, why these teams at these times and not other teams under different conditions?
 
Self-correcting, self-improving dynamic: Focusing on the learning dynamic, as this note has done, is a complementary explanation. Perhaps teams fail not because they were all that much worse than other teams, nor because they were having a particularly unusual "off day." Rather, they found themselves ill prepared for their situations because not enough was learned, individually and collectively, at three critical junctures leading up to the crisis: during planning, during preparation and rehearsal, and during execution of particular 'evolutions.'
 
To the extent that this learning-based alternative explanation is true, it leads to different corrective measures than the first two. It requires making the calling out of objections and challenges24 the norm, not the exception; a core skill developed by everyone, always, for everything, and not just episodic (respectful) disobedience. Questioning has to be a core value as dear as any other.
 
 
Steven J. Spear is Principal, HVE LLC; Senior Lecturer, MIT; Senior Fellow, Institute for Healthcare Improvement; and author of The High-Velocity Edge.
2 For land-based analogies, see A Different Kind of War: The United States Army in Operation Enduring Freedom by Donald Wright et al. and Team of Teams by Stanley McChrystal et al.
3 The term "complex adaptive systems" is often used to describe situations such as these. You have "systems"—biological, technological, organizational, or organizations tightly intertwined with technology—that have many components, the relationships among which are constantly changing within the system. Then, those systems are in 'relationships' with other systems, which are adjusting too, sometimes based on their natural dynamics, sometimes intentionally. As for 'natural' change, materials suffer wear and tear, technical systems lose relevance as technology as a whole and user needs change, and weather is never stable. Other times, systems adapt to each other. A dynamic 'adaptive' relationship exists between us and our surroundings: our immune systems adapt even as pathogens evolve, and the hope is that our pace of change stays ahead of the environmental threats. Dynamic adaptation exists within human relationships—between spouses, with parents and children, within peer groups—as we each adapt to each other. Dynamic adaptation also occurs between adversaries. Williamson Murray, in Military Adaptation in War: With Fear of Change, dispels the perception that the protracted stalemate on the Western Front during WWI happened because commanders mindlessly threw troops at each other without changes in approach. Rather, he details the adaptations each side made in technology and the tactics for its use. The real reason for stalemate was that the cycle time of adaptation on one side was more or less equal to, and offset by, the adaptation cycle time on the other. So, while each side changed, neither changed consistently faster than the other to earn a significant advantage. Parity in learning speed meant parity in capabilities (even as capabilities increased), meaning parity on the battlefield.
4 Consider biological examples. Health depends on countless adjustments being made within cells, by cells, by tissues, by organs, and by sub-systems for the organism to thrive. That nesting of adaptation is critical to avoid overwhelming a person with decisions most of us never consider. For example, the 'shutting off' of adaptive mechanisms leads to various diseases—diabetes, hypertension, congestive heart failure, renal disease, and the like—which require great effort to manage: pin pricks, blood sugar checks, and insulin dosing, for instance, all of which increase the fragility and reduce the agility of the afflicted individual. Similarly, the failure or elimination of control systems would mean cars, planes, and other devices far less capable than those we use.
5 This argument assumes that learning before performing is necessary. Improvising is difficult most times, more so when situations are changing faster than you can adapt.
6 See "An Open Letter to the U.S. Navy from Red," CAPT Dale C. Rielage, USN, Proceedings Magazine, June 2017, Vol. 143/6/1,372.
7 See "Why Belichick Really Is a Mad Scientist," WSJ, Jonathan Clegg and Kevin Clark, Jan. 15, 2014. That article describes how New England populates its practice squads with doppelgängers of upcoming opponents, so the team can practice against a playing-field virtualization of its next game. "Bill Belichick: The NFL's Scary Alex Trebek," WSJ, Kevin Clark and Dan Barbarisi, Jan. 14, 2015, describes the endless pop-quizzing that happens to find where there are gaps in the team's understanding.
8 Called “deconflicting” in certain circumstances.
9 Consider the cognitive bias of the "IKEA Effect": once someone has used simple tools to build a piece of furniture, they'll part with it far more reluctantly than had it arrived fully assembled at exactly the same price.
10 Also see “General Motors: How to avoid more failed parts,” USA Today, July 27, 2014.
11 The Rise of Germany, James Holland, 2015.
12 Fast Tanks and Heavy Bombers, David Johnson, 1998. Also, Holland.
13 David Johnson writes that things were so desperate in advancing thinking about how to use tanks effectively, that George Patton left the nascent tank corps to join the cavalry with the motivation that at least he’d get to play lots of polo.
15 Maginot Line map: https://militaryhistorynow.com/2017/05/07/the-great-wall-of-france-11-remarkablefacts-about-the-maginot-line/
16 The Roots of Blitzkrieg, James Corum.
17 Blitzkrieg map: https://worldwar-ii.weebly.com/hitlers-lightning-war.html
18 For elaboration on this, please see Thinking, Fast and Slow by Daniel Kahneman or The Undoing Project (about Kahneman and his collaborator Amos Tversky) by Michael Lewis.
19 For an example of an organization doing this very well, see Three Games to Glory IV, NFL Productions, 2015, which details the hundreds of times the Patriots practiced and improved their defense of the play, in the weeks before the Super Bowl in which rookie Malcolm Butler made the game-saving interception.
20 “The Tragic Crash of Flight AF447 Shows the Unlikely but Catastrophic Consequences of Automation,” N. Oliver, T. Calvard, and K. Potočnik, Harvard Business Review, Sept. 2017.
21 “U.S. Army Study Finds Flaws With Military’s Pivotal Assault on Mosul: American military units fighting Islamic State in Iraq were hampered by difficulties in sharing battlefield imagery and other problems,” Michael R. Gordon, The Wall Street Journal, December 15, 2017.
22 A phrase introduced and popularized by Diane Vaughan in The Challenger Launch Decision.
23 “Ambiguity and Workarounds as Contributors to Medical Error,” Annals of Internal Medicine, Spear and Schmidhofer.
24 Called “forceful backup” in some circles.

Written by Steven J. Spear