Success depends on how fast, how often, how deeply, and how broadly you learn. How so? Your team and mine enter a situation. You succeed and I fail; you win and I lose. Why?
Because you arrived better prepared: knowing what to do and how to do it, backed by skills to act on that knowledge, and otherwise armed with capabilities to put those skills to good use.
That combination of knowledge, skills, and capabilities didn't descend from nowhere, created ex nihilo. It was developed, discovered, invented. That yours was better means that your team outlearned mine; your learning was better, faster, broader, deeper, and more sustained. So you were better prepared and did better in the situation in which we were both tested.
The point of what follows is to dig into what we have to do to learn better and faster and what gets in our way. For what it’s worth, getting better at this core skill is how we convert fear into hope. After all, what is a common fear? That we'll fail in what we're challenged to do. And what is a common hope? That we'll succeed in just such situations.
Those situations may include: responsibility for high-risk, high-hazard systems where perfection is necessary and failure is hugely costly; running "operations" where we have to regularly delight stakeholders while avoiding disappointment and letdown; or aspiring to "giant leap" accomplishments where we can’t fall short and still maintain our social license to do our work.
Chief of Naval Operations ADM John Richardson started talks with maps of worldwide internet activity (top figure) and shipping lanes (bottom) to highlight several points. First is that information, ideas, and finances can move from anywhere to anywhere nearly instantaneously. Second is that the people and materials to put those ideas into effect also move from anywhere to anywhere fast. While the Navy has to protect these networks, adversaries can use these same networks to their own advantage.
There's more. The standard of success is uneven. A layperson might think "asymmetrical warfare" means a difference in technology and tactics — an armed and armored warship matched against an explosives-packed speedboat. But the asymmetry is also in the standard of success. The Navy has to protect these networks everywhere, all the time, against everything. Adversaries just have to disrupt them somewhere, sometime, somehow, in some way to score.2
So, you have situations in which sailors and civilians are responsible for the effective and safe operation of systems — ships, logistics, training and equipping — that are complex and dynamic. Crews have to adapt to changes in those systems while operating in environments, sometimes hostile, that are themselves complex and dynamic. Thus, no amount of planning can ever be perfectly aligned with what is actually going to occur, and there has to be relentlessness in figuring out what is happening and what to do about it.3
Given the myriad problems that arise, high-speed problem solving, improvement, and innovation have to be a dynamic of everybody, everywhere, about everything, all of the time. If problem-solving capacity is not broadly distributed, then the few who are the designated problem-solvers will be overwhelmed by all that has to be resolved.4
Success for individuals and groups depends on knowing what to do and being able to do it well in all the situations you'll face. Being more successful more often means having more and better knowhow and capabilities that span more situations. Being successful in contests with rivals and adversaries means having more and better knowhow and capabilities than they possess and can deploy. Failure is the converse of success, the result of not knowing enough or lacking the capabilities to act on that knowhow. In adversarial contests, not knowing and not being able to act shows up as accumulating losses. From where does a consistently self-replenishing reservoir of knowledge and capability originate? It comes from learning, since none of what we know and what we can do is innate.
Time is a critical dimension. Our surroundings and counterparties are changing quickly, so we have to adjust faster, better, and more broadly to be resilient and agile. Some static level of knowledge and capability is not enough; we have to build skill and knowledge quickly and relentlessly, because what was once valuable rapidly decreases in utility.
Conversely, if success depends on replenishing capability, which depends on great learning dynamics, then it follows that failure comes from learning that isn't occurring powerfully enough.5 As we’ll see, there are distinct "tripwires," booby traps that impede learning and adaptation and so presage failure.
Learning depends on recognizing that there are gaps between what we know and can do and what we have to know and need to do. Gaps should trigger reflection, investigation, and action to build better, more relevant knowhow and capabilities, which can be incorporated into updated approaches. This dynamic has to be non-stop given that the environment is constantly changing — whether intentionally (like adversaries) or not (like the weather). This is captured in the adjacent diagram.
From the earliest conception of an idea or anticipation of a situation until the moment when expectation meets reality, there are three points at which knowledge and skills can be built:
Planning: when ideas are being formulated.
Practicing: when skills are being built.
Executing: when skills and knowledge are put to the test.
There are social-psychological impediments to learning in each of these phases. These impediments are the "tripwires" that sabotage our best intentions.
Tripwire 1: In Planning—Not enlisting and believing an aggressive ‘Red Team’ adversary
A first chance to learn is at the earliest stages of planning. Ideas exist only on paper or "in silico," commitments on their behalf are relatively few, and changes are far less disruptive and costly than they would be later. "Red teaming," stress testing (physical and conceptual), "war gaming," trialing, and other synonymous approaches exist to push hard against designs and the thinking underpinning them. The point is to identify oversights and flaws before important (and often irreversible) commitments of time and resources have been made.6 7
Of course, we can cause failure in the planning phase by not declaring in sufficient detail what is expected to be done, when, by whom, and how. Not budgeting time for clarity in planning leaves too much ambiguous. This forces improvisation8 during execution, which becomes a drain on time, focus, and coordination just when they are most needed.
That said, even the most scrupulous attention to a design is not sufficient. No matter how detailed it is, there are inevitably flaws in it. These can be exposed through aggressive challenge. The social impediments to mounting that challenge are a tripwire that creates failure well in advance of action.
First, let's look at this on an individual scale. A friend who works for an investment firm bemoaned that his junior analysts will develop an investment strategy and then "pitch" it to their colleagues, emphasizing all that is good about it and evading or otherwise diminishing concerns and criticisms. They could take a different approach: explaining the proposal as the product of their best thinking while also soliciting challenge to find flaws in their reasoning. The former approach ensures that flaws go unrevealed and uncorrected. The latter makes improvement possible while there is still time to fix flaws with little disruption. (In contrast, imagine changing direction once funds have already been committed to the market.)
Why don't they do that? In truth, what matters is how the investment plan performs once it's put to use and operating in the market. However, attention has been focused on, and emotion committed to, the plan itself. Like anything else to which we've made a commitment, we assign it far greater value than perhaps we should.9
This emotional investment in plans, rather than in the consequences of those plans in use, occurs on a large scale too. Parshall and Tully ask, in Shattered Sword, how the better-equipped, better-manned, and more experienced Japanese Navy could lose at the Battle of Midway in June 1942.
Their answer is not that the battle was lost early in the day, or in the few days or weeks beforehand. Rather, the Japanese Admiralty ensured defeat by 1929!
Why? They had adopted certain assumptions, upon which they built all their planning and preparation, but whose validity they never aggressively challenged.
In particular, the Imperial Japanese Navy ("IJN") drew lessons from its victory over Russia in the early 1900s: (a) Navies would face each other en masse, (b) one would deal the other a devastating blow, and (c) the devastating blow would destroy the loser's will to fight. Those underlying assumptions motivated the objective of delivering a massive, fast, first strike. In turn, they informed doctrine, strategy, tactics, training, and equipping.
What the IJN didn't consider was that the US wouldn’t fight by the IJN's plan. That meant that, despite the demonstrated skill of its pilots, the capability of its crews, and the quality of its technology, the IJN was ill prepared for the battle it actually had to fight.
Parshall and Tully's provocative conclusion is that the IJN would have known its plan was built on flawed assumptions had it simply believed the results of tabletop war games conducted a month before the actual battle. When junior officers fought as the American fleet in ways contrary to what the IJN's plan assumed, the US proxies won and the Admiralty lost. But those tabletop refutations were rejected. The Admiralty dismissed the results as a sign that the junior officers on the "red team" didn't understand the plan, rather than as a sign that the Admiralty didn't understand how an adversary might react to the IJN battle plan. This left the IJN unprepared when the Americans fought as the red teamers had predicted. Emotional investment in the plan, coupled with hierarchical social norms, made the leadership closed to the possibility that it was wrong.
Of course, the IJN's leadership was not unique. It's not uncommon to invest time, energy, emotional stake, and status in developing a plan. That creates the risk of losing sight of the fact that what's important is not the plan itself but how it'll work when put to use. So, rather than improve it, we defend it.10
A similar question of explaining a paradoxical outcome applies to Germany’s battlefield successes at the start of World War II. In 1939, Germany was out-equipped and out-manned by the British and French.11 Yet, once the German Army turned its attention from Poland to the West, it drove the Allies to Dunkirk at blazing speed.
Both the Allies and the Axis built their militaries in the 1920s and 30s based on the same events of WWI — trench-based infantry warfare. But the winners and losers drew different lessons. The winners assumed that future wars would also be trench-based attrition battles of infantry. So, they directed technological innovation accordingly, such as tanks that wouldn’t outpace troops with packs and rifles and wouldn’t be too heavy to cross ground that men on foot could cross.12 13 The Maginot Line was the ubertrench you would build with plenty of lead time.
(The Maginot Line: details of design;14 location15)
Germany interpreted the same events through its losing experience, concluding that trench warfare would be disastrous again. So, during the 1920s and 30s, the Germans worked out the tactics and training leading to "combined arms," even though they couldn't yet re-equip. They used wooden mockups for tanks and balloons for airplanes, so when Germany did rearm, its equipment slotted into already-practiced uses. They also rejected the French assumption that the more lightly defended Belgian border would not be contested.
(Early (bicycle-powered) version of the Reichswehr mock tank, early 1920s. Air officers send up balloons to represent aircraft during maneuvers of the Sixth Infantry and Third Cavalry divisions, circa 1924-1927.)16
(Combined arms; Blitzkrieg routes17)
Tripwire 2: Failure to Prepare Across a Sufficient Range of Scenarios
A second tripwire that converts success into failure by depriving us of learning opportunities is the gradual narrowing of the situations for which we prepare, focusing more and more on what is “most likely.” So, when low-likelihood scenarios actually occur, we’re left to figure out what to do while we’re actually doing it. This is a losing proposition.
Figuring out what to do “on the fly” is most often a poor approach. Our brains are wondrous at creative thinking, problem solving, innovation, and invention. The problem is that they’re wicked slow at all that stuff, and more often than not, the situations they have to handle happen wicked fast. That’s why it’s so important to prepare and rehearse across a broad range of possible scenarios: not just those most likely to occur, but also those less probable but nevertheless consequential should they happen.18
Of course, if we first ‘red team’ and stress test our plans, we’re more likely to envision a broader range of scenarios for which we need to plan and prepare. Conversely, if organizations don’t subject their plans to aggressive ‘red teaming,’ they risk drawing too narrow a boundary around the types of situations for which they do prepare. They leave themselves without well-developed and well-rehearsed routines for addressing conditions they haven’t anticipated or that they’ve dismissed as unlikely.19
Such a failure to prepare and rehearse for a broad range of situations has been described in accounts of Air France flight 447, which was lost over the Atlantic in June 2009 after its airspeed sensors iced over and the autopilot disengaged.
The autopilot's switching off meant that one pilot had to fly manually at an altitude normally handled by the computer, another had to wrestle with interpreting what the data being generated actually meant, and the third was also confused, trying to understand their situation. Not having rehearsed how to act in such a situation, they were stuck with collective sensemaking that was too slow for the speed with which events evolved. All on board were lost. And, as one final insult, "pilot error" became the excuse.
After the fact, according to the researchers, simulations showed that the situation had been recoverable had the pilots rehearsed routines they could tap into. However, these possibilities apparently weren’t considered ("The possibility that an aircraft could be in a stall without the crew realizing it was also apparently beyond what the aircraft system designers imagined."), so ways to deal with them weren’t drafted or drilled.
The assertion that simulations would have better prepared the pilots is supported by another aviation example. On July 19, 1989, United Airlines flight 232 had a mechanical failure in its tail-mounted engine that cost the DC-10 its hydraulic controls. Other DC-10s that had lost hydraulic controls had crashed horribly. What was different for UA 232 was that on board was flight instructor Dennis Fitch, who had rehearsed how to fly a plane by controlling only the throttles. He was able to guide the crew to a controlled crash landing. In the end, more than one hundred of the nearly 300 on board died, mostly from smoke inhalation after the crash. Had Fitch not been able to tap into what had already been thought through, the consequences would have been much worse.
The authors of the Air France paper on which my summary above is based do a terrific job of constructing the narrative of what occurred. But they arrive at a poor solution. From their perspective, the ‘real issue’ is overdependence on automated systems that creates complacency and leaves us unprepared should they fail. The corrective action is to use automation less. But where does that logic end? No automatic headlights on cars, no anti-lock braking systems? Imagine the yield losses in food production, manufacturing, and the like, or the medical care that would otherwise be impossible. The issue is not to ‘turn the machines off.’ It is to design them so that even if they are not perfectly reliable, they are still safe (a distinction made by Nancy Leveson in Engineering a Safer World), and to prepare for what we do when they fail on us.
The US Army, in a review done by its Center for Army Lessons Learned of Operation Eagle Strike, the effort to expel ISIS from Mosul, found issues of ‘qualifying’ on too few situations. According to the Wall Street Journal, “The Army’s standard training for urban warfare . . . didn’t adequately replicate the difficulty in maneuvering through Mosul’s narrow streets against a dug-in and well armed enemy. ‘Urban training scenarios are too limited and sterile to replicate conditions such as those experienced in Mosul,’ it states.” For example, “The intense fighting inside Mosul also posed a challenge for medical care. It was difficult for Medevac helicopters to land safely in the city and the rubbled streets sometimes made it hard to transport the wounded by vehicle. As a result, surgical hospitals may need to be closer to the front lines.”21
For instance, a nurse injected several doses of insulin into an IV rather than an anticoagulant. Nothing was unusual about the situation—it was the end of the night shift, so the room light was dim and the nurse was tired; the insulin and heparin vials were similar in feel and were stored in close proximity on the meds cart—all conditions that existed persistently. This time, however, they aligned in just the ‘right’ way so that, rather than catching himself before mis-administering the med, the nurse gave a crippling dose.23