#3 - The Normalization of Deviance & Forecasting Lessons from WeWork
Hello and welcome to Weekend Reading Volume 3. A few things covered this week: the normalization of deviance, why Kiev is a poor market for online restaurant reservations, and forecasting lessons from WeWork’s fall from grace.
The Challenger Disaster & Normalization of Deviance (Twitter)
There’s never just one cockroach. Things can compound in either direction, as this synopsis of the Challenger disaster illustrates. Corners are cut, limits are pushed, and everything is alright, until it isn’t. This story should sound familiar to anyone who’s watched Chernobyl: “The Challenger disaster wasn’t a single mistake or flaw or random chance that resulted in the death of seven people and the loss of a $2B spaceship. It was a whole series of mistakes and flaws and coincidences over a long time and at each step they figured they could get away with it because they figured the risks were minimal and they had plenty of engineering overhead. And they were right, most of the time. Then one day they weren’t.

Normalization of deviance is the idea that things are designed and limits are calculated. We can go this fast, this hard, this hot, this cold, this heavy. And the thing is, most of the time going a little faster, a little hotter, that’s fine. Nothing goes wrong. Engineers always design in a safety margin, as we’ve learned the hard way that if you don’t, s*** goes wrong very fast. So going 110% as fast as the spec says? Probably OK.

But the problem is what if you’ve been doing that for a while? You’ve been going 110% all the time. It’s worked out just fine. You’re doing great, no problems. You start to think of 110% as the new normal, and you think of it as just 100%…So when the spec says 100% and you’ve been doing 110% for the last 20 missions and it seems to be working just fine, and then one day you’re running into 5 other problems and need to push something, well, maybe you do 120% today? After all, it’s basically just 10% of normal…Cause in your head you’re thinking of the 110% as the standard, the limit. You’ve normalized going outside the stated rules, and nothing went wrong. So why not go a little more? After all, 110% was just fine…

But we always want to optimize. We want to do things cheaper, quicker, more at once. But the problem is that there’s no feedback loop on this. There’s often no obvious evidence that going outside the “rules” is wrong…And the feedback you do eventually, finally get might just be completely disastrous, often literally…You don’t get any “HEY STOP WRITING DOWN YOUR PASSWORDS” feedback until the whole company gets hacked and your division is laid off…

If your rocket is only going to take off in temperatures from 40 degrees F to 90 degrees F, you pick certain materials and test in those temperatures. If you had to launch at colder or hotter, you might need different materials and more expensive tests. So you decide on limits. But you’ve launched at 40F and it was fine, and then one day you have to launch at 35F and it was fine, and then on a particularly bad day you have to launch at 30F and you’re fine. So you normalize this deviance. You can launch down to 30F, if you really have to. But then one day you’ve missed a bunch of launch windows and it’s 28F and the overnight temperatures were 18F but you did a quick check of the designs and specs and you probably have enough safety margin to launch, so you say GO. And you discover 73 seconds into the flight that the O-rings that seemed to always self-seal? They don’t self-seal if they’re too hard and brittle from the cold. The gases keep leaking…

Normalization of Deviance as a concept applies to all sorts of engineering issues…My point with this is not to say “HEY PEOPLE STOP BENDING THE RULES”, exactly.
It’s that you have to consider normalization of deviance when designing systems: How will these rules interact with how people naturally bend the rules?…Your system not breaking doesn’t mean it works and is a solid design. It might just mean you’ve gotten lucky, a lot, in a row…Most of the time when there’s a serious problem, it’s not just one event. Disasters aren’t caused by one small event: it’s an avalanche of problems that we survived up until now, until they all happen at once.”
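The thread’s mechanism lends itself to a toy simulation: each success past the spec resets what “100%” feels like, and the only feedback the system ever produces is the disaster itself. Here is a minimal Python sketch, where the failure curve and the 1%-per-mission creep are invented purely for illustration:

```python
import random

random.seed(42)

def failure_prob(load: float) -> float:
    """Hypothetical failure probability: effectively zero at the rated
    spec (load = 1.0), rising steeply as the spec is exceeded."""
    return 0.5 * max(0.0, load - 1.0) ** 2

load = 1.0   # start at exactly 100% of the rated spec
mission = 0

while True:
    mission += 1
    if random.random() < failure_prob(load):
        print(f"Disaster on mission {mission}, running at {load:.0%} of spec")
        break
    # The trap: a clean mission produces no corrective feedback.
    # Each success makes the current load feel like "the new 100%",
    # so the operating envelope creeps a little further past the spec.
    load += 0.01
```

Run it a few times with different seeds: the long string of clean missions before the break is exactly the “lucky, a lot, in a row” the thread warns about.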
Related reading: Boeing Underestimated Cockpit Chaos on 737 Max, N.T.S.B. Says (NYT): “The agency said Boeing had underestimated the effect that a malfunction of new automated software in the aircraft could have on the environment in the cockpit…The safety board calls for Boeing and federal regulators to revamp the way they assess the risk of key systems on airplanes, by giving more weight to how a cacophony of alerts could affect pilots’ responses to emergencies.”
Product Strategy: How to Find Product/Market Fit (Medium)
To catch a fish, you need to go fishing where the fish are. Warren Buffett once said, “When a management with a reputation for brilliance tackles a business with a reputation for bad economics, it is the reputation of the business that remains intact.” This post looks at product/market fit through that lens: “When a great team meets a lousy market, market wins. When a lousy team meets a great market, market wins…A product here is any piece of software that solves a customer problem…A product strategy is what you are going to do with your product in the near term…Traditionally, a product strategy answers three main questions: (1) What do we sell? (product), (2) Who do we sell it to? (market), and (3) How do we know if we sell it successfully? (goals that are translated into product metrics)…Product/market fit means being in a good market with a product that can satisfy that market…If you trace back the stories of modern and successful digital products, you’d be surprised to find two common traits. The initial problems that this product solved were: (1) very narrow to a specific group of people and (2) very problematic…Many successful products solved the real issues of their creators.”
How We Should Bust an Investing Myth (WSJ $)
If you squint a little bit, WeWork’s recently abandoned IPO provides a good case study in the value of triangulation. It’s much easier to be wrong when a single actor sets a number (SoftBank’s $47B valuation) than when the value is determined by many independent actors (public-market investors’ valuation of ~$15B). There are many implications for forecasting and analysis: “Not long ago, We’s venture-capital backers valued it at $47 billion. The proposed IPO faltered when public investors signaled they wouldn’t value the company much above $15 billion — implying the supposedly sophisticated private market had been pricing We at roughly three times what it is worth…Private markets, however, are shallow and narrow, despite their enormous size…Markets work best when they are both deep and wide, integrating sharp differences of opinion from many people into a single price at which investments can trade…“Normal markets consist of pessimists, neutral people and optimists, who can take either side of a trade so the price can settle to some kind of equilibrium,” says Michael Mauboussin, director of research at BlueMountain Capital Management LLC, an investment firm in New York. “But that’s not the case in a private market, where it’s difficult to sell and pessimists can’t easily express a view.” So pricing is primarily in the hands of optimists.”
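Mauboussin’s point is statistical at heart. A minimal sketch, assuming (purely for illustration) that each investor forms an independent, unbiased but noisy estimate of the company’s value: a deep market that averages optimists and pessimists lands near the truth, while a shallow market priced by whoever is most optimistic sits well above it.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 15.0  # hypothetical "true" value, in $B

def one_estimate() -> float:
    """One investor's noisy but unbiased estimate of the company's value."""
    return random.gauss(TRUE_VALUE, 10.0)

# A deep, wide market: thousands of independent views, pessimists
# included, averaged into a single clearing price.
public_price = statistics.mean(one_estimate() for _ in range(10_000))

# A shallow private market: the price is set by whoever is most
# optimistic, because pessimists can't take the other side of the trade.
private_price = max(one_estimate() for _ in range(20))

print(f"deep-market price:  ${public_price:.1f}B")   # lands near 15
print(f"optimist-set price: ${private_price:.1f}B")  # well above 15
```

None of these numbers mean anything on their own; the structural point is that max() is biased upward while mean() is not, which is “pricing is primarily in the hands of optimists” in one line.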