“The payoff of a human venture is, in general, inversely proportional to what it is expected to be.”
Nassim Taleb is one of the most interesting and unique modern philosophers. I learned A LOT from this book. If I could lodge a complaint, it is that he is not very good at making concepts easily understandable to the layperson. Taleb is clearly very smart, a polymath, worldly, and more. He likes to make sure you know this by using overly complex language and cosmopolitan anecdotes from his own life.
That said, I can tolerate mild arrogance in exchange for uniquely useful ideas. His heterodox thinking led him to foresee in some respects both the financial crisis of 2008 and the pandemic of 2020.
I spent 4+ hours consolidating all my highlights and notes into the semi-organized groupings below. Most bullets are direct quotes or paraphrases of his main points.
Personal Takeaways:
Probability, Impact, Extremes, and Statistics:
- Black Swans are events that are (1) unexpected, (2) highly impactful, and (3) retrospectively explainable.
- Black Swans are subjective. From the standpoint of the turkey, the nonfeeding on the one thousand and first day is a Black Swan. For the butcher, it is not, since its occurrence is not unexpected. Expectations are subjective. 9/11 wasn’t unexpected to the attackers. You can eliminate Black Swans by science (if you’re able), or by keeping an open mind.
- Environments that allow for Black Swans tend to be in “Extremistan.” In Extremistan, a single instance can significantly affect the total. Wealth, as an example, is in Extremistan—where the top 1% of the population can own half the world’s wealth. In Mediocristan, a single instance cannot meaningfully affect the total. Height is an example of this type of domain.
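To make the Mediocristan/Extremistan contrast concrete, here is a small sketch I wrote (not from the book): it samples heights from a Gaussian and wealth from a Pareto distribution (the tail exponent 1.1 and the scale are my assumptions, purely for illustration) and checks what share of the total the single largest observation accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: heights in cm, roughly Gaussian.
heights = rng.normal(170, 10, n)

# Extremistan: wealth, modeled here (illustratively) as a classical Pareto
# with tail exponent 1.1; numpy's pareto() is Lomax, hence the +1 and scale.
wealth = (rng.pareto(1.1, n) + 1) * 10_000

for name, sample in [("height", heights), ("wealth", wealth)]:
    share = sample.max() / sample.sum()
    print(f"{name}: largest single observation is {share:.4%} of the total")

# Typical result: the tallest person contributes a negligible fraction of total
# height, while the richest person can hold a large share of total wealth.
```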
- There is a circular problem in statistics that makes the Mediocristan/Extremistan categorization difficult to pin down: we need data to tell us what probability distribution to assume, and we need a probability distribution to tell us how much data we need. Black Swans can upend models when people mistake a field for Gaussian because low-probability events are missing from the data that feed the models. First-principles reasoning can sometimes give better insight into fields in Extremistan than empiricism can.
- Look for scalable sectors: there is more money in designing a product (Nike, Dell, Boeing, etc.) than in actually making it (factories). Scalability and positive-feedback loops lead to Extremistan. Non-scalability leads to Mediocristan.
- In the buildup to great events of history, we were blind. Despite history textbooks describing “mounting tensions” and “escalating crises,” World War I came as a surprise. Only retrospectively was it seen as unavoidable by backward-looking historians. Niall Ferguson makes this point by examining imperial bond prices, which did not reflect any anticipation of war. These examples illustrate how prices can reveal the genuine expectations of the time better than retrospective narratives do.
- After a major shock, another does not necessarily follow. After the stock market crash of 1987, half of America’s traders braced for another one every October—not taking into account that there was no precedent for the first one.
- Mandelbrotian Gray Swans are Black Swans that we can somewhat account for—earthquakes, blockbuster books, stock market crashes—but for which it is not possible to produce precise calculations.
- Taleb recommends the barbell strategy of investing. It is a method that consists of taking both a defensive attitude and an excessively aggressive one at the same time, by protecting assets from all sources of uncertainty while allocating a small portion for high-risk strategies.
- If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital–style portfolios. That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments.
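As a back-of-the-envelope check of the “floor” claim, here is a tiny scenario sketch (my numbers, not Taleb’s: a 90/10 split, a 2% T-bill yield, and made-up outcomes for the speculative sleeve):

```python
# Barbell: 90% in near-riskless instruments, 10% in highly speculative bets.
safe_fraction, speculative_fraction = 0.90, 0.10
t_bill_yield = 0.02  # assumed

scenarios = {
    "speculative bets wiped out": -1.00,   # lose the entire risky sleeve
    "nothing happens":             0.00,
    "one bet pays off 20x":       20.00,
}

for name, speculative_return in scenarios.items():
    portfolio_return = safe_fraction * t_bill_yield + speculative_fraction * speculative_return
    print(f"{name}: portfolio return {portfolio_return:+.1%}")

# Worst case is bounded near -8% (the safe "floor" plus interest), while the
# upside from a positive Black Swan on the 10% sleeve is uncapped.
```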
- By removing the ten biggest one-day moves from the U.S. stock market over the past fifty years, we see a huge difference in returns—and yet conventional finance treats these one-day jumps as mere anomalies. Ten days out of the past forty years cumulatively accounted for half of investment growth. On one hand, this makes me think I should keep a lot more money in reserve to capitalize on negative multiple-sigma events. A year before the COVID crisis I heard Mark Cuban say he had “a lot of money sidelined.” These are the instances where sidelined funds make a difference. On the other hand, sidelining funds also means missing out on positive multiple-sigma events.
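To see how a handful of days can dominate compounded returns, here is a toy simulation (simulated fat-tailed daily returns, not market data; the Student-t parameters and scaling are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Forty years of fat-tailed daily returns: Student-t (3 d.o.f.), scaled to ~1% moves.
daily_returns = np.clip(rng.standard_t(3, size=252 * 40) * 0.01, -0.95, None)

growth_all_days = np.prod(1 + daily_returns)

# Same series, but sit out the ten best single days (their return becomes 0%).
without_best_10 = daily_returns.copy()
without_best_10[np.argsort(without_best_10)[-10:]] = 0.0
growth_without = np.prod(1 + without_best_10)

print(f"growth with every day:       {growth_all_days:.2f}x")
print(f"growth missing the 10 best:  {growth_without:.2f}x")
# A few extreme days typically account for a disproportionate share of the total.
```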

- Seize any opportunity or anything that looks like an opportunity. They are rare, much rarer than you think. Remember that positive Black Swans have a necessary first step: you need to be exposed to them. Many people do not realize that they are getting a lucky break in life when they get it. If a big publisher (or a big art dealer or a movie executive or a hotshot banker or a big thinker) suggests an appointment, cancel anything you have planned: you may never see such a window open up again.
- All these recommendations have one point in common: asymmetry. Put yourself in situations where favorable consequences are much larger than unfavorable ones.
- Power laws and Pareto distributions have fractal (scale-invariant) effects. Although we have never known a lay book to sell 200 million copies, we can consider that the possibility is not zero. It’s small, but it’s not zero. For every three Da Vinci Code–style bestsellers, there might be one superbestseller, and though one has not happened so far, we cannot rule it out… Apply the same logic to wealth. Say the richest person on earth is worth $50 billion. There is a nonnegligible probability that next year someone with $100 billion or more will pop out of nowhere. For every three people with more than $50 billion, there could be one with $100 billion or more… Wars are fractal in nature. A war that kills more people than the devastating Second World War is possible—not likely, but not a zero probability, although such a war has never happened in the past… A mountain is somewhat similar to a stone: it has an affinity with a stone, a family resemblance, but it is not identical. The word to describe such resemblances is self-affine, not the precise self-similar, but Mandelbrot had trouble communicating the notion of affinity, and the term self-similar spread with its connotation of precise resemblance rather than family resemblance.
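The “for every three at one level, roughly one at double that level” pattern is the scale invariance of a power law: the ratio P(X > 2x) / P(X > x) does not depend on x. A quick sketch (the tail exponent 1.6 is my assumption, picked so the ratio comes out near one-in-three):

```python
# For a Pareto tail, P(X > x) = (x_min / x) ** alpha, so the exceedance ratio
# P(X > 2x) / P(X > x) = 2 ** -alpha is the same at every scale.
alpha = 1.6  # illustrative tail exponent, not a number from the book

def survival(x, x_min=1.0):
    return (x_min / x) ** alpha

for x in [10, 100, 1000]:
    print(f"P(X > {2 * x}) / P(X > {x}) = {survival(2 * x) / survival(x):.3f}")

# Prints ~0.33 at every scale: roughly one observation above 2x for every three
# above x, whether x is a bestseller's sales or a billionaire's fortune.
```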
- The 80/20 rule can be reframed by reapplying it recursively. It is the common signature of a power law—actually, it is how it all started, when Vilfredo Pareto observed that 80 percent of the land in Italy was owned by 20 percent of the people. Some use the rule to imply that 80 percent of the work is done by 20 percent of the people, or that 80 percent worth of effort contributes to only 20 percent of results, and vice versa. As far as axioms go, this one wasn’t phrased to impress: it could easily be called the 50/01 rule, that is, 50 percent of the work comes from 1 percent of the workers. This formulation makes the world look even more unfair, yet the two formulae are the same (0.2^3 ≈ 1% and 0.8^3 ≈ 50%).
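The arithmetic behind turning 80/20 into roughly 50/01 is just the rule applied to itself three times (my own worked restatement of the numbers in the parenthesis):

```python
# Apply 80/20 recursively: the top 20% of the top 20% of the top 20% of workers
# produce 80% of 80% of 80% of the output.
share_of_workers = 0.2 ** 3   # 0.8% of the people
share_of_output = 0.8 ** 3    # 51.2% of the results

print(f"{share_of_workers:.1%} of workers account for {share_of_output:.1%} of output")
# ~0.8% of workers -> ~51.2% of output, i.e. roughly the "50/01" restatement.
```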
Ten Principles to Cope with Black Swans:
- What is fragile should break early, while it’s still small.
- No socialization of losses and privatization of gains.
- People who were driving a school bus blindfolded (and crashed it) should never be given a new bus.
- Don’t let someone making an “incentive” bonus manage a nuclear plant—or your financial risks.
- Compensate complexity with simplicity.
- Do not give children dynamite sticks, even if they come with a warning label. Complex financial products need to be banned because nobody understands them, and few are rational enough to know it. We need to protect citizens from themselves, from bankers selling them “hedging” products, and from gullible regulators who listen to economic theorists.
- Only Ponzi schemes should depend on confidence. Governments should never need to “restore confidence.”
- Do not give an addict more drugs if he has withdrawal pains. Using leverage to cure the problems of too much leverage is not homeopathy, it’s denial. The debt crisis is not a temporary problem, it’s a structural one. We need rehab.
- Citizens should not depend on financial assets as a repository of value and should not rely on fallible “expert” advice for their retirement.
- Make an omelet with the broken eggs.
Rules to Increase Robustness Against Black Swans:
- Have respect for time and nondemonstrative knowledge.
- Avoid optimization; learn to love redundancy. Redundancy (in terms of having savings and cash under the mattress) is the opposite of debt. With savings and cash on hand, you are less vulnerable to a Black Swan.
- Avoid prediction of small-probability payoffs—though not necessarily of ordinary ones.
- Beware the “atypicality” of remote events. There are suckers’ methods called “scenario analysis” and “stress testing”—usually based on the past (or on some “make sense” theory). Yet past shortfalls do not predict subsequent shortfalls, so we do not know what exactly to stress-test for.
- Beware moral hazard in bonus payments. It is optimal for a manager to collect a series of bonuses by betting on hidden, low-probability risks, then blow up and write a thank-you letter. This is called the moral hazard argument. Bankers are always rich because of this bonus mismatch.
- Avoid some risk metrics. Conventional metrics based on Mediocristan, even when adjusted for large deviations, don’t work.
- Model errors benefit those exposed to positive Black Swans. There are both positive and negative Black Swans. The movies are a positive–Black Swan business. Uncertainty occasionally pays off there. A negative–Black Swan business is one where the unexpected can hit hard and hurt severely. If you are in the military, in catastrophe insurance, or in homeland security, you face only downside. Likewise, if you are in banking and lending, surprise outcomes are likely to be negative for you. You lend, and in the best of circumstances you get your loan back—but you may lose all of your money if the borrower defaults. If the borrower enjoys great financial success, he is not likely to offer you an additional dividend.
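A sketch of that payoff asymmetry with made-up numbers (the 5% coupon and 30x outcome are my assumptions, chosen only to show the shape of the two payoff profiles):

```python
# Stylized one-period payoffs per $1 at risk.
def lender_payoff(borrower_defaults: bool) -> float:
    # Negative-Black-Swan business: the best case is a small coupon, the worst is -1.
    return -1.0 if borrower_defaults else 0.05

def venture_payoff(startup_takes_off: bool) -> float:
    # Positive-Black-Swan business: the worst case is -1, the best is open-ended.
    return 30.0 if startup_takes_off else -1.0

print("lender :", lender_payoff(False), "best case,", lender_payoff(True), "worst case")
print("venture:", venture_payoff(True), "best case,", venture_payoff(False), "worst case")
# The lender's distribution is capped above and open below; the venture investor's
# is capped below and open above. Surprises hurt the first and can delight the second.
```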
- Do not confuse absence of volatility with absence of risk.
Counterfactuals:
- Failure to know counterfactuals can severely misallocate credit. Imagine a legislator had enacted a law, taking effect on September 10th, 2001, that imposed locked bulletproof doors on every cockpit. This person would get no statue in public squares, no real mention or credit. He might even face grievances from the airline industry. He would die unknown. We are not able to predict events with certainty, and those who prevent certain events go unnoticed. (Makes one want to be more generous with credit toward precautionary measures.)
- Unread books are more valuable than read ones. A private library is not an ego-boosting appendage but a research tool.
Platonicity, the Ludic Fallacy, and Categorization:
- Platonicity is our tendency to mistake the map for the territory; to focus on pure and well-defined “forms” whether objects, like triangles, or social notions, like utopias (societies built according to some blueprint of what “makes sense”), even nationalities.
- “Platonic is top-down, formulaic, closed-minded, self-serving, and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.”
- The ludic fallacy is the misuse of simplified games to model complex reality. If I asked, “A coin has been flipped 99 times and landed on heads every time; what is the chance it lands on heads again?” most experts will say 50/50, because of their “expert” knowledge of probability. In reality, it is far more likely that the coin is rigged (say, two-headed) and will land on heads again than that a fair coin lands on heads 99 times in a row (which would only happen about once in a billion-billion-billion flips). Taleb refers to those who fall for the ludic fallacy as Dr. John, and to those who don’t as Fat Tony (from an example story he gives).
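Fat Tony’s instinct can be written down as a Bayesian update. A minimal sketch, assuming (my assumption, not Taleb’s) a one-in-a-million prior that the coin is two-headed:

```python
from fractions import Fraction

p_rigged = Fraction(1, 1_000_000)   # assumed prior that the coin is two-headed
p_fair = 1 - p_rigged

likelihood_fair = Fraction(1, 2) ** 99   # chance a fair coin shows 99 heads in a row
likelihood_rigged = Fraction(1, 1)       # a two-headed coin always shows heads

posterior_rigged = (likelihood_rigged * p_rigged) / (
    likelihood_rigged * p_rigged + likelihood_fair * p_fair
)
print(float(posterior_rigged))  # ~1.0: even a tiny prior swamps the 2**-99 likelihood
# Dr. John answers 50/50; Fat Tony bets the coin is rigged and is almost surely right.
```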
- Categories are human-defined and flexible. Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive, preventing people from considering the fuzziness of boundaries, let alone revising their categories.
- Categorizing always produces reduction in true complexity. Any reduction of the world around us can have explosive consequences since it rules out some sources of uncertainty; it drives us to a misunderstanding of the fabric of the world.
- To paraphrase Danny Kahneman, for psychological comfort some people would rather use a map of the Pyrénées while lost in the Alps than use nothing at all. They do not do so explicitly, but they actually do worse than that while dealing with the future and using risk measures. They would prefer a defective forecast to nothing. So providing a sucker with a probabilistic measure does a wonderful job of making him take more risks.
- Some models are harmful. When I was railing against models, social scientists kept repeating that they knew it and that there is a saying, “all models are wrong, but some are useful”—not understanding that the real problem is that “some are harmful.”
Epistemology:
- Failure (and epistemic failure) of elites. For some fields in extremistan, experts have no meaningful advantage in prediction over the layperson. The key difference was in epistemological certainty. “Nobody knew anything, but elite thinkers thought that they knew more than the rest because they were elite thinkers, and if you’re a member of the elite, you automatically know more than the nonelite.”
- Taleb uses the term epistemocrat to define someone who is highly introspective and able to evaluate the level of their ignorance.
- There is a problem with induction, also known as Hume’s problem. By induction, a turkey might think that its tomorrow will be similar to its today. This will be true, until the day its neck is wrung. It knows a little less about its future than it thinks, and that “little less” makes all the difference. You look at the past and derive some rules about the future. Well, the problems in projecting from the past can be even worse, because the same past data can confirm a theory and also its exact opposite! If you survive until tomorrow, it could mean that either a) you are more likely to be immortal or b) that you are closer to death.
- The round-trip fallacy is mistaking absence of evidence as equal to evidence of absence. An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. One can know what is wrong with a lot more confidence than one can know what is right. All pieces of information are not equal in importance.
- People trick themselves when their predictions fail. You tell yourself that you were playing a different game. You invoke the outlier. You use the “almost right” defense.
- Kerygmas are the opposite of dogmas. The central things the old teachers communicate to you are, to use religious terms, dogmas (rules you need to execute without necessarily understanding them) not kerygmas (rules you can understand and that have a purpose clear to you).
- The economic dogma around comparative advantage makes assumptions that should be challenged. The idea is that countries should focus, as a consultant would say, on “what they do best” (more exactly, on where they are missing the smallest number of opportunities); so one country should specialize in wine and the other in clothes, although one of them might be better at both. But do some perturbations and alternative scenarios: consider what would happen to the country specializing in wine if the price of wine fluctuated. Just a simple perturbation around this assumption (say, considering that the price of wine is random, and can experience Extremistan-style variations) makes one conclude the opposite of what one would expect from normal models of comparative advantage. Mother Nature does not like overspecialization, as it limits evolution and weakens the animals. This became especially poignant in the COVID-19 pandemic, when the U.S. had outsourced the production of masks and other essential items overseas.
- Information is costly. This review is an example of an attempt to distill ideas from a much larger book, which is cumbersome to organize. The first problem is that information is costly to obtain. The second problem is that information is also costly to store—like real estate in New York. The more orderly, less random, patterned, and narratized a series of words or symbols, the easier it is to store that series in one’s mind or jot it down in a book so your grandchildren can read it someday. Finally, information is costly to manipulate and retrieve.
Narrative:
- Narrative is a potent tool for information organization and consolidation. To view the potency of narrative, consider the following statement: “The king died and the queen died.” Compare it to “The king died, and then the queen died of grief.” Although we added information to the second statement, we effectively reduced the dimension of the total. The second sentence is, in a way, much lighter to carry and easier to remember; we now have one single piece of information in place of two. As we can remember it with less effort, we can also sell it to others, that is, market it better as a packaged idea. This, in a nutshell, is the definition and function of a narrative.
- Narrative also causes psychological distortions. To see how the narrative can lead to a mistake in the assessment of the odds, do the following experiment. Give someone a well-written detective story—say, an Agatha Christie novel with a handful of characters who can all be plausibly deemed guilty. Now question your subject about the probabilities of each character’s being the murderer. Unless she writes down the percentages to keep an exact tally of them, they should add up to well over 100 percent (even well over 200 percent for a good novel). The better the detective writer, the higher that number.
- Availability bias makes things seem more likely if they are more readily available in our minds. Which of these two statements seems more likely? (1) “Joey seemed happily married. He killed his wife.” (2) “Joey seemed happily married. He killed his wife to get her inheritance.” Clearly, the second statement seems more likely at first blush, which is a pure mistake of logic, since the first, being broader, can accommodate more causes, such as he killed his wife because he went mad, because she cheated with both the postman and the ski instructor, because he entered a state of delusion and mistook her for a financial forecaster.
- The “silent evidence effect” means we miss some of the most important consequences of policy. Katrina, the devastating hurricane that hit New Orleans in 2005, got plenty of politicizing politicians on television. These legislators, moved by the images of devastation and the pictures of angry victims made homeless, made promises of “rebuilding.” It was so noble on their part to do something humanitarian, to rise above our abject selfishness. Did they promise to do so with their own money? No. It was with public money. Consider that such funds will be taken away from somewhere else, as in the saying “You take from Peter to give to Paul.” That somewhere else will be less mediatized. It may be privately funded cancer research, or the next efforts to curb diabetes. Few seem to pay attention to the victims of cancer lying lonely in a state of untelevised depression. Not only do these cancer patients not vote (they will be dead by the next ballot), but they do not manifest themselves to our emotional system.
- Evolution is subject to silent evidence. Evolutionary fitness is something that is continuously touted and aggrandized by the crowd who takes it as gospel. The more unfamiliar someone is with the wild Black Swan generating randomness, the more he or she believes in the optimal working of evolution. Silent evidence is not present in their theories. Evolution is a series of flukes, some good, many bad. You only see the good. But, in the short term, it is not obvious which traits are really good for you, particularly if you are in the Black Swan generating environment of Extremistan. This is like looking at rich gamblers coming out of the casino and claiming that a taste for gambling is good for the species because gambling makes you rich! Risk-taking made many species head for extinction!
- The Riddle of Induction is a form of the narrative fallacy. Given a set of data, we often make it fit a model we already have in our heads rather than what it actually represents. With a linear model in your head, the fact that a number has risen for 1,000 days straight makes you more confident it will keep rising; with a nonlinear model in your head, the same data can confirm that the number should decline on day 1,001. Consider the graphs below.
[Figure: four panels showing the raw data, a linear projection that maps it well, other projections that also map it, and the actual generating function.]
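A sketch of the same idea with toy data (entirely made up, not the book’s figure): the identical 1,000 rising observations are matched perfectly both by a model that keeps rising and by one that collapses right after the observed window.

```python
import numpy as np

days = np.arange(1, 1001)
observed = days.astype(float)   # 1,000 days of steadily rising "wellbeing"

def keeps_rising(t):
    return t * 1.0

def about_to_collapse(t):
    # Identical to the data up to day 1,000, then drops to zero within five days.
    return t * np.maximum(0.0, 1 - np.maximum(t - 1000, 0) / 5)

for name, model in [("keeps rising", keeps_rising), ("about to collapse", about_to_collapse)]:
    max_error = np.abs(model(days) - observed).max()
    print(f"{name}: max error over the observed 1,000 days = {max_error:.1f}, "
          f"prediction for day 1005 = {float(model(1005)):.0f}")

# Both models agree with all the data we have; they disagree only about day 1,001
# onward, which is exactly the part the data cannot settle.
```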
- Narrative distortions of causality can hide survivorship bias. Why didn’t the bubonic plague kill more people? People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and “scientific models” of epidemics. Simply though, had the bubonic plague killed more people, the observers (us) would not be here to observe.
- The way to avoid the ills of the narrative fallacy is to favor experimentation over storytelling, experience over history, and clinical knowledge over theories.
- Those news events that fit most closely to a narrative have the highest chance of leading people astray. We can use this to our advantage in investing. I’ll conclude by saying that our misunderstanding of the Black Swan can be largely attributed to our using System 1, i.e., narratives, and the sensational—as well as the emotional—which imposes on us a wrong map of the likelihood of events.
- The reverse-engineering problem describes the asymmetry of causality. It is easier to predict how an ice cube would melt into a puddle than, looking at a puddle, to guess the shape of the ice cube that may have caused it. This “inverse problem” makes narrative disciplines and accounts (such as histories) suspicious.
- Poincaré Divergence describes why as you project into the future you may need an increasing amount of precision about the dynamics of the process that you are modeling, since your error rate grows very rapidly. The problem is that near precision is not possible since the degradation of your forecast compounds abruptly—you would eventually need to figure out the past with infinite precision.
- This multiplicative difficulty leading to the need for greater and greater precision in assumptions can be illustrated with an exercise concerning the prediction of the movements of billiard balls. If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; you need to be more careful about your knowledge of the initial states, and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table. To compute the fifty-sixth impact, every single elementary particle of the universe needs to be present in your assumptions!
- Note that this billiard-ball story assumes a plain and simple world; it does not even take into account these crazy social matters possibly endowed with free will. Billiard balls do not have a mind of their own. Nor does our example take into account relativity and quantum effects. Nor did we use the notion (often invoked by phonies) called the “uncertainty principle.” We are not concerned with the limitations of the precision in measurements done at the subatomic level. We are just dealing with billiard balls!
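A toy error-propagation sketch of the billiard point (the amplification factor is an assumption; real collision dynamics are far messier): if every impact multiplies the uncertainty in your initial measurement, the precision you need grows exponentially with the number of impacts you want to predict.

```python
initial_error = 1e-9             # say, a nanometer of error in the measured position
amplification_per_impact = 50.0  # assumed constant blow-up factor per collision

error = initial_error
for impact in range(1, 10):
    error *= amplification_per_impact
    print(f"after impact {impact}: uncertainty ~ {error:.3g} meters")

# By the ninth impact, a nanometer of initial error has grown by 50**9 (~2e15),
# i.e. to thousands of kilometers: the forecast is dominated by what you could
# not measure, which is why ever-finer knowledge of the initial state is needed.
```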
- We do not understand enough about Mother Nature to mess with her—and I do not trust the models used to forecast climate change. Simply, we are facing nonlinearities and magnifications of errors coming from the so-called butterfly effects. Small changes in input, coming from measurement error, can lead to massively divergent projections—and that generously assumes that we have the right equations.
- We need to be hyper-conservationists ecologically, since we do not know what we may be harming now. That’s the sound policy under conditions of ignorance and epistemic opacity.
- One practical solution I have come up with, based on the nonlinearities in the damage (under the assumption that harm increases disproportionately with the quantities released), and using the same mathematical reasoning that led to my opposing the “too big” concept, is to spread the damage across pollutants—should we need to pollute, of course. Let us carry on a thought experiment. Case 1: You give the patient a dose of cyanide, hemlock, or some poisonous substance, assuming they are equally harmful—and assuming, for the case of this experiment, the absence of super-additivity (that is, no synergetic effects). Case 2: You give the patient a tenth of a dose of each of ten such substances, for the same total amount of poison. Clearly, we can see that Case 2, by spreading the poison ingested across substances, is at the worst equally harmful (if all the poisonous substances act in the same way), and at the best close to harmless to the patient.
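The argument depends on convexity of harm (harm growing faster than linearly with the dose). A minimal sketch, assuming for illustration that harm goes with the square of the dose:

```python
def harm(dose: float) -> float:
    # Assumed convex dose-harm curve (illustrative only): harm rises with dose squared.
    return dose ** 2

total_dose = 10.0

case_1 = harm(total_dose)             # the whole amount as a single substance
case_2 = 10 * harm(total_dose / 10)   # a tenth of a dose of each of ten substances

print(case_1, case_2)   # 100.0 vs 10.0
# For the same total quantity, spreading it across substances does a tenth of the
# damage under this convex curve; with a linear curve the two cases would be equal.
```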
- Poincaré proposed that we can only work with qualitative matters—some property of systems can be discussed, but not computed. You can think rigorously, but you cannot use numbers.
- We often overvalue reason. We ought also to consider empiricism. The empirics practiced the “medical art” without relying on reasoning; they wanted to benefit from chance observations by making guesses, and experimented and tinkered until they found something that worked. They did minimal theorizing. Their methods are being revived today as evidence-based medicine, after two millennia of persuasion. Consider that before we knew of bacteria and their role in disease, doctors rejected the practice of handwashing because it made no sense to them, despite the evidence of a meaningful decrease in hospital deaths… Similarly, it may not “make sense” that acupuncture works, but if pushing a needle in someone’s toe systematically produces relief from pain (in properly conducted empirical tests), then it could be that there are functions too complicated for us to understand, so let’s go with it for now while keeping our minds open.
One-liners, aphorisms, miscellany:
- Prediction, not narration, is the real test of our understanding of the world.
- Note that, by symmetry, the occurrence of a highly improbable event is the equivalent of the nonoccurrence of a highly probable one.
- It is much more sound to take risks you can measure than to measure the risks you are taking.
- I have in my house two studies: one real, with interesting books and literary material; the other nonliterary, where I do not enjoy working, where I relegate matters prosaic and narrowly focused. In the nonliterary study is a wall full of books on statistics and the history of statistics, books I never had the fortitude to burn or throw away; though I find them largely useless outside of their academic applications.
- Your happiness depends far more on the number of instances of positive feelings, what psychologists call “positive affect,” than on their intensity when they hit. In other words, good news is good news first; how good matters rather little. So to have a pleasant life you should spread these small “affects” across time as evenly as possible. Plenty of mildly good news is preferable to one single lump of great news.
- Know people by testing them at extremes. If you want to get an idea of a friend’s temperament, ethics, and personal elegance, you need to look at him under the tests of severe circumstances, not under the regular rosy glow of daily life. Can you assess the danger a criminal poses by examining only what he does on an ordinary day?