Mental Models

There are structural and emergent patterns that undergird our world. Mental models identify these patterns and the processes by which we may think more clearly. They simplify the world’s complexity into manageable, moldable frameworks for the mind to manipulate.

This page is constantly evolving. Most of these models are already popularized. Models unique to a given source are referenced when possible. Models marked with an asterisk (*) are either my own formulations, or my own naming conventions for ideas that others have outlined in similar forms.

Arguments

Tools for strong arguments

Coherent Arguments

First Principles

Arguments from first principles start from a basic fact, called an assumption or axiom, and reason forward from there. The base is set, and the argument is built on top. All arguments require axioms—even the assumption that one ought to use logic to make an argument is itself axiomatic. First principles thinking posits that axioms should be as foundational as possible.

Syllogism, Validity, and Soundness

A syllogism draws a conclusion from premises. The classic example is “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” An argument is valid if the conclusion properly follows from the premises. It is sound if it is valid and its premises are true. An argument can be valid but not sound. For example, “All gods are immortal. Socrates is a god. Therefore Socrates is immortal” (one or more of the premises are false, but the conclusion would rightly follow if they were true).

Proof by Contradiction

Proof by contradiction is a method for proving something to be true by assuming the opposite conclusion and showing this must contradict an axiom. We have an axiom A: “All men are mortal.” We have additional proposition B: “Socrates is a man.” Say we want to prove a conclusion C: “Socrates is mortal.”

We assume B: Socrates is a man. We also assume not-C: Socrates is immortal. Together these imply not-A: not all men are mortal. This contradicts our axiom A: that all men are mortal. Therefore, Socrates must be mortal.
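
Sketched in first-order logic (my notation, not drawn from any particular source), the same proof reads:

```latex
\begin{align*}
A &: \forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x)) & \text{(axiom)}\\
B &: \mathrm{Man}(\mathrm{Socrates}) & \text{(premise)}\\
\neg C &: \neg\,\mathrm{Mortal}(\mathrm{Socrates}) & \text{(assumed, for contradiction)}\\
B \wedge \neg C &\implies \exists x\,(\mathrm{Man}(x) \wedge \neg\,\mathrm{Mortal}(x)) \equiv \neg A & \text{(contradicts } A\text{)}\\
&\therefore\ C : \mathrm{Mortal}(\mathrm{Socrates})
\end{align*}
```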

Proof by Contrapositive

Proof by contrapositive is a method for proving something to be true by assuming the opposite conclusion and proving the opposite of a proposition. We have an axiom A: “All men are mortal.” We have additional proposition B: “Socrates is a man.” Say we want to prove a conclusion C: “Socrates is mortal.”

We assume the opposite, not-C: Socrates is immortal. By our axiom A, not-C implies not-B: Socrates must not be a man. By the rules of logic, “not-C implies not-B” is equivalent to “B implies C.” Therefore, Socrates must be mortal.
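
The contrapositive version, in the same notation:

```latex
\begin{align*}
\text{Goal} &: B \to C\\
\text{Assume } \neg C &: \neg\,\mathrm{Mortal}(\mathrm{Socrates})\\
\text{By } A &: \neg\,\mathrm{Mortal}(x) \to \neg\,\mathrm{Man}(x), \text{ hence } \neg B\\
(\neg C \to \neg B) &\equiv (B \to C)\\
&\therefore\ \mathrm{Mortal}(\mathrm{Socrates})
\end{align*}
```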

Incoherent Arguments

Backwards-Justification

Backwards-justification is assuming a conclusion, and then attempting to rationalize why it must be true. This does not provide a solid foundation for an argument.

Strawman Fallacy

Strawman arguments are those which do not attack the substance of an opponent’s argument, but instead a weaker “strawman” version of it. Avoid this fallacious thinking by using the “steelman” action model.

Motte-and-Bailey Fallacy

Motte-and-bailey arguments are those that conflate two similar positions by advancing the difficult one, and retreating to the more easily defendable one when attacked. For example, some 2020 civil rights protestors proposed to “Abolish the police,” but retreated to the easier-to-defend statement “We just mean reform” when critiqued. Whereas a strawman argument substitutes an opponent’s strong argument for one easier to attack, the motte-and-bailey argument substitutes one’s own weak argument for one easier to defend. It is a form of bait-and-switch.

The fallacy gets its name from a motte-and-bailey castle. The motte is a raised mound topped with a fortified keep; the bailey is the enclosed courtyard below. The motte is easy to defend, the bailey more difficult.

Fallacy of Composition (and of Division)

The fallacy of composition is to apply an attribute to a group based on a member of that group. The fallacy of division is the reverse: to apply an attribute to an individual based on the group they inhabit. Consider the following: a high-performing university doesn’t mean every individual within it is a high performer (fallacy of division). That a newspaper employs a top-tier journalist does not mean that the newspaper is top-tier (fallacy of composition).

Whataboutism

When presented with an argument, rather than rebutting or responding to it, some people will say “okay, well what about <something else>.” This is whataboutism. Side-stepping an argument is not a rebuttal.

_____ of the Gaps, or the Argument from Ignorance Fallacy

The tendency to use a catchall explanation for situations where causes are unknown. “God of the Gaps” is the perspective that those things which science cannot explain are proof of God’s existence. A separate example: whenever there are disparities between two races whose cause is uncertain, some might automatically attribute the gap to racism, i.e. “Racism of the Gaps.” This is a logical fallacy, as ignorance of a cause is not evidence of any particular one.

Category Vagueness, Gray Zones, or Sorites Paradox

Does one dollar make a man rich? What if you give him another? If you keep adding ad infinitum, eventually he will cross the boundary from poor to rich, but the line is opaque. Consider too the genes of a monkey. If you swap one DNA base pair towards that of a human, is he a human yet? If you continue this process, you will have a human, but the inflection point is unknown. Categories can be subjective constructions of the mind, and allowance must be made for gray zones. The Sorites paradox gives this vagueness a technical formulation.

Related: Procrustean Bed

Category Errors

Category errors are mistakes of treating information from one domain as information for another domain. For example, the 2020 Black Lives Matter movement was ignited on the charge that George Floyd was killed by police racism. Although a possible explanation, no evidence at the time solidly established that the officer’s brutality was caused by race rather than general malice or heartlessness. This was a category error; protestors assigned the incident to the category of police racism when it was better suited to the category of general police brutality.

Rules ≠ Reality

Reality is complicated. Rules are abstractions of the world. Defining rules by which to navigate reality will inevitably incur a loss in fidelity of navigation. Scott E. Page’s graphic from The Model Thinker, below, illustrates this quite well. The world comprises innumerable data points (e.g. specific incidents of crime). We compile these into understandable bits of information (e.g. crime in this neighborhood is high). Relating bits of information gives us knowledge (e.g. increased police presence decreases crime). Rules, or wisdom, dictate what to do (e.g. to decrease crime, increase policing). Wisdom may be true on the whole, but by being abstracted from reality, it has lost reality’s precision. This is why rules often have exceptions (e.g. there may be a neighborhood where increased policing increases crime for unspecified reasons).

Scott E. Page, The Model Thinker.

This model has been separately formulated at Farnam Street as The Map is Not the Territory.

Related: Procrustean Bed

Reference: Scott E. Page, The Model Thinker

Questioning Beliefs

Counterfactuals

A counterfactual is a situation with alternate facts. People often don’t consider proper counterfactuals when they make judgments about the world. Frequently, they judge an object in isolation rather than in comparison to the counterfactual. For example, imagine you believe a certain president, policy, or cultural phenomenon is bad for the country. For this judgment to be pragmatic, it should be made against the counterfactual, i.e. what the world would look like with the alternate candidate elected, or the opposing policies and cultures in force.

Consider the specific example of self-driving cars: if a number of self-driving cars get into crashes, people may consider them in isolation and attempt to legislate the technology away, imagining that removing self-driving cars would eliminate those crashes. However, this attack is unwarranted in light of the counterfactual. If self-driving cars are statistically safer than human drivers, these crashes are preferable to the greater number that would occur in the alternate situation of human drivers.
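
A toy sketch of the comparison; the crash rates here are invented for illustration, not real statistics:

```python
# Hypothetical crash rates per million miles driven (invented numbers).
human_rate = 4.0         # assumed rate for human drivers
self_driving_rate = 1.5  # assumed rate for self-driving cars
miles = 100.0            # millions of miles driven either way

# Judging self-driving crashes in isolation ignores the counterfactual:
# the larger number of crashes human drivers would have caused instead.
print(f"Self-driving: {self_driving_rate * miles:.0f} crashes")
print(f"Counterfactual (human drivers): {human_rate * miles:.0f} crashes")
```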

Dogmatic versus Falsifiable Beliefs

Dogma is a belief held as unquestionably true. Dogmas are essentially axioms, but are often referred to pejoratively as assumptions taken at too high a level. The assumption of reason as a basis for arguments is an axiom, but not often considered dogma. Religions of faith are considered dogmas—there is no evidence that one can use to disprove them, as these beliefs are held to be more foundational than anything that can be brought to reason against them. This is essentially the point of “faith”: belief despite evidence. If under no circumstances can a certain belief be considered false, then it is a dogmatic belief.

In contrast, falsifiable beliefs are those that can be disproven. There would be some construction of the facts by which one would change their mind. For example, you might intensely believe that Car Brand A is better than Car Brand B. But you might admit that if certain reliability and safety claims you believed true turned out to be false, and other performance metrics were surpassed by Brand B, then you would be forced to change your mind. These are falsifiability criteria. Falsifiable beliefs are good, and scientific claims are based upon falsifiability. They posit hypotheses which could turn out to be either true or false.

Means and Ends

Instrumental versus Terminal Values

Instrumental value is the worth associated with means. Hydrogen atoms have instrumental value to the process of solar fusion. They are valuable as a means to an end. Terminal values are those ends that are valued subjectively. It is the worth attributed to desiring solar fusion as an end we’d like to see realized.

Continuous Spectrums of Value versus Discrete Values

We consider numerical value on a spectrum, from negative infinity to positive infinity. We can consider all measures of value on a similar spectrum, including the measures of how one realizes any given end. Money, well-being, beauty, athleticism, health—these might all lie on a continuous spectrum of value. Discrete values are those without “shades of gray” between values, such as booleans.

Psychology

Tools for understanding the mind

General Psychology

Emotive Connotation*

The psychological complement of Emotive Conjugation. Emotive connotation is the positive or negative affect associated with an experience. A constricted calf upon waking in the night with a cramp might be identical to calf raises in the gym in terms of physical sensation, but has an experientially different emotive component to it.

It is, in other words, not objects and events but the interpretations we place on them that are the problem. Our duty is therefore to exercise stringent control over the faculty of perception, with the aim of protecting our mind from error.

Gregory Hays, Foreword of Meditations: A New Translation

Hedonic Adaptation

Hedonic adaptation is the process by which we adapt to our environments by tending towards an emotional baseline. Winning the lottery is exciting for the first week, but then we grow accustomed to the wealth and can reengage with life’s miseries. Many amputees will be distraught with their disability at first, but over time adapt and find life just as enjoyable as others do.

Rorschach Test

A Rorschach test refers to an object which is viewed differently by different people. For example, conservatives and liberals might see a political action as meaning two different things. Two doctors of different specialties might look at an X-ray and deduce entirely different diagnoses. The original Rorschach test referred to inkblot images where people would see different images in the designs, much like people might make out different shapes when looking at the same cloud.

Signaling

Signaling is the added component of communication where one attempts to indicate or amplify a characteristic about themselves. Posting gym or beach pictures may be for the sake of sharing one’s life, but may also be (consciously or subconsciously) an attempt to signal fitness. Owning large houses, shiny wristwatches, fancy cars, and expensive name brands can often be an attempt to signal wealth or success. Flaunting college degrees, achievements, or loquacious and superfluous verbology might be attempts at signaling intelligence. We can look not just at the content of a message but at the reason behind expressing it.

Virtue Signaling

A particular case of signaling where people attempt to signal moral righteousness by sharing their support for a cause, charity, or other noble project. Virtue signaling may or may not be coincident with actual virtue. Generally used in a pejorative sense.

Cheap vs. Costly Signaling

Some signals are cheap to send, i.e. posting in support of a cause on social media. Some signals are costly, which may include spending large amounts of money or time. It is useful to be able to recognize cheap and costly signaling, as it gives better insight into the true conviction a person has for a cause.

Projection

Projection is the process of attributing aspects we are trying to repress in ourselves onto other people. Those who accuse others of being penny-pinching, overly preoccupied with sex, unattractive, unintelligent, or other negative descriptors may be doing so out of concern for those qualities within themselves.

Discrimination Type I and Type II

Imagine that in a certain neighborhood 95% of the crime was committed by race X. Is it racist to cross the street when encountering one of these people when you would not have if they were from race Y? Yes, it’s racist insofar as it’s discriminating on the basis of race. However, it’s not the same type of discrimination as that which might think a class of people is inherently inferior. It is merely the use of empirical evidence to guide decisions. If the data were reversed, or you were in another neighborhood where race Y was the dangerous one, then you might do the same for them. In any case, if you knew the individual, and knew they were not a member of the class of criminals, you would presumably stay on the same side of the street. This is discrimination type I: discrimination based on empirical evidence in a situation where information is lacking. It is distinctly different from discrimination type II, which is what people classically consider bigotry: subordinating one identity group to another.

Reference: Thomas Sowell, Discrimination and Disparities.

The Soft Bigotry of Low Expectations

This is the phenomenon where someone—in an attempt to protect a disenfranchised group—is actually prejudiced against them by expecting less than they would from a comparison group. For example, a liberal westerner might support equal rights for women in the workplace, but then fail to acknowledge the failure of certain Islamic cultures to advance these values. In an attempt to protect Muslims from criticism, one is actually setting a lower bar than they would for Western cultures. The implication that another culture, race, or creed is not expected to meet the moral standards we expect of ourselves is bigoted against them.

Negative Visualization

Negative visualization is the process of imagining how a situation could’ve been worse, which can then induce some amount of gratitude for this not having come to pass.

Related: Counterfactuals

Hedonic Reframing*

Hedonic reframing is the ability to consciously remove the negative affect associated with an emotion. By closely directing attention to one’s state of mind, through negative visualization, or by deciding to appreciate the negative sensations one feels, one can reevaluate their emotional state from negative affect to positive affect. Generally associated with stoicism or the meditational practice of mindfulness.

Related: Situational Reframing

Trend-Anecdote Swapping

Trend-anecdote swapping is a process for confirming one’s political narrative. If there is an anecdote that supports one’s agenda, it will often be framed as a broader trend. If there is a trend that disputes one’s agenda, it will often be framed as merely an unusual anecdote.

Reference: Tim Urban, “Political Disney World”

Psychological Biases

Survivorship Bias

A form of psychological cherry-picking, survivorship bias is the result of focusing on the survivors of a selective process and their qualities while discounting the losers. For example, one might reference Steve Jobs and say that a dictatorial leadership style is a good way to build a company. However, this discounts all the autocratic leaders who failed and whom we never heard of, and who thus fail to be included in our calculus. It could be that such a style leads to success one in a million times, but because we only see the one, and not the million, we overestimate the effect of such a quality.

A friend once shared that he thought “there must be a higher power of some sort, because how else could something as complex as humans evolve.” We can only ask this question because we have survived the selective process to complex thought. Like Steve Jobs, we may be the one example of evolution yielding complex thought, whereas there could be millions of failed realities or other worlds where it did not happen. With survivorship bias, we tend to apply causal forces where statistical ones are more appropriate.
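
A small simulation makes the statistical point concrete. Here leadership style is given no causal effect at all, yet the survivors still invite a causal story (all parameters are made up):

```python
import random

random.seed(42)

# Suppose style has NO effect: every startup succeeds with probability
# 1/1000 regardless of whether its founder is autocratic.
N = 1_000_000
startups = [{"autocratic": random.random() < 0.5,
             "succeeded": random.random() < 0.001} for _ in range(N)]

survivors = [s for s in startups if s["succeeded"]]
frac_autocratic = sum(s["autocratic"] for s in survivors) / len(survivors)

# Among survivors, ~50% are autocratic: no signal at all. But if we only
# ever profile a few famous autocratic winners, we infer a causal story
# from what is really just the surviving tail of a selective process.
print(f"{len(survivors)} survivors, {frac_autocratic:.0%} autocratic")
```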

Hindsight Bias

Hindsight bias is the tendency to overestimate how accurately one could have predicted a current outcome in the past.

Impact Bias

Impact bias describes the tendency for people to overestimate how much a single event will affect them over time. Stated another way, impact bias is ignorance of hedonic adaptation.

Availability Bias

The availability bias describes the tendency to treat examples that come easily to mind as more representative of a greater population than is actually appropriate.

Affect Heuristic

The affect heuristic describes the tendency for people to hold beliefs and make decisions based on emotion. People are more likely to believe things they like and disbelieve things they dislike.

Related: Social Proof

Fundamental Attribution Error

If you see someone driving excessively fast, you may consider them a bad or dangerous person. However, their speeding might be attributable not to their personality, but to the fact that they are rushing to the hospital or airport, or have some other reason for which we might speed ourselves. The fundamental attribution error is the tendency to over-attribute judgment to a personality rather than to a situation. Also known as correspondence bias.

Procrustean Bed

Procrustes, of ancient Greek lore, had a bed on which he’d stretch or amputate his guests so they fit perfectly. Likewise, we stretch and compress aspects of the world into neat packages which don’t quite fit. Categories are often more ambiguous than we are led to believe. Take the intersectional notion of privilege: a white, cis-hetero male is said to be privileged, but such categorization amputates all the rest of the information about his past, his financial status, family history, looks, luck, and more.

Related: Rules ≠ Reality, Gray Zones (Sorites Paradox)

Backfire Effect

The backfire effect is “the seemingly paradoxical outcome when our existing convictions are actually strengthened by evidence that contradicts them.”

Reference: James Lindsay and Peter Boghossian, How to Have Impossible Conversations

The Chameleon Effect

The Chameleon Effect is the psychological quirk of which Language Modeling takes advantage. It is the subconscious tendency for one to mimic the postures, tone, and language of their interlocutor.

St. George in Retirement Syndrome

St. George is mythologically known as a dragon slayer. He was a hero for a right and just cause. St. George in retirement is the imagined scenario in which he will not admit that he has already accomplished his purpose, and continues to fight small, insignificant, or imaginary dragons. This model is generally used now to describe social movements which have accomplished much or all of their intended goal, and have started to take on smaller or imaginary goals with the same fervor as when they were trying to slay their original dragon. Consider third- and fourth-wave feminism and certain other social justice movements.

Round-trip fallacy

Mistaking absence of evidence for evidence of absence. An acronym used in the medical literature is NED, which stands for No Evidence of Disease. There is no such thing as END, Evidence of No Disease. One can know what is wrong with a lot more confidence than one can know what is right. All pieces of information are not equal in importance.

Reference: Nassim Taleb, The Black Swan

Social Psychology

Vocal Minority

Just because there is a lot of noise does not mean there is a lot of support. Vocal minorities can drive messaging if they are louder than the majority. Many of the flashpoints of modern politics are driven by both the extreme left and the extreme right. They drive narratives, but the majority of Americans largely disagree with both of them.

Desirability Bias, Social Proof

Desirability bias is the tendency for people to bias decisions in favor of those that are generally seen as socially desirable. If you ever buy a brand name over a generic of equal quality, you may be subject to desirability bias or signaling. This may also be referred to as social proof. Advertisers will often show you positive reviews and lists of other customers to activate these effects. Social proof is strongest in instances of uncertainty and low information—when in doubt, people tend to go with the crowd.

Preference Falsification

Preference falsification is the phenomenon where people’s public stances differ from their private ones. People might signal one set of beliefs by stating one set of preferences, while in fact believing and acting upon another. Consider the 2016 election, where it is believed some poll respondents felt that stating support for then-candidate Trump incurred an unwanted cost, so they falsified their stated preferences yet voted for him nonetheless.

What kept many of the Soviet states communist for so long? The fact that many people who hated the system pretended they liked it, especially under the authoritarian and coercive power of the state. This is highly unstable. As in the common parable “The Emperor’s New Clothes,” everyone believed the Emperor was naked but was publicly pressured to state that his clothes were of the finest silk. Once a few people publicly admitted as much, the social cost of preference falsification was reduced and the entire paradigm shifted.

Reference: Eric Weinstein, The Portal podcast ep. 4, “Timur Kuran”

Pluralistic Ignorance

Pluralistic ignorance is the phenomenon where a majority of a group thinks they hold a minority position. That is to say, everyone may dissent from a “mainstream” idea, but preference falsification or a compelling narrative makes this mass dissent invisible. People are ignorant of pluralistic opinions. For those familiar, the children’s story “The Emperor’s New Clothes” illustrates pluralistic ignorance and the breaking of that paradigm.

Emotional Contagion

One person’s anger, joy, or other emotions can spread. Emotional contagion is the societal experience of an emotion stemming from the strong feelings of a few. Consider social-media mobs and other instances of communal outrage.

Related: The Chameleon Effect

The Streisand Effect

Barbra Streisand’s home was photographed by paparazzi. She was furious, wanting to maintain her privacy. She attempted to suppress the photographs’ publication through legal action, but in doing so drew extra publicity to them. The Streisand Effect applies when an attempt to hide information makes it more public than it would otherwise have been.

Conversation

Tools for interpersonal interaction

When we write or speak we use words. We use these words to convey ideas or information. When one converses intelligibly, we use words so our interlocutor will comprehend the meaning we intend to convey. Little kids often confuse speed with acceleration. It is not until these concepts are properly defined and understood that they become available for useful employment. This is why maintaining clear and consistent definitions is important.

The fish trap exists because of the fish. Once you’ve gotten the fish you can forget the trap. The rabbit snare exists because of the rabbit. Once you’ve gotten the rabbit, you can forget the snare. Words exist because of meaning. Once you’ve gotten the meaning, you can forget the words. Where can I find a man who has forgotten words, so I can have a word with him?

Zhuangzi, Chuang Tsu: Inner Chapters

If the meaning attached to a word differs between two people, they will not be speaking about precisely the same thing. When a doctor asks if someone is a “woman” for the purposes of medical diagnosis, it is important that they share the same definition of woman. If the doctor’s concern is chromosomal makeup, and the patient’s definition of “woman” relates to their female gender identity rather than their male genitalia and XY chromosomes, the patient’s answer will fail to convey the information the doctor needs. Likewise, someone who defines “violence” to entail physical force may be unable to converse meaningfully with someone who also thinks “speech is violence,” unless the distinction is acknowledged. Shared definitions are a prerequisite to productive conversation.

Aiding Understanding

Emotive Conjugation

Emotive Conjugation (Russell Conjugation) is the use of words with factually identical definitions but different emotive meaning. Bertrand Russell illustrates how the words “firm,” “obstinate,” and “pigheaded” all refer to the same attitude against changing one’s mind, but do so with a positive, neutral, and negative connotation respectively. Understanding emotive conjugation can help one recognize bias and neutrality. Consider the connotative difference between a journalist using “illegal immigrant” versus “undocumented immigrant,” or the word “attacks” rather than “critiques.” Relatedly, one might consider two words that are not synonyms, but are interchangeable in context. For example, a newspaper choosing to use the word “thug” or “teen” when referring to a teenager committing a violent crime. Word choice conveys a lot of information about the mind of speakers and writers.

Related: Emotive Connotation

Reference: Eric Weinstein, “Russell Conjugation”

Semantic Overload

Words convey meaning. Sometimes a single word will be used in a way that is overloaded with multiple meanings. For example, the word “biannually” can mean both “every two years” and “twice a year.” Sometimes an interlocutor will use semantic overload in a political context: violence might mean specifically physical harm, and might separately mean causing emotional harm. “Black Lives Matter” refers both to the concept that black lives are important, and also to a movement that has broader political goals. Especially within a single conversation, semantic overload induces confusion by conflating multiple definitions.

Semantic Diffusion

The process by which a word with precise meaning becomes more broad and less exact.

Semantic Death

When a word loses its meaning entirely. Consider the word “literally,” which once meant “in the literal sense” and now means nothing at all (besides its use for general emphasis).

Cherry-Picking

Cherry-picking is the process of presenting partial data that misrepresents the whole of the data.

Motivated Reasoning

Cherry-picking is a result of motivated reasoning. Those who cherry-pick are often motivated to defend a certain point rather than to find the truth of the matter. This can be a result of confirmation bias, or of environmental incentives.

Confirmation Bias

People are more likely to find information believable if it confirms what they already believe about a subject. One way to combat this is to recognize when data is presented contrary to your views, and consciously attempt to approach it with an open mind.

Adversarial Tactics

Shibboleth

In the Hebrew Bible, the word shibboleth was a password the Gileadites used to identify and then slay Ephraimites, whose dialect pronounced it differently. Sometimes your interlocutor will, explicitly or implicitly, require you to state a shibboleth to be considered “part of the club.” For example, they may consider you a bad-faith conversant unless you are able to say “black lives matter” or “all lives matter” or some other such political statement. Acknowledging the shibboleth outright may reduce the need to comply with this expectation.

Gaslighting

Gaslighting is the process of trying to convince someone that they are insane. This might take the form of people asserting vehemently that something obviously true is in fact false, or providing fake evidence to make one question one’s own knowledge. The term gets its name from the 1940s film noir Gaslight, in which a wife questions her sanity as her husband rearranges furniture, makes mysterious sounds, and dims the gaslights, all seemingly without touching them.

Dogwhistling

Only dogs can hear dogwhistles. In conversation, it is the use of a word or phrase that will only be heard or understood by certain groups of people.

Overton Window

The Overton Window defines the bounds of mainstream socially acceptable conversation. If you hold a position that is outside this window, you are likely to be ostracized. The Overton Window can shift over time—it was once outside the realm of reasonable political discourse to be a socialist in America, but Bernie Sanders’s 2016 campaign helped shift the Overton Window to allow room for these iconoclasts.

Crying Wolf

In the evolutionary context, crying “wolf” when there is no wolf nearby is a sure way to get people to scatter. But if one makes such a cry and is lying or exaggerating, people will not believe them when a real wolf shows up.

Simple Rubric Problem

The Simple Rubric problem is where someone mistakes the means for the end because the means is more simply measurable. For example, in the COVID-19 pandemic, the goal is to reduce the spread and effect of the virus. The social yardstick for how one is judged as aiding this goal is wearing a face covering. One person might wear a properly fitted N95 mask. Another might wear a stretched neck gaiter which aerosolizes droplets and potentially increases transmission. By levying a Simple Rubric, one mistakes the failing gaiter as efficacious by virtue of being a face covering, not by virtue of how it achieves the goal.

Some rubrics are goals in other contexts. If the goal is to save lives, and the Simple Rubric is the measure of COVID-19 spread, then in a scenario where the second-order effects of a policy caused more deaths than were saved by its reduction in spread, this too would be mistaking the means for the ends.

Reference: Bret Weinstein, DarkHorse Podcast #67

Network Effects

Tools for understanding complex systems

Network Effects

Network effects are those by which adding nodes changes the value of a network. For example, if only one person has a telephone, it is a useless product. But as more people acquire them, each becomes more valuable, as you are able to call more people. Most instances of social media have positive network effects. Examples of negative network effects may be instances of congestion, where adding more nodes slows down or perturbs connections for the other nodes.

Metcalfe’s Law

Metcalfe’s Law states that the value of a network is proportional to the square of the number of nodes. Technically, the number of links in a fully connected network is n(n − 1)/2, but this is mathematically proportional to n² as n increases. The more people, satellites, computers, etc. that you add to a system, the more valuable the system becomes, and at a quadratic rather than linear rate.
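
A quick sketch of the link count and its quadratic growth:

```python
def links(n: int) -> int:
    """Number of links in a fully connected network of n nodes."""
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    # links(n) / n**2 approaches 1/2, so the link count grows ~ n^2.
    print(n, links(n), links(n) / n**2)
```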

Decentralization

Decentralized systems are those built without centralized control. Independent actors are the decision makers. This often leads to highly efficient and flexible systems, but sacrifices a degree of uniformity. Market economies are decentralized; socialist ones are centralized. The US military acts with (largely) decentralized command; former Soviet states mainly utilize centralized command and control.

Key Man, Eigenvalue

Many organizations may have a “key man” without whom the institution might cease to function. On a modeled network, this might be the person with the highest node centrality. A related measure is eigenvector centrality, which scores a node’s influence by how many other high-ranking nodes it is connected to. A key man might also not be the most connected node, but a driver of action nonetheless.
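
A minimal sketch of eigenvector centrality via power iteration, on an invented toy network where node 0 plays the key man:

```python
# Toy adjacency matrix: node 0 is connected to everyone (the "key man").
A = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]

def eigenvector_centrality(adj, iters=100):
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        # Each node's new score is the sum of its neighbors' scores.
        x_new = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(x_new) or 1.0  # rescale so scores stay bounded
        x = [v / norm for v in x_new]
    return x

print(eigenvector_centrality(A))  # node 0 scores highest
```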

Positive Feedback Loops, Runaway Effects

Positive feedback loops are formed when a change in a certain direction makes additional change in that direction more likely. This is how one can get runaway effects; consider runaway inflation and runaway greenhouse gases. These are unstable systems.

Negative Feedback Loops

Negative feedback loops are formed when a change in a certain direction makes additional change in that direction less likely. Consider biological homeostasis and the regulatory process of insulin and glucagon. Sweating is also a good example: sweating lowers one’s temperature, further reducing the need to sweat. These are stable systems.
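
A toy sketch of both loop types, where a deviation from baseline is multiplied by an assumed gain each step:

```python
def simulate(gain: float, steps: int = 10, x: float = 1.0):
    """Each step, the deviation x from baseline is multiplied by `gain`."""
    trajectory = [x]
    for _ in range(steps):
        x *= gain
        trajectory.append(round(x, 3))
    return trajectory

# Positive feedback (gain > 1): deviations amplify into a runaway effect.
print(simulate(gain=1.5))
# Negative feedback (gain < 1): deviations shrink back toward baseline.
print(simulate(gain=0.5))
```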

Economics

Tools regarding the allocation of scarce resources

Opportunity Cost

When performing an action, opportunity cost is that benefit which is foregone by failing to have done another action. For example, the decision to invest in company A incurs the opportunity cost of not investing in another promising company B. The decision to vacation in Rome incurs the opportunity cost of not enjoying the same vacation in Paris.

Related: Counterfactuals

Comparative Advantage

The concept of comparative advantage acknowledges that different countries, companies, or other entities produce products at different levels of efficiency and at different opportunity cost. Country A can have a comparative advantage at producing food, Country B can have a comparative advantage at producing furniture, and an efficient equilibrium would have each producing relatively more of these products to create supply, and then trading to meet demand.
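
A worked toy example in the spirit of Ricardo’s classic illustration; the labor costs are assumed numbers:

```python
# Hours of labor needed to produce one unit of each good (assumed).
costs = {"A": {"food": 1, "furniture": 4},
         "B": {"food": 3, "furniture": 6}}

# Opportunity cost of one unit of furniture, measured in food forgone.
for country, c in costs.items():
    print(country, "gives up", c["furniture"] / c["food"], "food per furniture")

# A gives up 4 food per furniture; B gives up only 2. So B has the
# comparative advantage in furniture even though A is absolutely more
# efficient at both goods: B should specialize in furniture, A in food,
# and the two should trade.
```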

Nassim Taleb critiques assumptions surrounding comparative advantage in The Black Swan, acknowledging that if the price of food or furniture (in this case) changes drastically due to random circumstances, the welfare of these two countries could suddenly be worse off than if they kept production in-house.

Repeated Games

Game-theoretic optimums change when games are repeated. The Prisoner’s Dilemma, famously expounded as a lose-lose, can become a win-win in repeated games. This has birthed a number of proposed positive strategies, including the tit-for-tat family. In real life, repeated interactions in business and personal life can change the dynamic of how one might act.
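
A minimal sketch of the repeated Prisoner’s Dilemma with a tit-for-tat player; the payoff matrix is the standard textbook one, and the always-defect opponent is just for illustration:

```python
# Payoffs (mine, theirs) for (my_move, their_move); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(p1, p2, rounds=10):
    history, s1, s2 = [], 0, 0
    for _ in range(rounds):
        m1 = p1(history)
        m2 = p2([(b, a) for a, b in history])  # opponent's view of history
        r1, r2 = PAYOFFS[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
        history.append((m1, m2))
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited only in round one
```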

Black Swan Events

Black Swan events, as originally modeled by Nassim Taleb in his eponymous book, are those episodes that are unpredictable, explainable only post hoc, and of important consequence. These include things like the assassination of Archduke Ferdinand, 9/11, the COVID-19 pandemic, and more.

Reference: Nassim Taleb, The Black Swan

Ludic Fallacy

The ludic fallacy is the misuse of simplified games to model complex reality. If asked, “A coin landed on heads 99 times in a row—what is the chance it lands on heads again?” most experts will say 50/50, because of their “expert” knowledge of probability. In reality, it is far more likely that the coin is double-headed and will land on heads again than that a fair coin landed heads 99 times in a row (which would happen only about once in a billion billion billion trials).
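
The Bayesian arithmetic behind the intuition, with an assumed prior on the rigged coin:

```python
# Prior: allow even a one-in-a-million chance the coin is double-headed.
p_rigged = 1e-6
p_fair = 1 - p_rigged

# Likelihood of observing 99 heads in a row under each hypothesis.
like_fair = 0.5 ** 99    # ~1.6e-30
like_rigged = 1.0

# Posterior odds of "rigged" vs "fair" after the observation.
posterior_odds = (p_rigged * like_rigged) / (p_fair * like_fair)
print(f"{posterior_odds:.2e}")  # astronomically in favor of a rigged coin
```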

Reference: Nassim Taleb, The Black Swan

Flattening the Curve

This heuristic became socially prominent during the coronavirus pandemic of 2020. Flattening the curve is a way of more efficiently distributing limited resources. For example, with limited hospital beds as a constraint, a flat curve might mean we can always handle patient throughput, whereas with a spiked bell curve we might face periods of scarcity, even if the total number of patients is equivalent.
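
A toy sketch: two outbreaks with the same total caseload, one spiked and one flat, against a fixed bed capacity (all numbers invented):

```python
capacity = 120  # hospital beds available each week (assumed)

# Two outbreaks, each totaling 600 patients over six weeks.
spiked = [20, 80, 250, 180, 50, 20]
flat = [100, 100, 100, 100, 100, 100]

for name, curve in [("spiked", spiked), ("flat", flat)]:
    overflow = sum(max(0, patients - capacity) for patients in curve)
    print(f"{name}: total={sum(curve)}, untreated overflow={overflow}")
# Same total patients, but only the spiked curve exceeds capacity.
```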

Sam Whitney, CDC.

Goodhart’s Law

“When a measure becomes a target, it ceases to be a good measure.” When you set a measure as a goal, you directly influence its ability to be a good measure.

Reference: James Clear, Atomic Habits

The Cantillon Effect

With expansionary monetary policy, there will be those with assets that rise in value along with inflation, and there will be those with fiat money that diminishes in value due to inflation. That some people benefit (namely, those with inflation-resistant assets or who are paying back debt) and some suffer (those without such assets or debt) constitutes a transfer of purchasing power. This is dubbed the Cantillon Effect, after economist Richard Cantillon.

The Principal-Agent Problem

There is often a mismatch in incentives between decision-makers and the people whose interests they are supposed to represent. An employee (agent) might be incentivized to do their job as fast as possible, rather than as thoroughly as their employer (principal) would want. Politicians (agents) might be incentivized to get reelected rather than represent the best interests of their constituents (principals). Governments, investors, and other third parties are more likely to be reckless with other people’s money than they would be with their own.

Pareto Superiority and Optimality

A Pareto-superior situation is one which makes at least one person better off without making any other person worse off; it strictly dominates an inferior position. A Pareto-optimal situation is one from which any deviation will make at least one party worse off. That is to say, there is no change one can make that leaves someone better off and no one worse off.
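
A minimal sketch of the Pareto-superiority test in code, treating allocations as tuples of each party’s utility:

```python
def pareto_superior(a, b):
    """True if allocation `a` makes someone better off than `b`
    while making no one worse off (utilities are per-person tuples)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

alloc_b = (5, 5, 5)
alloc_a = (6, 5, 5)  # person 1 gains, nobody loses
alloc_c = (9, 4, 5)  # person 1 gains, person 2 loses

print(pareto_superior(alloc_a, alloc_b))  # True
print(pareto_superior(alloc_c, alloc_b))  # False: not a Pareto improvement
```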

Lindy Effect

For non-perishable items (ideologies, books, music), the Lindy effect describes the theory that the life expectancy of an item is proportional to its current age. This is in opposition to items whose life expectancy normally declines as time goes on. For example, a play that has been on Broadway for a year is likely to still be playing a year hence. Simply put, the longer Lindy-compatible items last, the longer they are expected to last.
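
In symbols, for a Lindy-compatible item with total lifespan T that has survived to age t, the expected remaining lifespan scales with t:

```latex
\mathbb{E}\left[\,T - t \mid T > t\,\right] \;\propto\; t
```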

Reference: Nassim Taleb, Skin in the Game

Politics

Tools regarding policy making

Bigotry of Low Expectations

This is the tendency to ascribe less agency to a disenfranchised group while defending them than one would attribute to a majority group. For example: one might blame societal oppression for causing a British Muslim to join ISIS, or an inner-city black teen to join a gang. This can be demonstrated as bigotry in two ways.

First is by comparing the free-will of these individuals to that of the rest of the society. If you were to equally attribute agency to all people, then either (A) society is purposefully oppressive AND these deviants are purposefully criminal, or (B) the deviants are responsive to their environment AND the rest of society is as well.

The second way this may be demonstrated is to imagine the opposite: if a white teen joined ISIS or a gang, many of these same people would generally attribute the blame to the white kid, and it would be bigotry of low expectations to not do the same when the perpetrator is a minority.

Regulatory Capture

Regulatory capture describes when the regulators are captured by those that they regulate—in effect failing to produce efficient policy. This can happen because the people with the best knowledge of what to regulate come from industry. And the people with the best knowledge of regulations go to industry. It’s unlikely one will regulate away the inefficient or unethical method by which they made millions of dollars. It’s also likely one will leave some otherwise inefficient regulatory pathway by which they can return to industry and make millions.

Chesterton’s Fence

Consider a fence in an empty pasture. You might think: there is no reason for this fence here, we may as well take it down. Later, you find that a neighboring farmer sometimes brings his herd over and that without a fence they now all escape. The parable illustrates why one must carefully investigate reasons before dismantling institutions.

Iatrogenics

Iatrogenics is the term for harm caused by a medical provider (through misdiagnosis, error, malpractice, etc.). More generally, it can describe harm by those who were meant to help—bankers destroying the financial system, politicians undermining politics, academics ruining education, etc.

Minority Rule

Complex systems, such as those in politics, do not behave as the sum of their parts. The interactions between people can matter more than the desires of the people themselves. Minority rule is when a minority is so much more intransigent in its desires than the majority that it ends up ruling the majority. This may be why extremist views can often win out in politics—the minority refuses to change its opinion, and while the majority may be disdainful, they don’t care enough to steer the ship.

Science

Tools to study the world

Scientism

Scientism is a slight pejorative that describes research which looks like science because it has data, graphs, and hard-to-understand words, but is in fact non-scientific. As Nassim Taleb puts it, scientism is “the belief that science looks…like science, with too much emphasis on the cosmetic aspects, rather than its skeptical machinery. It prevails in domains with administrators judging contributions according to metrics. It also prevails in domains left to people who talk about science without ‘doing,’ such as journalists and schoolteachers.”

Reference: Nassim Taleb, Skin in the Game

Occam’s Razor

Occam’s Razor is normally stated as “the simpler solution is usually right.” More formally: the fewer assumptions an argument has (the simpler it is), the less likely any assumption is violated, and thus the less likely the argument is to be contradictory or wrong. Every additional axiom is another vector by which one could attack an argument or hypothesis.
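
A back-of-the-envelope sketch: if each independent assumption holds with probability p (0.95 here, chosen arbitrarily), the chance that no assumption is violated decays with every axiom added:

```python
p = 0.95  # assumed probability that any single assumption holds

for n_assumptions in [1, 3, 5, 10, 20]:
    # Probability that all n independent assumptions survive scrutiny.
    print(n_assumptions, round(p ** n_assumptions, 3))
```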

Hershey Factory

The Hershey Factory model is a metaphor for generating a null hypothesis. If a chocolate river were flowing through Hershey, Pennsylvania, what should the default expectation of the cause be? It would require fewer assumptions to hypothesize that a container in the Hershey factory broke than that cacao trees started growing and somehow leaking chocolate. This is Occam’s Razor at work—start with the simple solution.

This is relevant to discussions of the COVID-19 pandemic, where many assumptions would have to be made to hypothesize complex processes (wet-market interactions between bats and pangolins producing such a virus), whereas the simpler explanation is a leak from a lab that was conducting gain-of-function research on bat coronaviruses in the exact city where COVID-19 was discovered. That is the chocolate river. The lab leak is the obvious null hypothesis, even if later disproved by justification of more complex phenomena.

Decision-Making

Tools to guide your actions

OODA Loops

Observe-orient-decide-act. This is the model first formulated by Colonel John Boyd to describe the process of decision-making. In warfare, business, or other competitive contests, the person who can get inside their enemy’s OODA loop will win. If you can act before your enemy decides, then they will have to reorient themselves and start their process over. Keep preempting their action, and they become paralyzed with uncertainty and indecision.

Poincaré Divergence*

Nassim Taleb, The Black Swan. Illustration by David Cowan.

As you predict further into the future, errors compound rapidly. Small changes in initial conditions can mean wildly different outcomes, as illustrated above. Poincaré proved this in the “three-body problem,” showing how a stray comet would not alter the orbit of planets at first, but over time the effect could magnify into a sudden and rapid divergence.
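
The same sensitivity shows up in a few lines with the logistic map, a standard toy chaotic system (my illustration, not Taleb’s example):

```python
def logistic(x, r=4.0, steps=30):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Two initial conditions differing by one part in a billion.
a, b = 0.2, 0.2 + 1e-9
print(logistic(a), logistic(b))  # trajectories diverge substantially by step 30
```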

Related: Chaotic Systems (TGMM, V.1)

Reference: Nassim Taleb, The Black Swan

Via Negativa

The principle that we can know what is wrong with more clarity than what is right. Science, for example, disproves hypotheses rather than proving them—those that have not been disproven remain as the standing theory, but are not decidedly known. There are infinite ways to be wrong, and far fewer ways to be right.

Reference: Nassim Taleb, Skin in the Game

Kind vs. Wicked Learning Environments

The kind/wicked distinction in learning environments is a function of how much noise there is in the feedback to our actions. In kind learning environments, we have clear cause-and-effect relationships that allow us to turn observation into knowledge. In wicked ones, the opposite is true.

Structured games are the best example of kind environments. If, in chess, you lost a piece to a fork by an opponent’s knight, you can learn to recognize and avoid that pattern in the future. The stock market, in contrast, is frequently a wicked learning environment, where complex systems make cause and effect harder to discern.
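
A toy simulation of the distinction: the same learner estimates an action’s value from feedback, with and without noise (all parameters assumed):

```python
import random

random.seed(0)
true_value = 1.0  # the real payoff of the action being learned

def estimate(noise: float, trials: int = 20) -> float:
    """Average observed feedback; noise models a wicked environment."""
    samples = [true_value + random.gauss(0, noise) for _ in range(trials)]
    return sum(samples) / len(samples)

print(f"kind (no noise): {estimate(noise=0.0):.2f}")    # exactly 1.00
print(f"wicked (high noise): {estimate(noise=5.0):.2f}")  # often far from 1.00
```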