How to Understand and Mitigate Risk
This post wouldn’t be possible without support from the EA Hotel.
Epistemic Status: Fairly certain these distinctions are pointing at real things, less certain that the categories are exactly right. There are still things I don’t know how to fit into this model, such as using Nash equilibria as a strategy for adversarial environments.
Instrumental Status: Very confident that you’ll get better outcomes if you start using these distinctions where previously you had less nuanced models of risk.
Transparent Risks
Transparent risks are those risks that can be easily quantified and known in advance. They’re equivalent to the picture above, with a transparent bag where I can count the exact number of marbles of each type. If I’m also certain about how much each marble is worth, then I have a simple strategy for dealing with risks in this situation.
How to Mitigate Transparent Risks: Do the Math
The simple strategy for transparent risks like the one above is to do the math.
Expected Value
Expected value is a simple bit of probability theory: multiply the probability of each outcome by its payoff, and sum the results, to get your average value over the long run. It’s a simple way to figure out if the risk is worth the reward in any given situation. The best introduction I know to expected value is here.
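As a minimal sketch of the calculation (the bag contents here are invented for illustration):

```python
# Hypothetical transparent bag: we can see exactly what's inside.
bag = {10: 5, -5: 3}  # payoff in dollars -> number of marbles
total = sum(bag.values())

# Expected value: probability-weighted sum of payoffs.
expected_value = sum(payoff * count / total for payoff, count in bag.items())
# (10 * 5 + (-5) * 3) / 8 = 35 / 8 = 4.375 dollars per draw
```

If the expected value per draw is positive and you can play many times, the math says to play.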
Kelly Criterion
The Kelly criterion is helpful when losing your entire bankroll is worse than other outcomes. I don’t fully understand it, but you should, and Zvi wrote a post on it here. (If someone would be willing to walk me through a few examples and show me where all the numbers in the equation come from, I’d be very grateful.)
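For what it’s worth, here’s the simple-bet form of the criterion, f* = p − q/b, where p is the probability of winning, q = 1 − p, and b is the payout per unit staked. The numbers below are made up for illustration:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet won with probability p
    that pays b-to-1 on a win (and loses the stake otherwise)."""
    q = 1 - p
    return p - q / b

# Even-money bet you win 60% of the time: stake 20% of bankroll.
f = kelly_fraction(0.6, 1.0)  # 0.6 - 0.4/1 = 0.2
```

Betting more than the Kelly fraction increases the chance of ruin even when each individual bet has positive expected value, which is why it matters when you can't afford to lose the bankroll.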
Transparent Risks in Real Life
Drunk Driving
Driving drunk is a simple, well studied risk for which you can quickly find probabilities of crash, injury, and death to yourself and others. By comparing these costs to the costs of cab fare (and the time needed to get your car in the morning if you left it), you can make a relatively transparent and easy estimate of whether it’s worth driving at your Blood Alcohol Content level (spoiler alert: no, if your BAC is anywhere near .08 on either side). The same method can be used for any well-studied risks that exist within tight, slow changing bounds.
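A sketch of the "do the math" comparison, with every number invented purely for illustration (look up real per-trip statistics for your situation):

```python
# All figures hypothetical, for illustration only.
p_crash = 0.0004          # assumed per-trip crash probability when impaired
cost_of_crash = 500_000   # assumed blended cost of injury, damage, liability
cab_cost = 40             # fare plus the hassle of retrieving the car

expected_cost_of_driving = p_crash * cost_of_crash  # $200 in expectation
take_the_cab = expected_cost_of_driving > cab_cost  # True under these numbers
```

Even with generous assumptions, the expected cost of driving dwarfs the cab fare, which is the transparent-risk logic behind the spoiler above.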
Commodity and Utility Markets
While most business opportunities are not transparent risks, an exception exists for commodities and utilities (in the sense meant by Wardley Mapping). It’s quite easy to research the cost of creating a rice farm, or a power plant, as well as get a tightly bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature, and there are unlikely to be wild swings or unexpected innovations that significantly change them. However, because these risks are transparent, competition also drives margins down. The winners are those who can squeeze out a little extra margin through economies of scale or other monopoly effects like regulatory capture.
Edit: After being pointed to the data on commodities, I no longer lump them in with utilities as transparent risks and would call them more Knightian.
Opaque Risks
Opaque risks are those risks that are stable and quantifiable in principle, but which haven’t already been quantified and aren’t easy to quantify just by research. They’re equivalent to the picture above, with an opaque bag that you know contains a fixed number of marbles of certain types, but not the ratio of marbles to each other. As long as I’m sure that the bag contains only three types of marbles, and that the distribution is relatively static, a simple strategy for dealing with these risks emerges.
How to Mitigate Opaque Risks: Determine the Distribution
The simple strategy for opaque risks is to figure out the distribution. For instance, by pulling a few marbles at random out of the bag, you can over time become more and more sure about the distribution in the bag, at which point you’re now dealing with transparent risks. The best resource I know of for techniques to determine the distribution of opaque risks is How to Measure Anything by Douglas Hubbard.
Sampling
Sampling involves repeatedly drawing from the distribution in order to get an idea of what the distribution is. In the picture above, it would involve simply reaching your hand in and pulling a few marbles out. The bigger your sample, the more sure you can be about the underlying distribution.
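A minimal sketch of sampling from an opaque bag (the hidden distribution here is invented):

```python
import random

random.seed(0)  # reproducible draws
# The true distribution -- hidden from the player in the metaphor.
bag = ["red"] * 30 + ["blue"] * 50 + ["green"] * 20

sample = random.sample(bag, 40)  # reach in and pull 40 marbles

# Estimated proportions from the sample.
estimate = {color: sample.count(color) / len(sample) for color in set(bag)}
# These approach the true 0.3 / 0.5 / 0.2 as the sample size grows.
```

Once the estimates stabilize, you're effectively back in the transparent-risk regime and can just do the math.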
Modelling
Modelling involves breaking down the factors that create the distribution, into as transparent pieces as possible. The classic example from fermi estimation is how many piano tuners there are in Chicago—that number may be opaque to you, but the number of people in Chicago is relatively transparent, as is the percentage of people that own pianos, the likelihood that someone will want their piano tuned, and the amount of money that someone needs to make a business worthwhile. These more transparent factors can be used to estimate the opaque factor of piano tuners.
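The piano-tuner estimate can be sketched as arithmetic over rough assumptions (every factor below is a guess, not data):

```python
# Classic Fermi estimate; each factor is a rough, transparent-ish guess.
population = 2_500_000            # people in Chicago
households = population / 2.5     # assumed people per household
piano_rate = 1 / 20               # assumed fraction of households with a piano
tunings_per_year = 1              # assumed tunings per piano per year
jobs_per_tuner = 4 * 5 * 50       # 4 tunings/day, 5 days/week, 50 weeks/year

pianos = households * piano_rate
tuners = pianos * tunings_per_year / jobs_per_tuner  # ~50 tuners
```

None of the inputs is precise, but because errors in independent guesses tend to partially cancel, the product usually lands within an order of magnitude of the opaque quantity.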
Opaque Risks in Real Life
Choosing a Career You Don’t Like
In the personal domain, opaque risks often take the form of very personal things that have never been measured because they’re unique to you. As a career coach, I often saw people leaping into careers that were smart from a global perspective (likely to grow, good pay, etc.) but that ignored the more personal factors. The solution was a two-tier sampling approach: do a series of informational interviews for the top potential job titles and industries, and then, for the top 1-3 careers/industries, see if you can do a form of job shadowing. This significantly cut down the risk by making an opaque choice much more transparent.
Building a Product Nobody Wants
In the business domain, solutions that are products(in Wardley Mapping terms) but are not yet commoditized often qualify as opaque risks. In this case, simply talking to customers, showing them a solution, and asking if they’ll pay, can save a significant amount of time and expense before actually building the product. Material on “lean startup” is all about how to do efficient sampling in these situations.
Knightian Risks
Knightian risks are those risks that exist in environments with distributions that are actively resistant to the methods used with opaque risks. There are three types of Knightian Risks: Black Swans, Dynamic Environments, and Adversarial Environments.
A good portion of “actually trying to get things done in the real world” involves working with Knightian risks, and so most of the rest of this essay will focus on breaking them down into their various types, and talking about the various solutions to them.
Types of Knightian Risks
Black Swans
A black swan risk is an unlikely, but very negative event that can occur in the game you choose to play.
In the example above, you could do a significant amount of sampling without ever pulling the dynamite. However, this is quite likely a game you would want to avoid given the presence of the dynamite in the bag. You’re likely to severely overestimate the expected value of any given opportunity, and then be wiped out by a single black swan. Modelling isn’t useful because very unlikely events probably have causes that don’t enter into your model, and it’s impossible to know you’re missing them because your model will appear to be working accurately (until the black swan hits). A great resource for learning about Black Swans is the eponymous Black Swan, by Nassim Taleb.
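A sketch of why sampling fails here (bag contents invented): a rare "dynamite" marble dominates the true expected value but almost never shows up in a modest sample.

```python
# 999 marbles worth $1, plus one black-swan marble worth -$10,000.
bag = [1.0] * 999 + [-10_000.0]

true_ev = sum(bag) / len(bag)  # (999 - 10000) / 1000 = -9.001 per draw

# A 100-draw sample misses the dynamite roughly 90% of the time, so the
# naive estimate is usually +1.0 per draw: wrong in sign, not just magnitude.
```

No amount of careful arithmetic on the sample fixes this; the problem is that the decisive event is absent from the data.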
Dynamic Environments
When your risks are changing faster than you can sample or model them, you’re in a dynamic environment. This is a function of how big the underlying population size is, how good you are at sampling/modelling, and how quickly the distribution is changing.
A traditional sampling strategy as described above involves first sampling, finding out your risks in different situations, then finally “choosing your game” by making a decision based on your sample. However, when the underlying distribution is changing rapidly, this strategy is rendered moot as the information your decision was based on quickly becomes outdated. The same argument applies to a modelling strategy as well.
There’s not a great resource I know of to really grok dynamic environments, but an ok resource is Thinking in Systems by Donella Meadows (great book, but only ok for grokking the inability to model dynamic environments).
Adversarial Environments
When your environment is actively (or passively) working to block your attempts to understand it and mitigate risks, you’re in an adversarial environment.
Markets are a typical example of an Adversarial Environment, as are most other zero sum games with intelligent opponents. They’ll be actively working to change the game so that you lose, and any change in your strategy will change their strategy as well.
Ways to Mitigate Knightian Risks
Antifragility
Antifragility is a term coined by Nassim Taleb to describe systems that gain from disorder. If you think of the games described above as being composed of distributions, plus payoff rules that describe how you react to those distributions, anti-fragility is a look at how to create flexible payoff rules that can handle Knightian risks. Taleb has an excellent book on anti-fragility that I recommend if you’d like to learn more.
In terms of the “marbles in a bag” metaphor, antifragility is a strategy where pulling out marbles that hurt you makes sure you get less and less hurt over time.
Optionality
Optionality is a heuristic that says you should choose those options which allow you to take more options in the future. The idea here is to choose policies that lower your inertia and the switching costs between strategies: avoid huge bets and long time horizons that can make or break you, while developing agile, nimble processes that can change quickly. This is the principle from which all other anti-fragile principles are generated.
This helps with black swans by allowing you to quickly change strategies when your old strategy is rendered moot by a black swan. It helps with dynamic environments by allowing your strategy to change as quickly as the distribution does. It helps with adversarial environments by giving you more moves to use against changing opponents.
Going with the bag of marbles example, imagine there are multiple bags of marbles, and the distributions are changing over time. Originally, it costs quite a lot to switch between bags. The optionality strategy says you should be focused on lowering the cost of switching between bags over time.
Hormesis
Hormesis is a heuristic that says that when negative outcomes befall you, you should work to make that class of outcomes less likely to hurt you in the future. When something makes you weak temporarily, you should ultimately use that to make yourself stronger in the long run.
This helps with Black Swans by gradually building up resistance to certain classes of black swans BEFORE they hit you. It helps with rapidly changing distributions by continually adapting to the underlying changes with hormetic responses.
In the bag of marbles example, imagine that at the start pulling a red marble was worth -$10. Every time you pulled a red marble, you worked to reduce the harm from red things by 1⁄10. This would mean that in an environment with lots of red marbles, you would quickly become immune to them. It would also mean that if you eventually did pull out that stick of dynamite, your general ability to handle red things would mean that it hurt you less.
(I get that the above example is a bit silly, but the general pattern of immunity to small events helping you with immunity to black swans in the same class is quite common).
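One literal reading of the marble example (taking the 1/10 reduction as multiplicative) can be simulated:

```python
harm_per_red = 10.0  # drawing a red marble costs $10 at the start
total_harm = 0.0

for pull in range(20):       # pull 20 red marbles in a row
    total_harm += harm_per_red
    harm_per_red *= 0.9      # each red pull cuts future red-harm by 1/10

# After 20 pulls the per-marble harm has fallen to about $1.22, and total
# harm is bounded near $88 rather than growing by $10 per pull forever.
```

The hormetic point is the shape of the curve: repeated small exposures buy down the damage of later, larger events in the same class.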
Evolution
The evolution heuristic says that you should constantly be creating multiple variations on your current strategies, and keeping those that avoid negative consequences over time. Just like biological evolution, you’re looking for strategies that are very good at survival. Of course, you should be careful about calling up blind idiot gods, and cautious about the temptation to optimize for gains instead of minimizing downside risk (which is how the heuristic should be used).
This helps with black swans in a number of ways. Firstly, by diversifying your strategies, it’s unlikely that all of them will be hit by black swans. Secondly, it has an effect similar to hormesis in which immunity to small effects can build up immunity to black swans in the same class. Finally, by having strategies that outlive several black swans, you develop general survival characteristics that help against black swans in general. It helps with dynamic environments by having several strategies, some of which will hopefully be favorable to the environmental changes.
The Barbell Strategy
The barbell strategy refers to splitting your activities between those that are very safe, with low downside, and those that are very risky, with high upside. Previously, Benquo has argued against the barbell strategy, arguing that there is no such thing as a riskless strategy. I agree with this general idea, but think that the framework I’ve provided in this post gives a clearer way to talk about what Nassim means: split your activities between transparent risks with low downsides, and Knightian risks with high upsides.
The transparent risks obviously aren’t riskless (that’s why they’re called risks), but they behave relatively predictably over long time scales. When they DON’T behave predictably is when there are black swans, or an equilibrium is broken such that a relatively stable environment becomes an environment of rapid change. That’s exactly when the Knightian risks with high upside tend to perform well (because they’re designed to take advantage of these situations). That’s also why this strategy is great for handling black swans and dynamic environments. It’s less effective at handling adversarial environments, unless there are local incentives in the adversarial environment to think more short term than this strategy does.
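A sketch of the bounded-downside logic, with a purely hypothetical 90/10 split:

```python
# Hypothetical barbell allocation: 90% safe leg, 10% high-variance leg.
safe_weight, risky_weight = 0.9, 0.1

# Worst plausible case: safe leg roughly preserves capital, risky leg goes to zero.
worst_case_return = safe_weight * 0.0 + risky_weight * (-1.0)  # lose at most 10%

# Black-swan upside case: risky leg pays off 20x, safe leg stays flat.
upside_return = safe_weight * 0.0 + risky_weight * 20.0        # +200% overall
```

The asymmetry is the point: losses are capped at the risky allocation, while gains from the convex leg are uncapped.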
Via Negativa
Via negativa is a principle that says to continually chip away at sources of downside risk, working to remove the bad instead of increase the good. It also says to avoid games that have obviously large sources of downside risk. The principle here is that downside risk is unavoidable, but by making it a priority to remove sources of downside risks over time, you can significantly improve your chances.
In the bag of marbles example, this might look like getting a magnet that can over time begin to suck all the red marbles/items out of the bag, so you’re left with only the positive value marbles. For a more concrete example, this would involve paying off debt before investing in new equipment for a business, even if the rate of return from the new equipment would be higher than the rate of interest on the loan. The loan is a downside risk that could be catastrophic in the case of a black swan that prevented that upside potential from emerging.
This helps deal with black swans, dynamic environments, and adversarial environments by making sure you don’t lose more than you can afford if the distribution takes a turn for the worse.
Skin in the Game
Skin in the game is a principle that comes from applying anti-fragility on a systems level. It says that in order to encourage individuals and organizations to create anti-fragile systems, they must be exposed to the downside risk that they create.
If I can create downside risk for others that I am not exposed to, I can create a locally anti-fragile environment that nonetheless increases fragility globally. The skin in the game principle aims to combat the two forces that create such molochian environments: moral hazards and negative externalities.
Effectuation
Effectuation is a term coined by Saras Sarasvathy to describe a particular type of proactive strategy she found when studying expert entrepreneurs. Instead of looking to mitigate risks by choosing strategies that were flexible in the presence of large downside risks (antifragility), these entrepreneurs instead worked to shift the distribution such that there were no downside risks, or shift the rules such that the risks were no longer downsides. There’s not a book I can recommend that’s great at explaining effectuation, but two OK ones are Effectuation by Saras Sarasvathy and Zero to One by Peter Thiel. This 3-page infographic on effectuation is also decent.
Note that Effectuation and Antifragility explicitly trade off against each other. Antifragility trades away certainty for flexibility while Effectuation does the opposite.
In terms of the “marbles in a bag” metaphor, Effectuation can be seen as pouring a lot of marbles that are really helpful to you into the bag, then reaching in and pulling them out.
Pilot-in-Plane Principle
The pilot-in-plane principle is a general way of thinking that says control is better than both prediction and anti-fragility. The pilot-in-plane principle emphasizes proactively shaping risks and rewards, instead of creating a system that can deal with unknown or shifting risks and rewards. The quote that best summarizes this principle is the Peter Drucker quote “The best way to predict the future is to create it.”
This principle also isn’t much use with black swans. It deals with dynamic environments by seizing control of the forces that shape those dynamic environments. It deals with adversarial environments by shaping the adversarial landscape.
Affordable Loss Principle
The affordable loss principle simply says that you shouldn’t risk more than you’re willing to lose on any given bet. It’s Effectuation’s answer to the Via Negativa principle.
The difference is that while Via negativa recommends policies that search for situations with affordable downside, and focus on mitigating unavoidable downside, Affordable loss focuses on using your resources to shape situations in which the loss of all parties is affordable.
It’s not enough to just make bets you can afford to lose, you have to figure out how to do this while maximizing upside. Can you get a bunch of people to band together to put in a little, so that everyone can afford to lose what they’re putting in, but you have a seat at the table? Can you have someone else shoulder the risk who can afford to lose more? Can you get guarantees or insurance to minimize downside risk while still getting the upside? Many of these approaches break the Skin in the Game principle needed for anti-fragility, but work perfectly (without calling up Moloch) when using an effectuative strategy. This is the affordable loss principle.
It helps with black swans by creating buffers that protect against catastrophic loss. It helps with dynamic environments by keeping what you can lose constant even as the environment changes. It helps with adversarial environments by making sure you can afford to lose to your adversary.
Bird-in-Hand Principle
The bird-in-hand principle says that you should use your existing knowledge, expertise, connections, and resources to shift the distribution in your favor. It also says that you should only choose to play games where you have enough of these existing resources to shift the distribution. Peter Thiel says to ask the question “What do I believe that others do not?” Saras Sarasvathy says to look at who you are, what you know, and who you know.
This helps with Black Swans by preventing some of them from happening. It helps with dynamic environments by seizing control of the process that is causing the environment to change, making most of the change come from you. It helps with adversarial environments by ensuring that you have an unfair advantage in the game.
Lemonade Principle
The lemonade principle says that when the unexpected happens, you should use it as an opportunity to re-evaluate the game you’re playing, and see if there’s a more lucrative game you should be playing instead. Again, the idea of “make the most of a bad situation” might seem obvious, but through the creative and proactive lens of effectuation, it’s taken to the extreme. Instead of saying “What changes can I make to my current approach given this new situation?” the lemonade principle says to ask “Given this new situation, what’s the best approach to take?”
This helps with Black Swans by using them as lucrative opportunities for gaining utility. It helps with dynamic environments by constantly finding the best opportunity given the current landscape. It helps with adversarial environments by refusing to play losing games.
Patchwork Quilt Principle
The patchwork quilt principle says that you should trade flexibility for certainty by bringing on key partners. The partners get to have more of a say in the strategies you use, but in turn you get access to their resources and the certainty that they’re on board.
While the original work on effectuation paints this principle as only having to do with partnerships, I like to think of it as a general principle that you should be willing to limit your options if doing so limits your downside risk and volatility more. It’s the inverse of the optionality principle from antifragile strategies.
This strategy doesn’t help much with black swans. It helps with dynamic environments by making the environment less dynamic through commitments. It helps with adversarial environments by turning potential adversaries into allies.
Capability Enhancement
Capability enhancement is a general strategy of trying to improve capabilities such that knightian risks are turned into opaque risks (which are then turned into transparent risks through sampling and modelling). Unlike the previous two ways to mitigate knightian risk, this is more a class of strategies than a strategy in its own right. In terms of the “marbles in a bag” analogy, capability enhancement might be building x-ray goggles to look through the bag, or getting really good at shaking it to figure out the distribution.
Black Swans can be turned opaque by knowing more (and having fewer unknown unknowns). Dynamic environments can be turned opaque by increasing the speed of sampling or modelling, or the accuracy or applicability of models. Adversarial environments can be turned opaque by developing better strategies to model or face adversaries (and their interactions with each other).
There are numerous classification schemes one could use for all the various types of capability enhancement. Instead of trying to choose one, I’ll simply list a few ways that I see people trying to approach this, with no attempt at completeness or consistent levels of abstraction.
Personal Psychology Enhancement
By making people think better, work more, and be more effective, you can increase the class of problems that become opaque to them. This is one approach that CFAR and Leverage are taking.
Better Models
By creating better models of how the world works, risks that were previously knightian to you become opaque. I would put Leverage, FHI, and MIRI into the class of organizations that are taking this approach to capability enhancement. The sequences could fit here as well.
Better Thinking Tools
By creating tools that can themselves help you model things, you can make risks opaque that were previously Knightian. I would put Verity, Guesstimate, and Roam in this category.
Improving Group Dynamics
By figuring out how to work together better, organizations can turn risks from knightian to opaque. Team Team at Leverage, and CFAR’s work on group rationality, both fit into this category.
Collective Intelligence and Crowdsourcing
By figuring out how to turn a group of people into a single directed agent, you can often shore up individuals’ weaknesses and amplify their strengths. This allows risks that were previously knightian to individuals to become opaque to the collective.
I would put Metaculus, Verity, and LessWrong into this category.
Knightian Risks in Real Life
0 to 1 Companies
When a company is creating something entirely new (in the Wardley Mapping sense), it’s taking a Knightian risk. Sampling is fairly useless here because people don’t know they want what doesn’t exist, and naive approaches to modelling won’t work because your inputs are all junk data that exists without your product in the market.
How would each of these strategies handle this situation?
Effectuation
Start your company in an industry where you have pre-existing connections, and in which you have models or information that others don’t (“What do you believe that others do not?”). Before building the product, get your contacts to pay up front to get you to build it, therefore limiting risk. If something goes wrong in the building of the product, take all the information you’ve gathered and the alliances you’ve already made, and figure out what the best opportunity is with that information and resources.
Anti-Fragility
Create a series of small experiments with prototypes of your products. Keep the ones that succeed, and branch them off into more variations, only keeping the ones that do well. Avoid big contracts like those in the effectuation example, taking only small contracts that let you pivot at a moment’s notice if needed.
Capability Enhancement
Create a forecasting tournament for the above product variations. Test only the ones that have positive expected value. Over time, you’ll have fewer and fewer failed experiments as your reputation measures get better. Eventually, you may be able to skip many experiments altogether and just trust the forecasting data. If you’re interested in this type of thing we should really chat.
AGI Risk
At first glance, it seems like many of these strategies, such as Effectuation, apply more to individual or group risks than global risks. It’s not clear, for instance, how an effectual strategy of shifting the risks to people who can handle them applies on a society-wide scale. I do however think that this categorization scheme has something to say about existential risk, and will illustrate with a few examples of ways to mitigate AGI risk. I recognize that many of these examples are incredibly simplified and unrealistic. The aim is simply to show how this categorization scheme can be used to meaningfully think about existential risk, not to make actual policy suggestions.
How might we mitigate AI risk using the strategies discussed here?
Capability Enhancement/Modelling/Sampling
A capability enhancement/sampling/modelling strategy might be to get a bunch of experts together and forecast how soon we’ll get AGI. Then, get a bunch of forecasting experts together and create a function that determines how long it takes to develop benevolent AGI given the number of AI safety researchers. Finally, create a plan to hire enough AI safety researchers that we develop the ability to create safe AGI before we develop the ability to create unsafe AGI. If we find that there’s simply no way to discover AI safety fast enough given current methods, create tools to get better at working on AI safety. If you find that the confidence intervals on AGI timelines are too wide, create tools that can narrow them.
Anti-fragility
An anti-fragile strategy might look like developing enough awareness of AI risk, and enough funding, to sustain a policy where two AI safety researchers are hired for every non-safety AI researcher that is hired. Thus, the more you expose yourself to the existential risk of AGI, the faster you create the mechanism that protects you from that risk. This might be paired with a system that tries different approaches to AI safety, and splits off the groups that are doing the best every few years into two groups, thus evolving a system that increases the effectiveness of AI safety researchers over time.
Effectuation
The effectual strategy, instead of taking the timeline for AI as a given, would instead ask “How can we change this timeline such that there’s less risk?” Having asked that question, and recognizing that pretty much any answer exists in an adversarial environment, the question becomes “What game can we play that we, as effective altruists, have a comparative advantage at compared to our adversaries?” If the answer is something like “We have an overabundance of smart, capable people who are willing to forgo both money and power for altruistic reasons,” then maybe the game we play is getting a bunch of effective altruists to run for local offices in municipal elections, and influence policy from the ground up by coordinating laws on a municipal level that together require safety teams for ML teams (among many other small policies). Obviously a ridiculous plan, but it does illustrate how the different risk mitigation strategies can suggest vastly different object level policies.
Exercise for the reader: Robin Hanson worries about a series of catastrophic risks that tax humanity beyond its resources (I can’t find the article to link here but if someone knows it let me know in the comments). We might be able to handle climate change, or an asteroid, or an epidemic on their own, but if by chance they hit together, we pass a critical threshold that we simply can’t recover from.
How would you analyze and mitigate this situation of “stacked catastrophic risks” using the framework above?
Thanks to Linda Linsefors for reviewing early drafts.
How would you classify existential risks within this framework? (or would you?)
Here’s my attempt. Any corrections or additions would be appreciated.
Transparent risks: asteroids (we roughly know the frequency?)
Opaque risks: geomagnetic storms (we don’t know how resistant the electric grid is, although we have an idea of their frequency), natural physics disasters (such as vacuum decay), killed by an extraterrestrial civilization (could also fit black swans and adversarial environments depending on its nature)
Knightian risks:
- Black swans: ASI, nanotech, bioengineered pandemics, simulation shutdown (assuming it’s because of something we did)
- Dynamic environment: “dysgenic” pressures (maybe also adversarial), natural pandemics (the world is getting more connected, medicine more robust, etc., which makes it difficult to know how the risks of natural pandemics are changing), nuclear holocaust (the game theoretic equilibrium changes as we get nuclear weapons that are faster and more precise, better detectors, etc.)
- Adversarial environments: resource depletion or ecological destruction, misguided world government or another static social equilibrium that stops technological progress, repressive totalitarian global regime, take-over by a transcending upload (?), our potential or even our core values are eroded by evolutionary development (ex.: Hansonian em world)
Other (?): technological arrests (“The sheer technological difficulties in making the transition to the posthuman world might turn out to be so great that we never get there.” from https://nickbostrom.com/existential/risks.html )
This is great! I agree with most of these, and think it’s a useful exercise to do this classification.
Strong upvoted. This is a great overview, thanks for putting it together! I’m going to be coming back to this again for sure.
Can you say more about this? You mention that effectuation involves “shift[ing] the rules such that the risks were no longer downsides”, but that looks a lot like hormesis/antifragility to me. The lemonade principle in particular feels like straight-up antifragility (unexpected events/stressors are actually opportunities for growth).
That claim is something that often seems to be true, but it’s one of the things I’m unsure of as a general rule. I do know that in practice, when I try to mitigate risk in my own projects and think of anti-fragile and effectuative strategies, they tend to be at odds with each other (this is true of both the “0 to 1 Companies” and “AGI Risk” examples below).
The difference between hormesis and the lemonade principle is one of mindset.
In general, the anti-fragile mindset is “you don’t get to choose the game but you can make yourself stronger according to the rules.” Hormesis from that mindset is “Given the rules of this game, how can I create a policy that tends to make me stronger to the different types of risks?”
The effectuative mindset is “rig the game, then play it.” From that perspective, the lemonade principle looks more like “Given that I failed to rig this game, how can I use the information I just acquired to rig a new game.”
You’re a farmer of a commodity and there’s an unexpected drought. The hormetic mindset is “store a bit more water in the future” (and do this every time there’s a drought). The lemonade mindset is “start a drought insurance company that pays out in water.”
I think I get you now, thanks. Not sure if this is exactly right, but one is proactive (preparing for known stressors) and one is reactive (response to unexpected stressors).
I’m not sure if this is the way I would think of it but I can kind of see it. I more think of them as different responses to the same sorts of stressors.
I really enjoyed reading this. Quite concise, well organised and I thought quite comprehensive (nothing is ever exhaustive so no need to apologise on that front). I will find this a very useful resource and while nothing in it was completely “new” to me I found the structure really helped me to think more clearly about this. So thanks.
A suggestion—might be useful to turn your attention more to specific process steps using the attention directing classification tools outlined here. For example
Step 1: Identify type of risk (Transparent, Opaque, Knightian)
Step 2: List mitigation strategies for risk type—consider pros/cons for each strategy
Step 3: Weight strategy effectiveness according to pros/cons and your ability to undertake
etc—that’s just off the cuff—I’m sure you can do better :)
One minor point on AGI—how can you “get a bunch of forecasting experts together” on something that doesn’t exist and on which there is not even clear agreement around what it actually is?
I’m sure you are familiar with the astonishingly poor record on forecasts about AGI arrival (a bit like nuclear fusion and at least that’s reasonably well defined)
For someone to be a “forecasting expert” on anything, they have to have a track record of reliably forecasting something—WITH FEEDBACK—about their accuracy (which they use to improve). By definition, such experts do not exist for something that has not yet come into being and around which there isn’t a specific and clear definition/description. You might start by first gaining a real consensus on a very specific description of what it is you’re forecasting, and then maybe search for forecasting expertise in a similar area that already exists. But I think that would be difficult. AGI “forecasting” is replete with confirmation bias and wishful thinking (and if you challenge that you get the same sort of response you get from challenging religious people over the existence of their deity ;->)
Thanks again—loved it
Thanks for writing this post! I always felt that many of Taleb’s concepts/ideas were missing from LessWrong (which is not the same as saying they’re missing from the community—it’s hard to know that, and I assume many are familiar). I thought about writing a few canonical posts myself about some of his ideas, but wasn’t sure how to go about it.
More specifically, I thought about making a Risk tag and noticed there were very few posts talking about risk on the meta level, rather than about a specific risk (like AI), which was surprising to me.
The images are all gone :(
This is quite hard to debug because the images are showing up on my machine, even in incognito. Are they showing up now?
Yeah, they were hosted on Matt’s website, and are now down. Though it probably also means they can still be restored.
Nope, still broken. When I try to access them the site asks me for a password (i.e. if I go directly to the link where they are hosted) so that’s probably related. I expect turning off that password protection will probably make them visible again.
Nice post! Have you seen the Cynefin framework? It covers some similar ideas.
https://en.wikipedia.org/wiki/Cynefin_framework
This would make for a really long comment—about a thousand words explaining how to derive it. It should probably be a post instead, and to be readable, the writer would have to know how to make math formulas render properly instead of just being text. I do not know how to do that last thing, so I won’t attempt the full derivation here.
The short version is:
The Kelly criterion is supposed to be a guide to “optimal betting” over an infinite number of bets, if you have the utility function U = ln(M), where M is how much money you have. The Wikipedia page isn’t very helpful about the derivation anymore, but it links to what it says is the original paper: http://www.herrold.com/brokerage/kelly.pdf.
This is because log($0) = - infinity utilons. If you don’t think being broke is the worst thing that could happen to you, this might not be your exact utility function.
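For what it’s worth, the binary-bet case has a closed form that’s easy to check numerically. Here’s a minimal sketch (my own illustration, not from the linked paper): with win probability p and net odds b (dollars won per dollar staked), maximizing E[ln(wealth)] over the staked fraction f gives f* = p − (1 − p)/b.

```python
import math

def kelly_fraction(p, b):
    """Optimal fraction of bankroll to stake on a binary bet.

    p: probability of winning
    b: net odds (dollars won per dollar staked)
    Found by maximizing E[ln(wealth)] = p*ln(1 + f*b) + (1-p)*ln(1 - f).
    """
    return p - (1 - p) / b

def expected_log_growth(f, p, b):
    """Expected log-growth of wealth per bet at stake fraction f."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

# Example: 60% chance to win at even odds (b = 1).
f_star = kelly_fraction(0.6, 1.0)  # 0.2: stake 20% of your bankroll

# Sanity check: the Kelly fraction beats both under- and over-betting.
assert expected_log_growth(f_star, 0.6, 1.0) > expected_log_growth(0.1, 0.6, 1.0)
assert expected_log_growth(f_star, 0.6, 1.0) > expected_log_growth(0.3, 0.6, 1.0)
```

Note how betting more than f* actually lowers long-run growth—that’s the intuition the formula encodes: log utility punishes large losses more than it rewards equal-sized gains.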
Thanks! I do get the purpose/idea behind the Kelly criterion, but I don’t get how to actually do the math, nor how to intuitively think about it when making decisions the way I intuitively think about expected value.
Are you familiar with derivatives, and the properties of logarithms?
I am familiar with derivatives. I don’t remember the properties of logarithms but I half remember the base change one :).
Great post.
Can you clarify for me:
Are “Skin in the game”, “Barbell”, “Hormesis”, “Evolution” and “Via Negativa” considered to be subsets of “Optionality”
OR
Are all 6 (“Skin in the game”, “Barbell”, “Hormesis”, “Evolution”, “Via Negativa” AND “Optionality”) subsets of “Anti-fragility”?
I understood the latter from the wording of the post but the former from the figure at the top. Same with “Effectuation” and “Pilot in plane” etc.
Sort of both. Both optionality and pilot in the plane principle are like “guiding principles” of anti-fragility and effectuation from which the subsequent principles fall out. However, they’re also good principles in their own rights and subsets of the broader concept. It might be that I should change the picture to reflect the second thing instead of the first thing, to prevent confusions like this one.
A good exercise to see if you grok anti-fragility or effectuation is to go through each principle and explain how it follows from either optionality or the pilot-in-plane principle, respectively.
For some reason it never really occurred to me before that a fast enough sampling rate effectively makes the environment quasi-static for analysis purposes. That’s interesting.
I think it might be because what little work I have done in dynamics also entailed an action against which the environment needed to be modeled; so even an arbitrarily high sampling/modeling speed doesn’t affect how much the environment changes between when the action initiates and when it completes.
Quite separately, this post did a good job of incorporating everything I thought of that LessWrong has on risk all in the same post, and it would totally have been worth it if it did not do anything else. Strong upvote.
I’m surprised about the examples you have for transparent risks. When it comes to drunk driving, I have no idea how my driving skills compare to the average person’s.
Commodity markets do occasionally move in price as well. https://www.indexmundi.com/commodities/?commodity=rice&months=60 suggests that there were two months in the last 5 years where rice prices shifted by more than 10%.
That’s very different from the risk of winning the lottery, where you can actually calculate the odds precisely. Taleb uses the term “ludic fallacy” for failing to distinguish those two types of risk. Given that you do quote Taleb later on, have you made a conscious decision to reject his notion of the “ludic fallacy”? If so, what’s your reasoning for doing so?
Yes, I think I have different intuitions than Taleb here. When you think about risk in terms of the strategies you use to deal with it, it doesn’t make sense to use, for instance, anti-fragility to deal with drunk driving on a personal level. It might make sense to use anti-fragility in general for risks of death, but the inputs for your anti-fragile decision should basically take the statistics for drunk driving at face value. I think it’s pretty similar to a lottery ticket in that 99% of the risk is transparent, and the remaining small amount is model uncertainty due to unknown unknowns (maybe someone will rig the lottery). The ludic fallacy in that sense applies to every risk, because there’s always some small amount of model uncertainty (maybe a malicious demon is confusing me).
One way to think about this is that your base risk is transparent and your model uncertainty is Knightian—this is a sensible way to approach all transparent risks, and it’s part of the justification for the barbell strategy.
How my own driving skill differs from the average person’s feels to me like a straightforward known unknown. For rice prices, there’s the known unknown of weather and the resulting global crop yield.
For a business that sells crops it’s reasonable to buy options to protect against risk that come from the uncertainty about future prices.
> How my own driving skill differs from the average person feels to me a straightforward known unknown.
I didn’t think of a model where this mattered. I was thinking more of a model like “number of mistakes goes up linearly with alcohol consumption” than “number of mistakes gets multiplied by alcohol consumption.” If the latter, then this becomes an opaque risk (one that can be measured by tracking your number of mistakes in a given time period).
> For a business that sells crops it’s reasonable to buy options to protect against risk that come from the uncertainty about future prices.
Agreed. It also seems reasonable, when selecting what commodity to sell, to do a straight-up expected value calculation based on historical data and choose the one with the highest expected value. Thinking about it, perhaps there are “semi-transparent risks” that are not that dynamic or adversarial but do have black swans, and that should be its own category above transparent risks, under which commodities and utilities would go. However, I think the better way to handle this is to treat the chance of a black swan as model uncertainty carrying Knightian risk, and otherwise treat the investment as transparent based on historical data.
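That straight-up expected value comparison is simple enough to sketch. The return series below are invented for illustration only, not real commodity data:

```python
def expected_value(returns):
    """Mean of historical per-period returns."""
    return sum(returns) / len(returns)

# Invented monthly return series for two commodities (placeholder numbers).
returns = {
    "rice":  [0.02, -0.01, 0.12, 0.01, -0.11, 0.03],
    "wheat": [0.01,  0.02, 0.00, 0.01,  0.02, 0.01],
}

# Treat the historical data as transparent: pick the highest-EV commodity.
# The residual black-swan chance is handled separately, as Knightian
# model uncertainty, rather than folded into this calculation.
best = max(returns, key=lambda name: expected_value(returns[name]))
```

With these made-up numbers, wheat’s steadier series edges out rice’s volatile one on expected value—the point being that the calculation itself is mechanical once you decide which part of the risk you’re treating as transparent.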
After having someone else on the EA forum also point me to the data on commodities, I’m now updating the post.
How you would classify other global catastrophic risks according to these types?
I would say almost all global catastrophic risks would be classified as Knightian Risk. An exception might be something like an asteroid strike, which would be more opaque.
Edit: changed meteor to asteroid.
https://twitter.com/paulg/status/1110672251102416896
Regarding Transparent Risks and “Do the Math,” I’m reminded of this tweet:
Something I wish existed: a mobile app that dynamically calculates the probability you’re about to crash your car, based on your speed, the history of the piece of road you’re on, the weather, the time of day, accelerometer data, etc.
The math isn’t that easy to do when you’re in the bar—and for the sort of person who on the margin might take the bet, this is exactly the sort of thing that should be automated.
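The automated version of that bar-time math is just an expected-cost comparison. A toy sketch—every number here (crash probability, crash cost, cab fare) is a placeholder, not a real statistic:

```python
def expected_cost(p_crash, crash_cost, fixed_cost=0.0):
    """Probability-weighted crash cost plus any fixed cost (e.g. cab fare)."""
    return fixed_cost + p_crash * crash_cost

# Placeholder inputs -- a real app would substitute BAC-conditioned
# crash statistics, road history, weather, time of day, etc.
drive_drunk = expected_cost(p_crash=0.01,   crash_cost=500_000)
take_cab    = expected_cost(p_crash=0.0001, crash_cost=500_000, fixed_cost=40)

# Even with generous assumptions, the cab wins by a wide margin.
assert take_cab < drive_drunk
```

The point of automating it is precisely that none of the inputs need to be guessed at in the moment—the phone already has most of them.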