AI Risk & Opportunity: Strategic Analysis Via Probability Tree
Part of the series AI Risk and Opportunity: A Strategic Analysis.
(You can leave anonymous feedback on posts in this series here. I alone will read the comments, and may use them to improve past and forthcoming posts in this series.)
There are many approaches to strategic analysis (Bishop et al. 2007). Though a morphological analysis (Ritchey 2006) could model our situation in more detail, the present analysis uses a simple probability tree (Harshbarger & Reynolds 2008, sec. 7.4) to model potential events and interventions.
A very simple tree
In our initial attempt, the first disjunction concerns which of several (mutually exclusive and exhaustive) transformative events comes first:
“FAI” = Friendly AI.
“uFAI” = UnFriendly AI, not including uFAI developed with insights from WBE.
“WBE” = Whole brain emulation.
“Doom” = Human extinction, including simulation shutdown and extinction due to uFAI striking us from beyond our solar system.
“Other” = None of the above four events occur in our solar system, perhaps due to stable global totalitarianism or for unforeseen reasons.
Our probability tree begins simply:
Each circle is a chance node, which represents a random variable. The leftmost chance node above represents the variable of whether FAI, uFAI, WBE, Doom, or Other will come first. The rightmost chance nodes are open to further disjunctions: the random variables they represent will be revealed as we continue to develop the probability tree.
Each left-facing triangle is a terminal node, which for us serves the same function as a utility node in a Bayesian decision network. The only utility node in the tree above assigns a utility of 0 (bad!) to the Doom outcome.
Each branch in the tree is assigned a probability. For the purposes of illustration, the above tree assigns probability .01 to FAI coming first, .52 to uFAI, .07 to WBE, .35 to Doom, and .05 to Other.
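To make the structure concrete, here is a minimal Python sketch of one way such a tree could be represented and evaluated. The branch probabilities are the illustrative ones above; since only Doom's utility (0) is fixed above, every other utility below is a placeholder assumption.

```python
# A chance node maps each outcome to a (probability, subtree) pair;
# a terminal node is a bare utility number. Only Doom's utility of 0
# is given above; the other utilities are placeholder assumptions.

def expected_utility(node):
    """Recursively compute the expected utility of a (sub)tree."""
    if isinstance(node, (int, float)):  # terminal node
        return node
    total_p = sum(p for p, _ in node.values())
    assert abs(total_p - 1.0) < 1e-9    # branch probabilities must sum to 1
    return sum(p * expected_utility(sub) for p, sub in node.values())

tree = {
    "FAI":   (0.01, 1.0),  # placeholder utility
    "uFAI":  (0.52, 0.0),  # placeholder utility
    "WBE":   (0.07, 0.5),  # placeholder utility
    "Doom":  (0.35, 0.0),  # utility 0, as above
    "Other": (0.05, 0.2),  # placeholder utility
}

print(expected_utility(tree))  # 0.055 with these illustrative numbers
```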
How the tree could be expanded
The simple tree above could be expanded “downstream” by adding additional branches:
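Continuing the sketch above, a downstream expansion simply replaces a terminal with a further disjunction. The sub-branches and their numbers here are invented purely for illustration:

```python
# Downstream expansion: the WBE terminal becomes its own chance node.
# Sub-branch names and probabilities are hypothetical.
tree["WBE"] = (0.07, {
    "FAI developed from WBE":  (0.30, 1.0),  # placeholder utility
    "uFAI developed from WBE": (0.70, 0.0),  # placeholder utility
})
print(expected_utility(tree))  # now 0.041: the change propagates automatically
```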
We could also make the probability tree more actionable by trying to estimate the probability of desirable and undesirable outcomes given that certain shorter-term goals are met. In the example below, “private push” means that a non-state actor passionate about safety invests $30 billion or more into developing WBE technology within 30 years from today. Perhaps there’s a small chance this safety-conscious actor could get to WBE before state actors, upload FAI researchers, and have them figure out FAI before uFAI is created.
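One rough way to score such an intervention, continuing the running sketch: build the tree twice, once conditioned on the private push and once without it, and compare expected utilities. Every conditional probability below is a made-up placeholder, not an estimate:

```python
import copy

def with_private_push(base_tree):
    """Hypothetical effect of the push: probability shifts toward WBE-then-FAI."""
    t = copy.deepcopy(base_tree)
    t["WBE"] = (0.20, {                           # WBE more likely to come first...
        "FAI developed from WBE":  (0.60, 1.0),   # ...and more likely to go well
        "uFAI developed from WBE": (0.40, 0.0),
    })
    t["uFAI"] = (0.39, 0.0)                       # mass moved out of uFAI-first
    return t

pushed = with_private_push(tree)
print(expected_utility(pushed) - expected_utility(tree))  # ~0.099 here
```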
We could also expand the tree “upstream,” replacing the first disjunction over our five what-comes-first options with a series of earlier disjunctions that feed into which option comes first.
We could add hundreds or thousands of nodes to our probability tree, then use decision-analysis software to test how much the outcomes change when particular inputs are varied, and thereby learn which things we can do now to most increase our chances of a desirable outcome, given our current model.
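A crude version of that test, in the running sketch, is just a sweep over one input while watching the output; the values below remain placeholders:

```python
# Sweep P(FAI | WBE comes first) and watch overall expected utility respond.
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    tree["WBE"] = (0.07, {
        "FAI developed from WBE":  (p, 1.0),
        "uFAI developed from WBE": (1 - p, 0.0),
    })
    print(p, round(expected_utility(tree), 4))
```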
We would also need to decide which “endgame scenarios” we want to include as possible terminals, and the utility of each. These choices may be complicated by our beliefs about multiverses and simulations.
However, decision trees become enormously large and complex very quickly as you add more variables. If we had the resources for a more complicated model, we’d probably want to use influence diagrams instead (Howard & Matheson 2005), e.g. one built in Analytica, like the ICAM climate change model. Of course, one must always worry that one’s model is internally consistent but disconnected from the real world (Kay 2012).
References
Bishop et al. (2007). The current state of scenario development: an overview of techniques.
Harshbarger & Reynolds (2008). Mathematical Applications for the Management, Life, and Social Sciences.
Howard & Matheson (2005). Influence Diagrams.
Kay (2012). The map is not the territory: Models, scientists, and the state of modern macroeconomics.
Ritchey (2006). Problem structuring using computer-aided morphological analysis.