You are looking at the wreckage of an abandoned book project. We got bogged down & other priorities came up. Instead of writing the book, we decided to just publish a working outline and call it a day.
The result is not particularly optimized for tech executives or policymakers — it’s not really optimized for anybody, unfortunately.
The propositions all *aspire* to being true, although some of them may not be particularly relevant or applicable in certain scenarios. Still, there could be value in working out sensible things to say across quite a wide range of scenarios: partly because we don’t know which scenario will happen (and there is disagreement over the probabilities), but partly also because this wider structure — including the parts that don’t directly pertain to the scenario that actually plays out — might form a useful intellectual scaffolding, which could slightly constrain and inform people’s thinking about the more modal scenarios.
I think it’s unclear how well reasoning by analogy works in this area. Or rather: I suspect it works poorly, but reasoning deductively from first principles (at SL4, or SL15, or whatever) might be equally or even more error-prone. So I’ve got some patience for both approaches, hoping the combination has a better chance of avoiding fatal error than either the softheaded or the hardheaded approach has on its own.