I particularly like this paragraph:

> It just doesn’t seem like the implications of the differences have fully propagated into some of the recommendations?—as if an attempt to write in a way that’s comprehensible to Shock Level 2 tech executives and policymakers has failed to elicit all of the latent knowledge that Bostrom and Shulman actually possess. It’s understandable that our reasoning about the future often ends up relying on analogies to phenomena we already understand, but ultimately, making sense of a radically different future is going to require new concepts that won’t permit reasoning by analogy.
The bolded sentence (emphasis mine) plausibly describes exactly what’s going on (curious if Shulman agrees; or Bostrom, though he seems less likely to comment).
One of the frightening things to me is that this might actually be the limit of what humanity can coordinate around. (Or rather, I’m frightened that the things humanity can coordinate around are even more weaksauce than the concepts in this doc.)
You are looking at the wreckage of an abandoned book project. We got bogged down & other priorities came up. Instead of writing the book, we decided to just publish a working outline and call it a day.
The result is not particularly optimized for tech executives or policymakers — it’s not really optimized for anybody, unfortunately.
The propositions all *aspire* to being true, although some of them may not be particularly relevant or applicable in certain scenarios. Still, there could be value in working out sensible things to say to cover quite a wide range of scenarios, partly because we don’t know which scenario will happen (and there is disagreement over the probabilities), but partly also because this wider structure — including the parts that don’t directly pertain to the scenario that actually plays out — might form a useful intellectual scaffolding, which could slightly constrain and inform people’s thinking about the more modal scenarios.
I think it’s unclear how well reasoning by analogy works in this area. Or rather: I guess it works poorly, but reasoning deductively from first principles (at SL4, or SL15, or whatever) might be equally or even more error-prone. So I’ve got some patience for both approaches, hoping the combo has a better chance of avoiding fatal error than either the softheaded or the hardheaded approach has on its own.