I don’t spend much time talking about this on LW because timeless trade speculation eats people’s brains and doesn’t produce any useful outputs from the consumption; only decision theorists whose work is plugging into FAI theory need to think about timeless trade, and I wish everyone else would shut up about the subject on grounds of sheer cognitive unproductivity.
I don’t trust any group that wishes to create, or to influence the creation of, a superintelligence when it tries to suppress discussion of the very decision theory that the superintelligence will implement. How such an agent interacts with the concept of acausal trade completely and fundamentally alters how it can be expected to behave. That is the kind of thing that needs to be disseminated among an academic community, digested, and understood in depth. It is not something to entrust to an isolated team, with all the vulnerability to groupthink that entails.
If someone were to credibly announce “We’re creating a GAI. Nobody else but us is allowed to even think about what it is going to do. Just trust us, it’s Friendly,” then the appropriate response is to shout “Watch out! It’s a dangerous crackpot! Stop him before he takes over the world and potentially destroys us all!” And make no mistake: if this kind of attempt at suppression were made by anyone remotely close to developing an FAI theory, that is exactly what it would entail. Fortunately, at this point it is still at the “Mostly Harmless” stage.
and doesn’t produce any useful outputs from the consumption
I don’t believe you. At the very least, it produces outputs as useful and interesting as any other discussion of decision theory does. There are plenty of curious avenues to explore on the subject, and fascinating implications and strategies that are at least worth considering.
Sure, the subject may deserve a warning: “Do not consider this topic if you are psychologically unstable or have reason to believe that you are particularly vulnerable to distress or fundamental epistemic damage from the consideration of abstract concepts.”
not to mention the horrid way it sounds from the perspective of traditional skeptics (and not wholly unjustifiably so).
If this were the real reason for Eliezer’s objection, I would not be troubled by his attitude. I would still disagree: the correct approach is not to try to suppress all discussion of the subject by other people, but rather to apply basic political caution and not comment on it oneself (or allow anyone within one’s organisation to do so).
If someone were to credibly announce “We’re creating a GAI. Nobody else but us is allowed to even think about what it is going to do. Just trust us, it’s Friendly,” then the appropriate response is to shout “Watch out! It’s a dangerous crackpot! Stop him before he takes over the world and potentially destroys us all!” And make no mistake: if this kind of attempt at suppression were made by anyone remotely close to developing an FAI theory, that is exactly what it would entail. Fortunately, at this point it is still at the “Mostly Harmless” stage.
I don’t see how anyone could credibly announce that. The announcement radiates crackpottery.