I’m sure most of the readers of lesswrong and overcomingbias would consider a singleton scenario undesirable. (In a singleton scenario, a single political power or individual rules over most of humanity.)
You are overwhelmingly mistaken. The vast majority prefer a singleton and think all the likely alternatives to be horrific. Robin Hanson is an exception—but his preferences are bizarre to the point of insanity.
You are overwhelmingly mistaken. The vast majority prefer a singleton and think all the likely alternatives to be horrific.
There may be some mind-projection going on here. I don’t, for example, have any strong preferences in this regard, although I think that a Friendly singleton is a better alternative than, say, paperclipping. But a singleton isn’t obviously going to be very nice to humans. A singleton that is almost but not quite Friendly could make a world that is very bad from a human perspective.
Robin Hanson is an exception—but his preferences are bizarre to the point of insanity.
Can you expand on this? There are people here who strongly disagree with what Robin thinks is likely to occur, but I’ve seen little indication that people strongly disagree with Robin’s stated preferences.
Perhaps. Also an implied “in proportion to degree of familiarity with lesswrong style thinking and premises”.
But a singleton isn’t obviously going to be very nice to humans. A singleton that is almost but not quite Friendly could make a world that is very bad from a human perspective.
All non-singleton possibilities being horrific is not to say that the majority of possible singletons do not constitute “humanity: FAIL” as well. Just that the only significant opportunities for a desirable future happen to involve a singleton.
I’ve seen little indication that people strongly disagree with Robin’s stated preferences.
The assertion ‘bizarre to the point of insanity’ was not attributed to a majority of others. That was my personal declaration. That said, I am far from the only one to suggest that the scenarios Robin embraces are instead credible threats to be avoided. I am not alone in preferring not to die and have humanity and all that it values obliterated by the selection pressures of a Malthusian catastrophe as a precursor to the cosmic commons being burned.
I am not alone in preferring not to die and have humanity and all that it values obliterated by the selection pressures of a Malthusian catastrophe as a precursor to the cosmic commons being burned.
I don’t think Robin particularly wants to die. Note that he’s signed up for cryonics, for example. Regarding the burning of the cosmic commons, it isn’t clear to me that he is in favor of it, just that he considers it a likely result. Given his training as an economist, that shouldn’t be too surprising.
All non-singleton possibilities being horrific is not to say that the majority of possible singletons do not constitute “humanity: FAIL” as well. Just that the only significant opportunities for a desirable future happen to involve a singleton.
Can you expand on this? I don’t know if this is something that is a) agreed upon or b) sufficiently well-defined. If, for example, AGI turns out not to be able to go foom, and we all get functioning cryonics and clinical immortality, that seems like a pretty good outcome. I don’t see how you get that non-singleton results must be horrific, or even that most of them must be horrific. There may be definitional issues here in what constitutes horrific.
There may be definitional issues here in what constitutes horrific.
No, just the basic ‘everybody dies and that which constitutes the human value system is obliterated without being met’.
But there are certainly issues regarding differing premises which would prohibit useful discussion here without multiple posts’ worth of groundwork preparation.
Regarding the burning of the cosmic commons, it isn’t clear to me that he is in favor of that,
No, rather, it is the default mainline probable outcome of other scenarios that he does (explicitly) embrace. (Some of that embracing was, I will grant, made for the purpose of being deliberately provocative.)