Some possibilities on dorky LW topics (as opposed to the topics I assume Vladimir et al. are referring to):
Not only are anti-natalist arguments correct, they are correct in such a way that we should be attempting to maximize x-risks.
Wireheading is necessary and sufficient for the fulfillment of true human CEV; people only claim to care about other values for signalling purposes.
A very strong form of error theory is correct; what people actually care about is qualia, even though there is no such thing. It doesn’t all add up to normality; just as bad metaphysics may lead people to think there’s a relevant difference between praying to God and attempting to summon demons, bad metaphysics makes people think there’s a relevant difference between donating a million dollars to Against Malaria Foundation and kidnapping and torturing a small child.
It would be very fun to have a thread where we attempted to come up with seductive, harmful ideas, and the chance of actually happening upon a very infectious and very harmful one would be very low.
Wireheading is necessary and sufficient for the fulfillment of true human CEV; people only claim to care about other values for signalling purposes.
Alternative which I view as being more frightening:
For any given human, their CEV involves that human winning at zero-sum, possibly even negative-sum, games (status would be one of these). As such, the best way to maximize the CEV of the current collection of humans would be to create new agents whom current humans defeat in zero-sum games.
That is, for every current human, create a host of new agents (all of whom are quite human for all intents and purposes) of whom the current human is emperor.
Note: if this is the case, I doubt pseudo-agents will suffice. Just as humans do not wish to love pseudo-humans (that is, humans who cannot really love), humans do not wish to win zero-sum games against pseudo-humans (that is, humans who cannot really lose zero-sum games, with all that losing these games entails).
Some portrayals of heaven involve each person having dominion over a host of angels. One can only hope this allows for live action real-time strategy.
So basically, what you’re saying is that CEV might work out to everyone getting their own secret volcano lair filled with harems of catpersons? Now where have I heard this idea before...
As near as I can tell I’m -want/+like/-approve on both wireheading and emperor-like superiority.
I am willing to admit to having a desire to feel superior to other people.
Same here, but I’m willing to settle for “equal”.
You’re such a good person for that.
CEV will probably have many contributions from people who don’t want the AI to create almost-human slaves. Do you think such desires will lose out in reflective equilibrium?
I never said they would be slaves, although I certainly did imply it. I probably should not have said ‘emperor.’ A more appropriate term would have been something like ‘grand-champion’ or ‘big winner.’
Do you think such desires will lose out in reflective equilibrium?
I certainly hope not. However, I personally have no idea.
Do we have to win at the same game to be happy? It looks like different games matter to different people, and if you have mostly beaten scarcity, pragmatically useful prizes become less of a factor.
I have no idea. However, ‘yes’ is the more cynical answer, so let us assume it is the case for this particular purpose.
I will take “yes” as an answer designed to maximize scare-factor.
But frankly, do you have any evidence for this?
Maybe I am too atypical, in that I have experienced a feeling of negative utility because of a victory (and it was a consequence-free, non-cheating victory, so the feeling came directly from winning itself).
But look at scientists, businessmen, and writers. It seems that many people in each of these three groups manage to look down on the other two. Some X try to do Y, fail, and do not care about that failure, because being X is what matters.
Given enough collective resources, it seems logical to invest them in designating more distinct games and in ensuring that every field is filled with the people most suited to it. Wait, did I just repeat what Adam Smith said about the division of labour? Maybe that is not so bad, though.
bad metaphysics may lead people to think there’s a relevant difference between praying to God and attempting to summon demons
What’s wrong with the metaphysics? I figured that one of the most powerful magicks developed by the Christians is a system for addressing only the demons who are actually God (using relatively rigid designators like the Word, the Form of the Good, &c.). The biggest reason I’m suspicious of the other Indo-European religions is that they don’t advertise that they’ve developed any such system.