In my opinion, the bad karma I get here is completely biased. People just don’t realize that I’m basing extrapolated conclusions on shaky premises, just as LW does all the time when talking about a future galactic civilization and risks from AI. The difference is that my predictions are based much more on evidence.
It’s a mockery of all that is wrong with this community. I already thought I’d get bad karma for my other post but was surprised not to. I’ll probably get really bad karma now for saying this. Oh well :-)
To be clear, this is a thought experiment asking what we can and should do if we are ultimately prone to cause more suffering than happiness. It’s nothing more than that. People suspect that I’m making strong arguments, that this is my opinion, and that I’m calling for action. All of that is wrong; I’m not the SIAI. I can argue for things I don’t support and don’t even think are sound.
Note that multifoliaterose’s recent posts and comments have been highly upvoted: he’s gained over 500 karma in a few days for criticizing SIAI. I think that the reason is that they were well-written, well-informed, and polite while making strong criticisms using careful argument. If you raise the quality of your posts I expect you will find the situation changing.
You are one of the few people here whose opinion I’m actually taking seriously, after many insightful and polite comments. What is the bone of contention in the OP? I took a few different ingredients: Robin Hanson’s argument about resource problems in the far future (the economic argument); questions based on negative utilitarianism (the ethical argument); and the most probable fate of the universe given current data (the basic premise). Then I extrapolated from there and created an antiprediction. That is, I said that a good outcome is too unlikely for us to believe that it is possible, and that our responsibility is to prevent a lot of suffering over 10^100 years.
I never said I support this conclusion or think that it is sound. But I think it is very similar to other arguments within this community.
On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which “reads” angry, and doesn’t fit with norms of politeness and discourse here).
Substantively, I’ll consider the major pieces individually.
The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar’s arguments and made your points more explicit, but instead simply stated the conclusion.
The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly with resource decline was lacking in supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce’s Hedonistic Imperative is relevant here: with access to self-modification capacities entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.
For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the “lifeboat ethics” scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer’s doesn’t work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.
In several places throughout the post you use “what if” language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.
Edit: I misread the “likely” in this sentence and mistakenly objected to it.
“Might it be better to believe that winning is impossible, than that it’s likely, if the actual probability is very low?”
I copied that sentence from here (last sentence).
I think that spending more time reading the sequences, and the posts of highly upvoted Less Wrongers such as Yvain and Kaj Sotala, will help you to improve your sense of the norms of discourse around here.
Thanks, I’ll quit making top-level posts, as I doubt I’ll ever be able to exhibit the attitude required for the level of thought and elaboration you demand. That was actually my opinion before making the last and the first post. But what I consider the laughable attitude around Roko’s post annoyed me enough to signal my incredulity.
ETA:
“In several places throughout the post you use ‘what if’ language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.”
The SIAI = What If?
I think you should probably read more of the Less Wrong sequences before you make more top-level posts. Most of the highly upvoted posts are by people who have the knowledge background from the sequences.
I’m talking about these kinds of statements: http://www.vimeo.com/8586168 (5:45)
“If you confront it rationally, full on, then you can’t really justify trading off any part of galactic civilization for anything that you could get nowadays.”
So why, I ask you directly, am I not allowed to argue that we can’t really justify balancing the happiness and utility of a galactic civilization against the MUCH longer period of decay? There is this whole argument about how we have to give rise to the galactic civilization and have to survive now. But I predict that suffering will prevail, that it is too unlikely that the outcome will be positive. What is wrong with that?