You are one of the few people here whose opinion I’m actually taking seriously, after many insightful and polite comments. What is the bone of contention in the OP? I took a few different ingredients: Robin Hanson’s argument about resource problems in the far future (the economic argument); questions based on negative utilitarianism (the ethical argument); and the most probable fate of the universe given current data (the basic premise). From there I extrapolated and created an antiprediction: that a good outcome is too unlikely for us to believe it is possible. Our responsibility is to prevent a lot of suffering over 10^100 years.
I never said I support this conclusion or think that it is sound. But I think it is very similar to other arguments within this community.
On a thematic/presentation level I think the biggest problem was an impression that the post was careless, attempting to throw as many criticisms as possible at its target without giving a good account of any one. This impression was bolstered by the disclaimer and the aggressive rhetorical style (which “reads” angry, and doesn’t fit with norms of politeness and discourse here).
Substantively, I’ll consider the major pieces individually.
The point that increasing populations would result in more beings that would quite probably die is not a persuasive argument to most people, who are glad to exist and who do not believe that creating someone to live a life which is mostly happy but then ends is necessarily a harm. You could have presented Benatar’s arguments and made your points more explicit, but instead simply stated the conclusion.
The empirical claim that superhuman entities awaiting the end of the universe would suffer terribly as resources decline lacked supporting arguments. Most humans today expect to die within no more than a hundred years, and yet consider their lives rather good. Superintelligent beings capable of directly regulating their own emotions would seem well-positioned to manage or eliminate stress and suffering related to resource decline. David Pearce’s Hedonistic Imperative is relevant here: with access to self-modification capacities, entities could remain at steadily high levels of happiness, while remaining motivated to improve their situations and realize their goals.
For example, it would be trivial to ensure that accepting agreed upon procedures for dealing with the “lifeboat ethics” scenarios you describe at the end would not be subjectively torturous, even while the entities would prefer to live longer. And the comparison with Alzheimer’s doesn’t work: carefully husbanded resources could be used at the rate preferred by their holders, and there is little reason to think that quality (as opposed to speed or quantity) of cognition would be much worsened.
In several places throughout the post you use “what if” language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.
Edit: I misread the “likely” in this sentence and mistakenly objected to it.
Might it be better to believe that winning is impossible, than that it’s likely, if the actual probability is very low?
I copied that sentence from here (last sentence).
I think that spending more time reading the sequences, and the posts of highly upvoted Less Wrongers such as Yvain and Kaj Sotala, will help you to improve your sense of the norms of discourse around here.
Thanks, I’ll quit making top-level posts, as I doubt I’ll ever be able to exhibit the attitude required for the level of thought and elaboration you demand. That was actually my opinion before making my first and last post. But this, in my opinion laughable, attitude around Roko’s post annoyed me enough to signal my incredulity.
ETA
In several places throughout the post you use “what if” language without taking the time to present sufficient arguments in favor of plausibility, which is a rationalist faux-pas.
The SIAI = What If?