I think this needs a dose of rigor (for instance, remove “sufficient time and tech” and calculate utility for humans alive today), and a deeper exploration of identity and individual/aggregate value. But I don’t know why it’s downvoted so far – it’s an important topic, and I’m glad to have some more discussion of it here (even if I disagree with the conclusions and worry about the unstated assumptions).
I agree with this. The author has made a number of points I disagree with but hasn’t done anything worthy of heavy downvotes (like having particularly bad epistemics, being very factually wrong, personally attacking people, or making a generally low-effort or low-quality post). This post alone has changed my views towards favouring a modification of the upvote/downvote system.
I agree with this as well. I have strongly upvoted in an attempt to counterbalance this, but even so it is still in negative karma territory, which I don’t think it deserves.
Well, if we’ve fallen to the level of influencing other people’s votes by directly stating what the votes ought to say (ugh =/), then let me argue the opposite: this post – at least in its current state – should not have a positive rating.
I agree that the topic is interesting and important, but – as written – this could well be an example of what an AI with a twisted/incomplete understanding of suffering, entropy, and a bunch of other things has come up with. The text conjures several hells, both explicitly (billions of years of suffering are the right choice!) and implicitly (we make our perfect world by re-writing people to conform! We know what the best version of you was; we know better than you and make your choices!), and the author seems to be completely unaware of that. We get surprising, unsettling conclusions with very little evidence or reasoning to support them (instead there are “reassuring” parentheticals like “(the answer is yes)”). As a “What could alignment failure look like?” case study this would be disturbingly convincing. As a serious post, the way it glosses over lots of important details and confidently presents its conclusions, combined with the “for easy referencing” in the intro, is just terrifying.
Hence: I don’t want anyone to make decisions based directly on this post’s claims that might affect me even in the slightest. One of the clearest ways to signal that is with a negative karma score. (It doesn’t have to be multi-digit, but it shouldn’t be zero or greater.) Keep in mind that anyone on the internet (including GPT-5) can read this post, and they might interpret a positive score as endorsement/approval of the content as written. (They’re not guaranteed to know what the votes are supposed to mean, and it’s even plausible that someone uses the karma score as a filter criterion for some ML data collection.) Low positive scores can be rationalized away easily (e.g. the content is too advanced for most, or other important stuff happening in parallel stole the show, …) or are likely to pass a filter cutoff; zero is unstable and could accidentally flip into the positive numbers; so negative scores it is.
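To make the filter-cutoff worry concrete, here is a minimal sketch of how a hypothetical scraper might use karma as an inclusion criterion. Everything in it is an assumption for illustration – the `karma` field name and the non-negative default threshold are mine, not anything the site or any real data pipeline documents:

```python
# Hypothetical sketch of the filter-cutoff point above. The "karma" field
# name and the default threshold are assumptions for illustration only.

def keep_for_training(post: dict, min_karma: int = 0) -> bool:
    """Keep a post for a scraped ML corpus if its score clears the cutoff."""
    return post.get("karma", 0) >= min_karma

posts = [
    {"title": "A", "karma": 12},  # clearly passes
    {"title": "B", "karma": 0},   # passes a >= 0 cutoff; one stray upvote from looking endorsed
    {"title": "C", "karma": -3},  # only a negative score is reliably excluded
]

kept = [p["title"] for p in posts if keep_for_training(p)]
print(kept)  # ['A', 'B']
```

Under a cutoff like this, only a negative score keeps the post out; zero or any low positive score sails through.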