In the line that ends with “even if God would not allow complete extinction.”, my impulse is to add “ (or other forms of permanent doom)” before the period, but I suspect that impulse comes from my tendency to pile on excessive details/notes/etc., and it is probably best not to actually include it in that sentence.
(For example, if there were no more adult humans, only billions of babies grown in artificial wombs (staggered in time), kept in a state of chemically induced euphoria until the age of one, and then killed, that technically wouldn’t be human extinction, but the scenario would still count as doom.)
Regarding the part about “it is secular scientific-materialists who are doing the research which is a threat to my values”: I think it is good that the response discusses this! (I hadn’t thought of including it.)

But I’m personally somewhat skeptical that CEV really works as a solution to this problem, at least as it is described in the simpler presentations. I imagine a lot of path-dependence in how a culture’s values would “progress” over time, and I see little reason to expect that a long sequence of changes of the form “opinions/values shifting in response to an argument that seems to make sense” would be unlikely to end up at values the initial values would deem horrifying (or values that would seem horrifying to people in an alternate possible future that just happened to take a different branch in how their values evolved).
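To make the path-dependence worry a bit more concrete, here is a toy simulation (entirely my own illustration, with made-up parameters, not anything from the Stampy response or the CEV literature): values are a unit vector, each “persuasive argument” is a small, locally plausible nudge, and two histories starting from the same values drift until they barely agree with the start or with each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(values, steps=2000, step_size=0.05):
    """Apply a long sequence of small, locally plausible value changes."""
    v = values / np.linalg.norm(values)
    for _ in range(steps):
        nudge = rng.normal(size=v.shape) * step_size  # one "convincing argument"
        v = v + nudge
        v = v / np.linalg.norm(v)  # values keep the same "size", only the direction shifts
    return v

start = np.ones(10) / np.sqrt(10)
branch_a = drift(start)
branch_b = drift(start)  # an alternate possible future from the same starting point

# Cosine similarity: 1.0 = identical values, ~0.0 = unrelated values.
print("start vs branch A:", round(float(np.dot(start, branch_a)), 3))
print("start vs branch B:", round(float(np.dot(start, branch_b)), 3))
print("branch A vs branch B:", round(float(np.dot(branch_a, branch_b)), 3))
```

In runs like this the similarities typically end up near zero, i.e. each branch looks alien both to the starting culture and to the other branch, even though every individual step looked reasonable at the time.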
[EDIT: at this point, I start going off on a tangent which is a fair bit less relevant to the question of improving Stampy’s response, so, you might want to skip reading it, idk]
My preferred solution is closer to: we avoid applying large amounts of optimization pressure to most topics, and instead apply it only to topics where there is near-unanimous agreement on which outcomes are better (such as “humanity doesn’t get wiped out by a big space rock” or “it is better for people not to have terrible diseases”), while keeping these optimizations from having much effect on areas where there is real disagreement about what is good.
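As a very rough sketch of that idea (my own toy framing, with invented objective names, vote counts, and threshold, not a worked-out proposal): imagine each candidate objective gets endorsement votes from the people affected, and the optimizer is only allowed to act on objectives that clear a near-unanimity bar, leaving everything contested alone.

```python
def allowed_objectives(ratings, threshold=0.98):
    """Keep only objectives endorsed by (nearly) everyone.

    ratings: dict mapping an objective name to a list of 0/1 endorsements.
    """
    allowed = []
    for objective, votes in ratings.items():
        if sum(votes) / len(votes) >= threshold:
            allowed.append(objective)
    return allowed

ratings = {
    "deflect the big space rock": [1] * 1000,
    "cure terrible diseases": [1] * 990 + [0] * 10,
    "settle contested value question X": [1] * 600 + [0] * 400,
}

print(allowed_objectives(ratings))
# -> ['deflect the big space rock', 'cure terrible diseases']
```

This only sketches the selection step; it says nothing about how to actually keep optimization on the allowed objectives from spilling over into the contested areas.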
Though it does seem plausible to me, which is a somewhat scary thought, that the thing I just described is not exactly coherent?
(That being said, even though I have my doubts about CEV, at least in its simpler descriptions, it would of course still be better than doom. Also, it is quite possible that I’m just misunderstanding CEV in a way that creates these concerns, and maybe it was always meant to exclude the kinds of things I’m worried about?)