People shouldn’t be doing anything like that; I’m saying that if there is actually a CEV-aligned superintelligence, then this is a good thing. Would you disagree?
I think an actual CEV-aligned superintelligence would probably be good, conditional on being possible. But I expect that anyone who thinks they have a plan to create one is almost certainly wrong about that, so plans of that nature are a bad idea in expectation, and much more so if the plan looks like “do a bunch of stuff that would be obviously terrible if not for the end goal, in the name of optimizing the universe”.
Were you confused by what I said in the post or are you just suggesting a better wording?
I was specifically unsure which meaning of “optimize for” you were referring to with each usage of the term.
Yep, I agree.