Thanks for the comment!

any plan that looks like “some people build a system that they believe to be a CEV-aligned superintelligence and tell it to seize control”
People shouldn’t be doing anything like that; I’m saying that if there is actually a CEV-aligned superintelligence, then this is a good thing. Would you disagree?
what exactly you mean by the terms “white-box” and “optimizing for”
I agree with “Evolution optimized humans to be reproductively successful, but despite that humans do not optimize for inclusive genetic fitness”, and the point I was making was that the stuff that humans do optimize for is similar to the stuff other humans optimize for. Were you confused by what I said in the post or are you just suggesting a better wording?
People shouldn’t be doing anything like that; I’m saying that if there is actually a CEV-aligned superintelligence, then this is a good thing. Would you disagree?
I think an actual CEV-aligned superintelligence would probably be good, conditional on one being possible. But I also expect that anyone who thinks they have a plan to create one is almost certainly wrong about that, so plans of that nature are a bad idea in expectation, and much more so if the plan looks like “do a bunch of stuff that would be obviously terrible if not for the end goal, in the name of optimizing the universe”.
Were you confused by what I said in the post or are you just suggesting a better wording?
I was specifically unsure which meaning of “optimize for” you were referring to with each usage of the term.
Yep, I agree