The AI is not omnipotent. How does it know what your coherent extrapolated volition would be?
It doesn’t have to know what my CEV would be to know what I would want in those bits, which is a compressed seed of an FAI targeted (indirectly) at my CEV.
But there are problems like, “How much effort should it put into this?” (clearly I don’t want it to spend far more compute power than it has trying to come up with the perfect combination of bits that will make my FAI unfold a little bit faster, but I also don’t want it to spend no time optimizing. How do I get it to pick somewhere in between without it already wanting to pick the optimal amount of optimization for me?) and “What decision theory is my CEV using to decide those bits?” (Hopefully not something exploitable, but how do I specify that?)
Ok, so your request would really be along the lines of “please output a seed AI that would implement indirect normativity”, or something along those lines?
That’s the goal, yeah.