I don’t think anyone has proposed any self-referential criteria as being the point of Friendly AI? It’s just that such self-referential criteria as reflective equilibrium are a necessary condition which lots of goal setups don’t even meet. (And note that just because you’re trying to find a fixpoint doesn’t necessarily mean you have to find it by iteration, if that process has problems!)
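To make that parenthetical concrete, here is a toy numerical sketch. The function f(x) = x² − 2 and the use of Newton’s method are arbitrary illustrative assumptions, not anything proposed for FAI: when a fixpoint is repelling, naive iteration x ← f(x) never settles on it, but solving f(x) − x = 0 directly still finds it.

```python
# Toy example: the fixpoints of f(x) = x*x - 2 are x = 2 and x = -1
# (the roots of x^2 - x - 2 = 0). Since f'(x) = 2x, both fixpoints are
# repelling (|f'| > 1 there), so naive iteration diverges or wanders,
# yet root-finding on f(x) - x still locates them directly.

def f(x):
    return x * x - 2

def naive_iteration(x, steps=20):
    # Repeatedly apply f. Starting outside [-2, 2] this blows up to inf;
    # starting inside, it wanders chaotically without converging.
    for _ in range(steps):
        x = f(x)
    return x

def newton_on_residual(x, steps=20):
    # Newton's method on g(x) = f(x) - x = x^2 - x - 2, with g'(x) = 2x - 1.
    for _ in range(steps):
        x = x - (f(x) - x) / (2 * x - 1)
    return x

print(naive_iteration(2.1))     # inf: iteration runs away from the fixpoint
print(newton_on_residual(2.1))  # 2.0: direct root-finding still succeeds
```

The analogy is loose, of course; the point is only that specifying a fixpoint criterion and searching for it by iteration are separable choices.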
It’s just that such self-referential criteria as reflective equilibrium are a necessary condition
Why? The only examples of adequately friendly intelligent systems that we have (i.e. us) don’t meet this condition. Why should reflective equilibrium be a necessary condition for FAI?
Because FAIs can change themselves very effectively in ways that we can’t.
That doesn’t mean the FAI couldn’t remain genuinely uncertain about some value question, or consider it not worth solving at this time, or run into new value questions due to changed circumstances, etc.
All of those could prevent reflective equilibria while still being compatible with the ability to self-modify extensively.
Because FAIs can change themselves very effectively in ways that we can’t.
It might be that a human brain running in computer software would have the same issues.
That doesn’t mean the FAI couldn’t remain genuinely uncertain about some value question, or consider it not worth solving at this time, or run into new value questions due to changed circumstances, etc.
All of those could prevent reflective equilibria while still being compatible with the ability to self-modify extensively.
It’s possible. Those states feel very unstable, though.