This is totally awesome. Today is a good day on LW, first Stuart’s result and now yours.
Funny, a couple weeks ago I wrote a comment explaining why Schmidhuber’s Gödel machines probably don’t work either, but I guess people didn’t notice it or considered it too obviously wrong. What do you think?
I agree that Gödel machines probably don't work, for similar reasons, though I didn't notice your comment until now. Unfortunately, I also think your non-self-referential alternative runs into similar issues (subsequent agents end up using successively weaker axiom systems if they follow the same general design). I have been thinking along these lines independently, and I think resolving either problem will involve dealing with more fundamental issues (e.g., that agents should not believe themselves to be well-calibrated). I've queued a rather long series of LW posts laying out what I consider the current state of affairs on FAI-related open problems; a few of them concern this issue, and the OP is the first of the series.
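To unpack the "successively weaker axiom systems" worry a little, here is a rough sketch under a simplifying assumption (my own toy model, not a claim about any specific construction): each agent reasons in a fixed theory T_n and will only build a successor whose theory it can prove consistent.

```latex
% Toy model: agent n reasons in theory T_n and requires a proof of the
% consistency of its successor's theory T_{n+1} before deploying it.
\begin{align*}
  &\text{Requirement: } T_n \vdash \mathrm{Con}(T_{n+1}) \text{ for every } n.\\
  &\text{If } T_{n+1} \supseteq T_n \text{ (with the inclusion provable in } T_n\text{), then }
    T_n \vdash \mathrm{Con}(T_{n+1}) \rightarrow \mathrm{Con}(T_n),\\
  &\text{so } T_n \vdash \mathrm{Con}(T_n)\text{, contradicting G\"odel's second incompleteness theorem}\\
  &\text{(assuming } T_n \text{ is consistent, recursively axiomatized, and contains enough arithmetic).}\\
  &\text{Hence } T_{n+1} \text{ cannot extend } T_n\text{: each successor must give up something its predecessor could prove.}
\end{align*}
```

So as long as the successor is vetted by a consistency (or soundness) proof in the predecessor's own theory, the chain of theories has to keep losing strength; that is the weakening I mean.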
> I’ve queued a rather long series of LW posts establishing what I consider the current state of affairs on FAI-related open problems, a few of which concern this issue (and of which the OP is the first).
Nice! Does that mean you have many new results queued for posting? What can I do to learn them sooner? :-)
> Unfortunately, I also think that your non-self-referential alternative runs into similar issues (where subsequent agents use successively weaker axiom systems, if they follow the same general design).
Can you expand?
I’m not sure about that billing—the comment seems to give up before much of a conclusion gets drawn.
Gödel machines are like genetic algorithms (GAs): the problem is not so much that they don’t work at all, but rather that they may work too slowly.
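To make the "too slowly" point concrete, here is a toy sketch (my own illustration, not Schmidhuber's actual construction): a searcher that enumerates candidate proofs in order and asks a checker whether each one certifies that a self-rewrite improves expected utility. The names (`proof_checker`, `brute_force_search`, the toy alphabet and target) are all made up for illustration.

```python
# Toy illustration of why proof-searching self-improvers can be slow:
# enumerate candidate "proofs" in length-lexicographic order and test each
# against a verifier.  The number of candidates examined grows exponentially
# with the length of the shortest acceptable proof -- the same
# "works in principle, far too slowly in practice" worry as with blind GA search.

from itertools import product

ALPHABET = "ab"       # stand-in for proof symbols
TARGET = "abba" * 3   # stand-in for the shortest proof that a rewrite helps


def proof_checker(candidate: str) -> bool:
    """Hypothetical verifier: accepts only the one 'valid' proof."""
    return candidate == TARGET


def brute_force_search(max_len: int) -> int:
    """Enumerate candidates by length; return how many were checked before success."""
    checked = 0
    for length in range(1, max_len + 1):
        for symbols in product(ALPHABET, repeat=length):
            checked += 1
            if proof_checker("".join(symbols)):
                return checked
    return checked


if __name__ == "__main__":
    # With a 2-symbol alphabet and a length-12 target, this already checks
    # several thousand candidates; real proof systems are vastly larger.
    print("candidates checked before success:", brute_force_search(len(TARGET)))
```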