The AIXI proofs seem pretty adequate to me. They may not be useful, but that’s different from not working.
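For reference, those optimality results are proved for Hutter’s expectimax definition of AIXI, which (schematically) picks the action at cycle $t$, with horizon $m$, by

$$a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_t + \cdots + r_m\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $U$ is a universal monotone Turing machine, the programs $q$ are the candidate environments, and $\ell(q)$ is the length of $q$. Results like Pareto optimality hold for this definition directly, uncomputable as it is, which is the sense in which the proofs can work without being useful.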
It’s really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. (The formalism assumes a clean separation between agent and environment, so the agent has no way to represent damage to its own hardware.) There have been various papers on this theme recently, and it’s a common LW meme (“AIXI drops an anvil on its head”).
By “Searlian reasons” I mean something like emphasizing the difference between syntax and semantics, and the difficulty of the symbol-grounding problem, as representative of the important dichotomy between narrow and general intelligence that philosophers of mind get angry at non-philosophers of mind for ignoring.
I don’t think Tipler’s not having heard of AIXI is particularly damning, even if true.
> It’s really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently, and it’s a common LW meme (“AIXI drops an anvil on its head”).
I don’t think it’s obvious it would self-destruct, any more than it’s obvious humans will not self-destruct. (And that anvil phrase is Eliezer’s.) The papers you allude to apply just as well to humans.
> I don’t think Tipler’s not having heard of AIXI is particularly damning, even if true.
I believe you are the one who is claiming AIXI will never work, and suggesting Tipler might think like you.