That also isn’t an AI in the relevant sense, as it doesn’t actually exist. Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can’t prove that an AIXI-style AI will ever work, and it’s presumably part of Tipler’s argument that it won’t work, so simply asserting that it will work is sort of pointless. I’m just saying that if you want to engage with his argument you’ll have to get closer to it ’cuz you’re not yet in bowshot range. If your intention was to repeat the standard counterargument rather than show why it’s correct, then I misinterpreted your intention; apologies if so.
Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can’t prove that an AIXI-style AI will ever work, and it’s presumably part of Tipler’s argument that it won’t work, so simply asserting that it will work is sort of pointless.
The AIXI optimality proofs seem pretty adequate to me. They may not be practically useful, but that’s different from not working.
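(To be concrete about which proofs I mean, and hedging a bit since I’m writing this from memory: Hutter defines AIXI by a single expectimax expression over all environment programs q, weighted by a Solomonoff-style prior, where U is a universal Turing machine, ℓ(q) is the length of q, and m is the horizon. Roughly:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The optimality theorems are about this uncomputable object, which is exactly how “the proofs are adequate” and “this is useful” can come apart.)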
More to the point, nothing in Tipler’s paper gave me the impression he had so much as heard of AIXI, and it’s not clear to me that he does accept Searlian reasons. What are those, by the way? They can’t be Chinese room stuff, since Tipler has been gung ho on uploading for decades now.
The AIXI optimality proofs seem pretty adequate to me. They may not be practically useful, but that’s different from not working.
It’s really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently, and it’s a common LW meme (“AIXI drops an anvil on its head”). The worry is that AIXI’s formalism treats the agent as separate from its environment, so it has no way to represent, and hence no reason to avoid, the destruction of its own hardware.
By “Searlian reasons” I mean something like emphasizing the difference between syntax and semantics, and the difficulty of the symbol grounding problem, as representative of the important dichotomy between narrow and general intelligence that philosophers of mind get angry with non-philosophers of mind for ignoring.
I don’t think Tipler’s not having heard of AIXI is particularly damning, even if true.
It’s really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently and it’s a common LW meme (“AIXI drops an anvil on its head”).
I don’t think it’s obvious it would self-destruct, any more than it’s obvious that humans won’t. (And that anvil phrase comes from Eliezer.) The papers you allude to apply just as well to humans.
I don’t think Tipler’s not having heard of AIXI is particularly damning, even if true.
I believe you are the one who is claiming AIXI will never work, and suggesting Tipler might think like you.
Fine: ‘show me this morality in a computable implementation of AIXI using the speed prior, or GTFO’ (what was it called, AIXI-tl?).
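(To unpack that demand, going from memory, so treat the details as a sketch: AIXI-tl is Hutter’s computable variant, which restricts attention to policies of length at most l running in time at most t per cycle; the speed prior is Schmidhuber’s separate proposal. Where the Solomonoff prior weights an environment program p only by its length ℓ(p), the speed prior also penalizes its running time t(p), something like:

$$M(x) \;=\; \sum_{p\,:\,U(p)=x\ast} 2^{-\ell(p)} \qquad\text{vs.}\qquad S(x) \;\approx\; \sum_{p\,:\,U(p)=x\ast} \frac{2^{-\ell(p)}}{t(p)}$$

Either way the challenge stands: the optimality proofs live in the uncomputable limit, and a bounded variant is where any claimed morality would have to actually show up.)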