That’s what you get when you skip the math, which is available, and instead reason in fuzzy, largely irrelevant concepts derived from an imprecise verbal description. That is exactly what EY expressed dissatisfaction with, yet it is what he is most guilty of.
edit: Actually, I think EY changed his mind on the dangerousness of AIXI, which is an enormous plus point for him, but one that should come with a penalty to his claimed meta-understanding of the difficulties involved, given the tendency to put-yourself-in-its-shoes-tasked-with-complying-with-a-verbal-description instead of understanding the math (understanding which should also bring the realization that failure doesn’t imply everyone dies). Anyhow, the issue is that an AI doing something unintended due to a flaw is as far a cry from killing mankind as a nuclear power plant design having a flaw is from the plant suffering a multikiloton explosion. FAI is work on a non-suicidal AI; akin to work on an unmoderated fast-neutron reactor with a positive thermal coefficient of reactivity (plus a ‘proof’ that the control system is perfect). One could move the goalposts and argue that nonminds like AIXI are not true AGI; that’s about as interesting as arguing that submarines don’t really swim.
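For reference, the math in question is available in closed form. A standard statement of Hutter's AIXI (notation may differ slightly from the edition the reader has at hand) selects the action

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\;(r_k + \cdots + r_m)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where $U$ is a universal monotone Turing machine, $q$ ranges over programs of length $\ell(q)$ consistent with the observed history, $o_i$ and $r_i$ are observations and rewards, and $m$ is the horizon. Reasoning from this definition, rather than from a verbal gloss like "maximizes reward," is precisely the distinction being drawn here: what the formula does under a flaw follows from the formula, not from imagining oneself in its shoes.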