This is a debate about nothing. Turing completeness tells us that, no matter how much it appears that a given Turing-complete representation can only usefully process data about certain kinds of things, in reality it can process data about anything any other language can.
Well, duh, but this (and the halting problem) have been taught and yet systematically ignored in programming language design, and this is exactly the same argument.
We are sitting around in the armchair trying to come up with a better means of logic/data representation (be it a programming language or the underlying AI structure) as if the debate were about mathematical elegance or some such objective notion. Until you prove otherwise, the likely scenario is that AIXI can duplicate any behavior the other system can exhibit (modulo semantic changes as to what we call a punishment), and vice versa.
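To make the "modulo relabeling" point concrete, here is a toy sketch (purely illustrative, all names and payoffs made up by me): one agent framed as maximizing a "reward," another framed as minimizing a "punishment," where punishment is just negated reward. They pick the same action, so the difference is vocabulary, not behavior.

```python
# Two framings of the same decision problem over a small action set.
ACTIONS = ["left", "right", "wait"]

def reward(action: str) -> float:
    # Arbitrary made-up payoffs for illustration.
    return {"left": 1.0, "right": 3.0, "wait": 0.0}[action]

def punishment(action: str) -> float:
    # The same information, relabeled: punishment is just negated reward.
    return -reward(action)

reward_maximizer = max(ACTIONS, key=reward)
punishment_minimizer = min(ACTIONS, key=punishment)

# Identical choice, different vocabulary.
assert reward_maximizer == punishment_minimizer
print(reward_maximizer)  # -> "right"
```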
So what would make one model for AI better than another? These vague theoretical issues? No, no more than how fancy your type system is determines the productiveness of your programming language. Ultimately, the hurdle to overcome is that HUMANS need to build and reason about these systems, and we are more inclined to certain kinds of mistakes than others. For instance, I might write a great language using the full calculus of inductive constructions as its type system, with type inference almost everywhere, but if my language looks like line noise rather than human words, all that math is irrelevant.
I mean, ask yourself why human programming and genetic programming are so different. Because the model you use to build up your system has a far greater impact on your ability to understand what is going on than any of its other effects. Sure, if you write in pure assembly, JMPs everywhere, with crazy code-packing tricks, it goes faster, but you still lose.
If I'm right about this case as well, it can only be decided by practical experiments where you have people try to reason in (simplified) versions of the systems and see what can and can't be easily fixed.
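To illustrate what I mean by "line noise," here is a contrived sketch (my own example, not from anyone's actual language): two versions of the same function, one written for humans and one that no amount of type-system machinery rescues from being hard to audit at a glance.

```python
def mean_of_positives(xs: list[float]) -> float:
    """Average of the strictly positive entries, or 0.0 if there are none."""
    positives = [x for x in xs if x > 0]
    return sum(positives) / len(positives) if positives else 0.0

# The "line noise" version: same behavior, far harder for a human to check.
m = lambda x: (lambda p: sum(p) / len(p) if p else 0.0)([v for v in x if v > 0])

assert mean_of_positives([1.0, -2.0, 3.0]) == m([1.0, -2.0, 3.0]) == 2.0
```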
You seem to be missing the point. AIXI should be able to reason effectively if it incorporates a solution to the problem of naturalistic induction, which is what this whole sequence is trying to get at. But the OP argues that even an implausibly-good approximation of AIXI won’t solve that problem on its own. We can’t fob the work off onto an AI using this model. (The OP makes this argument, first in “AIXI goes to school,” and then more technically in “Death to AIXI.”)
Tell me if this seems like a strawman of your comment, but you seem to be saying we just need to make AIXI easier to program. That won’t help if we don’t know how to solve the problem—as you point out in another comment, part of our own understanding is not directly accessible to our understanding, so we don’t know how our own brain-design solves this (to the extent that it does).
A TM can process data about anything, provided a human is supplying the interpretation. Nothing follows from that about a software system's ability to attach intrinsic meaning to anything.