A useful abstraction that is entirely equivalent to Turing machines, and much friendlier to humans, is the lambda calculus together with the various combinator calculi. Many of these systems are known to be Turing complete.
Lambda calculus and the other combinator calculi are rule sets that rewrite strings of symbols expressed in a formal grammar. The symbol set of any such grammar is finite, so its terms can be encoded in binary. Furthermore, every Turing-complete calculus of this kind can encode both boolean values and linked lists, so, analogously to the Turing machine model described in this article, one can write a program in a binary combinator calculus, feed it a linked list of boolean values encoded in the calculus itself, and have it reduce to (return) a linked list of boolean values.
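To make that concrete, here is a minimal sketch in Python, with plain lambdas standing in for calculus terms. The Scott-style encodings and the names (TRUE, CONS, MAP, and so on) are my own illustration, not from any particular paper; the encode/decode helpers step outside the calculus purely so the result can be printed.

```python
# Scott-encoded booleans and linked lists, written as Python lambdas.

TRUE  = lambda t: lambda f: t          # a boolean is a two-way selector
FALSE = lambda t: lambda f: f
NOT   = lambda b: b(FALSE)(TRUE)

NIL  = lambda on_cons: lambda on_nil: on_nil                          # empty list
CONS = lambda h: lambda t: lambda on_cons: lambda on_nil: on_cons(h)(t)

def MAP(f):
    # Python recursion for readability; a pure term would use a fixed-point combinator.
    return lambda lst: lst(lambda h: lambda t: CONS(f(h))(MAP(f)(t)))(NIL)

# Helpers that step outside the calculus, just to build and inspect encoded lists.
def encode(bits):
    lst = NIL
    for bit in reversed(bits):
        lst = CONS(TRUE if bit else FALSE)(lst)
    return lst

def decode(lst):
    return lst(lambda h: lambda t: [h(True)(False)] + decode(t))([])

# A "program" that takes a linked list of booleans and returns one: bitwise NOT.
print(decode(MAP(NOT)(encode([True, False, True]))))   # [False, True, False]
```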
Personally I prefer combinator calculi to Turing machines mainly because they are vastly easier to program.
Related:
Lambda Calculus
SKI Calculus
Binary Lambda Calculus
Binary Combinatory Logic
I love lambda calculus! I considered formulating Solomonoff induction in lambda calculus instead, but I’m more familiar with Turing machines and that’s what the literature uses so far. (Also Zvonkin and Levin 1970 use partial recursive functions.) It wasn’t obvious to me how to alter lambda calculus to have a monotonic input and output, like with the three-tape Turing machine above.
You do that by having a term that takes a linked list of boolean values and returns a linked list of boolean values.
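One way to picture the monotone behaviour (a sketch of my own, not from the literature): if the tail of each cons cell is a thunk that is only forced on demand, a list-to-list term determines each output bit after inspecting only a finite prefix of the input, much like the three-tape machine extends its output as more input arrives.

```python
# Illustrative sketch: lists whose tails are delayed thunks, so output is
# produced incrementally from a (possibly unbounded) input stream.

NIL = ("nil",)
def CONS(head, tail_thunk):            # tail_thunk: () -> list, forced on demand
    return ("cons", head, tail_thunk)

def negate_stream(lst):
    """Flip every bit; the output's tail is again a thunk, so each output bit
    needs only the corresponding prefix of the input."""
    if lst[0] == "nil":
        return NIL
    _, head, tail_thunk = lst
    return CONS(not head, lambda: negate_stream(tail_thunk()))

def take(n, lst):
    out = []
    while n > 0 and lst[0] == "cons":
        out.append(lst[1])
        lst = lst[2]()
        n -= 1
    return out

def alternating(bit=True):             # an infinite input stream: True, False, True, ...
    return CONS(bit, lambda: alternating(not bit))

print(take(5, negate_stream(alternating())))   # [False, True, False, True, False]
```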
To expand on what the parent said, pretty much all modern programming languages are equivalent to Turing machines (Turing complete). This includes JavaScript, Java, Ruby, PHP, C, etc. If I understand Solomonoff induction properly, testing all possible hypotheses amounts to generating all possible programs in, say, JavaScript and testing them to see which programs' output matches our observations. If multiple programs match the output, we should choose the smallest one.
Correct. It is just harder to exhaustively enumerate JavaScript programs, and harder still to judge their simplicity.
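For intuition only, here is a toy brute-force version of that procedure in Python, searching a tiny postfix expression language of my own choosing rather than JavaScript. Real Solomonoff induction is uncomputable, and the loop below never terminates if no candidate matches.

```python
from itertools import count, product

# Programs are postfix (RPN) expressions over the input n; the language and
# its tokens are purely illustrative.
TOKENS = ["n", "1", "2", "+", "*"]

def run(program, n):
    """Evaluate a postfix program on input n; return None if it is malformed."""
    stack = []
    for tok in program:
        if tok == "n":
            stack.append(n)
        elif tok in ("1", "2"):
            stack.append(int(tok))
        else:
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[0] if len(stack) == 1 else None

def shortest_matching(observations):
    """Observations are the outputs for inputs 0, 1, 2, ...; return the first
    (hence shortest) program that reproduces all of them."""
    for length in count(1):
        for program in product(TOKENS, repeat=length):
            if all(run(program, n) == y for n, y in enumerate(observations)):
                return program

print(shortest_matching([1, 3, 5, 7]))   # ('n', 'n', '1', '+', '+'), i.e. n + (n + 1) = 2n + 1
```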