When speaking about sensory inputs, it makes sense to say that different species (even different individuals) have different ranges, so one can perceive something that another can't.
With computation, it is known that sufficiently powerful programming languages are in some sense equivalent. For example, you could argue about the relative advantages of Basic, C/C++, Java, Lisp, Pascal, Python, etc., but in each of these languages you can write a simulator of the remaining ones. This means that if an algorithm can be implemented in one of these languages, it can be implemented in all of them; in the worst case, by simulating the other language and running its native implementation there.
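As a toy illustration of the simulator idea (my own sketch, not any real Basic or Pascal implementation), here is Python interpreting programs written in a made-up three-instruction language; simulating a full language works the same way, only with vastly more bookkeeping:

    # A toy interpreter: Python "simulating" a made-up three-instruction language.
    # Programs are lists of (opcode, argument) pairs operating on one accumulator.
    def run_mini_language(program):
        accumulator = 0
        output = []
        for opcode, argument in program:
            if opcode == "add":
                accumulator += argument
            elif opcode == "mul":
                accumulator *= argument
            elif opcode == "print":
                output.append(accumulator)
            else:
                raise ValueError("unknown opcode: " + opcode)
        return output

    # The "foreign" program computes (0 + 3) * 7 and prints the result.
    print(run_mini_language([("add", 3), ("mul", 7), ("print", None)]))  # -> [21]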
There are some technical details, though. Simulating another program is slower and requires more memory than running the original program directly. So it could be argued that on given hardware you could write a program in language X which uses all the memory and all the available time, and then it does not necessarily follow that you can write the same program in language Y. But on this level of abstraction we ignore hardware limits. We assume that the computer is fast enough and has enough memory for whatever purpose. (More precisely, we assume that in the available time a computer can do any finite number of computation steps, but it cannot do an infinite number of steps. The memory is also unlimited, but in finite time you can only use a finite amount of it.)
So on this level of abstraction we only care about whether something can or cannot be implemented by a computer. We ignore time and space (i.e. speed and memory) constraints. Some problems can be solved by algorithms, others cannot. (There are also other interesting levels of abstraction which do care about the time and space complexity of algorithms.)
Are all programming languages equal in the above sense? No. For example, although programmers generally want to avoid infinite loops in their programs, if you remove the potential for infinite loops from a programming language (e.g. in Pascal, forbid the "while" and "repeat" commands and the possibility of calling functions recursively), you lose the ability to simulate programming languages which have this potential, and you lose the ability to solve some problems. On the other hand, some universal programming languages are extremely simple: a famous example is the Turing machine. This is very useful, because it is easier to do mathematical proofs about a simple language. For example, if you invent a new programming language X, all you have to do to prove its universality is write a Turing machine simulator in it, which is usually very simple.
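To make that concrete, here is a minimal sketch of such a simulator in Python (the transition-table format is my own, invented for illustration); writing something of roughly this size in a new language X is usually all the proof of universality you need:

    # A minimal Turing machine simulator. The transition table maps
    # (state, symbol) -> (new_symbol, move, new_state); the format is illustrative.
    def run_turing_machine(table, tape, state="start", blank="_", max_steps=None):
        tape = dict(enumerate(tape))   # sparse tape, unbounded in both directions
        head = 0
        steps = 0
        while state != "halt":
            symbol = tape.get(head, blank)
            new_symbol, move, state = table[(state, symbol)]
            tape[head] = new_symbol
            head += 1 if move == "R" else -1
            steps += 1
            if max_steps is not None and steps >= max_steps:
                break                  # give up: the machine might loop forever
        return "".join(tape[i] for i in sorted(tape))

    # Example: a tiny machine that flips bits until it reaches a blank cell.
    flipper = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine(flipper, "10110"))   # -> "01001_"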
Now back to the original discussion… Eliezer suggests that brain functionality should be likened to computation, not to sensory input. A human brain is computationally universal, because (given enough time, pen, and paper) we can simulate any computer program, so all brains should be equal when optimally used (differing only in speed and use of resources). In another comment he adds that the ability to compute isn't the same as the ability to understand. Therefore (my conclusion) what one human can understand, another human can at least correctly calculate without understanding, given a correct algorithm.
Not a complete answer, but here’s commentary from a ffdn review of Chapter 14:
Kevin S. Van Horn, 7/24/10, chapter 14: Harry is jumping to conclusions when he tells McGonagall that the Time-Turner isn't even Turing computable. Time travel simulation is simply a matter of solving a fixed-point equation f(x) = x. Here x is the information sent back in time, and f is a function that maps the information received from the future to the information that gets sent back in time. If a solution exists at all, you can find it to any desired degree of accuracy by simply enumerating all possible rational values of x until you find one that satisfies the equation. And if f is known to be continuous and to have a convex compact range, then the Brouwer fixed-point theorem guarantees that there will be a solution.
So the only way I can see that simulating the Time-Turner wouldn’t be Turing computable would be if the physical laws of our universe give rise to fixed-point equations that have no solutions. But the existence of the Time-Turner then proves that the conditions leading to no solution can never arise.
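For what it's worth, here is a toy Python sketch (mine, not the reviewer's) of the enumeration idea: try candidate rational values of x until one approximately satisfies f(x) = x. The function f at the bottom is just a made-up stand-in for "what gets sent back, given what was received":

    from fractions import Fraction
    from itertools import count

    def enumerate_rationals():
        """Yield non-negative rationals p/q in order of increasing p + q (illustrative)."""
        for total in count(1):
            for p in range(total + 1):
                q = total - p
                if q > 0:
                    yield Fraction(p, q)

    def find_fixed_point(f, tolerance=1e-6, max_candidates=100000):
        """Brute-force search for x with f(x) close to x, as in the quoted argument."""
        for i, x in enumerate(enumerate_rationals()):
            if abs(f(x) - x) < tolerance:
                return x
            if i >= max_candidates:
                return None            # no solution found within the search budget

    # Toy example: f(x) = (x + 2) / 2 has the fixed point x = 2.
    print(find_fixed_point(lambda x: (x + 2) / 2))   # -> 2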
I got the impression that what “not Turing-computable” meant is that there’s no way to only compute what ‘actually happens’; you have to somehow iteratively solve the fixed-point equation, maybe necessarily generating experiences (waves hands confusedly) corresponding to the ‘false’ timelines.
Wow. That’s really cool, thank you. Upvoted you, jeremysalwen and Nornagest. :)
Could you also explain why the HPMoR universe isn’t Turing computable? The time-travel involved seems simple enough to me.
Sounds rather like our own universe, really.
There’s also the problem of an infinite number of possible solutions.
The number of solutions is finite but (very, very, mind-bogglingly) large.
Ah. It’s math.
:) Thanks.