Reductionism does have a caveat, and it is “a fact about maps” rather than “a fact about the territory”: the level that matters in the real world can sit above the fundamental one. Example: a CD. A chromodynamic model would spend immense computing resources simulating the heat, location, momentum, and bonds of a slew of atoms (including those in the surrounding atmosphere, or the plasticizer would boil off). In reality, about four things matter in a CD: you can pick it up, it fits into a standard box, it fits into a standard reader tray, and when you measure the pattern of pits they encode a particular blob of binary data. From a human utility perspective, the CD is fully replaceable with another, chromodynamically dissimilar CD that happens to have those same characteristics.
Computers are full of examples of this, where the most important level is not the fundamental level. In some cases, each level is not just built upon the lower levels but ought to be fully independent of them. If your Lisp doesn’t implement the lambda calculus because of a silicon fault, an atomic model would correctly represent this, but it would be representing a mathematically unimportant bug. A correct Lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine. A model that took account of the substrate would be missing the point.
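The lambda-calculus point can be made concrete. A minimal sketch (in Python, standing in for Lisp; the encoding below is the standard Church encoding, not anything specific to this thread): booleans built from nothing but function application, which depend on no property of the substrate at all.

```python
# Church-encoded booleans: pure lambda calculus, no substrate assumptions.
TRUE = lambda a: lambda b: a   # selects its first argument
FALSE = lambda a: lambda b: b  # selects its second argument

# NOT swaps the selection; AND uses p to choose between q and p itself.
NOT = lambda p: lambda a: lambda b: p(b)(a)
AND = lambda p: lambda q: p(q)(p)

def decode(p):
    """Translate back to a native boolean only at the boundary."""
    return p(True)(False)

print(decode(NOT(TRUE)))         # False
print(decode(AND(TRUE)(TRUE)))   # True
```

Any substrate that can apply functions to arguments runs this identically, which is the sense in which a correct Lisp is substrate-independent.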
I think the point is that the four-element model we use to describe the CD is also contained within the chromodynamic model: the four elements are a less accurate abstraction of the chromodynamic model, even if we don’t recognize it as such when we use the more abstract model.
In the same way, Newtonian Mechanics is a less accurate abstraction of Special Relativity.
Therefore, no matter how precise Newtonian Mechanics is, it does not match up exactly with reality. Because it is an abstraction, it contains inaccuracies. The SR version of the same process will always be more accurate than the NM version, though the SR version is also probably not completely accurate.
A correct lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine.
I don’t think that is true. For Lisp to mean anything to any machine, it must first be compiled into the machine language of that particular machine. Because this process is fundamentally different for different types of machines, the way the same Lisp behaves on each machine will be highly dependent on its specific translation into machine language. In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.
This is similar to calculating the trajectory of an artillery shell with Newtonian Mechanics vs. Special Relativity. The difference between the two will be so small as to be almost unmeasurable, but there will definitely be a difference between them.
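For a sense of scale (my numbers, not the parent’s): for a shell moving at roughly 1 km/s, the relativistic correction to the Newtonian picture is a few parts in 10^12.

```python
# How far the Lorentz factor departs from 1 at artillery-shell speeds:
# the size of the Newtonian/relativistic discrepancy the parent describes.
c = 299_792_458.0   # speed of light, m/s
v = 1_000.0         # assumed shell velocity, m/s (a fast artillery round)

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
print(gamma - 1.0)  # relativistic correction, roughly (1/2)(v/c)^2
```

The correction is real but around 5 parts in a trillion, which is why Newtonian Mechanics is a perfectly serviceable abstraction here.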
In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.
I am going to have to disagree here. A given Lisp will require a bounded-tape Turing machine of tape size N, head-state count M, and symbol table Q.
If an ARM processor running Windows NT can supply that, Lisp is possible. If an x86 running Unix can supply that, Lisp is possible.
If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said turing machine.
“If the Lisp is untrue to the Specification, that is a fact about the Implementation, not the Mathematics behind it.”
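The bounded-tape machine the parent invokes can be sketched directly. This is my own toy simulator, not anything from the thread: a fixed-size tape, a finite set of head states, and a transition table; any substrate that can supply these runs it identically.

```python
# Toy bounded-tape Turing machine: a tape of size N, finitely many head
# states, and a symbol table. `transitions` maps
# (state, symbol) -> (next state, symbol to write, head move L/R/N).
def run(transitions, tape, state="start", head=0, max_steps=10_000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape)
        symbol = tape[head]
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
        if not 0 <= head < len(tape):
            raise RuntimeError("ran off the bounded tape")
    raise RuntimeError("step budget exhausted")

# Example machine: flip every bit, halting at the end marker '$'.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "$"): ("halt", "$", "N"),
}
print(run(flip, "0110$"))  # -> 1001$
```

If two substrates both supply this machine, the output is the same on both; if one produces something else, that is a fault in the substrate, not in the machine's mathematics.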
What about the speed of operation? The specification does not set any requirements for this, and so two different Lisp implementations which differ in that property can both be correct yet produce different output.
Even if it runs at one clock cycle per millennium, it would still theoretically be able to run any given program and produce exactly the same output.
The time function is also external to the LISP implementation; it is a call to the OS, so output that prints the current time doesn’t count.
I think we may have to taboo “output”, as the contention seems to be about what is included by that word.
Given a program P consisting of a linear bit-pattern that is fed into a virtual machine L, and that writes a linear bit-pattern B to a non-local memory location O.
During the runtime of P on L, the only interaction with non-local memory is writing B to O.
No bits are passed from non-local memory to local memory.
For all L: L is true to the specification if and only if, for any P, there is only one possible B.
P is the lisp program source code, which does not read from stdin, keyboard drivers, web sockets or any similar source of external information.
L is a LISP (virtual) machine.
B is some form of data, such as text, binary data, images, etc.
O is some destination: it could be stdout, the screen, speakers, etc.
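The P/L/B/O claim can be sketched. Python here for brevity, and `eval_program` is my hypothetical stand-in for the Lisp machine L: because P reads nothing external, B is a pure function of P, so any spec-true L yields the same B on every run.

```python
# Sketch: with no external input channels, a program's output B is
# determined entirely by its source P.
def eval_program(source_p):
    """Toy stand-in for the machine L: runs P in a fresh, empty
    environment; P performs no reads from stdin, clocks, or sockets."""
    env = {}
    exec(source_p, {}, env)  # fresh globals each run; no I/O performed
    return env["B"]

P = "B = sum(i * i for i in range(10))"   # reads nothing external
assert eval_program(P) == eval_program(P)  # one possible B per P
print(eval_program(P))  # -> 285
```

Run it on a Mac, a Linux box, or a Babbage engine simulating Python: same P, same B.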
Ah, ok, I find nothing to disagree with there. Looking back up the conversation, I see that I was responding to the word “behavior”. Specifically, bigjeff5 said:
In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine.
To which you responded:
If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said turing machine.
So it comes down to: does the “behaviour” of a Lisp implementation include anything besides the output? Which effectively comes down to what question we’re trying to answer about Lisp, computation, etc.
The original question was about whether a Lisp machine needs to include abstractions for the substrate it’s running on. The most direct answer is “No, because the specification doesn’t mention anything about the substrate.” More generally, if a program needs introspection it can do it with quine trickery, or more realistically just use a system call.
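Both routes can be sketched (my illustration, not from the thread): “quine trickery” lets a program reconstruct its own source with no help from the substrate, while a system call asks the OS about the substrate directly.

```python
import platform

# Route 1: quine trickery -- the program rebuilds its own source text
# purely by string substitution, asking nothing of the substrate.
s = 's = %r\nsource = s %% s'
source = s % s
# `source` now equals exactly the two lines of code that produced it.

# Route 2: a system call -- the substrate is outside the abstraction,
# so you query the OS for it. (Result varies by machine, by design.)
print(platform.machine())  # e.g. 'x86_64' or 'arm64'
```

The quine route is substrate-free introspection; the system-call route is the realistic one, and it keeps knowledge of the substrate outside the Lisp abstraction itself.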
Bigjeff5 responded by pointing out that the choice of substrate can determine whether or not the Lisp implementation is useful to anybody. This is of course correct, but this is a separate issue from whether or not the Lisp abstraction needs to include anything about its own substrate; a Lisp can be fast or slow, useful or useless, regardless of whether or not its internal abstractions include a reference to or description of the device it is running on.
Is it fair to call the CD data a map in this case? (Perhaps that’s your point.) The relationship is closer to interface-implementation than map-territory. Reductionism still stands, in that the higher abstraction is a reduction of the lower. (Whereas a map is a compression of the territory, an interface is a construction on top of it.) A correct Lisp should be implementation-agnostic, but it is not implementation-free.