As I understand it, Eliezer’s counterargument was that intelligence and consciousness are like math in the sense that a simulation of them is the same as the real thing.
This morning I followed another discussion on Facebook, between David Pearce and someone else, about the same topic, in which Pearce mentioned a quote by Stephen Hawking:
What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe.
What David Pearce and others seem to be saying is that physics doesn’t disclose the nature of the “fire” in the equations. For this and other reasons I am increasingly getting the impression that the disagreement comes down to the question of whether the Mathematical Universe Hypothesis is correct, i.e. whether Platonism is correct.
None of them seem to doubt that we will eventually be able to “artificially” create intelligent agents. They don’t even doubt that we will be able to use different substrates. The basic disagreement seems to be that, as Constant notes in another comment, a representation is distinct from a reproduction.
People like David Pearce and Massimo Pigliucci seem to be arguing that we do not accept the crucial distinction between software and hardware.
For us, the only difference between a physical object (such as a mechanical device) and software is that the latter is a symbolic (formal language) representation of the former: software is just a static description of the dynamic state sequence exhibited by the object. One can then run that software (algorithm) on some kind of computational hardware to evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
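As a minimal sketch of this point (in Python, with all names purely hypothetical), the program below is a static description of the dynamic state sequence of a swinging pendulum; running it on any computational hardware evokes that sequence of states without reproducing the pendulum itself.

```python
# Sketch only: the "software" here is a static description of the dynamic
# state sequence of a pendulum. Executing it on a computer evokes the same
# sequence of (angle, angular velocity) states that the physical pendulum
# runs through, without reproducing the pendulum's physical substrate.

import math

def pendulum_states(theta0: float, steps: int, dt: float = 0.01,
                    g: float = 9.81, length: float = 1.0):
    """Yield (angle, angular_velocity) pairs via simple Euler integration."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        yield theta, omega
        omega -= (g / length) * math.sin(theta) * dt
        theta += omega * dt

if __name__ == "__main__":
    for i, (theta, omega) in enumerate(pendulum_states(math.pi / 4, 500)):
        if i % 100 == 0:
            print(f"t={i * 0.01:.2f}s  angle={theta:+.3f} rad  "
                  f"velocity={omega:+.3f} rad/s")
```

Whether that printed sequence of states captures everything that matters about the pendulum, or merely represents it, is exactly the point under dispute.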
Massimo Pigliucci and others actually agree that there is a difference between a physical thing and its mathematical representation; what they deny is that you can capture its most important characteristics without reproducing the physical substrate.
The position held by those who disagree with the Less Wrong consensus on this topic is probably best represented by the painting La trahison des images. It is a painting of a pipe: it represents a pipe, but it is not a pipe; it is an image of a pipe.
Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence is dependent upon the brain as a biological substrate and its properties (e.g. the chemical properties of carbon versus silicon).
(Note that I am just trying to account for the different positions here and not argue in favor of substrate-dependence.)