I’m currently having an exchange with Massimo Pigliucci of Rationally Speaking, who might be known here from his Bloggingheads debate with Eliezer Yudkowsky, in which he claimed that “you can simulate the ‘logic’ of photosynthetic reactions in a computer, but you ain’t gonna get sugar as output.” I have a hard time wrapping my mind around his line of reasoning, but I’ll try:
Let’s assume that you wanted to simulate gold. What does it mean to simulate gold?
According to Wikipedia, to simulate something means to represent certain key characteristics or behaviours of a selected physical system.
If we were going to simulate the chemical properties of gold, would we be able to use it as a vehicle for monetary exchange on the gold market? Surely not; some important characteristics seem to be missing. We do not assign the same value to a simulation of gold that we assign to gold itself.
What would it take to simulate the missing properties? A particle accelerator or nuclear reactor.
In conclusion, we need to create gold to get gold; no simulation short of creating the actual, physically identical substance will do the job. Consequently, in the case of gold at least, substrate neutrality is false.
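To make the definition above concrete, here is a minimal sketch, in Python, of a “simulation of gold” in the Wikipedia sense: a few represented key characteristics and a rule that uses them. The property values are standard reference numbers; everything else is an illustrative assumption, and running it yields predictions about gold, not gold.

```python
# A toy "simulation" of gold in the Wikipedia sense: it represents a few
# key characteristics of the physical system; it does not reproduce them.
# The structure and function names are illustrative assumptions.

gold = {
    "atomic_number": 79,
    "density_g_per_cm3": 19.3,
    "melting_point_c": 1064,
}

def mass_of_bar(volume_cm3: float) -> float:
    """Predict the mass of a gold bar of the given volume, using the
    represented density. The prediction can be checked against real gold,
    but the number returned is not itself a lump of metal."""
    return volume_cm3 * gold["density_g_per_cm3"]

if __name__ == "__main__":
    # A standard 400 oz "Good Delivery" bar is roughly 645 cm^3.
    print(mass_of_bar(645.0))  # ~12450 g, printed as text, not deliverable on any market
```

The sketch makes the original complaint easy to state: the representation answers questions about gold bars, but nothing it outputs can be sold on a gold market.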
Don’t paper money and electronic money represent gold’s ‘key characteristic’ of being usable for monetary exchange?
The key word in that definition of simulation is “represent”, which is not to be confused with “reproduce”.
No, we don’t need a nuclear reactor or particle accelerator to simulate, i.e. to represent the missing properties. We need them to reproduce the missing properties. But to simulate something is to represent characteristics of it, not reproduce them.
Now, there’s an obvious opening here for someone to try to build an argument based on the fact that a simulation need not reproduce characteristics. It would then be necessary to argue that mere representation of certain characteristics is sufficient to reproduce others. But that would be a new argument, and I’m just addressing this one.
When I run an old 8-bit game on a Commodore 64 emulator, it seems to me that the emulation functionally reproduces a Commodore 64. The experience of playing the game can clearly be faithfully reproduced.
Hasn’t something been reproduced if one cannot tell the difference between the operation of the original system and that of the simulation?
In the case of the C64 emulator, the game is represented and your experience is reproduced. As for the second question, I think it’s purely subjective, since it depends on what level of output you expect from the simulation. For a gamer the emulated game can be a “reproduction”; for an engineer who wants details of the Commodore’s inner workings it may be just an approximation of the “real thing”, and of no use to him.
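A minimal sketch, assuming only the general shape of an emulator rather than the real 6510 instruction set, of the level at which an emulator “reproduces” anything: it steps the machine’s documented state (registers, memory) one instruction at a time. The two opcodes below are invented for illustration.

```python
# Minimal sketch of an emulator's core loop (illustrative, not a real 6510):
# it reproduces the machine's instruction-level state transitions, i.e. the
# behaviour a gamer cares about, while nothing about the silicon, the timing
# glitches or the analogue video circuitry is reproduced.

class ToyCPU:
    def __init__(self, program: bytes):
        self.memory = bytearray(65536)
        self.memory[: len(program)] = program
        self.pc = 0          # program counter
        self.a = 0           # accumulator

    def step(self) -> None:
        opcode = self.memory[self.pc]
        if opcode == 0x01:   # hypothetical "load immediate into A"
            self.a = self.memory[self.pc + 1]
            self.pc += 2
        elif opcode == 0x02: # hypothetical "store A at the address in the next byte"
            self.memory[self.memory[self.pc + 1]] = self.a
            self.pc += 2
        else:                # anything else: halt
            raise StopIteration

cpu = ToyCPU(bytes([0x01, 42, 0x02, 0xFF]))  # load 42, store it at address 0xFF
try:
    while True:
        cpu.step()
except StopIteration:
    pass
print(cpu.memory[0xFF])  # 42 -- the represented machine behaves as specified
```

What is reproduced is the instruction-level state sequence, which is what the gamer cares about; the silicon and its quirks are at best represented, which is why the engineer in the comment above may find the same emulator useless.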
That just seems confused to me. Simulated gold would be exchanged on simulated gold markets—where it would work just fine.
You can simulate anything—at least according to the Church–Turing–Deutsch principle.
See my longer comment here.
Gold in a simulation is less useful to us because we can’t use it for everything we could use ‘real’ gold for. However, that gold should be just as useful to anything inside the simulation as our gold is to us, barring changes in value due to changes in quantity. Does anyone really think that we would simulate gold in order to use it in exactly the ways we want to use real gold?
But what about Eliezer’s reply to Pigliucci’s photosynthesis argument? As I understand it, Eliezer’s counterargument was that intelligence and consciousness are like math in the sense that the simulation is the same as the real thing. In other words, we don’t care about simulated sugar because we want the physical stuff itself, but we aren’t so particular when it comes to arithmetic—the same answer in any form will do.
As far as I can tell, this argument still applies to gold unless there are good reasons to think that consciousness is substrate dependent. But as Eliezer pointed out in that diavlog, that doesn’t seem likely.
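A minimal sketch of the “same answer in any form will do” point: the same sum carried out over three deliberately different representations. The functions are illustrative, not anything from the original exchange.

```python
# Three different "substrates" for the same piece of arithmetic.
# However the quantities are represented, 2 + 3 is 5; there is no further
# physical fact that the representation is failing to reproduce.

def add_ints(a: int, b: int) -> int:
    return a + b                      # machine integers

def add_unary(a: str, b: str) -> str:
    return a + b                      # tally marks: "||" + "|||" -> "|||||"

def add_sets(a: frozenset, b: frozenset) -> int:
    return len(a | b)                 # cardinality of a union of disjoint sets

print(add_ints(2, 3))                               # 5
print(len(add_unary("||", "|||")))                  # 5
print(add_sets(frozenset("ab"), frozenset("xyz")))  # 5
```

By contrast, three different representations of photosynthesis would all leave you without sugar, which is the asymmetry the comment above is pointing at.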
That reply is entirely begging the question. Whether consciousness is a phenomenon “like math” or a phenomenon “like photosynthesis” is exactly what is being argued about. So it’s not an answering argument; it’s an assertion.
I completely agree—XiXiDu was summarizing Massimo Pigliucci’s argument, so I figured I’d summarize Eliezer’s reply. The real heart of the question, then, is figuring out which one consciousness is really like. I happen to think that consciousness is closer to math than to sugar because we know that intelligence is, and it seems to me that the rest follows logically from Minsky’s idea that minds are simply what brains do. That is, if consciousness is what an intelligent algorithm feels like from the inside, then it wouldn’t make much sense for it to be substrate-dependent.
This morning I followed another discussion on Facebook between David Pearce and someone else about the same topic, and he mentioned a quote from Stephen Hawking:
“What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe.”
What David Pearce and others seem to be saying is that physics doesn’t disclose the nature of the “fire” in the equations. For this and other reasons I am increasingly getting the impression that the disagreement comes down to the question of whether the Mathematical universe hypothesis is correct, i.e. whether Platonism is correct.
None of them seem to doubt that we will eventually be able to “artificially” create intelligent agents. They don’t even doubt that we will be able to use different substrates. The basic disagreement seems to be that, as Constant notes in another comment, a representation is distinct from a reproduction.
People like David Pearce and Massimo Pigliucci seem to be arguing that we fail to accept the crucial distinction between software and hardware.
For us, the only difference between a mechanical device or other physical object and software is that the latter is a symbolic (formal-language) representation of the former. Software is just a static description of the dynamic state sequence exhibited by an object. One can then take that software (algorithm), run it on some sort of computational hardware, and evoke the same dynamic state sequence, so that the machine (computer) mimics the relevant characteristics of the original object.
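A minimal sketch of that description, with a made-up cooling process standing in for “the original object”: the transition rule is the static, symbolic “software”, and the loop below is the “hardware” evoking the dynamic state sequence it describes. The constants are illustrative assumptions.

```python
# "Software" in the sense above: a static, symbolic description of how a
# system's state changes from one step to the next. Here the system is a
# hot object cooling toward room temperature (constants are illustrative).

ROOM_TEMP_C = 20.0
COOLING_RATE = 0.1   # fraction of the temperature gap lost per time step

def next_state(temperature_c: float) -> float:
    """Static description of the dynamics: one step of Newtonian-style cooling."""
    return temperature_c - COOLING_RATE * (temperature_c - ROOM_TEMP_C)

# "Hardware" evoking the dynamic state sequence that the description specifies.
state = 90.0                      # initial temperature of the object
for _ in range(10):
    state = next_state(state)
    print(round(state, 2))        # the represented temperatures; nothing gets warm
```

The loop mimics the relevant characteristic, the temperature trajectory, without the computer itself warming up; whether consciousness goes with the trajectory or with the warmth is the substance of the disagreement described here.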
Massimo Pigliucci and others actually agree about the difference between a physical thing and its mathematical representation, but they don’t agree that you can capture the most important characteristics as long as you do not reproduce the physical substrate.
The position held by those who disagree with the Less Wrong consensus on this topic is probably best represented by the painting La trahison des images. It is a painting of a pipe: it represents a pipe, but it is not a pipe; it is an image of a pipe.
Why would people concerned with artificial intelligence care about all this? That depends on the importance and nature of consciousness, and on the extent to which general intelligence is dependent upon the brain as a biological substrate and its properties (e.g. the chemical properties of carbon versus silicon).
(Note that I am just trying to account for the different positions here and not argue in favor of substrate-dependence.)