The reason we think intelligence is substrate-independent is that the properties we’re interested in (the ones we define to constitute “intelligence”) do not make reference to any substrate. Can a simulation of a brain design an aeroplane? Yes. Can a simulation of a brain prove Pythagoras’ theorem? Yes. Can a simulation of a brain plan strategically in the presence of uncertainty? Yes. These are the properties we mean when we say “intelligence”. Under a different definition of “intelligence” that stipulates “composed of neurons” or “looks grey and mushy”, intelligence is not substrate-independent. It’s just a word game.
Well, that’s not true for everyone here, I suspect.
Eliezer, for example, does seem very concerned with whether the optimization process that gets constructed (or, at least, the process he constructs) has some attribute that is variously labelled by various people as “is sentient,” “has consciousness,” “has qualia,” “is a real person,” etc.
Presumably he’d be delighted if someone proved that a simulation of a human created by an AI can’t possibly be a real person because it lacks some key component that mere simulations cannot have. He just doesn’t think it’s true. (Nor do I.)
I can’t figure out whether you’re trying to agree with me or disagree with me. Your comment sounds argumentative, yet you seem to be directly paraphrasing my critique of Searle.