Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I mean your input-output map writ broadly.
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
Even if that’s true, are you sure that AI will be optimizing us for the same mix of speed/size that evolution was optimizing for? If the weighting of speed vs size is different, the result of optimization might be different as well.
I was thinking of uploads in the Hansonian sense, a shortcut to “building” AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person who seems to do the things the person would have done under similar conditions.
The person is much simpler than the potential input-output map. The human system has memory, so even a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce it, or, in another metaphor, trying to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload that? Or would you take each part of the car and upload it, along with its relationships to every other part when assembled? You would do the second; there are too many possible inputs to imagine the input-output approach could be even vaguely as efficient.
So I am thinking of Hansonian uploads for Hansonian reasons, and so it is fair to insist we do the more efficient thing: upload a copy of the machine rather than a compressed input-output map, especially if the efficiency ratio is greater than 10^100:1.
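To make that ratio concrete, here is a rough back-of-envelope sketch. The sensory bandwidth, lifespan, synapse count, and bits-per-synapse figures are round-number assumptions for illustration, not measurements:

```python
import math

# Round-number assumptions (not measurements):
BITS_PER_SECOND = 1e6                              # assumed sensory input bandwidth
SECONDS_PER_LIFETIME = 80 * 365.25 * 24 * 3600     # ~80-year lifetime
SYNAPSES = 1e15                                    # assumed synapse count
BITS_PER_SYNAPSE = 32                              # assumed bits to describe one synapse

# Bits of input the person receives over one lifetime.
lifetime_input_bits = BITS_PER_SECOND * SECONDS_PER_LIFETIME

# The number of distinct possible input histories ("lifetimes") is
# 2 ** lifetime_input_bits -- far too large to compute directly,
# so work with its base-10 logarithm instead.
log10_possible_lifetimes = lifetime_input_bits * math.log10(2)

# Bits needed to describe the machine (the brain) part by part.
machine_bits = SYNAPSES * BITS_PER_SYNAPSE

print(f"log10(number of possible input histories) ~ {log10_possible_lifetimes:.2e}")
print(f"bits to describe the machine directly     ~ {machine_bits:.2e}")
```

On these assumptions there are more than 10^(10^14) possible input histories for the map to cover, while the machine itself needs only about 10^16 bits to describe, so the ratio dwarfs 10^100:1.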
Can you expand what you mean by “writ broadly”? If we know that speech is not enough because the person might be mute, how do you convince yourself that a certain set of inputs and outputs is enough?
I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.
That said, if you also think that uploading and further optimization might accidentally throw away consciousness, then I guess we’re in agreement.
What I think is that we don’t know whether or not consciousness has been thrown away, because we don’t even have a method for determining whether the original is conscious. To the extent you believe I am conscious, why do you believe it? Until you can answer that, until you can build a consciousness-meter, how do we even check an upload for consciousness? What we could check it for is whether it SEEMS to act like the person uploaded, which is only our sort-of-fuzzy opinion.
What I would say is that IF a consciousness-meter is even possible, and I think it is but I don’t know, then any optimization that accidentally threw away consciousness would have changed other behaviors as well, and would be a measurably inferior simulation compared to a conscious one.
If, on the other hand, there is NO measure of consciousness that could ever be developed into a consciousness-meter (or a consciousness-evaluating program, if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally, you make yourself believe something for reasons which by definition have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on either of these last two paragraphs?
You seem to be presuming the upload would consist of taking the input-output map and, like a smart compiler, trying to invent the least amount of code that would produce it, or, in another metaphor, trying to optimally compress that input-output map. I don’t think this is at all how an upload would work.
Well, presumably you don’t want an atom-by-atom simulation. You want to at least compress each neuron to an approximate input-output map for that neuron, observed in practice, and then use that. Also, you might want to take some implementation shortcuts to make the thing run faster. You seem to think that all these changes are obviously harmless. I also lean toward that, but not as strongly as you, because I don’t know where to draw the line between harmless and harmful optimizations.
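To illustrate the kind of per-neuron compression I mean, here is a minimal sketch. It assumes a toy sigmoid rate-neuron standing in for the real, far messier biology, and every name in it (neuron_output, surrogate_output, the model choice) is an illustrative assumption, not a claim about how real neurons behave:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ground truth" neuron: a sigmoid of a weighted sum of its inputs.
# It stands in for the real neuron, which we could only observe, not derive.
true_weights = rng.normal(size=5)
true_bias = -0.3

def neuron_output(inputs):
    """Observed firing rate of the toy neuron for each input vector."""
    return 1.0 / (1.0 + np.exp(-(inputs @ true_weights + true_bias)))

# "Observed in practice": record input-output pairs from the running neuron.
observed_inputs = rng.normal(size=(2000, 5))
observed_rates = neuron_output(observed_inputs)

# Compress the observations into a compact surrogate: a linear fit in the
# logit domain (a deliberate simplification of whatever the neuron really does).
logits = np.log(observed_rates / (1.0 - observed_rates))
design = np.column_stack([observed_inputs, np.ones(len(observed_inputs))])
coeffs, *_ = np.linalg.lstsq(design, logits, rcond=None)

def surrogate_output(inputs):
    """Approximate input-output map recovered from the observations."""
    return 1.0 / (1.0 + np.exp(-(inputs @ coeffs[:-1] + coeffs[-1])))

# How faithful is the compressed neuron on inputs it was never fit on?
test_inputs = rng.normal(size=(500, 5))
worst_error = np.max(np.abs(neuron_output(test_inputs) - surrogate_output(test_inputs)))
print(f"max absolute error of the surrogate on held-out inputs: {worst_error:.2e}")
```

Here the surrogate family happens to contain the true model by construction, so the fit is essentially exact; my worry is precisely that for real neurons we don’t know which simplifications of this sort are harmless and which are not.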