Surely by the point at which your entire sensory input comes from the digital world, you are somewhat uploaded, even if part of the processing happens in biological components. What does it mean to “travel” when you can receive sensory inputs from any point in the network? There are several Rubicons to be crossed, and transitioning from “has a tiny biological part” to “has no biological part” is another, but it’s definitely smaller than “one day an ape, the next day software”. What’s more, I’m arguing not that there aren’t disruptive steps, but that each step is small enough to make sense for a non-adventurous person, as a step increase in convenience. It’s the Ship of Theseus of mind uploading.
what does it mean to “travel” when you can receive sensory inputs from any point in the network?
To be able to shorten the time it takes to be conscious of a sensory input. If the sensor is at point A and you are at distance x from it, you require at least x/c time to be aware of an input from that sensor.
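To make that bound concrete, here is a minimal sketch of the x/c lower bound; the speed of light is real, but the sensor distances are illustrative assumptions, not anything from the discussion:

```python
# Lower bound on awareness latency: a signal from a sensor at distance x
# cannot reach you in less than x / c seconds.

C = 299_792_458.0  # speed of light in m/s

def min_awareness_latency(distance_m: float) -> float:
    """Minimum time (seconds) to be aware of a reading from a sensor at distance_m."""
    return distance_m / C

# Illustrative sensor placements (assumed figures):
for label, distance_m in [
    ("same datacenter (100 m)", 100.0),
    ("across Earth (~2.0e7 m)", 2.0e7),
    ("Earth to Mars at closest approach (~5.5e10 m)", 5.5e10),
]:
    print(f"{label}: at least {min_awareness_latency(distance_m):.3g} s")
```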
The whole point of travel is to have low-latency, high-bandwidth access to information that exists at some point in the universe.
but that each step is small enough to make sense for a non-adventurous person
It still seems to me that the step from being tied to a specific piece of hardware (whether an entirely biological brain or an enhanced biological one) to being pure information capable of moving from hardware to hardware is a big one, regardless of how it is performed. It’s the very essence of digitizing something. A physical book is information tied to hardware; uploading consists of scanning the book.
There are still many intermediate steps. What does it mean “to be conscious of a sensory input”? Are we talking system 1 or system 2? If the brain is composed of modules, which it likely is, what if some of them are digital and able to move to where the information is and others are not? What if the biological part’s responses can be modelled well enough to be predicted digitally 99.9% of the time, such that a remote near-copy can be almost autonomous by means of optimistic concurrency, correcting course only when the verdict comes back different from the prediction? The notion of the brain as a single indivisible unit that “is aware of an input” quickly fades away once the possibilities of software are taken into account, even when only part of you is digital.
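As a toy illustration of that optimistic-concurrency idea: the digital near-copy acts immediately on a fast local model of the biological part, and rolls back only when the slow authoritative verdict disagrees. The state update, the verdict function, and the 99.9% figure below are all illustrative assumptions:

```python
import random

random.seed(0)
ACCURACY = 0.999  # the 99.9% modelling fidelity mentioned above (assumed)

def biological_verdict(stimulus: int) -> bool:
    """Stand-in for the slow, authoritative biological response."""
    return stimulus % 2 == 0

def digital_prediction(stimulus: int) -> bool:
    """Fast local model of the biological part; correct ACCURACY of the time."""
    truth = biological_verdict(stimulus)
    return truth if random.random() < ACCURACY else not truth

state = 0
rollbacks = 0
for stimulus in range(10_000):
    checkpoint = state                      # snapshot before acting optimistically
    guess = digital_prediction(stimulus)
    state += 1 if guess else -1             # proceed without waiting for the slow link
    verdict = biological_verdict(stimulus)  # authoritative answer arrives later
    if verdict != guess:                    # misprediction: correct course
        state = checkpoint
        state += 1 if verdict else -1
        rollbacks += 1

print(f"rollbacks: {rollbacks} / 10000")    # expect roughly 10 at 99.9% accuracy
```

With a model that accurate, the near-copy pays the round-trip cost only on the rare mispredictions, which is what makes it almost autonomous.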
There are still many intermediate steps. What does it mean “to be conscious of a sensory input”? Are we talking system 1 or system 2?
The system 1/system 2 distinction is only tangentially related here.
If the brain is composed of modules, which it likely is, what if some of them are digital and able to move to where the information is and others are not?
It’s irrelevant whether the brain is ‘composed of modules’ or not. If what you mean is whether it is possible for consciousness to be distributed, well, that’s a good question. If it is possible, then you could imagine being ‘spread out’ over a very large computer network (possibly many light-years across). But the situation becomes tricky: if, say, your ‘leg’ were in one star system and your ‘eye’ in another, a stimulus from your eye could not cause a reaction from your leg in less than several years without violating the speed-of-light limit and causality. So either you cannot be ‘spread out’, or your perception of time slows down so extremely that several years seem instantaneous (just as the fraction of a second required to move your human leg seems instantaneous now).
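To put a rough number on that slowdown, here is a quick back-of-the-envelope calculation; the ~0.2 s reflex time and the 4-light-year separation are illustrative assumptions (the latter roughly the distance to Alpha Centauri):

```python
# How much would subjective time have to slow for an interstellar body
# to feel like a normal reflex arc?

SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEARS = 4.0          # assumed eye-to-leg separation
HUMAN_REACTION_S = 0.2     # assumed leg reaction time that "feels instantaneous"

signal_delay_s = LIGHT_YEARS * SECONDS_PER_YEAR  # one-way signal time at c
slowdown = signal_delay_s / HUMAN_REACTION_S

print(f"one-way signal delay: {signal_delay_s:.3g} s (~{LIGHT_YEARS:g} years)")
print(f"required subjective slowdown: ~{slowdown:.2g}x")
# ~6.3e8: your clock would have to run over half a billion times slower
# for a four-year delay to register as an ordinary reflex.
```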
I don’t use the word consciousness, as it’s a complex concept that isn’t really necessary in this context. I approach a mind as an information-processing system, and information-processing systems can most certainly be distributed. What that means for consciousness depends on what you mean by consciousness, I suppose, but I would rather not start that conversation.
The whole idea of uploading concerns human consciousness. Specifically, transferring a human consciousness to a non-biological context. If you’re not talking about human consciousness, then you’re just talking about building an AI.
Which in turn depends on what you mean by “artificial”.
The route to AI that you’re suggesting is a plausible one; people like Nick Bostrom have discussed scenarios like this at length, scenarios in which we gradually shift our ‘computational substrate’ to non-biological hardware over several generations. But that’s not necessarily what uploading is! As I mentioned, uploading is the transfer of a consciousness from one specific piece of hardware to another. The title and wording of your post imply that you are talking about uploading, but our discussion indicates you are actually talking about building an AI, which is an entirely different concept, and anyone confused about this distinction would do well to understand it clearly before discussing it.
You appear to be arguing about definitions. I’m not interested in going down that rabbit hole.