We don’t have to try to upgrade any virtual brains to get most of the benefits.
If we could today create an uploaded dog brain that’s just a normal virtual dog running at 1/1000th realtime, that would be a huge win with no meaningful risk. It would put us on a relatively stable path of obscenely expensive, slow uploads becoming cheaper every year; in this context, cheaper means both faster and more numerous. At the start, human society can handle a few slightly superior uploads; by the time uploads get far past us, they will be a society of their own and on roughly equal footing. (This may be bad for people still running at realtime, but human values will persist.)
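To put the speed gap in perspective, here’s a toy back-of-the-envelope calculation (the 1/1000th figure is from the hypothetical above; everything else is purely illustrative):

```python
SLOWDOWN = 1000  # the upload runs at 1/1000th realtime, per the hypothetical

def wallclock_years(subjective_days: float, slowdown: int = SLOWDOWN) -> float:
    """Wall-clock years needed for the upload to live through the given
    number of subjective days."""
    return subjective_days * slowdown / 365.25

print(wallclock_years(1))   # one subjective day costs ~2.7 wall-clock years
print(wallclock_years(30))  # a subjective month costs ~82 wall-clock years
```

At that price, a slow upload can’t outrun anyone; the risk only grows as gradually as the hardware gets cheaper.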
The dangers of someone making a transcendent AI first are there no matter what. This is not a good argument against a FASTER way to get to safe superintelligence.
So, in this scenario we have obtained a big neural network that imprints on a master and can learn complex physical tasks… and we’re just going to ignore the implications of that while we concentrate on trying to duplicate ourselves?
What’s going to stop me from duplicating just the canine prefrontal cortex and experimenting with it? It’s a nice little classifier / decision maker, I’m sure it has other uses…
Just the capacity to reliably emulate major functional regions of a vertebrate brain already puts you on the threshold of creating big, powerful nonhuman AI. If puploads come first, they’ll be doing more than catching frisbees in Second Life.
I realize that this risk is kinda insignificant compared to the risk of all life on Earth being wiped out… But I’m more than a little scared of the thought of animal uploads, and of the possibility of people creating lifeforms that can be cheaply copied and replicated without needing any of the physical features that usually elicit sympathy from people. We already have plenty of people abusing their animals today, and being able to do it perfectly undetected on your home upload isn’t going to help things.
To say nothing of when it becomes easier to run human uploads. Just yesterday I re-read a rather disturbing short story about the stuff you could theoretically do with body-repairing nanomachines, a person who enjoys abusing others, and a “pet substitute”. Err, a two-year-old human child, that is.
The supercomputers will be there whether we like it or not. Some of what they run will be attempts at AI. This is so far the only approach that someone unaware of Friendliness issues has a high probability of both trying and succeeding with (and not immediately killing us all).
Numerous un-augmented accelerated uploads are a narrow safe path, and one we probably won’t follow, but it is a safe path. (It’s one of only two so far, so it’s important.) I think the likely win is smaller than FAI’s, but the dropoff isn’t as steep as you walk off the path, either. Any safe AI approach will suggest profitable non-safe alternatives.
An FAI failure is almost certainly alien, or simply not there yet. An augmentation failure is probably less capable, probably not hostile, probably not strictly executing a utility function, and, above all, can be surrounded by other, faster uploads.
If the first pupload costs half a billion dollars and runs very slowly, then even tweaking it will be safer than, say, letting neural nets evolve in a rich environment on the same hardware.
What’s going to stop me from duplicating just the canine prefrontal cortex and experimenting with it? It’s a nice little classifier / decision maker, I’m sure it has other uses…
What’s going to stop you is that the prefrontal cortex is just one part of a larger whole. It may be possible to isolate that part, but doing so may be very difficult. If your simulation were running in real time, you could spawn off a bunch of different experiments pursuing different ideas for how to isolate and use the prefrontal cortex, and keep doing that until you found something that worked. But if your simulation is running at 1/1000th realtime, as JamesAndrix suggests in his hypothetical, the prospects for this kind of method seem dim.
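To see just how dim, here’s a rough sketch of the arithmetic; the ten-minute probe length and the probe counts are made-up numbers, purely for illustration:

```python
SLOWDOWN = 1000            # 1/1000th realtime, per the hypothetical
MINUTES_PER_PROBE = 10     # illustrative subjective time per isolation attempt

def wallclock_days(n_probes: int, parallel_copies: int = 1) -> float:
    """Wall-clock days to run n_probes trial-and-error experiments, given
    hardware enough for parallel_copies simultaneous simulations."""
    subjective_minutes = MINUTES_PER_PROBE * n_probes / parallel_copies
    return subjective_minutes * SLOWDOWN / (60 * 24)

print(wallclock_days(1_000))                       # serial: ~6,944 days (~19 years)
print(wallclock_days(1_000, parallel_copies=100))  # 100 copies: ~69 days
```

And a hundred parallel copies of a half-billion-dollar simulation is not a hobbyist’s budget.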
Of course, maybe the existence of the dog-brain simulation is sufficient to spur advances in neuroscience to the point where you could isolate the functioning of the prefrontal cortex without the need for millions of experimental runs. Even so, the resulting module is still going to be too slow to be an existential threat.
Just the capacity to reliably emulate major functional regions of a vertebrate brain already puts you on the threshold of creating big, powerful nonhuman AI.
The threshold, yes. But that threshold is still nontrivial to cross. The question is, given that we can reliably emulate major functional regions of the brain, is it easier to cross the threshold to nonhuman AI, or to full emulations of humans? There is virtually no barrier to the second threshold, while the first one still has nontrivial problems to be solved.
It may be possible to isolate that part, but doing so may be very difficult.
Why would it be difficult? How would it be difficult?
There is a rather utopian notion of mind uploading, according to which you blindly scan a brain without understanding it, and then turn that data directly into a simulation. I’m aware of two such scanning paradigms. In one, you freeze the brain, microtome it, and then image the sections. In the other you do high-resolution imaging of the living brain (e.g. fMRI) and then you construct a state-machine model for each small 3D volume.
To turn the images of those microtomed sections into an accurate dynamical model requires a lot of interpretive knowledge. The MRI-plus-inference pathway sounds much more plausible as a blind path to brain simulation. But either way, you are going to know what the physical 3D location of every element in your simulation was, and functional neuroanatomy is already quite sophisticated. It won’t be hard to single out the sim-neurons specific to a particular anatomical macroregion.
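A minimal sketch of what that singling-out could look like, assuming each simulated element carries the coordinates it was scanned at; the `Neuron` record and the `atlas_label` lookup are stand-ins for whatever a real scanning pipeline would actually produce:

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    neuron_id: int
    xyz: tuple[float, float, float]  # physical position recorded during scanning

def atlas_label(xyz: tuple[float, float, float]) -> str:
    """Hypothetical atlas lookup: map a scan coordinate to an anatomical
    macroregion (in reality, a registration against a standard brain atlas)."""
    x, y, z = xyz
    return "prefrontal_cortex" if y > 40.0 else "other"  # toy decision rule

def extract_region(neurons: list[Neuron], region: str) -> list[Neuron]:
    """Select every simulated element whose scan position falls in `region`."""
    return [n for n in neurons if atlas_label(n.xyz) == region]

neurons = [Neuron(0, (1.0, 55.0, 3.0)), Neuron(1, (2.0, 10.0, 8.0))]
print([n.neuron_id for n in extract_region(neurons, "prefrontal_cortex")])  # [0]
```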
There is virtually no barrier to the second threshold, while the first one still has nontrivial problems to be solved.
If you can simulate a human, you can immediately start experimenting with nonhuman cognitive architectures by lobotomizing or lesioning the simulation. But this would already be true for simulated animal brains as well.
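A lesion experiment on an emulation is just a mechanical edit to the connectome, requiring no new theory. A toy sketch, assuming the wiring is stored as a weight matrix:

```python
import numpy as np

# Toy connectome: weights[i, j] is the connection strength from neuron i to j.
rng = np.random.default_rng(0)
weights = rng.normal(size=(100, 100))
region = np.arange(20, 40)  # indices of the region to lesion (hypothetical)

def lesion(weights: np.ndarray, region: np.ndarray) -> np.ndarray:
    """Silence a region by zeroing all of its inbound and outbound connections:
    the software analogue of an ablation experiment."""
    out = weights.copy()
    out[region, :] = 0.0  # outgoing connections
    out[:, region] = 0.0  # incoming connections
    return out

lesioned = lesion(weights, region)
assert lesioned[25].sum() == 0.0 and lesioned[:, 25].sum() == 0.0
```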
It won’t be hard to single out the sim-neurons specific to a particular anatomical macroregion.
That’s true, but ultimately the regions of the brain are not complete islands; the circuitry connecting them is itself intricate. You may, for instance, be able to extract the visual cortex and get it to do some computer vision for you, but I doubt an extracted prefrontal cortex will be useful without all the subsystems it depends on. More importantly, how to wire up new configurations (maybe you want a double prefrontal cortex: twice the cognitive power!) strikes me as a fundamentally difficult problem. At that point you probably need some legitimate high-level understanding of the components and their connective behaviors to succeed. By contrast, a vanilla emulation, where you aren’t modifying the architecture or performing virtual surgery, requires no such high-level understanding.
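The difficulty is easy to state in code: duplicating a region’s internal wiring is mechanical, but every connection crossing the region’s boundary forces a design decision that a blind scan can’t answer. A toy sketch of the “double prefrontal cortex” idea, again assuming a weight-matrix connectome:

```python
import numpy as np

def duplicate_region(weights: np.ndarray, region: np.ndarray) -> np.ndarray:
    """Append a copy of `region`'s neurons to the network. The copy's internal
    wiring carries over cleanly, but how it should attach to the rest of the
    brain is a free choice: here we naively mirror the original's inbound and
    outbound connections, which doubles the region's drive on every downstream
    target. That is an architectural guess no amount of scanning resolves."""
    n = weights.shape[0]
    k = len(region)
    out = np.zeros((n + k, n + k))
    out[:n, :n] = weights                          # original network, untouched
    out[n:, n:] = weights[np.ix_(region, region)]  # copy's internal wiring
    out[n:, :n] = weights[region, :]               # mirrored outputs (a guess)
    out[:n, n:] = weights[:, region]               # mirrored inputs (a guess)
    return out
```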