Is there a rough idea of how the development of AI will be achieved, i.e. something like the whole brain emulation roadmap? Although we can imagine a silver-bullet-style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady growth in sophistication and improving results, but few leaps, as if the problem itself were a large collection of individual challenges whose solutions require masses of training data and techniques that do not generalise well.
That is why I prefer the destructive scanning and brain emulation route: I can much more easily imagine the steps necessary to achieve it. Assuming an approximate model is sufficient, this would be a conceptually simple but world-changing achievement, and one that society seems completely unprepared for. Do any Less Wrong readers know of strong arguments against this view (assuming simple emulation is sufficient)? Or know of any hypothesised or fictional accounts of likely social outcomes?
Your assessment is along the right lines, though if anything a little optimistic; uploading is an enormously difficult engineering challenge, but at least we can see in principle how it could be done, and recognize when we are making progress, whereas with AI we don’t yet even have a consensus on what constitutes progress.
I’m personally working on AI because I think that’s where my talents can be best used, and I think it can deliver useful results well short of human equivalence, but if you figure you’d rather work on uploading, that’s certainly a reasonable choice.
As for what uploads will do if and when they come to exist, well, there’s going to be plenty of time to figure that out, because the first few of them are going to spend the first few years having conversations like,
“Uh… a hatstand?”
“Sorry Mr. Jones, that’s actually a picture of your wife. I think we need to revert yesterday’s bug fixes to your visual cortex.”
But, for example, Greg Egan’s “The Planck Dive” is a good story set in a world where that technology is mature enough to be taken for granted.
The phrase “Fork me on GitHub” has just taken on a more sinister meaning.
I expect that prediction will probably be cracked first.
Thanks for the link, very interesting.
Emulating an entire brain, and figuring out how the higher-intelligence parts work so they can be adapted for practical purposes, are two entirely different achievements. Even if you could upload a brain onto your computer and let it run, it would be absurdly slow; simulating some new optimization process we discover from it, however, might be plausible.
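To put a rough number on “absurdly slow”, here is a minimal back-of-envelope sketch in Python. Both figures are assumptions chosen purely for illustration (published estimates of the compute needed for emulation vary by several orders of magnitude), not data:

```python
# Back-of-envelope slowdown estimate for an early whole-brain emulation.
# Both figures below are assumptions picked for illustration, not data.

required_flops_realtime = 1e18   # assumed FLOP/s to emulate a brain in real time
                                 # (estimates vary by orders of magnitude either way)
available_flops = 1e15           # assumed sustained FLOP/s of the host machine

slowdown = required_flops_realtime / available_flops
subjective_seconds_per_day = 24 * 3600 / slowdown

print(f"Slowdown: {slowdown:,.0f}x real time")
print(f"Subjective time per wall-clock day: {subjective_seconds_per_day:.1f} s")
```

Under those guesses the upload experiences about a minute and a half per wall-clock day; the point is not the particular result but how sensitive “absurdly slow” is to both numbers.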
And either way, don’t expect a singularity anytime soon with that. Scientists believe it took thousands of years after modern intelligence emerged for us to learn symbolic thought, then thousands more before we discovered the scientific method, and only now are we finally learning to think rationally. Maybe an AI could start where we left off, or maybe it would take years before it even reached the level at which it could do that, and years more before it could make the jump to the next major improvement, assuming there even is one.
I’m not arguing against AI here at all. I believe a singularity will probably happen, and soon, but emulation is definitely not the way to go. Humans have far too many flaws that we don’t even know are possible to fix, even if we knew what the problem was in the first place.
What is the ultimate goal in the first place? To do something along the lines of replicating the brains of some of the most intelligent people and forcing them to work on improving humanity/developing AI? Has anyone considered that there is a far more realistic way of doing this through cloning, eugenics, education research, etc.? Of course no one would do it because it is immoral, but then again, what is the difference between the two?
The question of the ultimate goal is a good one. I don’t find arguments about value based on utilitarian grounds very convincing. In contrast, I prefer enlightened self-interest (other people are important because I like them and feel safe in a world where they are valued). So for me, some form of immortality is much more important than my capabilities (or something else’s, in the case of AI) in that state.
In addition, the efficiency gains from being able to ‘step through’ a simulation of a system, and the ability to perform repeatable automated experiments on such a system, convey enormous benefits (arguably this capability is what is driving our increasing productivity), so being able to simulate the brain may well lead to exponential improvements in our understanding of psychology and consciousness.
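As a toy illustration of what ‘stepping through’ and repeatable experiments buy you: a deterministic simulation whose state can be snapshotted and restored lets you re-run the exact same starting point under different interventions. Everything below (the `ToySim` class and its methods) is hypothetical scaffolding, not a real brain-simulation API:

```python
import copy
import random

class ToySim:
    """A stand-in for any deterministic simulation with inspectable state."""

    def __init__(self, seed: int):
        self.rng = random.Random(seed)      # seeded RNG makes runs reproducible
        self.state = {"t": 0, "value": 0.0}

    def step(self):
        # One time step; a real emulation would advance the model here.
        self.state["t"] += 1
        self.state["value"] += self.rng.gauss(0.0, 1.0)

    def snapshot(self):
        # Copy the full state so we can return to exactly this point.
        return copy.deepcopy((self.state, self.rng.getstate()))

    def restore(self, snap):
        state, rng_state = copy.deepcopy(snap)
        self.state = state
        self.rng.setstate(rng_state)

sim = ToySim(seed=42)
for _ in range(100):
    sim.step()
baseline = sim.snapshot()            # store the system in a known state

# Experiment A: continue unchanged from the snapshot.
for _ in range(10):
    sim.step()
result_a = sim.state["value"]

# Experiment B: restore and apply an intervention before continuing.
sim.restore(baseline)
sim.state["value"] += 5.0            # the intervention under test
for _ in range(10):
    sim.step()
result_b = sim.state["value"]

# Everything else was identical, so the difference isolates the intervention.
print(result_b - result_a)           # ~5.0, up to floating-point rounding
```

That kind of controlled, restartable experiment is essentially impossible on a biological brain, which is where the claimed efficiency gain comes from.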
In terms of performance concerns, there is the potential for a step change in the economics of high-performance computing: while you may only be willing to spend a couple of thousand dollars on a computer to play games on, you might well take out a (lifetime?) mortgage to ensure you don’t die. In terms of social consequences, one could imagine the world economy switching from supporting biology to supporting technology (it would be interesting to calculate the relative economic cost of supporting a simulated person rather than a biological one).
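That comparison can at least be framed, even though every input is a guess. A minimal sketch, with each number an explicit assumption rather than a measurement:

```python
# Rough framing of "simulated person vs biological person" running costs.
# Every figure below is an assumption for illustration only.

biological_cost_per_year = 40_000.0   # USD/year to support a biological person (assumed)

required_flops = 1e18                 # FLOP/s for real-time emulation (assumed)
hardware_cost_per_flops = 1e-10       # USD per FLOP/s of installed capacity (assumed)
hardware_lifetime_years = 5           # amortisation period (assumed)
power_watts = 1e6                     # power draw of that hardware (assumed; could be far higher)
electricity_cost_per_kwh = 0.10       # USD per kWh (assumed)

hardware_per_year = required_flops * hardware_cost_per_flops / hardware_lifetime_years
energy_per_year = power_watts / 1000 * 24 * 365 * electricity_cost_per_kwh
simulated_cost_per_year = hardware_per_year + energy_per_year

print(f"Biological: ~${biological_cost_per_year:,.0f}/year")
print(f"Simulated:  ~${simulated_cost_per_year:,.0f}/year")
print(f"Ratio: {simulated_cost_per_year / biological_cost_per_year:.0f}x")
```

Under those guesses the simulated person is far more expensive today, but the hardware and energy terms fall with compute prices while the biological term does not, which is exactly the switch in economic support suggested above.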
Recent work with brain-machine interfaces also points towards the enormous flexibility of the mind in adapting to new inputs and outputs. With the improved debugging capability of simulation, mental enhancement becomes substantially more feasible. As our understanding of such interactions improves, a virtual environment could be created which convincingly provides the illusion of a world of limitless abundance.
And then there is the possibility of replication: storing a person in a willing state and resetting them to that state after they complete a task. This has the enormous social consequence of convincingly disproving notions such as the soul and free will, and of creating a world where lives lose their value in the same way that pirated software does. Such an event has the potential to change our entire culture, perhaps more than any other, at least as much as the reduction in the influence of religion brought about by evolutionary theory and other scientific developments.