I have been mulling over a rough and mostly unformed idea regarding AI-first vs. IA-first strategies, but I was loath to try to put it into words until I saw this post and noticed that one of the scenarios I consider highly probable was completely absent.
On the basis that subhuman AGI poses minimal risk to humanity, and that IA raises the level of optimization ability an AI needs to count as human-level or above, there seems to be a substantial probability that an IA-first strategy leads to a scenario in which no superhuman AGI is ever developed, because researching that field is economically infeasible compared with optimizing the accelerating returns from IA creation and implementation. Development of AI, friendly or not, would certainly proceed faster in absolute terms, but if IA simply proves easier than AI (which, given our poor ability to estimate the difficulty of either approach, may well be true), progress in IA would continue to outpace it. This could still produce either a fast or a slow takeoff from our current perspective, but from the perspective of the enhanced humans it would simply be an extension of existing trends.
A similar argument applies to Hanson’s whole brain emulation (WBE) scenarios. Given the ability to store a mind on some hardware system, it would be more economically efficient to run one emulation at a faster pace than to run multiple parallel copies of that mind in the same hardware space. Hardware design would likewise trend toward rapid emulation of single workers rather than many instances, to reduce the costs of redundancy and to capture the efficiency gains that come with experience. This implies that enhancement of a few high-efficiency minds would happen much earlier, and that exceptional numbers of emulated workers would be unlikely to be created; instead, a few high-value workers would occupy the large majority of relevant hardware very soon after the technology’s creation.
An IA field advancing faster than AI does of course present its own problems, and I’m not trying to endorse an IA-first approach with my ramblings. I suppose I’m simply trying to express the belief that discussion of IA as an alternative to AI, rather than as an instrument toward AI, is rather lacking on this forum, and I find myself confused as to why.
“Superhuman AI” as the term is generally used is a fixed reference standard, i.e. your average rationalist computer scientist circa 2013. This particular definition has meaning because if we posit that human beings are able to create an AGI, then a first generation superhuman AGI would be able to understand and modify its own source code, thereby starting the FOOM process. If human beings are not smart enough to write an AGI then this is a moot point. But if we are, then we can be sure that once that self-modifying AGI also reaches human-level capability, it will quickly surpass us in a singularity event.
So whether IA advances humans faster or slower than AGI is a rather uninteresting question. All that matters is whether a self-modifying AGI is more capable than its creators at the time of its inception.
As to your very last point, it is probably because the timescales for AI are much shorter than for IA. AI is basically a solvable software problem, and there are many supercompute clusters in the world that are probably capable of running a superhuman AGI at real-time speeds, if such software existed. Significant IA, on the other hand, requires fundamental breakthroughs in hardware...
I seem to have explained myself poorly. You are effectively restating the commonly held (on LessWrong) views that I was attempting to originally address, so I will try to be more clear.
I don’t understand why you would use a particular fixed standard for “human level”. It seems arbitrary; it would be more sensible to use the level of humans at the time a given AGI is developed. You yourself say as much in your second paragraph (“more capable than its creators at the time of its inception”). Since the rate of IA determines the capabilities of an AI’s creators, a faster rate of IA than of AI would mean that the event of a more capable AGI never occurs.
If a self-modifying AGI is less capable than its creators at the time of its inception, then it will be unable to FOOM from the perspective of its creators, both because they could develop a better AI in less time than the AI could improve itself, and because, if they were developing IA at a greater pace, they would be advancing faster than the AGI they had built. Given the same intelligence and rate of work, an easier problem sees more progress. Therefore, if IA receives an equal or greater rate of work than AI, and it happens to be the easier problem, then humans would FOOM before AI does. A FOOM doesn’t feel like a FOOM from the perspective of the one experiencing it, though.
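To make the rate argument concrete, here is a minimal toy sketch (the starting levels and growth rates are arbitrary illustrative assumptions, not estimates of anything): if the creators’ capability compounds faster than the AGI’s, the crossover never arrives; reverse the rates and it does.

```python
# Toy model of the IA-vs-AI race described above. All numbers are
# illustrative assumptions; the only point is the effect of relative rates.

def first_crossover(human_level=1.0, agi_level=0.8,
                    ia_rate=0.10, ai_rate=0.07, years=200):
    """Return the first year the AGI exceeds its creators, or None if it never does."""
    for year in range(1, years + 1):
        human_level *= 1 + ia_rate   # creators improve via IA
        agi_level *= 1 + ai_rate     # AGI improves via self-modification
        if agi_level > human_level:
            return year
    return None

print(first_crossover(ia_rate=0.10, ai_rate=0.07))  # None: the creators stay ahead
print(first_crossover(ia_rate=0.07, ai_rate=0.10))  # 9: the AGI overtakes within a decade
```

Real capability growth is obviously not a clean exponential; the sketch only illustrates why the relative rates, rather than the absolute ones, decide whether the crossover ever happens.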
Your final point makes sense, in that it addresses the possibility that the first fast takeoff is more likely to come from the AI field than from the IA field, i.e. that AI is the easier problem. I fail to see why a software problem is inherently easier than a biology or engineering problem, though. A fundamental breakthrough in software is just as unlikely as one in hardware, and there are more paths to success currently being pursued for IA than for AI, only one of which is a man-machine interface.
I considered being a bit snarky and restating each of your statements as its direct opposite (i.e. all that matters is whether a self-modifying human becomes more capable than an AI at the time of its augmentation), but I feel that would convey the wrong message. The dismissive response genuinely confuses me, but I’m assuming that my poor organization has made my point too vague.
It’s not an arbitrary reference point. For a singularity/AI-goes-FOOM event to occur, the AI needs sufficient intelligence and capability to modify itself in a recursive self-improvement process. A chimpanzee is not smart enough to do this. We’ve posited that at least some human beings are capable of creating a more powerful intelligence, either through AGI or through IA. Therefore the important cutoff, where a FOOM event becomes possible, lies somewhere in between those two reference levels (the chimpanzee and the circa-2013 rationalist AGI/IA researcher).
Despite my careless phrasing, this isn’t some floating standard that depends on circumstances (having to be smarter than your creators). An AGI or IA simply has to meet some objective minimum level of rationalist and technological capability to start the recursive self-improvement process. The problem is that our understanding of the nature of intelligence is not developed enough to predict where that hard cutoff is, so we resort to making qualitative judgements. We think we are capable of starting a singularity event through either AGI or IA. Therefore anything smarter than we are (“superhuman”) would be equally capable. This is a sufficient, but not necessary, requirement—making humans smarter through IA doesn’t mean that an AGI suddenly has to be that much smarter to start its own recursive self-improvement cycle.
My point about software was that an AGI FOOM could happen today. There are datacenters at Google and research supercomputers that are powerful enough to run a recursively improving “artificial scientist” AGI. But IA technology to the level of being able to go super-critical basically requires molecular nanotechnology or equivalently powerful technology (to replace neurons) and/or mind uploading. You won’t get an IA FOOM until you can remove the limitations of biological wetware, but these technologies are at best multiple decades away.