As for the singularity: I could, I think, make a pretty good case that scientific and technological progress is slowing down.
I think this comment highlights the fact that SIAI has a major brand management problem: SIAI is not concerned with the “acceleration” of “progress”, but with the development of smarter-than-human AI, which could occur at a time when technological and economic indicators show growth, stagnation, or even decline.
But those who push the “acceleration” of “progress” brand have about 10^3 times our marketing budget.
No disrespect to Gregory; it is simply the case that the marketing and information out there have turned the “Singularity” brand sour, and the term has lost any precise meaning.
If the problem is Kurzweil’s message, then it probably doesn’t help SIAI’s brand that he’s listed second.
Anecdotally, I’d say you’re absolutely right, and that SIAI’s prospects could be substantially improved by jettisoning the term “singularity”. I’m someone SIAI should want to target as a supporter, and I’ve mostly come around, but the term “singularity” just radiates bad juju for me. I think I’m going to apply for a visiting fellow spot, but frankly, I’m not especially comfortable telling friends and family that I’m planning to work at a place called the Singularity Institute for Artificial Intelligence and not get paid for it (I’m hoping they don’t have the same reaction to the word that I did). I suspect I would have been more supportive earlier if SIAI had been called something else.
I concur. Whenever I describe what I would be doing if I volunteered for SIAI, I avoid mentioning its name entirely and just say that they deal in “robotics” (which I tend to use instead of AI) at the “theoretical level”, that they want to bring it to the “level of human intelligence”, and that they study “risks to humanity”.
Of course, this is all “counting chickens ’fore they’re hatched” at this point, because I haven’t sent my email/CV to Anna Salamon yet...
Ah, go on Silas. I’m especially sure Alicorn will be delighted to meet you at the SIAI Benton house ;-)
But current predictions of what happens when smarter-than-human AI is made rely, at least in part, on there being a positive relation between brain/processing power and technological innovation.
Humanity’s brain power and processing power are ever increasing: more people, more educated people, and more computing power. We can crunch ever-bigger data sets, and the science we are trying to do requires those bigger data sets (the LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and are left with the increasingly complex; a similar problem would face an AI trying to self-improve. The question is whether its rate of self-improvement would outpace the rate at which the problems it must solve in order to self-improve grow harder.
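To make that race between rates concrete, here is a toy simulation (my own illustration, not anything from SIAI; the growth factors are arbitrary assumptions). Each self-improvement step takes time proportional to the current problem’s difficulty divided by the current capability, and both quantities grow geometrically:

```python
# Toy model: capability C vs. problem difficulty D, both growing geometrically.
# Each self-improvement step takes time D / C. If capability gains outpace
# difficulty growth, total time converges (runaway improvement); otherwise
# each step takes longer and progress effectively stalls.

def simulate(capability_gain, difficulty_gain, steps=50):
    capability, difficulty, elapsed = 1.0, 1.0, 0.0
    for _ in range(steps):
        elapsed += difficulty / capability  # time to crack the next problem
        capability *= capability_gain       # payoff from self-improvement
        difficulty *= difficulty_gain       # the next problem is harder
    return elapsed

# Capability wins the race: step times shrink, total time converges.
print(simulate(capability_gain=1.3, difficulty_gain=1.1))
# Difficulty wins the race: step times grow, diminishing returns.
print(simulate(capability_gain=1.1, difficulty_gain=1.3))
```

In this sketch the per-step time gets multiplied by difficulty_gain / capability_gain each round, so the whole question reduces to whether that ratio is below or above 1; everything interesting in the real debate is about what actually determines those two rates.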