I would say that it is in some sense obvious that higher intelligence is possible, because the process that led to whatever intelligence we have was haphazard (path-dependent, stochastic, and all that), and because what optimization did occur was under severe constraints, some of which no longer apply. Clearly, the best possible performance under severe constraints is inferior to the best possible with fewer constraints.
So, if C-sections allow baby heads to get bigger, or if calories are freely available today, changes in brain development that take advantage of those relaxed constraints ought to be feasible. In principle this does not have to result in people who are damaged or goofy, although they would not do well in ancestral environments. In practice, since we won’t know what the hell we are doing… of course it will.
Still, that’s too close to an existence proof: it doesn’t really tell you how to do it.
You could probably get real improvements by mining existing genetic variation: look at individuals and groups with unusually high IQs and search for causal variants. Plomin and company haven’t had any real success (in terms of QTLs that explain much of the variance), but for this purpose one doesn’t care about variance explained, just effect size. A rare allele that does the job would be useful. I’d look at groups with high average IQ, but at others as well.
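To make the distinction between variance explained and effect size concrete, here is a minimal sketch under the standard additive model with Hardy-Weinberg genotype frequencies; the allele frequency and effect size below are made-up numbers, not estimates from any study.

```python
# Toy calculation under an additive model with Hardy-Weinberg genotype
# frequencies: a biallelic variant with allele frequency p and per-allele
# effect beta contributes 2*p*(1-p)*beta**2 to the trait variance.

def variance_explained(p, beta, trait_sd=15.0):
    """Fraction of trait variance explained by a single additive variant."""
    return 2 * p * (1 - p) * beta ** 2 / trait_sd ** 2

# Hypothetical rare allele: frequency 0.1%, +5 IQ points per copy.
print(variance_explained(p=0.001, beta=5.0))  # ~0.0002, i.e. ~0.02% of variance
# Such a variant is nearly invisible in variance-explained terms, yet each
# carrier still gains ~5 points, which is what matters for this purpose.
```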
There are other possible approaches. If you could error-correct the genome, fix all the mutational noise, you might see higher IQ. You could dig up Gauss and clone him. My favorite idea is finding two human ethnic groups that ‘nick’ - whose F1 offspring exhibit hybrid vigor.
As for the singularity: I could, I think, make a pretty good case that scientific and technological progress is slowing down.
I think that this comment highlights the fact that SIAI has a major brand-management problem: SIAI is not concerned with “acceleration” of “progress”, but with the development of smarter-than-human AI, which could occur at a point in time when technology and economic indicators show growth, stagnation, or even decline.
But those who push the “acceleration” of “progress” brand have about 10^3 times our marketing budget.
No disrespect to Gregory—it is simply the case that the marketing and info that’s out there has turned the “Singularity” brand sour—the term has lost any precise meaning.
If the problem is Kurzweil’s message, then it probably doesn’t help SIAI’s brand that he’s listed second.
Anecdotally, I’d say you’re absolutely right and that SIAI’s prospects could be substantially improved by jettisoning the term “singularity”. I’m someone whom SIAI should want to target as a supporter, and I’ve mostly come around, but the term “singularity” just radiates bad juju for me. I think I’m going to apply for a visiting fellow spot, but frankly I’m not especially comfortable telling friends and family that I’m planning to work, unpaid, at a place called the Singularity Institute for Artificial Intelligence (I’m hoping they don’t have the same reaction to the word that I did). I suspect I would have been more supportive earlier if SIAI had been called something else.
I concur. Whenever I describe what I would be doing if I volunteered for SIAI, I avoid mentioning its name entirely and just say that they deal in “robotics” (which I tend to use instead of AI) at the “theoretical level”, that they want to bring it to the “level of human intelligence”, and that they study “risks to humanity”.
Of course, this is all “counting chickens ’fore they’re hatched” at this point, because I haven’t sent my email/CV to Anna Salamon yet...
Ah, go on Silas. I’m especially sure Alicorn will be delighted to meet you at the SIAI Benton house ;-)
But current predictions of what happens when smarter-than-human AI is made rely, to some extent, on there being a positive relation between brain/processing power and technological innovation.
The brain power and processing power of humanity is ever increasing: more people, more educated humans, and more computing power. We can crunch ever-bigger data sets, and the science we are trying to do requires them (the LHC, genomic analysis, weather prediction). Perhaps we have nearly exhausted the simple science and are left with the increasingly complex; a similar problem would face an AI that tries to self-improve. The question is whether its rate of self-improvement would be greater or less than the rate at which the problems it must solve in order to self-improve get harder.
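A toy numerical sketch of that race (my own construction, not anything from the thread): capability buys improvements, but the cost of the next improvement grows as an assumed power of current capability, and that exponent decides whether progress accelerates, stays steady, or bogs down.

```python
# Toy model: each step, capability C gains an amount proportional to C divided
# by the "difficulty" of the next problem, which grows as C**a. The exponent a
# is a made-up parameter controlling how fast problems get harder.

def simulate(a, steps=50, c=1.0, gain=0.5):
    for _ in range(steps):
        difficulty = c ** a
        c += gain * c / difficulty
    return c

for a in (0.5, 1.0, 1.5):  # illustrative exponents only
    print(f"difficulty ~ C^{a}: capability after 50 steps = {simulate(a):.1f}")
# a < 1: improvements outpace the rising difficulty and growth accelerates;
# a = 1: steady additive progress; a > 1: returns diminish and growth slows.
```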
Thanks for the response.
(Consider the following question in a Bayesian spirit, i.e. the spirit of giving a probability to any event, even if you don’t have an associated frequency for it)
If you had to bet on whether the technology for these genetic engineering efforts (NOT the political will) will be ready by e.g.
2030, 2040, 2050, 2075, 2125,
what kind of odds/probabilities would you bet at?
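For what it’s worth, one concrete way to write an answer down is as cumulative probabilities by each date (which must be non-decreasing over time), with the equivalent betting odds alongside; the numbers in this sketch are placeholders, not a forecast.

```python
# Placeholder cumulative probabilities that the technology is ready by each
# date (NOT a forecast), plus a sanity check that they never decrease and a
# conversion to "odds against" for betting purposes.

guesses = {2030: 0.05, 2040: 0.15, 2050: 0.30, 2075: 0.55, 2125: 0.80}

prev = 0.0
for year, p in sorted(guesses.items()):
    assert prev <= p <= 1.0, "P(ready by a later date) cannot be smaller"
    odds_against = (1 - p) / p            # e.g. p = 0.25 -> 3:1 against
    print(f"by {year}: p = {p:.2f}  (about {odds_against:.1f}:1 against)")
    prev = p
```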
I have heard of the theory that a human with the “consensus” genome would be way above average in phenotype.
Any idea by how much?