To achieve the Singularity in as fast a time as possible, we need not only money but also lots of smart, hard-working people (who will turn out to be mostly White and Asian males). The thing is, those traits are in large part genetic; and we know that Ashkenazi Jews are smarter on average than other human groups. I am writing this at the risk of further inflating Eliezer’s already massive ego :)
So, an obvious interim solution, until we get to the point of enhancing our intelligence through artificial, non-genetic means (or inventing a Seed AI), is to popularize eugenics for intelligence and practice it. This should help remedy our main deficiency, which in my opinion is not a lack of money but a lack of brains. It is human intelligence augmentation, except that it can only work on NEW humans rather than on existing ones (augmenting existing humans being the Holy Grail we are aspiring to).
Of course, there is a catch: such a eugenics program would have to be kick-started by the current, rather dumb politicians and citizens, and the chances of them screwing things up are quite high, especially given the dubious associations with totally irrational beliefs like antisemitism that are bound to arise.
Unlike most of you, I’m skeptical about the Singularity being achieved in my lifetime. There have been no serious paradigm shifts in our scientific understanding lately, and AI research seems to be progressing at a very slow pace. Meanwhile, Eliezer hasn’t even started coding because he wants to explore the ramifications of Friendly AI first. Fair enough, but I don’t think he is smart enough to get it right philosophically without an actual experiment for feedback. Aristotle famously got physics wrong by deducing from thought experiments that sustained motion requires a sustained force (roughly, that force is proportional to velocity rather than to acceleration, as in Newton’s F = m*a) and not bothering to check his philosophy against the real world.
So I think the best thing we can do right now is to convince people that intelligence and conscientiousness are real traits that are extremely desirable to any society (as opposed to how much money a person makes, as the libertarians would argue), that they can in principle be measured, and that they are at least in part genetically determined. At the same time, we should make it clear that our end goal is to uplift everyone who wants it: to increase their intelligence to superhuman levels (a la “Rise of the Planet of the Apes”) and to equalize the human races and populations in this regard.
Resorting to several generations of breeding for intelligence doesn’t seem like a very good strategy for getting things done in “as fast a time as possible.”
Also, regression to the mean.
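The objection above can be made concrete with a toy simulation. Under a simple additive model where a trait has narrow-sense heritability h² < 1, children of parents selected for an extreme phenotype are expected to land only partway between their parents and the population mean. All numbers below (mean 100, sd 15, h² = 0.5) are illustrative assumptions, not empirical estimates:

```python
import random

random.seed(0)
MEAN, SD, H2 = 100.0, 15.0, 0.5  # illustrative assumptions, not measured values

def expected_child(midparent):
    """Expected offspring score: the midparent deviation shrinks by h^2."""
    return MEAN + H2 * (midparent - MEAN)

def simulate_child(midparent):
    # Offspring phenotype = regressed expectation plus segregation and
    # environmental noise (sd chosen to keep population variance roughly stable).
    return random.gauss(expected_child(midparent), SD * (1 - H2 / 2) ** 0.5)

# Two parents who both score 130, i.e. two standard deviations above the mean.
kids = [simulate_child(130.0) for _ in range(100_000)]
avg = sum(kids) / len(kids)

print(f"expected child mean:  {expected_child(130.0):.1f}")  # 115.0
print(f"simulated child mean: {avg:.1f}")                    # ~115
```

So with h² = 0.5, children of 130-scoring parents average about 115, not 130: selection still moves the mean, but each generation gives back half the gain.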
How confident are you in our ability, supposing everyone mysteriously possessed the will to do so (or we somehow implemented such a program against people’s wills), to run a eugenics program that delivered, within say five generations, either a 5% improvement in the maximum measured intelligence and conscientiousness in the population, or a 5% increase in the frequency of the highest-measured I-and-C scores (or some other concretely articulated target benefit, if those aren’t the right ones)?
Hsu seems pretty confident (http://lesswrong.com/lw/7wj/get_genotyped_for_free_if_your_iq_is_high_enough/5s84) but not due to the Flynn Effect (which may have stalled out already).
Very high, due to the Flynn Effect. Humans are already recursively self-improving. The problem is that the self-improvement is too slow compared to the upper bound of what we might see from a recursively self-improving AI.
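For scale, the “5% in five generations” question above can be sanity-checked with the standard breeder’s equation from quantitative genetics, R = h²S: the per-generation response R equals narrow-sense heritability h² times the selection differential S (mean of the selected parents minus the population mean). The parameter values below are illustrative assumptions, not estimates anyone in the thread endorsed:

```python
# Back-of-envelope for the "5% in five generations" question using the
# breeder's equation R = h^2 * S. All numbers are illustrative assumptions.

MEAN, SD = 100.0, 15.0
H2 = 0.5   # assumed narrow-sense heritability of the trait
S = 10.0   # assumed selection differential: selected parents average 110

mean = MEAN
for gen in range(1, 6):
    mean += H2 * S  # response to selection in one generation
    print(f"generation {gen}: population mean approx {mean:.0f}")

# After five generations the mean has shifted by 5 * 0.5 * 10 = 25 points,
# far more than 5% of the starting mean of 100 -- but only under the heroic
# assumption that the same strong selection and heritability hold every
# generation, with no regression of the selection differential itself.
```

The point of the sketch is not that such gains are achievable in practice, but that the disagreement in the thread is really about the inputs (how strong selection could be sustained, and whether h² stays high), not about the arithmetic.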