Part of the reason is due to coordination problems, which I think would be reduced if the group consisted of clones of a single person with similar education and upbringing, and hence similar values/goals.
Another part of the reason is that we simply don’t have that many von Neumanns today. The [big number] of educated people that you see in the world consists almost entirely of people who are much less intelligent than von Neumann.
Not only are there more people today than in von Neumann’s time, but it is far easier to be discovered or to educate yourself. The general prosperity level of the world is also far higher. As a result, I expect, purely on statistical grounds, that there are far more von Neumann-level people today than in his time. I certainly don’t see a shortage of brilliant people in academia, for instance.
What would be a test for von Neumann-level intelligence? Do you think “top people” in technical fields today would fail it?
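To put rough numbers on the statistical claim: if we assume (a strong, contestable assumption) that the relevant ability is normally distributed with a stable mean and spread across time, then the expected number of people above any fixed rarity threshold scales linearly with population, before even counting the gains from better discovery and education. A minimal sketch, with purely illustrative figures:

```python
from statistics import NormalDist

# Toy model, all figures illustrative: ability sits on an IQ-like
# N(100, 15) scale, and "von Neumann level" means 1-in-10-million rarity.
rarity = 1e-7
threshold = NormalDist(100, 15).inv_cdf(1 - rarity)

populations = {"circa 1930": 2.0e9, "today": 8.0e9}  # rough world totals
for era, pop in populations.items():
    # Expected count above the threshold scales linearly with population.
    print(f"{era}: ~{pop * rarity:.0f} people above ability {threshold:.0f}")
```

On those assumptions alone, a roughly fourfold-larger world population means roughly four times as many people at any fixed rarity level.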
My intuition says that if we took the 10000 most intelligent people in the world, put them together and told them to work on some technical project, that would be much less effective than if we could make 10000 copies of the most intelligent person, in part because the 10000th most intelligent person is much less productive than the 1st. As evidence for this, I note that there are very few people whose “known for” list on Wikipedia is nearly as long as von Neumann’s, and you’d expect more such people if the productivity difference between the 1st and the 10000th weren’t very large.
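To make that intuition slightly more concrete: under a toy model in which log-productivity is normally distributed (so productivity itself is lognormal and heavy-tailed), the size of the 1st-to-10000th gap depends sharply on the assumed spread. A rough sketch, with purely illustrative parameters:

```python
from math import exp
from statistics import NormalDist

# Toy model, all parameters illustrative: productivity = exp(sigma * Z)
# with Z standard normal, i.e. lognormal and heavy-tailed.
n = 8_000_000_000  # rough world population
std_normal = NormalDist()

def rank_z(rank: int) -> float:
    # Approximate z-score of the rank-th highest of n independent draws.
    return std_normal.inv_cdf(1 - (rank - 0.5) / n)

for sigma in (1.0, 2.0):  # assumed spread of log-productivity
    ratio = exp(sigma * (rank_z(1) - rank_z(10_000)))
    print(f"sigma={sigma}: top person is ~{ratio:.0f}x as productive as the 10000th")
```

Whether that gap comes out at roughly 6x or 30x (or more) hangs entirely on how heavy-tailed one believes productivity to be, which is exactly where intuitions differ.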
But if it turns out that I’m wrong, and it’s not worth doing the cloning step, then I’d be happy with an “MSI-0.9” that just gathers 10000 top people and sets them to work on MSI-2 (or whatever technologies appear most important to getting a positive Singularity).
http://en.wikipedia.org/wiki/List_of_things_named_after_Leonhard_Euler
“Mathematical historian Eric Temple Bell estimated that, had Gauss published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years”; http://en.wikipedia.org/wiki/List_of_things_named_after_Carl_Friedrich_Gauss
http://en.wikipedia.org/wiki/Category:Lists_of_things_named_after_mathematicians
(This isn’t to contradict your point, just to provide relevant evidence.)
I agree that von Neumann was exceptional.
> As evidence for this, I note that there are very few people whose “known for” list on Wikipedia is nearly as long as von Neumann’s, and you’d expect more such people if the productivity difference between the 1st and the 10000th weren’t very large.

I am not sure a Wikipedia rap sheet is as good a proxy for genius as you claim. I think genius is necessary but not sufficient. I also think “recreating von Neumann” will require context not present in his DNA. There are also issues with parallelizing intellectual work, detailed in “The Mythical Man-Month,” which I am sure you are aware of.

At any rate, instead of trying for MSI-1, which has huge technical obstacles to overcome, why not simply push to acquire financial resources and hire brilliant people to do the work you think is necessary? That is doable with today’s tech and today’s people.
[comment from the heart, rather than from the head: your description of MSI-1 sounds kind of, well, totalitarian. Don’t you think that’s a little peculiar?]
> why not simply push to acquire financial resources and hire brilliant people to do the work you think is necessary?

The point is to obtain an insurmountable lead on WBE tech; otherwise you’ll just spur competition and probably end up with Robin Hanson’s Malthusian scenario. (If intelligence explosion were possible, you could win the WBE race by a small margin and translate that into a big win, but for this post I’m assuming that intelligence explosion isn’t possible, so you need to win the race by a large margin.)
> [comment from the heart, rather than from the head: your description of MSI-1 sounds kind of, well, totalitarian. Don’t you think that’s a little peculiar?]

In that case you’re in for a surprise when you find out what I was referring to by “WBE-enabled institutional controls” for MSI-2. Read Carl Shulman’s “Whole Brain Emulation and the Evolution of Superorganisms.”
> (If intelligence explosion were possible, you could win the WBE race by a small margin and translate that into a big win, but for this post I’m assuming that intelligence explosion isn’t possible, so you need to win the race by a large margin.)

Since exploiting intelligence explosion still requires FAI, and FAI could be very difficult, you might still need a large enough margin to perform all the necessary FAI research before your competition stumbles on an AGI.
> Part of the reason is due to coordination problems, which I think would be reduced if the group consisted of clones of a single person with similar education and upbringing, and hence similar values/goals.

I thought of an interesting objection to this. What if the cloned agents decided that the gap between themselves and other humans was sufficiently well-defined for them to implement the coherent extrapolated volition of the clones alone?
http://lesswrong.com/lw/932/stupid_questions_open_thread/64r4
Of course, this problem could potentially arise even if the gap were poorly defined...
That isn’t necessarily an objection. Personally, I’m unsure if I would prefer human-CEV to Johnny-CEV.
Agreed. I don’t know much about von Neumann, but I would trust Feynman with my CEV any day.