In Engelbart As UberTool? Robin Hanson talks about a dude who actually tried to apply recursive self-improvement to his company. He is still trying (wow!).
It seems that humans, even groups of humans, are not capable of fast recursive self-improvement. That they didn't take over the world might be partly due to strong competition from other companies that are constantly trying the same thing.
What is missing that prevents any one of them from prevailing?
Robin Hanson further asks what would have been a reasonable probability estimate to assign to the possibility of a company taking over the world at that time.
I have no idea how I could possibly assign a number to that. I would just have said that it is unlikely enough to be ignored, or that there is not enough data to make a reasonable guess either way. I don't have the resources to take every idea seriously and assign a probability estimate to it. Some things just get discounted by my intuitive judgment.
It seems that humans, even groups of humans, are not capable of fast recursive self-improvement. What is missing that prevents any one of them from prevailing?
I would guess that the reason is that people don't work with exact numbers, only with approximations. If you build a very long chain of reasoning, the noise kills the signal. In mathematics, if you know "A = B" and "B = C" and "C = D", you can conclude that "A = D". In real life your knowledge is more like "so far it seems to me that under usual conditions A is very similar to B". A hypothetical perfect Bayesian could perhaps assign some probability and work with it, but even our estimates of probabilities are noisy. Also, the world is complex; things do not add up linearly.
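A toy sketch of how the noise compounds (the specific numbers, a dozen links each accurate to within about plus or minus 5%, are made up purely for illustration, not taken from Hanson's post):

```python
import random

# Toy model of chaining approximate knowledge: each link "A is very
# similar to B" multiplies the true value by a small random error.
# Chaining many such links lets the individual errors compound.
random.seed(0)

TRUE_VALUE = 100.0
LINKS = 12     # number of approximate equalities chained together
NOISE = 0.05   # each single link is accurate to within about +/- 5%

def chained_estimate():
    estimate = TRUE_VALUE
    for _ in range(LINKS):
        estimate *= 1 + random.uniform(-NOISE, NOISE)
    return estimate

results = [chained_estimate() for _ in range(10_000)]
print(f"a single link is off by at most {NOISE:.0%}")
print(f"after {LINKS} links: estimates range from "
      f"{min(results):.1f} to {max(results):.1f}")
```

Any one link stays within 5% of the truth, but the end of the chain can land much further away, and that is before accounting for the non-linear interactions mentioned above.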
I suspect that when one tries to generalize, one gets a lot of general rules with maybe 90% probabilities. Try to chain a dozen of them together, and the result is pathetic. It is like saying "give me a fixed point and a lever and I will move the world", only to realize that your lever is too floppy and you can't move anything that is too far away or too heavy.
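The naive arithmetic behind that intuition (assuming, unrealistically, that the rules are independent):

```python
# If each general rule holds with probability 0.9 and you need a dozen
# of them to hold at once, multiplying the probabilities (independence
# assumed) shows how weak the whole chain is.
p_single = 0.9
rules = 12
p_chain = p_single ** rules
print(f"{rules} rules at {p_single:.0%} each -> the whole chain holds "
      f"with probability {p_chain:.0%}")  # roughly 28%
```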