How do we know that designing a better intelligence is not an exponentially difficult task?
Well, the answer could simply be, “you’re right; we don’t know that”. However, I think there is evidence that an ultraintelligent machine could make itself very intelligent indeed.
The human mind, though better at reasoning than anything else that currently exists, still has a multitude of flaws. We can’t symbolically reason at even a millionth the speed of a $15 cell phone (and even if we could, there are still unanswered questions about how to reason), and our intuition is loaded with biases. If you could eliminate all human flaws, you would end up with something more intelligent than the most intelligent human that has ever lived.
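To make the orders of magnitude concrete, here is a back-of-envelope sketch in Python. The figures are illustrative assumptions rather than measurements: roughly a 1 GHz processor in a budget phone, and a couple of deliberate symbolic reasoning steps per second for a person.

```python
# Back-of-envelope comparison (assumed, illustrative figures only):
# a cheap phone's CPU performs on the order of 1e9 simple operations per
# second, while a person working through explicit symbolic steps
# (mental arithmetic, formal logic) manages perhaps a few per second.

phone_ops_per_sec = 1e9    # assumed: ~1 GHz budget processor
human_steps_per_sec = 2    # assumed: deliberate symbolic reasoning rate

ratio = human_steps_per_sec / phone_ops_per_sec
print(f"Human symbolic speed is roughly {ratio:.0e} of the phone's,")
print("i.e. well under one millionth (1e-06).")
```

Under these assumptions the ratio comes out around 2e-09, comfortably below the "one millionth" figure in the claim.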
Also, I could be mistaken, but I think people who study rationality and mathematics (and perhaps other fields) tend to report increasing marginal returns: once they understand a concept, it becomes easier to understand further concepts. A machine capable of understanding trillions of concepts might be able to learn new ones very easily compared to a human.
You might end up with nothing. You would really have to start over and build an inference machine vastly different from ours.
This seems true, but it doesn't argue against intelligence being bounded, only that the bound is very far away.