A recent paper showed that ‘Striatal Volume Predicts Level of Video Game Skill Acquisition’. A valid inference would be that an AGI with the computational equivalent of a larger striatal volume would possess superior cognitive flexibility, at least when it comes to gaming. But what could it actually accomplish? I play a game called Trackmania, an arcade racing game. The top players are already so close to the ideal line, and therefore the fastest possible time, that a superhuman AI could indeed beat them, but only by a few milliseconds. Each further millisecond might demand an order of magnitude more skill, yet that hardly matters: first, there is an absolute limit; second, a lead of a few milliseconds provides no serious advantage. The same may very well be the case with physics. There is no guarantee that faster thinking or increased working-memory capacity will ever yield anything genuinely new without a lot of dumb luck, if at all. It is unlikely that a superhuman AI would come up with faster-than-light propulsion, or that it would disprove Gödel’s incompleteness theorems.
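To make the diminishing-returns intuition concrete, here is a toy sketch; the functional form, the 30-second optimum, and the one-second gap are all invented for illustration, not taken from the game. Assume lap time approaches a hard floor, and each extra order of magnitude of skill closes 90% of the remaining gap to that floor.

```python
# Toy model of diminishing returns near an absolute performance limit.
# All numbers are assumptions for illustration only: a hypothetical
# optimal lap time T_MIN, and a 1-second initial gap that shrinks by
# 10x per additional order of magnitude of "skill".

T_MIN = 30.0   # assumed physically optimal lap time (seconds)
GAP0 = 1.0     # assumed gap of a top human player to that optimum (seconds)

def lap_time(skill_orders: int) -> float:
    """Lap time after investing `skill_orders` orders of magnitude of skill."""
    return T_MIN + GAP0 * 10.0 ** (-skill_orders)

prev = lap_time(0)
for s in range(1, 5):
    t = lap_time(s)
    print(f"10^{s}x skill: {t:.4f}s (only {prev - t:.4f}s faster than before)")
    prev = t
```

Under this assumed curve, each tenfold increase in skill buys only a tenth of the previous improvement, which is the sense in which beating the best humans by a few milliseconds could cost enormous effort while conferring almost no advantage.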
Of course, we should be careful. And it is absolutely justified that an organisation like the SIAI gets money to do research on those questions. But there is not enough evidence to outweigh the doubt, certainly not enough to justify impeding AI research. We will actually need research on real AGI to answer some of the open questions.
Regarding self-improvement, I’m very doubtful too. Human indecision and fuzziness of thinking might very well be a feature, not a flaw. A superhuman AI might well beat us at Go or on the stock exchange, at least as long as it deals with its own kind and not the irrational agents that we are, but that doesn’t mean it will be able to deal with natural problems orders of magnitude more efficiently than we do.
Most of the risks from superhuman AI are associated with advanced nanotechnology. Without it, such an AI would be largely impotent. Can it solve nanotechnology, if that is possible at all? And can it implement its results even if it does solve it? Because without nanotechnology, self-improvement will be very hard, and creating copies of itself will be even harder without first building the necessary infrastructure for the computational substrates.
Could an AGI take over the Internet? This is very unlikely. There are spare resources, but not that many, and there is no reason to expect that they would even be suitable as a computational substrate. And how would it make use of them before crude measures were taken to shut it down? Many open questions, much speculation.
Paperclipping is another very speculative idea. Is a superhuman artificial general intelligence possible that is mistakenly equipped with the incentive to turn the universe into paperclips? I guess it is possible, but not unless that incentive were deliberately hard-coded, and with great care.