How would you even pose the question of AI risk to someone in the eighteenth century?
I’m trying to imagine what comes out the other end of Newton’s chronophone, but it sounds very much like “You should think really hard about how to prevent the creation of man-made gods.”
I don’t think it’s plausible that people could have stumbled on the problem statement 300 years ago, but within that hypothetical, it wouldn’t have been too early.
It seems to me that 100 years ago (or more) you would have had to consider pretty much all philosophy and mathematics relevant to AI risk reduction, as well as to the reduction of other potential risks, and any attempt to select the work particularly conducive to AI risk reduction could not have succeeded. Effort planning is the key to success.
On a somewhat unrelated note: reading the publications and this thread, there is a point of definition that I do not understand: what exactly does S.I. mean when it speaks of a “utility function” in the context of an AI? Is it a computable mathematical function over a model, such that the ‘intelligence’ component computes the action that maximizes that function over the world state resulting from the action?
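For concreteness, here is a minimal sketch of the reading I have in mind; the world model, dynamics, and utility function are all hypothetical stand-ins, not anything S.I. has specified:

```python
# Sketch: a "utility function" as a computable map from world states to reals,
# with the "intelligence" component reduced to brute-force argmax over actions.
# Every name and dynamic here is a toy assumption for illustration only.

def transition(world_state, action):
    """Toy world model: predict the state that results from an action."""
    return world_state + action  # stand-in dynamics

def utility(world_state):
    """Computable utility over world states (here: prefer states near 10)."""
    return -abs(world_state - 10)

def choose_action(world_state, actions):
    """Pick the action whose predicted resulting state maximizes utility."""
    return max(actions, key=lambda a: utility(transition(world_state, a)))

print(choose_action(world_state=7, actions=[-1, 0, 1, 2, 3]))  # -> 3
```

Is that roughly the intended picture, or does “utility function” mean something looser?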
Surely “Effort planning is a key to success”?
Also, not just wanting to flash academic applause lights but genuinely curious: which mathematical successes have been due to effort planning? Even in my own mundane commercial programming experience, the company that won biggest was more “This is what we’d like, go away and do it and get back to us when it’s done...” than “We have this Gantt chart...”.
There are very few people who would have understood in the 18th century, but Leibniz would have understood in the 17th. He underestimated the difficulty in creating an AI, like everyone did before the 1970s, but he was explicitly trying to do it.
Your definition of “explicit” must be different from mine. Working on prototype arithmetic units and toying with the universal characteristic is AI research? He subscribed wholeheartedly to the ideographic myth; the most he would have been capable of is a machine that passes around LISP tokens.
In any case, based on the Monadology, I don’t believe Leibniz would consider the creation of a godlike entity to be theologically possible.
How about: “Eventually your machines will be so powerful they can grant wishes. But remember that they are not benevolent. What will you wish for when you can make a wish-machine?”
Oh, wait… The tale of the Tower of Babel was told via chronophone by people from the future right before succumbing to uFAI!