[LINK] David Deutsch on why we don’t have AGI yet “Creative Blocks”

http://aeon.co/magazine/being-human/david-deutsch-artificial-intelligence/

Folks here should be familiar with most of these arguments. Putting some interesting quotes below:

“Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What’s holding us up?”
Remember the significance attributed to Skynet’s becoming ‘self-aware’? [...] The fact is that present-day software developers could straightforwardly program a computer to have ‘self-awareness’ in the behavioural sense — for example, to pass the ‘mirror test’ of being able to use a mirror to infer facts about itself — if they wanted to. [...] AGIs will indeed be capable of self-awareness — but that is because they will be General [...]
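To see how straightforward the behavioural sense is, here's a toy sketch of my own (not anything from the essay, and the setup and names are hypothetical): a simulated robot that cannot sense a mark on itself directly, but infers it from a "mirror" of its external appearance and then acts on that inference.

```python
# A toy 'mirror test' in the behavioural sense: the agent cannot sense the
# mark directly, but can infer it from a mirror image of itself and then
# act on that inference. Everything here is a hypothetical illustration.

class Robot:
    def __init__(self):
        self.self_model = {"mark_on_forehead": False}  # what it believes about itself
        self.mark_on_forehead = True                   # ground truth it cannot sense directly

    def look_in_mirror(self):
        # The mirror reflects the robot's true external appearance.
        return {"mark_on_forehead": self.mark_on_forehead}

    def update_self_model(self):
        # Infer a fact about itself from the reflection.
        reflection = self.look_in_mirror()
        self.self_model["mark_on_forehead"] = reflection["mark_on_forehead"]

    def act(self):
        if self.self_model["mark_on_forehead"]:
            return "reaches up and touches the mark"
        return "does nothing"

robot = Robot()
robot.update_self_model()
print(robot.act())  # -> "reaches up and touches the mark"
```

Nothing in this loop involves general intelligence, which is exactly his point.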
Some hope to learn how we can rig their programming to make [AGIs] constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good. [...] The battle between good and evil ideas is as old as our species and will go on regardless of the hardware on which it is running.
He also says confusing things about induction being inadequate for creativity, which I'm guessing he couldn't support well in this short essay (perhaps he explains it better in his books), so I'm not quoting that here. His attack on Bayesianism as an explanation for intelligence is valid and interesting, but it could be wrong: given what we know about neural networks, something like Bayesian updating does happen in the brain, possibly even at the level of concepts.
The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI.
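To make the doctrine he's attacking concrete (this sketch is my illustration, not his, and not anyone's actual proposal): an agent whose "values" are nothing but weights over actions, multiplied up when the environment rewards them and down when it punishes them, so whichever value gets rewarded comes to dominate behaviour.

```python
import random

random.seed(0)  # reproducible toy run

# 'Values' as bare weights over actions; the reward rule and numbers
# are arbitrary illustrations of the behaviourist input-output model.
weights = {"share": 1.0, "hoard": 1.0}

def reward(action):
    return 1.0 if action == "share" else -1.0  # environment 'rewards' sharing

def choose(weights):
    # Sample an action in proportion to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # float-rounding fallback

for _ in range(1000):
    action = choose(weights)
    # Multiplicative update: reinforcement when rewarded, extinction when punished.
    weights[action] *= 1.0 + 0.05 * reward(action)
    weights[action] = max(weights[action], 1e-6)

total = sum(weights.values())
print({a: round(w / total, 3) for a, w in weights.items()})
# After training, 'share' dominates: behaviour shaped entirely by external reward.
```

His complaint, as I read it, is that this picture leaves no room for the agent to create new values or to criticise the reward signal itself.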
His final conclusions are hard to agree with: he somehow concludes that the principal bottleneck in AGI research is philosophical rather than technical.
In his last paragraph, he makes the following controversial statement:
For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees.
This would be false if, for example, the mother controls gene expression while the foetus develops and thereby helps shape the brain; in that case, part of the relevant information would be encoded outside the child's DNA. We should be able to answer this question definitively once we can grow human babies entirely in vitro. Another complication is the cultural environment: one way to probe its role would be to ask whether our Stone Age ancestors would be classified as AGIs under a reasonable definition.
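Incidentally, to put a rough number on the "relatively tiny number of differences" from the quote above: taking the commonly cited figures of about 3 billion base pairs and roughly 1-2% human-chimp sequence divergence (approximations, and this back-of-the-envelope calculation is mine, not Deutsch's), the differing portion carries on the order of 10 megabytes of information.

```python
# Back-of-the-envelope size of the human-chimp genomic difference.
# Figures are rough, commonly cited approximations, not precise values.

base_pairs = 3e9             # approximate length of the human genome
difference_fraction = 0.015  # roughly 1-2% single-nucleotide divergence
bits_per_base = 2            # 4 possible bases -> 2 bits each

differing_bases = base_pairs * difference_fraction
megabytes = differing_bases * bits_per_base / 8 / 1e6
print(f"~{differing_bases:.1e} differing bases, ~{megabytes:.0f} MB of information")
# -> ~4.5e7 bases, on the order of 10 MB: tiny by software standards
```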