More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
People here have tried to define intelligence in stricter terms. See Playing Taboo with “Intelligence”. They define ‘intelligence’ as an agent’s ability to achieve goals in a wide range of environments.
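To make that definition a little more concrete, here is a toy sketch. The environments, scoring functions, and agents below are made up for illustration, and a more careful version would weight performance over a much broader class of environments instead of averaging a hand-picked list:

```python
# Toy sketch of "ability to achieve goals in a wide range of environments".
# Everything here is invented for illustration.

def score_in_maze(agent):
    # Placeholder: e.g. fraction of mazes solved within a step limit.
    return agent.get("maze_skill", 0.0)

def score_in_chess(agent):
    # Placeholder: e.g. win rate against a fixed reference opponent.
    return agent.get("chess_skill", 0.0)

def score_in_conversation(agent):
    # Placeholder: e.g. how often judges rate its replies as sensible.
    return agent.get("conversation_skill", 0.0)

ENVIRONMENTS = [score_in_maze, score_in_chess, score_in_conversation]

def intelligence(agent):
    """Average goal-achievement across the environments."""
    return sum(score(agent) for score in ENVIRONMENTS) / len(ENVIRONMENTS)

# A narrow optimizer does well in one environment but scores low overall;
# a generalist with mediocre skill everywhere scores higher.
chess_specialist = {"chess_skill": 0.99}
generalist = {"maze_skill": 0.7, "chess_skill": 0.6, "conversation_skill": 0.5}

print(intelligence(chess_specialist))  # ≈ 0.33
print(intelligence(generalist))        # ≈ 0.60
```

The point is only that on this kind of measure, a narrow specialist and a broad generalist come apart.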
It seems your post is more about free will than about intelligence as defined by Muehlhauser in the above article. Free will has been covered quite comprehensively on LessWrong, so I’m not particularly interested in debating it.

Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments, then it doesn’t really matter whether the AI’s actions are just an extension of what it was programmed to do. Even people are just extensions of what they were “programmed to do by evolution”. Unless you believe in magical free will, one’s actions have to come from some source, and in this regard people don’t differ from paper clip maximizers.
What would yours be?
I just think there are good optimizers and then there are really good optimizers. Between these there aren’t any sudden jumps, except when the FOOM happens and possibly the jump from unFriendly to Friendly. There isn’t any sudden point at which the AI becomes sentient, and how closely the AI resembles humans is just a question of how well it can optimize towards resembling them.
Say we bet, you and I, on whether AI will happen within 50 years. What would you want me to accept as evidence that it had done so?
There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant evidence that AIs have become sentient? I don’t know exactly how sentience works, but I think something akin to the Turing test, showing how well the AI can behave like a human, is sufficient to show that an AI is sentient, at least for one subset of sentient AIs. To reach a FOOM scenario the AI doesn’t have to be sentient, just really good at cross-domain optimization.
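To be clear about what I mean by cross-domain optimization, here is a toy sketch of my own; the objectives and the random-search procedure are invented for illustration and are nothing like a FOOM-capable optimizer in strength. The point is that the optimizer itself contains nothing specific to either problem:

```python
# Toy sketch of "cross-domain optimization": the same search procedure is
# pointed at two unrelated objectives.
import random

def optimize(objective, sample_candidate, steps=1000, seed=0):
    """Generic random search: propose candidates, keep the best one seen."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(steps):
        candidate = sample_candidate(rng)
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Domain 1: maximize a smooth numeric function (peak at x = 3).
numeric = optimize(lambda x: -(x - 3.0) ** 2,
                   lambda rng: rng.uniform(-10.0, 10.0))

# Domain 2: find a 16-bit string with as many 0/1 alternations as possible.
bits = optimize(lambda b: sum(b[i] != b[i + 1] for i in range(len(b) - 1)),
                lambda rng: [rng.randint(0, 1) for _ in range(16)])

print(numeric)  # an x close to 3 and a score close to 0
print(bits)     # a mostly-alternating bit string and its alternation count
```

Deep Blue is far stronger than this within chess, but its optimizing power doesn’t transfer to other problems; a cross-domain optimizer is one whose power does.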
I’m confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you consider AI either to already exist (optimizers) or to be impossible (sentient AI).
I don’t believe sentient AIs are impossible, and I’m sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief, though I would prefer the word “AI” be taboo’d in this case. This doesn’t mean my way of thinking is set in stone; I still want to update my beliefs and seek ways to think about this differently.
If it was unclear, by “strong AI” I meant an AI that is capable of self-improving to the point of FOOM.