I doubt very much that AI will be developed, or that MIRI will ever really start coding it. Plus, I do not want it to exist.
Could you elaborate on why you think that way? It’s always interesting to hear why people think a strong AI or Friendly AI is not possible/probable, especially if they have good reasons to think that way.
I think that AI is inevitable, but I think that unfriendly AI is more likely than friendly AI. This is just from my experience developing software, even in my small team environment where there are fewer human egos and less tribalism/signaling to deal with. Something you hadn’t thought of always happens, and a bug gets perpetuated throughout the lifecycle of your software. With AI, who knows what implications those bugs will have.
Rationality itself has to become much more mainstream before we can tackle AI responsibly.
I’m a programmer, and I doubt that AI is possible. Or, rather, I doubt that artificial intelligence will ever look that way to its creators. More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
I mean, there’s a device down at the freeway that moves an arm up if you pay the toll. So, as a system, it’s got the ability to sense the environment (limited to knowing whether the coin verification system is satisfied with the payment) and to affect that environment (raise and lower the arm). Most folks would agree that that is not AI.
So, then, how can we get beyond that? It is a nonhuman reaction to the environment. Whatever I wrote that we called “AI” would presumably do what I programmed it to (and naught else) in response to its sensory input. A futuristic war drone’s basket is its radar and its lever is its missiles, but there’s nothing new going on there. A chat bot’s basket is the incoming feed and its lever is its outgoing text, but it doesn’t ‘choose’ what it sends out in any sense more meaningful than the toll bot’s decision matrix.
So maybe it could rewrite its own code. But if it does so, it’ll only do so in the way that I’ve programmed it to. The paper clip maximizer will never decide to rewrite itself as a gold coin maximizer. The final result is just a derived product of my original code and the sensory experiences it’s received. Is that any more ‘intelligent’ than the toll taker?
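To make that concrete, here’s a toy sketch (all names invented for illustration) of the toll gate as a fixed sense-decide-act mapping, where even ‘self-rewriting’ is just another rule the programmer wrote:

```python
# Toy sketch: the toll gate's entire "decision matrix" is a fixed
# mapping from sensor reading to action, chosen by the programmer.

def toll_gate_policy(coin_verified: bool) -> str:
    return "raise_arm" if coin_verified else "keep_arm_down"

# A "self-rewriting" program is no different in kind: the trigger and
# the rewrite are both fixed in advance, so the final behavior is still
# a product of the original code plus the sensory history it received.
def maybe_rewrite(policy, jam_count: int):
    if jam_count > 3:                    # rewrite trigger, fixed by me
        return lambda coin: "raise_arm"  # replacement policy, also fixed by me
    return policy

policy = toll_gate_policy
print(policy(True), policy(False))      # raise_arm keep_arm_down
policy = maybe_rewrite(policy, jam_count=5)
print(policy(False))                    # raise_arm: "rewritten", but only as programmed
```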
I like to bet folks that AI won’t happen within timeframe X. The problem then becomes defining what counts as AI happening. I wouldn’t want them to point to the toll robot, and presumably they’d be equally miffed if we were slaves of the MechaPope and I was pointing out that its Twenty Commandments could be predicted given a knowledge of its source code.
Thinking on it, my knee-jerk criterion is that I will admit that AI exists if the United States knowingly gives it the right to vote (obviously there’s a window where an AI is sentient but can’t yet vote, but given the speed of the FOOM that window will probably pass quickly), or if the earth declares war (or the equivalent) on it. It’s a pretty hard criterion to come up with.
What would yours be? Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so (keeping in mind that we are imagining you as motivated not by a desire to win the bet, but by a desire that the bet represent the truth)?
More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
People here have tried to define intelligence in stricter terms. See Playing Taboo with “Intelligence”. They define ‘intelligence’ as an agent’s ability to achieve goals in a wide range of environments.
Your post seems to be more about free will than about intelligence as defined by Muehlhauser in that article. Free will has been covered quite comprehensively on LessWrong, so I’m not particularly interested in debating it.
Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments, then it doesn’t really matter whether the AI’s actions are just an extension of what it was programmed to do. Even people are just extensions of what they were “programmed to do by evolution”. Unless you believe in magical free will, one’s actions have to come from some source, and in this regard people don’t differ from paper clip maximizers.
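As a toy illustration of that definition (my own sketch, not Muehlhauser’s formalism; the agents and environments here are invented), intelligence becomes a score: average goal achievement over a range of environments:

```python
# Toy illustration of "intelligence = ability to achieve goals in a
# wide range of environments". Agents and environments are invented.

def score(agent, environments) -> float:
    """Average goal achievement (0..1) across environments."""
    return sum(agent(env) for env in environments) / len(environments)

# Each "environment" is a digit to be recognized; achieving the goal
# means handling it correctly.
environments = [2, 7, 1, 9, 4]

narrow_agent = lambda env: 1.0 if env == 7 else 0.0      # a "toll gate": one fixed case
general_agent = lambda env: 1.0 if 0 <= env <= 9 else 0.0

print(score(narrow_agent, environments))    # 0.2
print(score(general_agent, environments))   # 1.0: more intelligent by this measure
```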
What would yours be?
I just think there are good optimizers and then there are really good optimizers. Between these there aren’t any sudden jumps, except when the FOOM happens, and possibly the jump from unFriendly to Friendly. There isn’t any sudden point at which an AI becomes sentient; how closely an AI resembles humans is just a question of how well it can optimize toward that target.
Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so.
There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant to ask when AIs will become sentient? I don’t know exactly how sentience works, but I think something akin to the Turing test, which shows how well an AI can behave like a human, is sufficient to show that an AI is sentient, at least for one subset of sentient AIs. To reach a FOOM scenario the AI doesn’t have to be sentient, just really good at cross-domain optimization.
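For settling a bet, a blinded protocol along these lines might work (a minimal sketch with invented details, not a standard harness): a judge reads paired transcripts and guesses which came from the machine; guesses near chance mean the machine passes:

```python
import random

# Minimal sketch of a blinded Turing-style test; all details invented.
# Each trial the judge sees a human and a machine transcript in random
# order and must guess which one is the machine.

def run_test(judge, human_transcripts, machine_transcripts) -> float:
    correct = 0
    trials = list(zip(human_transcripts, machine_transcripts))
    for human_t, machine_t in trials:
        pair = [("human", human_t), ("machine", machine_t)]
        random.shuffle(pair)
        guess = judge(pair[0][1], pair[1][1])   # judge returns 0 or 1
        if pair[guess][0] == "machine":
            correct += 1
    return correct / len(trials)   # near 0.5 means the machine passed

# Placeholder judge that guesses at random.
naive_judge = lambda a, b: random.randint(0, 1)
print(run_test(naive_judge, ["hi there", "how are you?"], ["hello", "fine, thanks"]))
```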
I’m confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you consider AI either to already exist (optimizers) or to be impossible (sentient AI).
I don’t believe sentient AIs are impossible, and I’m sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief, though I would prefer the word “AI” be taboo’d in this case. This doesn’t mean my way of thinking is set in stone; I still want to update my beliefs and seek ways to think about this differently.
If it was unclear, by “strong AI” I meant an AI that is capable of self-improving to the point of FOOM.
I would pick either some kind of programming ability, or the ability to learn a language like English (which I would bet implies the former if we’re talking about what the design can do with some tweaks).
Could you elaborate on why you think that way? It’s always interesting to hear why people think a strong AI or Friendly AI is not possible/probable, especially if they have good reasons to think that way.
I’ll respond to your question for fairness’ sake, but my reasons are not impressive.
Most of it is probably wishful thinking, driven by my desire not to have a powerful AI around. I am scared of the idea.
The fact that people have felt AI is near for some time, and we still do not have it.
Maybe the things that are essential for learning are the same things that make human intelligence limited; forgetting things, for instance.
A vague feeling that biologically based intelligence is so complex that computers are no match for it.