Calling this “AI risk” seems like a slight abuse of the term. The term “AI risk” as I understand it refers to risks coming from smarter-than-human AI. The risk here isn’t that the drones are too smart, it’s that they’ve been given too much power. Even a dumb AI can be dangerous if it’s hooked up to nuclear warheads.
Calling this “AI risk” seems like a slight abuse of the term. The term “AI risk” as I understand it refers to risks coming from smarter-than-human AI.
I was about to voice my agreement and suggest that if people want to refer to this kind of thing (killer robots, etc.) as “AI risk” in an environment where AI risk more typically refers to strong AGI, then it’s worth at least including a qualifier such as “(weak) AI risk” to prevent confusion. However, looking at the original post, it seems the author already talks about “near-term tool AI” as well as explicitly explaining the difference between that and the kind of thing MIRI warns about.
I originally had “AI risk” in there, but removed it. It’s true that I think we should seriously consider that stupid AIs can pose a major threat, and that the term “AI risk” shouldn’t leave that out, but if people might ignore my message for that reason, it makes more sense to change the wording, so I did.
The issue, it seems to me, is AI that has too much power over people without being friendly. Whether it gets this power by being handed a gun or by outsmarting us doesn’t seem as relevant.
The risk here isn’t that the drones are too smart, it’s that they’ve been given too much power.
No. Actually, that is not the risk I’m discussing here. I would not argue that it isn’t dangerous to give drones the ability to kill; it is. But my point is that lethal autonomy could give people too much power: that is to say, it could redistribute power unevenly, undoing all the checks and balances and threatening democracy.
According to this Wikipedia page, the Computer History Museum appears to think Deep Blue, the chess-playing software, belongs in the “Artificial Intelligence and Robotics” gallery. It’s not smarter than a human; all it can do is play a game, and beating humans at a game does not qualify as being smarter than a human.
The dictionary doesn’t define it that way; apparently all it needs to do is something like perceive and recognize shapes.
And what about the term “tool AI”?
Why should I agree that AI always means “smarter than human”? I thought we had the term AGI to make that distinction.
Maybe your point here is not that AI always means “smarter than human” but that “AI risk” for some reason necessarily means the AI has to be smarter than humans for it to qualify as an AI risk. I would argue that perhaps we misunderstand the risks posed by AI: software can certainly be quite dangerous because of its intelligence even if it is not as intelligent as humans.