According to this Wikipedia page, the Computer History Museum appears to think Deep Blue, the chess-playing software, belongs in the “Artificial Intelligence and Robotics” gallery. It’s not smarter than a human: all it can do is play a game, and beating humans at a game does not qualify as being smarter than a human.
The dictionary doesn’t define it that way either; apparently all a system needs to do is something like perceive and recognize shapes.
And what about the term “tool AI”?
Why should I agree that AI always means “smarter than human”? I thought we had the term AGI to make that distinction.
Maybe your point here is not that AI always means “smarter than human,” but that a system somehow has to be smarter than humans before it qualifies as an “AI risk.” I would argue that perhaps we misunderstand the risks posed by AI: software can certainly be quite dangerous because of its intelligence even if it is not as intelligent as humans.