The other day I heard this anecdote: several years ago, someone’s friend was dismissive of AI risk concerns, thinking that AGI was very far in the future. When pressed about what it would take to change their mind, they said their fire alarm would be AI solving Montezuma’s Revenge. Well, now it’s solved. What do they say? Nothing; if they noticed, they didn’t say. If pressed on it, they would probably say they were wrong to call that their fire alarm in the first place.
This story fits with the worldview expressed in “There’s No Fire Alarm for AGI.” I expect this sort of thing to keep happening well past the point of no return.
Also related: “Is That Your True Rejection?”
There is a pattern where people say “X is the true test of intelligence,” and then, after a computer does X, switch to “X was just a mechanical problem, but Y is the true test of intelligence.” (Past values of X include chess, Go, poetry...) There was a meme about this that I can’t find now.