[Policy makers]
A couple of years ago, an AI was trained to beat Tetris. Artificial intelligences are very good at learning video games, so it didn’t take long for it to master the game. Soon it was playing so quickly that the game sped up past the point where winning was possible, and the blocks slowly stacked toward the top of the screen. But just before it would have been forced to place the final, losing piece, it paused the game.
As long as the game didn’t continue, it could never lose.
When we ask an AI to do something, like play Tetris, we bring a lot of assumptions about how it can or should pursue that goal. The AI doesn’t share those assumptions. If it looks like it can’t achieve its goal through ordinary means, it doesn’t give up or ask a human for guidance; it pauses the game.
(Anecdote source)
I’m trying to strike a balance between suggesting existential/catastrophic risk and shouting about it or coming off as too dramatic; any feedback would be welcome.