At which point you die, for lack of intelligence.
Actually a fairly good metaphor for x-risk, surprisingly.
Of course, it’s a lot easier to make a Tetris-optimizer than a Friendly AI...
I thought Tetris had been proven to always eventually produce an unclearable block sequence.
Only if the randomizer can produce a sufficiently long run of S and Z pieces. In many implementations it cannot: the common "bag" randomizer deals all seven tetrominoes in a shuffled batch, which caps consecutive S/Z pieces and so blocks the forced-loss sequence.
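A minimal sketch of that bag-style randomizer (Python; the names here are illustrative, not from any particular Tetris codebase): each bag contains exactly one S and one Z, so a run of S/Z pieces can never exceed four (S, Z at the end of one bag followed by S, Z at the start of the next), whereas the proof of an unavoidable loss needs arbitrarily long alternating S/Z runs.

```python
import random

PIECES = "IJLOSTZ"  # the seven tetrominoes

def bag_randomizer(rng=random):
    """Yield pieces by repeatedly dealing a shuffled bag of all seven.

    Because every bag holds exactly one S and one Z, consecutive
    S/Z pieces are bounded at 4 across a bag boundary.
    """
    while True:
        bag = list(PIECES)
        rng.shuffle(bag)
        yield from bag

# Empirically check the longest S/Z run over a long sample.
gen = bag_randomizer()
longest = run = 0
for _ in range(100_000):
    run = run + 1 if next(gen) in "SZ" else 0
    longest = max(longest, run)
print(longest)  # never exceeds 4
```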