Speedrunners have a tendency to totally break video games in half, sometimes in the strangest and most bizarre ways possible. I feel like some of the more convoluted video game speedrun / challenge run glitches out there are actually a good way to build intuition on what high optimisation pressure (like that imposed by a relatively weak AGI) might look like, even at regular human or slightly superhuman levels. (Slightly superhuman being a group of smart people achieving what no single human could)
Two that I recommend:
https://www.youtube.com/watch?v=kpk2tdsPh0A — Tool-assisted run where the inputs are programmed frame by frame by a human, and executed by a computer. Exploits idiosyncrasies in Super Mario 64 code that no human could ever use unassisted in order to reduce the number of times the A button needs to be pressed in a run. I wouldn’t be surprised if this guy knows more about SM64 code than the devs at this point.
https://www.youtube.com/watch?v=THtbjPQFVZI — A glitch using outside-the-game hardware considerations to improve consistency on yet another crazy in-game glitch. Also showcases just how large the attack space is.
These videos are also just incredibly entertaining in their own right, and not ridiculously long, so I hypothesise that they’re a great resource to send to more skeptical people who understand the idea of AGI but are systematically underestimating the difference between “bug-free” (the program will not have bugs during normal operation) and secure (the program will not have bugs when deliberately pushed towards narrow states designed to create bugs).
For a more serious overview, you could probably find obscure hardware glitches and such to teach the same lesson.
I’m not sure I agree that it’s a useful intuition pump for the ways an AGI can surprisingly optimize things. They’re amusing, but fundamentally based on out-of-game knowledge about the structure of the game. Unless you’re positing a simulation hypothesis, AND that AGI somehow escapes the simulation, it’s not really analogous.
They’re amusing, but fundamentally based on out-of-game knowledge about the structure of the game.
Evolutionary and DRL methods are famous for finding exploits and glitches model-free, from within the game. Chess endgame databases are another example.
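To make that claim concrete, here is a minimal toy sketch (the “game”, its wraparound bug, and all names are invented for illustration, not taken from any real speedrun): a blind evolutionary search over move sequences, with no knowledge of the game’s internals, that discovers a boundary-wrapping glitch the intended 100-step route never touches.

```python
import random

GOAL = 100  # intended play: walk right 100 times to reach the goal

def play(actions):
    """Return how many steps a move sequence takes to reach GOAL (None if it never does).
    The bug: position wraps modulo GOAL + 1 instead of clamping at 0, so a
    single step left from the start 'teleports' straight to the goal."""
    pos = 0
    for i, move in enumerate(actions):
        # % is the bug; max(0, pos - 1) was the "intended" left-edge behaviour
        pos = (pos + (1 if move == "R" else -1)) % (GOAL + 1)
        if pos == GOAL:
            return i + 1
    return None

def fitness(actions):
    # Fewer steps is better; never finishing is worst of all
    steps = play(actions)
    return -steps if steps is not None else -(len(actions) + 1)

def evolve(generations=200, pop_size=30, length=120, seed=0):
    """Crude evolutionary search: truncation selection plus point mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice("LR") for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [
            [rng.choice("LR") if rng.random() < 0.02 else m for m in parent]
            for parent in parents
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(play(best))  # far fewer than the intended 100 steps
```

The search knows nothing about the modulo bug; it only sees step counts, yet selection pressure alone funnels it into the exploit, which is the shape of the point about model-free methods.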