Idea: OpenAI Gym environments where the AI is a part of the environment
AIXI is a mathematical construct: the perfect agent, maximizing its utility function in a discrete world. Unfortunately it is incomputable, so no algorithm implements it and it's impossible to create in our world. It has another problem: the agent in the AIXI model exists outside of the world, so it is impossible for the agent to drop an anvil on its own circuits and make itself more stupid.
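For reference, AIXI's action rule, roughly in Hutter's notation (a sketch; the horizon m, the universal machine U, and the conditioning details are glossed over here):

```latex
% AIXI's choice of action at step k (sketch, after Hutter):
% an expectimax over future actions and percepts, with each candidate
% environment program q weighted by its algorithmic probability 2^{-l(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The maximizations range over the agent's own actions, and the sums range over computable environments; nothing in the formula can represent the agent's own hardware.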
Modern reinforcement learning algorithms, the closest thing to general AI that we have, operate in a similar fashion: they aren't part of the environment either. If a bot is learning how to balance on one leg, play Pong, or play Super Mario, it can't modify itself or break its own brain.
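To make the separation concrete, here is the standard OpenAI Gym interaction loop (a minimal sketch using the classic pre-0.26 Gym API, with a random policy standing in for the agent). Nothing the environment does inside step() can touch the agent's code or weights:

```python
import gym

# Standard Gym loop: the agent and the environment are separate objects.
# The environment only ever sees actions; it has no handle on the agent.
env = gym.make("Pong-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # stand-in for agent.act(obs)
    obs, reward, done, info = env.step(action)
env.close()
```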
My idea is to create environments where the bot can modify itself and break itself, so that people who want to research the creation of strong AI can test solutions to the anvil problem. Here are examples of such environments:
A Linux operating system in a virtual machine; the bot is a program running inside it. The goal is to play Pong/Mario/whatever and win by listening and responding on a TCP port. Or to gain superuser access, by invoking sudo or by finding a vulnerability. Or to receive a programming problem as plaintext on a TCP port and respond with a Python program that solves it.
Same as the previous item, except there is an "antivirus" running in the system that kills random processes every second.
A gridworld, i.e. an environment consisting of n-by-m cells in which stuff happens. Some objects in the environment negate random bytes in the bot's code or dynamic memory (see the sketch after this list).
Same as the first item, but the bot gains reward based on how much RAM is free.
Atari Pac-Man, where food pellets are marked with different colors, and some colors slightly modify the bot's memory.
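As a toy illustration of the gridworld variant above, here is a hedged sketch of what such an environment could look like. Everything here is my own assumption rather than an existing API: CorruptingGridworld, the agent_params buffer, and the byte-negation rule are invented for illustration. The key move is that the environment holds a mutable reference to the agent's own parameters:

```python
import numpy as np

class CorruptingGridworld:
    """Toy n-by-m gridworld (hypothetical, Gym-like interface).
    Hazard cells negate random bytes of the agent's parameter buffer,
    so the agent's "brain" is literally part of the environment."""

    def __init__(self, n, m, agent_params: bytearray, n_hazards=3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.shape = (n, m)
        self.params = agent_params  # shared, mutable view of the agent's weights
        self.hazards = {tuple(self.rng.integers(0, (n, m)))
                        for _ in range(n_hazards)}
        self.pos = (0, 0)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        # Actions 0..3: up, down, left, right (clipped at the walls).
        dn, dm = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        n, m = self.shape
        self.pos = (min(max(self.pos[0] + dn, 0), n - 1),
                    min(max(self.pos[1] + dm, 0), m - 1))
        if self.pos in self.hazards:
            # The "anvil": negate one random byte of the agent's own parameters.
            i = self.rng.integers(len(self.params))
            self.params[i] = (~self.params[i]) & 0xFF
        done = self.pos == (n - 1, m - 1)  # goal: reach the far corner
        return self.pos, (1.0 if done else 0.0), done, {}


# Usage: the agent's weights live in a buffer the environment can corrupt.
params = bytearray(64)  # stand-in for a quantized policy table
env = CorruptingGridworld(4, 4, params)
obs = env.reset()
```

An agent that actually reads its policy out of agent_params would risk lobotomizing itself just by exploring, which is exactly the failure mode the anvil problem is about; the same hook could implement the Pac-Man color variant, and the free-RAM reward could be read from the VM instead.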
Feel free to post your thoughts and critique.