Partially Embedded Agents
More flexibility to self-modify may be one of the key properties that distinguish the behavior of artificial agents from that of contemporary humans (cyborgs perhaps excepted). To my knowledge, the alignment implications of self-modification have not been experimentally explored.
Self-modification requires a degree of embedding: an agent cannot meaningfully self-modify if it has no way of viewing and interacting with its own internals.
Two hurdles then emerge. First, a world that contains the entire inner workings of the agent, in addition to everything else the agent interacts with, carries a huge computational cost. Second, the agent cannot hold all the data about itself within its own head, so it needs clever abstractions.
Neither problem is impossible to solve. The computational cost may be addressed by more powerful computers. The second problem must also be solvable, since humans manage to reason about themselves using abstractions, but the techniques for generating such abstractions are not yet developed. It should be obvious that both more powerful computers and powerful abstraction-generation techniques would be extremely dual-use.
Thankfully, there may be a method for performing experiments on meaningfully self-modifying agents that sidesteps both of these problems: partially embed your agents. That is, instead of the game agent being a single entity in the game world, it would consist of a small number of “body parts”. Examples might be as simple as an “arm” the agent uses to interact with the world or an “eye” that gives the agent more information about parts of the environment; a sketch of such an environment follows below. A particularly ambitious idea would be to study the interactions of “value shards”.
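To make this concrete, here is a minimal sketch of what a partially embedded gridworld agent might look like. All names here (`BodyPart`, `PartiallyEmbeddedAgent`, and so on) are hypothetical illustrations, not an existing library or a definitive design; the point is only that the “eye” and “arm” live in the world as objects the agent can act on, while the policy itself runs outside the simulation.

```python
# A minimal sketch of a partially embedded agent, assuming a simple
# custom gridworld. Only the agent's body parts are embedded in the
# world; its policy is not, so the environment never has to simulate
# the agent's full inner workings.
from dataclasses import dataclass, field


@dataclass
class BodyPart:
    """A component of the agent that exists as an object in the world."""
    name: str
    position: tuple[int, int]
    functional: bool = True  # the agent can disable (self-modify) parts


@dataclass
class PartiallyEmbeddedAgent:
    """An agent whose 'arm' and 'eye' are entities in the environment."""
    core: tuple[int, int]
    parts: dict[str, BodyPart] = field(default_factory=dict)

    def observe(self, world: list[list[str]]) -> list[list[str]]:
        """Return a view of the world limited by the eye's position."""
        eye = self.parts.get("eye")
        if eye is None or not eye.functional:
            # No functional eye: the agent is blind to the world.
            return [["?" for _ in row] for row in world]
        r, c = eye.position
        # Only the 3x3 neighborhood around the eye is visible.
        return [
            [world[i][j] if abs(i - r) <= 1 and abs(j - c) <= 1 else "?"
             for j in range(len(world[0]))]
            for i in range(len(world))
        ]

    def self_modify(self, part_name: str, enable: bool) -> None:
        """Toggle one of the agent's own embedded components."""
        if part_name in self.parts:
            self.parts[part_name].functional = enable
```

A toy episode might then look like this, with the agent able to degrade its own perception as an action in the world:

```python
agent = PartiallyEmbeddedAgent(
    core=(0, 0),
    parts={"eye": BodyPart("eye", (1, 1)), "arm": BodyPart("arm", (0, 1))},
)
world = [["."] * 4 for _ in range(4)]
view = agent.observe(world)       # partial observation through the eye
agent.self_modify("eye", False)   # the agent disables its own eye
```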
The idea is that this would be a cheap way to perform experiments that can surface self-modification alignment phenomena.