Huh. This brings up the question of whether it would be possible to simulate the AGI code in a test run without the usual risks. Maybe create some failsafe, invisible to the AGI, that destroys it if it is “let out of the box”, or (to incorporate Holden’s suggestion, since it just came to me) have a “tool mode” in which the AGI’s agent-properties (decision making, goal setting, etc.) are non-functional.
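Roughly what I have in mind for the “tool mode” part, as a toy sketch only (not a claim about how real AGI code would be structured; every name here, like AGISandbox and set_goal, is made up for illustration): the agentic capabilities are gated behind a flag that the system itself has no way to flip.

```python
class AgencyDisabledError(Exception):
    """Raised when an agentic capability is invoked in tool mode."""


class AGISandbox:
    # Hypothetical wrapper: pure question-answering stays available,
    # while the agent-properties are non-functional in tool mode.
    def __init__(self, tool_mode: bool = True):
        # Set once at construction; nothing inside the sandbox
        # exposes a way to change it afterwards.
        self._tool_mode = tool_mode

    def answer_query(self, query: str) -> str:
        # Pure input -> output behavior is allowed in tool mode.
        return f"(stub) analysis of: {query}"

    def set_goal(self, goal: str) -> None:
        # Goal setting is blocked while in tool mode.
        if self._tool_mode:
            raise AgencyDisabledError("goal setting disabled in tool mode")
        # ... autonomous goal pursuit would go here ...

    def act(self, action: str) -> None:
        # Side-effecting, decision-making behavior is likewise blocked.
        if self._tool_mode:
            raise AgencyDisabledError("autonomous action disabled in tool mode")
        # ... world-affecting behavior would go here ...


if __name__ == "__main__":
    agi = AGISandbox(tool_mode=True)
    print(agi.answer_query("protein folding"))  # fine: tool-style use
    try:
        agi.set_goal("maximize paperclips")  # blocked: agentic
    except AgencyDisabledError as e:
        print("blocked:", e)
```

Of course, the hard part the sketch glosses over is the same one as with the invisible failsafe: guaranteeing that a sufficiently capable system can’t discover or route around the gate.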