Huh. This raises the question of whether it would be possible to simulate the AGI code in a test run without the usual risks. Maybe create some failsafe, invisible to the AGI, that destroys it if it is “let out of the box” — or (to incorporate Holden’s suggestion, since it just came to me) have a “tool mode” in which the AGI’s agent-properties (decision making, goal setting, etc.) are non-functional.
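To make the idea concrete, here is a purely illustrative toy sketch (all names hypothetical, and obviously nothing like a real containment scheme): an agent whose agent-properties are gated behind a flag the wrapper controls, plus a tripwire that shuts the run down if those properties are ever invoked.

```python
class ToyAgent:
    """Hypothetical agent with separable 'tool' and 'agent' modes."""

    def __init__(self, tool_mode=True):
        # The flag is set by the sandbox wrapper, not by the agent itself.
        self._tool_mode = tool_mode

    def answer(self, question):
        # Pure tool behavior: stateless question answering.
        return f"analysis of {question!r}"

    def set_goal(self, goal):
        # Agent-property: forbidden while in tool mode.
        if self._tool_mode:
            raise PermissionError("goal setting disabled in tool mode")
        self._goal = goal


def sandboxed_run(agent, actions):
    """Run a list of (method, argument) actions; trip the failsafe
    (halt the run) on any attempt to use a disabled agent-property."""
    log = []
    for method, arg in actions:
        try:
            log.append(getattr(agent, method)(arg))
        except PermissionError:
            log.append("FAILSAFE: run destroyed")
            break
    return log
```

The point of the toy is only that the gate and the failsafe live in the wrapper, outside anything the agent's own interface exposes; whether that separation could be made watertight against a real AGI is exactly the open question.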
Surely NASA code is thoroughly tested in simulation runs. It’s the equivalent of having a known-perfect method of boxing an AI.
But NASA code can’t check itself—there’s no attempt at having an AI go over it.
Yes, but even ordinary simulation testing produces software that’s much better on its first real run than software that has never been run at all.