Defining Boundaries on Outcomes
It seems to me that one of the problems facing the whole space is a lack of any bounds on misalignment scenarios.
It could be helpful to have clear "impossible" and "possible" baskets, to build a clearer set of objectives to work through.
An example might be: can pure gray goo physically escape our atmosphere, or would it need to use a launch system?
That gives a solvable bound on how the system would have to embody itself if it values long-term survival, which any runaway computronium scenario would seem to imply. If reaching space requires a launch system, then the runaway is restricted until after it uses one, which gives us new instrumental goals to look for.
If we can't say anything about behavioral bounds based on physical bounds, then the problem becomes something like asking people to work on building ghost traps or leprechaun traps.
Discovering and controlling any, or ideally many, of these "great AGI filters" could be a way of adding layers of instrumental control, if that's a term.