I am thinking that a bounded reasoner trying to decide whether killing and replacing a system is the right move might need a way of factoring in that they can't understand the system completely. Ontological inertia means that a preserved system retains value you do not understand, whereas recreating it imports only the kinds of value you already know to value. So a policy of assuming there is unknowable value in keeping a system intact balances against making known improvements, and the question becomes how big an improvement has to be before it outweighs the case for keeping the fences standing. An agent that recreates a system for a negligible improvement is in effect assuming near-infallible knowledge.
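To make the balance concrete, here is a minimal sketch of the kind of decision rule I have in mind. Everything in it (the `understanding` score, the `known_gain`, the prior on hidden value) is a hypothetical placeholder chosen for illustration, not anything derived from a real model.

```python
# A toy decision rule for a bounded reasoner weighing "replace" vs. "preserve".
# All quantities are hypothetical placeholders for the sake of illustration.

def should_replace(known_gain: float, understanding: float,
                   hidden_value_prior: float = 1.0) -> bool:
    """Decide whether to tear down and rebuild a system.

    known_gain         -- expected improvement from the redesign, in the agent's
                          own units of value (covers only value the agent can see).
    understanding      -- the agent's estimate of how completely it understands
                          the system, in [0, 1]; 1.0 means (claimed) full knowledge.
    hidden_value_prior -- prior weight placed on value the agent cannot see; the
                          less the agent understands, the more of this unknown
                          value it should expect to destroy by rebuilding.
    """
    # Expected loss of unseen value: scales with how much of the system lies
    # outside the agent's understanding.
    expected_hidden_loss = hidden_value_prior * (1.0 - understanding)

    # Replace only when the visible gain clearly exceeds the presumed invisible loss.
    return known_gain > expected_hidden_loss


# A negligible improvement only justifies replacement when understanding is near 1.0,
# i.e. when the agent is implicitly claiming near-infallible knowledge of the system.
print(should_replace(known_gain=0.05, understanding=0.6))   # False: keep the fence standing
print(should_replace(known_gain=0.05, understanding=0.99))  # True only under a claim of
                                                            # near-total knowledge
```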