I was hoping to make the discussion more concrete; we might be arguing about different types of systems.
Talking mathematically, what is the domain of the utility function for the system you are suggesting? And does the function change over time? If so, what governs the change?
> We might be arguing about different types of systems
Well, yes, I think that type of system you are talking about is a particularly ineffective type of maximizer, and the problems it has are not general to maximizers.
> Talking mathematically, what is the domain of the utility function for the system you are suggesting? And does the function change over time? If so, what governs the change?
The utility function should be over possible states of the block universe, and it should change only when discoveries about how the universe works reveal that it is based on fundamental misconceptions.
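To make the shape of that answer concrete, here is a minimal sketch. The `WorldHistory` type, `Utility` alias, and toy `count_paperclips` function are all hypothetical names introduced for illustration; nothing here is part of any proposed system.

```python
# Illustrative sketch only: all names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class WorldHistory:
    """One complete possible history of the universe (a 'block'),
    with time already included inside the representation."""
    events: Tuple[str, ...]

# The domain is whole histories, so the function itself does not need to vary
# with time: any temporal structure lives inside its argument.
Utility = Callable[[WorldHistory], float]

def count_paperclips(history: WorldHistory) -> float:
    """Toy utility: count paperclip-events anywhere in the block."""
    return float(sum(1 for event in history.events if event == "paperclip"))

# Per the comment above, such a function would only ever be replaced if the
# ontology behind WorldHistory turned out to rest on a fundamental
# misconception, not as a routine part of the agent's operation.
```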
You have a block-world (as in eternalism?) representation of the world that includes the AI system itself (and the block-world representation inside that system, and so forth?). My mind boggles at this a bit. How does it know what it will do before it makes the decision to do it? Formal proofs?
I suspect I need to see a formal(ish) spec of the system, so I can talk intelligently about how it might or might not fall into the pitfalls I see.
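For what it is worth, the regress gestured at above can be sketched in a few lines. The `WorldModel` and `AgentModel` classes are purely illustrative and assume nothing about the actual proposal.

```python
# Toy illustration of the regress: a world model that contains the agent,
# which contains its own world model, and so on. A finite agent has to cut
# this off somewhere, which is what the question above is probing.
from typing import Optional

class AgentModel:
    def __init__(self, depth: int):
        # The agent's self-model includes a (shallower) model of the world.
        self.world_model: Optional["WorldModel"] = (
            WorldModel(depth - 1) if depth > 0 else None
        )

class WorldModel:
    def __init__(self, depth: int):
        # The world model includes a model of the agent embedded in that world.
        self.agent_model: Optional[AgentModel] = (
            AgentModel(depth) if depth > 0 else None
        )

# Any concrete instance bottoms out at some finite nesting depth, e.g.:
nested = WorldModel(depth=3)
```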