I have an intuition that the dutch-book arguments still apply in very relevant ways. I mostly want to talk about how maximization appears to be convergent. Let’s see how this goes as a comment.
My main point: if you think an intelligent agent forms and pursues instrumental goals, then I think that agent will be doing a lot of maximization inside, and will prefer to not get dutch-booked relative to its instrumental goals.
----
First, an obvious take on the pizza non-transitivity thing.
If I’m that person desiring a slice of pizza, I’m perhaps desiring it because it will leave me full + taste good + not cost too much.
Is there something wrong with me paying money to switch back and forth between pizza slices? Well, if the reason I care about the pizza is that it's low-cost tasty food, then I guess I'm doing a bad job of getting what I care about.
If I enjoy the process of paying for a different slice of pizza, or am indifferent to it, then that’s a different story. And it doesn’t hurt much to pay 1 cent a couple of times anyway.
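To make the money-pump concrete, here's a minimal sketch (the slices, the cyclic preferences, and the 1-cent fee are made-up for illustration, not anything from the original argument): an agent whose preferences run in a circle keeps accepting "upgrades", pays at every step, and ends up holding a slice it already had.

```python
# Minimal money-pump sketch: an agent with cyclic (non-transitive) preferences
# over pizza slices pays a small fee for each "upgrade" and just goes in circles.
# The slices, preferences, and 1-cent fee are illustrative assumptions.

# Cyclic preferences: anchovy over mushroom, pepperoni over anchovy,
# mushroom over pepperoni -- so there is always a "better" trade on offer.
prefers = {
    ("anchovy", "mushroom"): True,
    ("pepperoni", "anchovy"): True,
    ("mushroom", "pepperoni"): True,
}

def offered_upgrade(current: str) -> str:
    """Return the slice the agent prefers to its current one."""
    for (better, worse) in prefers:
        if worse == current:
            return better
    return current

slice_held = "mushroom"
cents_paid = 0
for _ in range(6):  # six rounds of "upgrades"
    upgrade = offered_upgrade(slice_held)
    if prefers.get((upgrade, slice_held), False):
        slice_held = upgrade   # take the preferred slice...
        cents_paid += 1        # ...and pay 1 cent for the switch

print(slice_held, cents_paid)  # -> mushroom 6: back where it started, 6 cents poorer
```

If what I actually cared about was cheap tasty food, this loop is plainly doing a bad job of it; if I enjoy the trading itself, the loop isn't costing me anything I care about.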
----
Second, suppose I’m trying to get to the moon. How would I go about it?
I might start with estimates about how valuable different suboutcomes are, relative to my attempt to get to the moon. For instance, I might begin with the theory that I need to have a million dollars to get to the moon, and that I’ll need to acquire some rocket fuel too.
If I’m trying to get to the moon soon, I will be open to plans that make me money quickly and teach me how to get rocket fuel. I would also like better ideas about how I should get to the moon, and if you told me that calculus and finite-element analysis would be useful, I’d update my plans. (And if I were smarter, I might have figured that out on my own.)
If I think I need a much better grasp of calculus, I might then dedicate some time to learning it. If you offer me a better, faster plan for learning calculus, I’ll happily update and follow it. And if I’m smart enough to find a better plan on my own, by thinking, I’ll update and follow that one instead.
----
So, you might think that I can be an intelligent agent and basically not do anything in my mind that looks like “maximizing”. I disagree! In the parable above, it should be clear that my mind is continually selecting the options that look better to me. I think this happens ubiquitously in my mind, and also in agents that are generally intelligent.
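For concreteness, here's a minimal sketch of the kind of "selecting options that look better" I have in mind (the candidate plans and the value scores are placeholder assumptions, not a real planner): the inner loop just compares plans by estimated progress toward the instrumental goal and swaps in whichever currently looks best.

```python
# Sketch of the "maximization inside" loop: keep the current plan, but swap it
# out whenever a candidate scores higher on the agent's estimate of goal progress.
# The plans and the scoring function are illustrative placeholders.

def estimated_value(plan: str) -> float:
    """Stand-in for the agent's (revisable) estimate of how well a plan
    advances the instrumental goal (e.g. 'get to the moon soon')."""
    scores = {
        "save a million dollars first": 0.3,
        "learn calculus from a textbook": 0.5,
        "take a faster calculus course someone suggested": 0.7,
    }
    return scores.get(plan, 0.0)

current_plan = "save a million dollars first"
for candidate in [
    "learn calculus from a textbook",
    "take a faster calculus course someone suggested",
]:
    # The agent updates to whichever plan looks better -- local maximization.
    if estimated_value(candidate) > estimated_value(current_plan):
        current_plan = candidate

print(current_plan)  # -> "take a faster calculus course someone suggested"
```

The point isn't that the mind literally runs this loop, just that picking the better-looking option over and over is already maximization relative to the instrumental goal, and an agent doing that has reason not to get dutch-booked with respect to it.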