Contra Infinite Ethics, Utility Functions
Suppose you’re a human, and you think we’re onto something with modern physics. A human brain fits within a sphere of radius no more than a meter and weighs less than 100 kg. By the Bekenstein bound, an average human brain can hold at most about 2.6*10^42 bits before forming a black hole; for the generous upper bounds I just gave, that number is larger, but note that it is still finite.
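As a rough check on that figure (a sketch, assuming the textbook values of a 1.5 kg brain approximated as a sphere of radius 6.7 cm), the bound with $E = mc^2$ works out to

$$
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2} \;=\; \frac{2\pi c\, m R}{\hbar \ln 2} \;\approx\; 2.58\times10^{43}\,\tfrac{\text{bits}}{\text{kg}\cdot\text{m}} \times 1.5\,\text{kg} \times 0.067\,\text{m} \;\approx\; 2.6\times10^{42}\ \text{bits}.
$$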
Now let’s say you want to make decisions. What does it mean to make a decision? That’s tricky, but we can say it has something to do with performing actions based on sense-perceptions. Assume you are fully aware of your entire brain and its information contents (your perception space) and that you can arbitrarily modify those contents (your action space). Both spaces are finite, so the decision space is finite too: a function that maps every element of perception space to a best choice in action space. Such a function can be given by a finite-length program.
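To make that concrete, here is a minimal sketch in Python with toy three-bit perceptions and three actions (all names here are illustrative, not from anything above):

    from itertools import product

    PERCEPTIONS = list(product([0, 1], repeat=3))  # every 3-bit sense state
    ACTIONS = ["rest", "eat", "flee"]              # a finite action space

    def decide(perception):
        """A total function from perceptions to actions; over finite
        spaces, any such function is a finite-length program."""
        return ACTIONS[sum(perception) % len(ACTIONS)]

    # The entire decision space can even be enumerated as a lookup table.
    policy = {p: decide(p) for p in PERCEPTIONS}
    print(policy[(1, 0, 1)])  # -> "flee"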
All a useful decision theory needs to do is tell you one of the best actions for a given perception. The best action may not be unique, but there is no need to assign utility functions just to determine what it is. All you need is a partial order expressing your preferences over the collection of actions, so let’s talk only about preferences (which could be given by utility functions, but don’t have to be).
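A small sketch of this, continuing in the same toy vocabulary (the specific preference pairs are invented for illustration): picking a best action needs only a strict partial order, and the answer may legitimately be a set of incomparable maximal options rather than a single winner.

    # Preferences as bare "a beats b" pairs; no utility numbers anywhere.
    PREFERRED = {("eat", "rest"), ("flee", "rest")}  # eat and flee incomparable

    def maximal(actions, preferred):
        """Return the actions that nothing else strictly beats."""
        return [a for a in actions
                if not any((b, a) in preferred for b in actions)]

    print(maximal(["rest", "eat", "flee"], PREFERRED))  # -> ['eat', 'flee']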
“Now, now!” you say. “What about potentially larger actors, such as actors that I could modify myself into?” Well, first of all, you still have a finite decision space: you would be choosing among which actors to become. Just because you can imagine an infinite path of actors you could transform into, step after step, perhaps achieving something ever larger, doesn’t mean there is a useful corresponding decision theory for you now.
We can also use Bremermann’s Limit to show that even if you somehow managed to survive for the entire expected lifetime of the universe (as our very useful thermodynamic models predict it), you would still have only a finite amount of computation to perform before taking an action. This means you can at best make a finite-accuracy prediction of possible futures. At some fidelity, your ability to predict infinite processes accurately fails, and you weigh the possible futures you do predict, from your finite time and finite sense-perceptions, into a choice in a finite action space.
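To put a number on it (a sketch; the 1.5 kg mass and the ~10^100-year heat-death timescale are illustrative assumptions, not outputs of the argument), Bremermann’s Limit of $c^2/h \approx 1.36\times10^{50}$ bits per second per kilogram caps your total computation at

$$
N \;\le\; \frac{c^2}{h}\, m\, T \;\approx\; 1.36\times10^{50}\,\tfrac{\text{bits}}{\text{s}\cdot\text{kg}} \times 1.5\,\text{kg} \times 3.2\times10^{107}\,\text{s} \;\approx\; 6\times10^{157}\ \text{bit-operations},
$$

an absurdly large number, but a finite one.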