Some mechanism in your (finite) brain is still making that decision.
Sure. But I can express a preference about infinitely many cases in a finite statement. In particular, my preferences include something like the following: given the existence of k sentient, sapient entities, and given i < j ≤ k, I prefer i entities getting tortured to j entities getting tortured, assuming everything else is otherwise identical.
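For what it’s worth, the fact that one finite statement covers infinitely many cases can be made explicit. Writing U(m) for the utility of m entities being tortured (notation mine, purely for illustration), the preference above is just:

$$\forall k\ \forall i, j \le k:\quad i < j \implies U(i) > U(j)$$

One finite formula, unboundedly many cases covered.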
Alas, your brain can’t handle those numbers—beyond a certain point. They can’t even be input into your brain in your lifetime.
If we are talking about augmenting your brain with a machine, so it is able to deal with these huge numbers, those aren’t really the preferences of a human being any more—and you still don’t get to “unbounded” in a finite time—due to the finite visible universe.
I’m not sure how utility (and expected utility) are physically represented in the human brain. Dopamine levels and endorphin levels are the most obvious candidates, but there are probably also various proxies. However, I figure a 16-bit number would probably cover it pretty well. It may seem counter-intuitive—but you don’t really need more than that to make decisions of the type you describe—even for numbers of people with (say) 256-bit representations.
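As a rough sketch of that claim (a toy example; the logarithmic squashing and the scale factor are arbitrary choices of the sketch, not anything specified above):

```python
import math

def utility_16bit(deaths):
    # Toy 16-bit utility register: squash any outcome into a signed 16-bit range.
    raw = -int(100 * math.log2(deaths + 1))  # arbitrary logarithmic squashing
    return max(-32768, min(32767, raw))      # clamp to 16 signed bits

# Even with a 256-bit death count the comparison still comes out the right way:
n = 2**255
print(utility_16bit(n) > utility_16bit(2 * n))  # True: n deaths rated less bad than 2n
# With this particular scaling the clamp only bites somewhere past ~2^327 deaths,
# at which point distinct outcomes start collapsing onto the same 16-bit value.
```

The point of the sketch is that the decision only needs the sign of a comparison, not the full magnitude of the numbers being compared.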
Think about it this way:
Omega comes up to you and offers you a choice: it will kill either n or 2n people, depending on which you ask for. When you ask what n is, Omega explains that it is an integer, but that it is unfortunately far too large to specify within your lifetime. Would you not still pick n in this dilemma? I know I would.
This isn’t quite enough to prove an unbounded utility function, but if we slightly modify the choice so that it is n people dying with certainty versus 2n people dying with 99.999% probability (and nobody dying with the remaining 0.001% probability), then it is enough.
Your brain could probably make that kind of decision with only a few bits of utility. The function would go: lots-of-death bad, not-so-much-death not so bad. IMO, in no way is that evidence that the brain represents unbounded utilities.
The function would go: lots-of-death bad, not-so-much-death not so bad.
Try using numbers. If you try to bound the function, there will be a sufficiently large n at which you will prefer a 99.999% probability of 2n people dying to a 100% probability of n people dying.
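To make the “sufficiently large n” concrete, here is a small sketch with one particular bounded disutility (the functional form and the 10^6 saturation constant are arbitrary choices for the example):

```python
from fractions import Fraction

def bounded_utility(deaths):
    # An arbitrary bounded disutility: tends to -1 as deaths grows without bound.
    return Fraction(-deaths, deaths + 10**6)

def eu_certain(n):
    # Option A: n people die with certainty.
    return bounded_utility(n)

def eu_gamble(n):
    # Option B: 2n people die with probability 99.999%, nobody dies otherwise.
    p = Fraction(99999, 100000)
    return p * bounded_utility(2 * n) + (1 - p) * bounded_utility(0)

# For small n the certain option has the higher expected utility; once the
# bounded utility has flattened out, the comparison flips to favour the gamble.
for n in (10**3, 10**6, 10**9, 10**12):
    print(n, eu_gamble(n) > eu_certain(n))
```

Any non-constant, monotone, bounded disutility ends up doing the same thing sooner or later; only the crossover point moves.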
To recap, I objected: “your brain can’t handle those numbers”. To avoid the huge numbers, they were replaced with “n”—and a bizarre story about an all-knowing being. If you go back to the numbers, we are back to the first objection again—there are some numbers that are too big for unmodified humans to handle. No, I can’t tell you which numbers—but they are out there.
The grandparent is a reductio of your assertion (and thus, if you agree that “not-so-much-death is not so bad”, a disproof). You seem to be questioning the validity of algebra rather than retracting the claim. Do you have a counterargument?
I’d suggest that you may be able to argue that the brain does not explicitly implement a utility function as such, which makes sense because utility functions are monstrously complex. Instead, the brain likely implements a bunch of heuristics and other methods of approximating / instantiating a set of desires that could hypothetically be modeled by a utility function (that is unbounded).
The grandparent is a reductio of your assertion (and thus, if you agree that “not-so-much-death is not so bad”, a disproof). You seem to be questioning the validity of algebra rather than retracting the claim. Do you have a counterargument?
“your brain can’t handle those numbers” wasn’t “questioning the validity of algebra”. It was questioning whether the human brain can represent—or even receive—the large numbers in question.
What you said was:
To avoid the huge numbers, they were replaced with “n”
as though this were somehow an indictment of the argument.
Anyway, the important thing is: several people have already explained how a finite system can express an unbounded utility function without having to explicitly express numbers of unbounded size.
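One way to picture those explanations (a minimal sketch of my own; the linear utility below is just the simplest unbounded choice):

```python
from fractions import Fraction

def utility(deaths):
    # A finite program defining an unbounded utility: -n for every n, no cap anywhere.
    return -Fraction(deaths)

def preferred(lotteries):
    # Return the lottery (a list of (probability, deaths) pairs) with the highest
    # expected utility. The death counts are only manipulated symbolically, so
    # nothing here requires them to fit in any fixed number of bits.
    return max(lotteries, key=lambda lot: sum(p * utility(d) for p, d in lot))

n = 10**100  # far bigger than anything a brain could enumerate case by case
certain = [(Fraction(1), n)]
gamble = [(Fraction(99999, 100000), 2 * n), (Fraction(1, 100000), 0)]
print(preferred([certain, gamble]) is certain)  # True: still prefers the fewer deaths
```

The program text is a few hundred bytes; the function it defines has no bound.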
To avoid the huge numbers, they were replaced with “n”
as though this were somehow an indictment of the argument.
Dragging in Omega to represent the huge quantities for the human seems to have been a desperate move.
Anyway, the important thing is: several people have already explained how a finite system can express an unbounded utility function without having to explicitly express numbers of unbounded size.
Well, that’s OK—but the issue is what shape the human utility function is. You can’t just extrapolate out to infinity from a small number of samples near to the origin!
I think there are limits to human happiness and pain—and whatever else you care to invoke as part of the human utility function—so there’s actually a finite representation with bounded utility—and I think that it is the best approximation to what the brain is actually doing.
Some people can. It’s called proof by induction.
This is not how proof by induction works.
If you think the proof is flawed, find a counterexample.
A real, independently-verifiable counterexample, not just a nebulous spot on the number line where a counterexample might exist.
The proof by induction is correct. “Extrapolating from a small number of samples”, however, is not proof by induction.
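For reference, a sketch of the induction presumably at issue (my reconstruction, and it assumes preference is transitive; ≻ means “is preferred to”):

$$\text{Premise: } \forall m,\ (m \text{ tortured}) \succ (m+1 \text{ tortured})$$
$$\text{Claim: } \forall i < j,\ (i \text{ tortured}) \succ (j \text{ tortured})$$
$$\text{Proof: by induction on } j - i.\ \text{The base case } j - i = 1 \text{ is the premise; for the step, } (i) \succ (j-1) \succ (j),\ \text{using the induction hypothesis, the premise, and transitivity.}$$

One finite premise yields the ordering for every pair i < j.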
A fallible process. Pain might seem proportional to the number of lashes at first—but keep going for a while, and you will see that the relationship is non-linear.
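One standard functional form that behaves the way described (purely illustrative, not a claim about the actual physiology):

$$\text{pain}(n) \approx P_{\max}\left(1 - e^{-n/\tau}\right)$$

which is close to linear for the first few lashes (n much smaller than τ) and flattens out towards the ceiling P_max as n grows.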