The problem with “Act on what you feel in your heart” is that it’s too generalizable. It proves too much: someone else might feel something different in their heart, and some of those feelings might be horrible.
From the outside it looks like there’s all this undefined behavior, demons coming out of the nose, because you aren’t looking at the exact details of the feelings that are choosing the beliefs. Though a C compiler given an undefined construct may cause your program to crash, it will never literally cause demons to come out of your nose, and you could figure this out by looking at the implementation of the compiler. It’s still deterministic.
As an atheistic meta-ethical anti-realist, my utility function is basically whatever I want it to be. It’s entirely internal. From the outside, someone whose system follows something external and clearly specified could shout “Nasal demons!”, but demons will never come out of my nose, and my internal, ever so frighteningly non-negotiable desires are never going to include planned famines. It has reliable internal structure.
The mistake is looking at a particular kind of specification that defines all the behavior, and then looking at a system not covered by that specification, but which is controlled by another specification you haven’t bothered to understand, and saying “Who can possibly say what that system will do?”
Some processors (even x86) have instructions (such as bit rotate) which give significant performance boosts in areas like cryptography, yet aren’t directly accessible from C or C++; to use one you have to resort to hacks like writing the machine code out as bytes, casting its address to a function pointer, and calling it. That’s undefined behavior with respect to the C/C++ standard. But it’s perfectly predictable if you know what platform you’re on.
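For concreteness, here is a minimal sketch of that hack (my example, not part of the original comment), assuming x86-64, the System V calling convention, and a POSIX system that permits writable-and-executable mappings. The C standard gives this no meaning at all, but on that platform the result is entirely predictable:

```c
/* Emit the machine code for a 32-bit rotate-right and call it
   through a function pointer.  Undefined by the C standard;
   well-defined in practice on the assumed platform. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    unsigned char code[] = {
        0x89, 0xf1,   /* mov ecx, esi  (rotate count into cl)   */
        0xd3, 0xcf,   /* ror edi, cl   (rotate first arg right) */
        0x89, 0xf8,   /* mov eax, edi  (return value)           */
        0xc3          /* ret                                    */
    };

    /* The bytes must live in executable memory; an ordinary array
       is not executable on modern systems. */
    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    /* Casting an object pointer to a function pointer is itself
       undefined by ISO C, yet perfectly predictable here. */
    unsigned (*rotr)(unsigned, unsigned) =
        (unsigned (*)(unsigned, unsigned))buf;

    printf("%08x\n", rotr(0x80000001u, 1));  /* prints c0000000 */
    return 0;
}
```

(Modern compilers can often recognize the portable rotate idiom or offer intrinsics, but that only sharpens the point: the behavior outside the standard is governed by another specification, the platform’s.)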
The utility functions of people who aren’t meta-ethical anti-realists are not really negotiable either. You can’t give them a valid argument that will convince them not to do something evil if they happen to be psychopaths. They just have internal desires and things they care about, and they care a lot more than I do about having a morality which sounds logical when argued for.
And if you actually examine what’s going on with the feelings of the people whose feeling-driven epistemology makes them believe things, instead of just shouting “Nasal demons! Unspecified behavior! Infinitely beyond the reach of understanding!”, you will see that the non-psychopathic ones have a mostly-deterministic internal structure to their feelings which prevents them from believing that they should murder Sharon Tate. And psychopaths won’t be made ethical by reasoning with them anyway. I don’t believe the 9/11 hijackers were psychopaths, but that’s the holy-book problem I mentioned, and a rare case.
In most cases of undefined C constructs, there isn’t another carefully-tuned structure doing the job of the C standard in making the behavior something you want, so you crash. And faith-epistemology does behave like this (crashing, rather than running hacky cryptographic code that uses the rotate instructions) when it generates beliefs whose consequences aren’t obvious to the user. So it would have been a fair criticism to say “You believe something because you believe it in your heart, and you’ve justified not signing your children up for cryonics because you believe in an afterlife,” because (A) they actually do that, and (B) it’s a result of their having an epistemology which doesn’t track the truth.
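For contrast with the rotate hack above, a contrived sketch (again mine, not the original commenter’s) of the typical case, where no compensating structure stands behind the undefined construct:

```c
#include <stdio.h>

int main(void) {
    int *p = NULL;
    /* Dereferencing a null pointer is undefined behavior.  No other
       specification has been tuned to make it useful, so on typical
       hosted platforms the MMU traps it and the process dies with a
       segmentation fault (or the optimizer, assuming the UB cannot
       happen, does something stranger).  No nasal demons, but also
       nothing you want. */
    printf("%d\n", *p);
    return 0;
}
```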
Disclaimer: I’m not signed up for cryonics, though if I had kids, they would be.
“my utility function is basically whatever I want it to be.”
I very much doubt that. At least with present technology you cannot self-modify to prefer dead babies over live ones; and there’s presumably no technological advance that can make you want to.
“my utility function is basically whatever I want it to be.”
If utility functions are those constructed by the VNM theorem, your utility function is your wants; it is not something you can have wants about. There is nothing in the machinery of the theorem that allows for a utility function to talk about itself, to have wants about wants. Utility functions and the lotteries that they evaluate belong to different worlds.
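To spell out the type distinction (this is the standard statement of the theorem, not something from the thread): the VNM utility function is defined on outcomes, while preferences range over lotteries, so the function never appears in its own domain.

```latex
% VNM representation, sketched.  X is a fixed set of outcomes and
% \Delta(X) the set of lotteries (probability distributions) over X.
% If a preference relation \succeq on \Delta(X) satisfies completeness,
% transitivity, continuity, and independence, then there exists
% u : X \to \mathbb{R} such that
\[
  L \succeq M
  \iff
  \sum_{x \in X} L(x)\,u(x) \;\ge\; \sum_{x \in X} M(x)\,u(x).
\]
% Note the types: u takes arguments in X, and lotteries are over X.
% Neither u itself nor propositions about u are elements of X, so the
% construction gives u nothing to say about itself.
```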
Are there theorems about the existence and construction of self-inspecting utility functions?