I’m not sure what you’re trying to say here, but if you consider this a relative weakness of Solomonoff Induction, then I think you’re looking at it the wrong way. We will know it as well as we possibly could given the evidence available. Humans are subject to the constraints that Solomonoff Induction is subject to, and more.
Hrrm. I don’t think it’s that simple. Looking at that page, I imagine nonprogrammers wonder:
What are comments?
What are strings?
What is this “#=>” stuff?
“primitives”?
… This seems to be written for people who are already familiar with some other language. It would be better to show a couple of short examples, like the sketch below, so that they recognize patterns and become curious.
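To make that concrete, here is the kind of minimal snippet I have in mind (a hypothetical sketch in Python; the page in question may well use another language, but the same stumbling blocks appear):

    # This line is a comment: the interpreter ignores everything after the "#".
    greeting = "Hello, world!"     # the quoted text is a string
    print(greeting.upper())        #=> HELLO, WORLD!
    # "#=>" is just a convention for annotating the result of running a line.
    # Values like 42, "hi", and True are "primitives": the simplest built-in kinds of data.

A nonprogrammer can at least pattern-match on something like this, which is the point of leading with examples.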
What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you’re just making something up to rationalize your preconceptions.
Hm, I’ve been trying to get rid of one particular habit (drinking while sitting at my computer) for a long time. Recently I’ve considered the possibility of giving myself a reward every time I go to the kitchen to get a beer and come back with something else instead. The problem was that I couldn’t think of a suitable reward (there’s not much that I like). I hadn’t thought of just making something up, like pieces of paper. Thanks for the inspiration!
Do you have specific ideas useful for resolving this question?
Fear of death doesn’t mean death is bad in the same way that fear of black people doesn’t mean black people are bad. (Please forgive me the loaded example.)
Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of “me”. We don’t know what we mean by “me”, or if we do then we don’t know what’s valuable about the existence of one “me” as opposed to another, and anyway evolution meant something different by “me” (genes rather than organisms).
It’s usually best to avoid using the word “rationality” in such contexts.
I actually meant rationality here, specifically instrumental rationality, i.e., “is it getting in the way of us achieving our goals?”.
I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment in the hope of being clearer.
Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty’s birth did not replace? (The grief for Alice’s death and the joy for Betty’s birth have instrumental value, as did Alice’s acquired knowledge.)
If you find that I’ve set this up to fit my conclusions, then I don’t think we disagree.
Because it feels good. My ongoing survival, in itself, leaves me entirely cold.
It’s different. The fact that I feel bad when confronted with my own mortality doesn’t mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.
Emotions clearly support non-fungibility, in particular concerning your own life, and it’s a strong argument.
I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don’t think the emotions in this case form that strong an argument. There’s a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.
Thanks for helping me understand the negative response. My claim here is not about the value of life in general, but about the value of some particular “person” continuing to exist. I think the terminal value of this ceasing to exist is zero. Since posting my top-level comment I have provided some arguments in favor of my case, and also hopefully clarified my position.
I accept this objection; I cannot describe in physical terms what “pleasure” refers to.
Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.
No. I deliberately re-used a construct similar to wireheading thought experiments to make it easier to show that many people disagree with this.
Yes, but they disagree because what they want is not the same as what they would like.
The “weak points” I spoke of are that you consider some “weaknesses” of your position, namely others’ mental states, but those are not the weakest points of your position; nor are you using the strongest “enemy” arguments to judge your own position; and the other pieces of data also indicate that there’s mind-killing going on.
The value of others’ mental states is not a weakness of my position; I just considered them irrelevant for the purposes of the experience machine thought experiment. The fact that hooking up to the machine would take away resources that could be used to help others weighs against hooking up. I am not necessarily in favor of wireheading.
I am not aware of weaknesses in my position, nor of the way in which I am mind-killing. Can you tell me?
[...] it’s almost an applause light.
Yes! So why is nobody applauding? Because they disagree with some part of it. However, the part they disagree with is not what the referent of “pleasure” is, or what kind of elaborate outside-world engineering is needed to bring it about (which has instrumental value on my view), but the part where I say that the only terminal value is in mental states that you cannot help but value.
The burden of proof isn’t actually on my side. A priori, nothing has value. I’ve argued that the quality of mental states has (terminal) value. Why should we also go to any length to placate desires?
I remember starting it, and putting it away because yes, I disagreed with so many things. Especially the present subject; I couldn’t find any arguments for the insistence on placating wants rather than improving experience. I’ll read it in full next week.
An unsupported strong claim. Dozens of implications and necessary conditions in evolutionary psychology if the claim is assumed true. No justification. No arguments. Only one or two weak points looked up by the claim’s proponent.
This comment has justification. I don’t see how this would affect evolutionary psychology. I’m not sure if I’m parsing your last sentence here correctly; I didn’t “look up” anything, and I don’t know what the weak points are.
Assuming that the scenario you paint is plausible and the optimal way to get there, then yeah, that’s where we should be headed. One of the explicit truths of your scenario is that “they’re all feeling the best they could possibly feel”. But your scenario is a bad intuition pump. You deliberately constructed this scenario so as to manipulate me into judging what the inhabitants experience as less than that, appealing to some superstitious notion of true/pure/honest/all-natural pleasure.
You may be onto something when you say I might be confusing labels and concepts, but I am not saying that the label “pleasure” refers to something simple. I am only saying that the quality of mental states is the only thing we should care about (note the word should, I’m not saying that is currently the way things are).
A priori, nothing matters. But sentient beings cannot help but make value judgements regarding some of their mental states. This is why the quality of mental states matters.
Wanting something out there in the world to be some way, regardless of whether anyone will ever actually experience it, is different. A want is a proposition about reality whose apparent falsehood makes you feel bad. Why should we care about arbitrary propositions being true or false?
“Desire” denotes your utility function (things you want). “Pleasure” denotes subjectively nice-feeling experiences. These are not necessarily the same thing.
Indeed they are not necessarily the same thing, which is why my utility function should not value that which I “want” but that which I “like”! The top-level post all but concludes this. The conclusion the author draws just does not follow from what came before. The correct conclusion is that we may still be able to “just” program an AI to maximize pleasure. What we “want” may be complex, but what we “like” may be simple. In fact, that would be better than programming an AI to make the world into what we “want” but not necessarily “like”.
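To make the distinction concrete, here is a toy sketch (hypothetical names, not anything from the post): the same chooser, pointed at two different objective functions, picks different actions whenever what we “want” and what we “like” come apart.

    def wanted_utility(outcome):
        # Stand-in for a complex utility function over world-states ("want").
        return outcome["goals_satisfied"]

    def experienced_pleasure(outcome):
        # Stand-in for how good the resulting experience actually feels ("like").
        return outcome["felt_quality"]

    def best_action(outcomes, objective):
        # Pick the action whose predicted outcome scores highest on the given objective.
        return max(outcomes, key=lambda action: objective(outcomes[action]))

    outcomes = {
        "build_monument":  {"goals_satisfied": 9, "felt_quality": 2},
        "quiet_afternoon": {"goals_satisfied": 3, "felt_quality": 8},
    }
    print(best_action(outcomes, wanted_utility))        # build_monument
    print(best_action(outcomes, experienced_pleasure))  # quiet_afternoon

Nothing in the sketch argues for either objective; it only shows that the two functions can diverge.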
There’s nothing superstitious about caring about stuff other than your own mental state.
If you mean that others’ mental states matter just as much, then I agree (but this distracts from the point of the experience machine hypothetical). Anything else couldn’t possibly matter.
Sorry for being snarky. I am sincere. I really do think that death is not such a big deal. It sucks, but it sucks only because of the negative sensations it causes in those left behind. All that said, I don’t think you gave me anything but an appeal to emotion.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I’m not saying that to be rational is to get rid of all your desires. I’m saying that it’s a tradeoff, and I am suggesting the possibility that in this case the cost of placating the desire to not die is greater than the cost of getting rid of it.
What worries me is this. It could well be that I am wrong and that the cost of immortality is actually lower than the cost of getting rid of the desire for it. But I strongly suspect that this was never the reason for people here to pursue immortality. The real reason has to do with preservation of something that I doubt has value.
Pleasurable experiences. My life facilitates them, but it doesn’t have to be “my” life. Anyone’s life will do.
Do you think that preserving my brain after the fact makes falling from a really high place any less unpleasant? Or are you appealing to my emotions (fear of death)?
Whoops, thread necromancy.