No, I don’t.
That link describes what you believe, not why those beliefs are true; my point was that you’re mistaken.
No, I’m not. I know my own values. I know the utopia dictated by my values. Please do not accuse me of being mistaken.
I think you and Vladimir are talking about different things. You probably follow your surface-level moral theory as much as any human follows theirs, and unlike most people you seem willing to bite the bullets it implies, but you don’t follow it the way an AI would follow its utility function. You still notice the bullets you bite, you do things for all sorts of other reasons when you don’t have the opportunity to think them through in terms of total happiness caused, and you probably eliminate all sorts of strategies that might raise total happiness, for other reasons, before they rise to conscious attention and you can evaluate them properly.
If you really want to get down to it, I am not a utility maximizer. Insofar as I try to maximize any sort of utility, I try to bring about happiness. I may feel bad thinking about things that produce a net increase in happiness, but I still try to bring them about.
If imagining a possible future makes me feel bad, this is a fact about me, and not a fact about the possible future. I wish to get rid of the bad feeling. My instinct is to do it by averting that future, but I know better. I just make sure it’s not a future in which I feel bad about it.
Why do you believe you do?
That’s a counterproductive attitude for a rationalist.
I know my own values because they’re what I try to maximize. All that’s apparent to me is my qualia, and, while I concede that other people have qualia, I see no importance in anything that isn’t someone’s qualia.
I mentioned that I know the utopia dictated by my values to show that I didn’t just convince myself that it’s all that I care about and ignore its implications. The utopia is tiling the universe with orgasmium.
If you have a particular reason to believe that I am mistaken, please say so. If you simply accuse me of being mistaken about my own values, that doesn’t help. You are not me, and you can’t just assume I am like you. You don’t know nearly as much about me as I do.
You gave me a reason why I might not know my own values. I showed that I had already taken this into account. You did not ask for clarification. You did not find a reason I might have failed to take it into account correctly. You did not give me another reason I might be incorrect. You simply claimed that I was wrong.
I’m not disputing your line of thought, but I still wonder about something I touched upon before: if neuroscience or the like were to dissolve qualia into smaller components, and it became apparent to you that there is no unitary thing such as a quale or mind frame, that momentary experience is reducible, like an anthill or a screen of pixels, would that prompt you to reassess your utopia?
I see no reason that the reducibility of something would deny its potential status as something to be valued. I could value whirlpools without denying that they’re made of water, or (for an example closer to reality) literature without denying that it’s made up of words, which are made up of letters.
Sorry for taking such a long time to answer.
Agreed. But if you read DanielLC’s argument, he seems to think that the reducibility of, for example, personal identity makes it unimportant in terms of value, since it can be reduced to “mind frames” over time. Basically, I wonder: if his understanding of qualia (if such a thing even really exists) turned out to be totally wrong, or if qualia could be reduced, would he then claim that mind frames are morally unimportant because they can be reduced to something else, or that the concept is misleading?