I notice I am very confused. Are you some form of solipsist? Do you endorse selfishness? Or believe that “altruism” just means “I care about my own mental state of believing my decisions make others happy”?
Does that apply only to people you can’t interact with in principle, or to everyone you don’t interact with? E.g., if I tell you, “Someone on the other side of the planet is being tortured; if you press this button, a rescue team will be sent,” do you pay $1 for access to the button, or do you stare at me blankly, say “I’ll never meet this ‘victim’ or ‘rescue team’; they don’t meaningfully exist,” and go donate to a charity that sends donors pictures of the kids they’ve helped?
I’m a solipsist in the sense that I don’t think it makes sense to believe in things I can’t see, touch, hear, or deduce from things I can see, touch, or hear. That seems like realism to me. It involves solipsist elements only insofar as existence isn’t an absolute state but comes in degrees of probability, since probabilities, like evidence, are fundamentally subjective.
I endorse a broad form of selfishness: it makes me happy to make others happy, to see others happy, or to know that my actions made others happy. I don’t really care to define altruism right now; I don’t think that will be relevant?
I have a broad definition of interaction in mind here: if someone connects to me through a chain of variables, that counts as interaction. The more direct the chain, the more interaction there is and the more valuable the person becomes; this is why I care more for family members than for people in other universes whom I will never meet. So, under my understanding of interaction, I do interact with the victim and the rescue team. And $1 is cheap. So I help them.
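A minimal sketch of the weighting this implies, assuming (purely for illustration) that value decays exponentially with the length of the causal chain; the decay rate, chain lengths, and labels below are hypothetical, not anything stated above:

```python
# Toy model: moral weight as a decreasing function of causal distance.
# The exponential form and all numbers are illustrative assumptions.

def moral_weight(chain_length: int, decay: float = 0.5) -> float:
    """Weight given to a person reached via `chain_length` causal links.

    A direct interaction (length 1) gets full weight; each extra link
    in the chain multiplies the weight by `decay`.
    """
    return decay ** (chain_length - 1)

# Hypothetical chain lengths for the cases discussed above.
people = {
    "family member": 1,                  # direct interaction
    "torture victim via button": 3,      # me -> button -> rescue team -> victim
    "person in another universe": None,  # no causal chain at all
}

for who, length in people.items():
    w = moral_weight(length) if length is not None else 0.0
    print(f"{who}: weight {w:.3f}")
```

On this toy model the victim behind the button gets a nonzero weight because a short causal chain exists, while someone causally disconnected gets none, which matches the answer above.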