Altruism is a preference. On my view, that preference is just incoherent, because it refers to entities that are meaningless. But even without that, there’s no great argument for why anyone should be altruistic, or for any moral claims.
I don’t think it’s possible in principle to configure a mind to pursue incoherent goals. If the goal were accepted as coherent, then it would be possible.
I grant that altruism is (seems) incoherent if the existence of other minds is incoherent. But if ‘Strong Verificationism’ is wrong, and Eliezer is right, then it seems obviously possible to create a mind that cares about other minds, no?
there’s no great argument for why anyone should be altruistic, or for any moral claims.
there are great arguments for why it’s possible to design an altruistic mind. a mind will generally be more likely to achieve altruistic outcomes if ze has / keeps altruistic values than if ze doesn’t, and vice versa. do you disagree with that?
I’m not sure. I think even if the strong claim here is wrong and realism is coherent, the existence of other minds is still fundamentally unknowable, and we can’t get any evidence at all in its favor. That might be enough to doom altruism.
It’s hard for me to reason well about a concept I believe to be incoherent, though.
AFAIU, under strong(er?) verificationism, it’s also incoherent to say that your past and future selves exist. so all goals are doomed, not just altruistic ones.
alternatively, maybe if you merge all the minds, then you can verify that other minds exist and take care of them. plus, maybe different parts of your brain communicating aren’t qualitatively different from different brains communicating with each other (although they probably are).
I haven’t written specifically about goals, but since claims about future experiences are coherent, preferences over the distribution of such experiences are coherent too, and one can act on one’s beliefs about how one’s actions affect that distribution. This doesn’t require the past to exist.
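To make that last step concrete, here is a minimal decision-theoretic sketch (my formalization, not something stated in the thread; the symbols A, E, P, and U are illustrative assumptions): the agent needs only a space of possible future experiences, credences about how its actions shift the distribution over them, and a preference over that distribution, with no reference to the past or to other minds.

% A: available actions; E: possible future experiences (both illustrative)
% P(e | a): the agent's credence that action a leads to experience e
% U : E -> R: the agent's preference (utility) over experiences
\[
  a^{*} \;=\; \arg\max_{a \in A} \; \mathbb{E}_{e \sim P(\cdot \mid a)}\bigl[U(e)\bigr]
        \;=\; \arg\max_{a \in A} \sum_{e \in E} P(e \mid a)\, U(e)
\]
% Nothing here quantifies over past selves or other minds; whether U may
% coherently assign weight to other minds is exactly what is in dispute above.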