You cannot trust the hardware not to skew this adjustment in all kinds of self-serving ways. The point is that the hardware will mess up any attempt to compute this at the object level; all you can do is compute at the meta-level (where the HW corruption is less pronounced), come up with firm rules, and then just stick to them at the object level.
The specific object-level algorithms do not matter—the corrupted HW starts with the answer it wants the computation to produce, and then backpropagates it to skew the computation towards the desired answer.
What makes you, and so many others in this community, so sure that our hardware is so great and incorruptible when it is making these rules for our future self to follow? To me, that’s completely backwards: won’t our future self always be more familiar with the demands of the future, by virtue of actually being there, than our past selves will be?
That’s an interesting point/question. My thinking is that there’s a tradeoff: if you make decisions in the moment, you benefit from having more up-to-date information, but you’re also more tempted to do the wrong thing. I suppose that OP and others believe the latter side of the tradeoff carries more weight.
I don’t think your future self has the disadvantage of being more tempted to do the wrong thing, and I don’t know where that assumption comes from.
I will concede that meanness and the suffering of others make children sad, whereas adults are more likely to rationalize them. That’s why I am hugely skeptical when people tout maturity as a virtue. For the sake of capturing the benefits of childishness, it may be better to set rules early.
Yet idealistic children, such as I was, would be tempted to redistribute wealth so that no one has to be homeless, even though for a child it may not feel good to steal someone’s money, whether they acquired it through luck or through work. (Here’s where I get political for the rest of this post; your mileage may vary on my examples.) I still don’t know the specifics of why not to redistribute wealth, but my heroes are economically literate, have managed to keep their idealism, and yet loudly disavow coercive redistribution; I assume they have learned that economic reality is not as morally simple as I would have expected it to be. In that case, it would seem that childish idealism can be misleading, while childish open-mindedness is helpful.
(When you try to capture the benefits of childishness by writing down your deontology, it is very important to remember the benefit of open-mindedness. But a deontology with strict open-mindedness added to it is a set of guidelines, not rules; can it really be called a deontology at that point?)
Another example of people making better moral judgments as they learn more: parents-to-be in 1995 would never have guessed the prominence that trans rights would gain in the 2010s, and would never have guessed that A) their own children would want to be trans, B) they would be very serious about it, and C) many respectable, middle-to-upper-class, educated people would actually think it deeply important to respect their identity. Many of those parents, observing other people becoming more open-minded and progressive in their thinking, would find themselves even more considerate of their child’s autonomy. They would learn that the world was not as morally simple as expected. I think this is an incredibly noble outcome, even though conformity is rarely a good motivation.
The availability of emotionally charged information is exactly what causes the issue. It’s easier to avoid temptations by setting rules for yourself ahead of time than by relying on your ability to resist them when they are right in front of you.
That claim seems pretty strong to me. Biases are tough in general, but it is widely accepted that we have at least some ability to mitigate them. It sounds like your position says the opposite with respect to moral decision making, and that seems overly pessimistic to me.
I am actually agreeing—but I am saying that the way we’d accomplish this is by relying on the meta-level as much as possible, e.g. by catching that the object-level conclusion does not square with our meta-level principles, or by noticing that it’s incongruent in some way.