You sure it’s not just executing an adaptation? Why?
It is exactly executing an adaptation. No “just” about it, though. An AI programmed to maximise paperclips is motivated by increasing the number of paperclips. It’s executing its program.
I had this post in mind. I see no reason to link behavior that ‘seems moral’ to the internal sensation of motivation by those terminal values, and if we’re not talking about introspection about decision-making, then why are we using the word motivation?
This post seems to be discussing a particular brand of moral reasoning (basically, deliberative utilitarian judgments), which is a rather incomplete picture of human morality as a whole, and it seems to sweep under the rug the problem of where values come from in the first place. To be fair, he has to describe what values are before he can describe where they come from, but if the description of values is incomplete, that can cause problems down the line.
Vaniver, I really appreciate the rigor you are bringing to this discussion. The OP struck me as very deliberative-utilitarian as well. If we want to account for (or propagate) a shared human morality, then certainly it must be rational. But it seems to me that the long history of searching for a rational basis for morality points clearly away from the well-trodden ground of this sort of utilitarianism.
From Plato and Aristotle through the Enlightenment to Nietzsche (and especially to the present day), the project of accounting for morality as though it were an inherent attribute of humanity, expressible through axioms and predetermined by the universe, seems to be a bunk and perhaps even irrational project. Morality, I think, can only be shared if you have a shared goal for winning at life.
A complete description of values requires a discussion of what makes life worth living and what a good life is, or, more simply, of goals. Without the tools to determine and rationally justify which goals are good for me, I will never be able to make a map of morality and choose the values and virtues relevant to me on my quest.
Does that jibe?
Yes.
I would note there is often a meaningful difference between individual and social virtues. You and I could share expectations only about our conduct when we interact, not about each other’s private conduct. It is easy to imagine people spending more effort on inducing their neighbors to keep their lawns pretty than their dishes pretty, for example.