What we desire or want is irrelevant, and so is what we act for. The only thing that is relevant is what we like.
Saying a word with emphasis doesn’t clarify its meaning or motivate the relevance of what it’s intended to refer to. There are many senses in which doing something may be motivated: there is wanting (a System 1 urge to do something), planning (a System 2 disposition to do something), liking (a positive System 1 response to an event), and approving (a System 2 evaluation of an event). It’s not even clear what each of these means, and these distinctions don’t automatically help with deciding what to actually do. To make matters even more complicated, there is also evolution, with its own tendencies that don’t quite match those of the people it designed.
See “Approving reinforces low-effort behaviors”, “The Blue-Minimizing Robot”, and “Urges vs. Goals: The analogy to anticipation and belief”.
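To make the four senses concrete, here is a minimal sketch in Python (my own illustration, not taken from the comment above or from the linked posts) that treats them as independent signals. The point of the sketch is that the signals can point in different directions, and nothing in the taxonomy itself says which one should win:

```python
from dataclasses import dataclass

# Illustrative only: model the four motivational senses as independent signals.
@dataclass
class Motivation:
    wanting: bool    # System 1 urge to do the thing
    planning: bool   # System 2 disposition to do the thing
    liking: bool     # positive System 1 response when it happens
    approving: bool  # System 2 evaluation that it happening is good

# Hypothetical cases where the signals come apart:
junk_food = Motivation(wanting=True, planning=False, liking=True, approving=False)
exercise = Motivation(wanting=False, planning=True, liking=False, approving=True)

# The data structure names the distinctions but does not rank them,
# which mirrors the point above: drawing the distinctions doesn't by
# itself decide what to actually do.
print(junk_food)
print(exercise)
```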
I accept this objection; I cannot describe in physical terms what “pleasure” refers to.
I think I understand what koning_robot was going for here, but I can’t approach it except through a description. That description elicits a very real moral and emotional reaction in me, yet I can’t pin down exactly what is wrong with the scenario it describes. Despite that, I still don’t like it.
So, some of the dystopian Fun Worlds that I imagine are rooms where non-AI lifeforms no longer have any intelligence of their own, since it was not needed. These lifeforms are incredibly simple, little more than dopamine receptors (I’m not up to date on the neuroscience of pleasure; I remember it’s not really dopamine, but I’m not sure which chemical(s) correspond to happiness). The lifeforms are all identical and interchangeable. They do not sing or dance. Yet they are extremely happy, in a chemical sort of sense. Still, I would not like to be one.
Values are worth acting on, even if we don’t understand them exactly, so long as we understand in a general sense what they tell us. That future would suck horribly, and I don’t want it to happen.