I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think “Bayesian insights are good,” we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.
By attaching “goodness” to things too far outside our feedback loops, like “ending hunger,” we get outcomes like counterproductive aid spending. By attaching “goodness” too strongly to subgoals close to individual feedback loops, like “publishing papers,” we get a flood of inconsequential academic articles at the expense of general knowledge.
This seems related to the tendency to gradually reify instrumental values as terminal values. E.g., “reading posts on Less Wrong helps me find better ways to accomplish my goals, therefore it is good” becomes “reading posts on Less Wrong is good, therefore it is a valid end goal in itself.” Is that what you’re getting at?