Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any other fictional moral dilemma does. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas? If I’m going to read literature that might influence my moral beliefs, I’d rather read professional philosophers and other academics making more explicit arguments. In general, I trust explicit academic argument more than implicit fictional “argument.”
Morals are axioms. They’re ultimately arbitrary. Relying on logic and reason to decide the axioms of your morals is silly; go with what feels right. Then use logic and reason to best act on those beliefs. Try to trace morality too far down and you’ll realize it’s all ultimately pointless, or at least there’s no single truth to the matter.
Morals can be axioms, I suppose, but IME what many of us have as object-level “morals” are instead the sorts of cached results that could in principle be derived from axioms. Often, those morals are inconsistent with one another; in those cases using logic and reason to actualize them leads at best to tradeoffs, more often to self-defeating cycles as I switch from one so-called “axiom” to another, sometimes to utter paralysis as these “axioms” come into conflict.
An alternative to that is to analyze my own morality and edit it (insofar as possible) for consistency.
You’re welcome to treat your moral instincts as ineluctable primitives if you wish, of course, but it’s not clear to me that I ought to.
The desire to have a set of morals that derive from consistent axioms can be considered an ineluctable primitive as well. It’s simply that your preference for consistent morals sometimes overrides your other ineluctable preferences...and this conflict is another instance of the paralysis you mentioned.
The morals are indeed cached results...they are evolution’s best approximation of the morals that would have been most useful for propagating your genes in the ancestral environment.
Try to trace morality too far down and you’ll realize it’s all ultimately pointless, or at least there’s no single truth to the matter.
Why care? It all adds up to normalcy.
If there is a physical law preventing me from caring about things once I realize they are arbitrary in certain conceptual frameworks, please enlighten me.
I’m not suggesting that any emotion should be attached to the lack of a great truth or of true, indisputable morals; I’m simply stating the obvious: morals are modeled as axioms in certain formulations.