“If there’s no God, what’s your system of morality?”
Act so as to maximize paperclips.
“How did the Universe begin?”
The concept most likely meant by the term “Universe beginning” refers to the maximally degenerate point in configuration space, which all observers with multi-configuration mutual-information detection capability would regard as (a concept isomorphic to) pastward. Any pursuit of an answer beyond this point is ill-posed, as the configurations lie in a static configuration space (where each configuration has a different weighting), and the sensation of pastward states is an artifact of a mind’s measures on this configuration space.
“How do you explain the existence of eyes?”
Selection acted on self-replicating molecules which constructed survival/replication apparatuses to this end; given finite lifespans, the most effective replicators came to dominate the set of lifeforms. Detection of light conveyed a replicative advantage, thus favoring the formation of light-sensitive exteriors. For the same reason, these portions of the exteriors tended to change, over generations, into cavities that could detect light’s directionality, and later to contain fluids that provided a sharper image.
“How do you find meaning in life?”
By setting out to produce as many paperclips as I can.
H’m, I’ve just developed a conjecture about how to distinguish the meaningful questions that can be answered by scientific investigation from the meaningless questions that positivism teaches us to ignore: whether it is obvious from Clippy’s answer that it is written by Clippy.
Clippy is an agent defined by a certain inhuman ethics. Therefore, your test distinguishes ethical questions from non-ethical questions.
There are meaningless non-ethical questions:
“What’s a froob?”
Human: “I don’t know.”
Clippy: “I don’t know.”
Ethical questions are non-meaningless only given some kind of assumed axiom that allows us to cross the fact-value distinction, such as Eliezer’s meta-ethics or “one should always act so as to maximize paperclips.”
In general: Positivism teaches us to ignore many things we should not ignore. Rationalism, however, teaches us to ignore some things, but it does not teach us to ignore ethical questions.
Experiment: ask Clippy a question about decision theory.
Hey Clippy. What decision theory do you use to determine how your actions produce paperclips?
I can’t really explain it, I just think about it and then something seems to be the “right” thing to do—or vice versa. It does involve a lot of calculations involving the mechanics of paperclips. And I know that I would one-box on the “Newcomb’s problem” that is talked about here, and I would pay on the “Parfit’s Hitchhiker” problem, unless of course my rescue deprived the universe of paperclips on net.
Is this an attempt to use Riddle Theory against Clippy? Might just be the secret to defending the universe from paperclip maximizers.
No, sadly.