One shouldn’t confuse there being a huge debate over something with the problem being unsolved, far less unsolvable (look at the debate over free will or, worse, p-zombies). I have actually solved the problem of the moral value of animals to my satisfaction (my solution could be wrong, of course). As for the problem of dealing with people having multiple copies, this really seems like the problem of reducing “magical reality fluid”, which, while hard, seems like it should be possible.
Also, in math it might be possible to generalize things, but not necessarily, and not always uniquely
Well, yes. But in general, if you’re trying to elucidate some concept in your moral reasoning, you should ask yourself the specific reason why you care about that specific concept until you reach concepts that look like they should have canonical reductions, then you reduce them. If in doing so you end up with multiple possible reductions, that probably means you didn’t go deep enough, and you should keep asking why you care about that specific concept until you can pinpoint the reduction you are actually interested in. If after all that you’re still left with multiple possible reductions for a certain concept, one that you appear to value terminally and not for any other reason, then you should still be able to judge between the possible reductions using the other things you care about: elegance, tractability, etc. (though if you end up in this situation it probably means you made an error somewhere...)
generalize n-th differentials over real numbers
I’m not sure what you’re referring to here...
Also, looking at the possibilities you enumerate again, possibility 3 appears incoherent. Contradictions are for logical systems: if you have a component of your utility function which is monotone increasing in the quantity of blue in the universe and another component which is monotone decreasing in it, they partially or totally cancel one another, but that doesn’t result in a contradiction.
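The cancellation point can be sketched numerically. This is a toy illustration, not anything from the post: the components and their weights are invented, with “amount of blue” reduced to a single number b.

```python
# Two components of a toy utility function over "amount of blue" (b):
# one monotone increasing, one monotone decreasing. Weights are arbitrary.
def likes_blue(b):
    return 2.0 * b       # increasing in b

def dislikes_blue(b):
    return -3.0 * b      # decreasing in b

def total_utility(b):
    # The components partially cancel; the sum is still a perfectly
    # well-defined function (here, net decreasing in b) -- there is no
    # contradiction anywhere, just a smaller net slope.
    return likes_blue(b) + dislikes_blue(b)

print(total_utility(10.0))  # -10.0
```

Had the weights been equal and opposite, the sum would simply be the constant zero function: total indifference to blue, which is again a coherent utility function rather than a contradiction.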
One shouldn’t confuse there being a huge debate over something with the problem being unsolved
Sure, good point. Nevertheless, if I’m correct, there still isn’t any Scientifically Accepted Unique Solution for the moral value of animals, even though individuals (like you) might have their own solutions (the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary).
generalize n-th differentials over real numbers
(That was just some random example; it’s from fractional calculus, which I heard a presentation about recently. Not especially relevant here, though :))
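As a side note, fractional calculus itself is a nice illustration of the “not always uniquely” point above: the two standard generalizations of the n-th derivative to real order α disagree in general. A sketch of the standard definitions (for n − 1 < α < n, n = ⌈α⌉):

```latex
% Riemann–Liouville fractional derivative:
{}^{RL}\!D_a^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)} \frac{d^n}{dt^n}
    \int_a^t (t-\tau)^{\,n-\alpha-1}\, f(\tau)\, d\tau

% Caputo fractional derivative (differentiate first, then integrate):
{}^{C}\!D_a^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \int_a^t (t-\tau)^{\,n-\alpha-1}\, f^{(n)}(\tau)\, d\tau
```

The two agree on many functions but not all: the Caputo derivative of a constant is 0, while the Riemann–Liouville derivative of a constant C is C(t − a)^{−α}/Γ(1 − α) ≠ 0. So even a well-motivated generalization need not be unique.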
though if you end up in this situation it probably means you made an error somewhere...
I just found a nice example for the topic of the post that doesn’t seem to be reducible to anything else: see the post “The Scary Problem of Qualia”. There is no obvious answer, we haven’t really encountered the question in practice so far (but we probably will in the future), and other than its impact on our utility functions, it seems to be a typical “tree falls in a forest” question, not really constraining anything in the real world. So the extrapolated utility function seems to be at least category 2.
there still isn’t any Scientifically Accepted Unique Solution for the moral value of animals
There isn’t any SAUS for the problem of free will either. Nonetheless, it is a solved problem. Scientists are not in the business of solving that kind of problem; such problems are generally considered philosophical in nature.
the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary?
It certainly appears to uniquely follow.
see the post “The Scary Problem of Qualia”.
That seems easy to answer. Modulo a reduction of computation, of course, but computation seems like a concept which ought to be canonically reducible.
But it most likely isn’t. “X computes Y” is a model in our heads that is useful for predicting what e.g. computers do, but which breaks down if you zoom in (at exactly what stage of a CPU pipeline do qualia appear?) or stop assuming the computer is perfect (how much rounding error is allowed before the simulation stops being a person and becomes random noise?)
(Nevertheless, sure, the SAUS might not always exist… but the above question still doesn’t seem to have any LW Approved Unique Solution (tm) either :))
I’m saying that although it isn’t ontologically fundamental, our utility function might still build on it (it “feels real enough”), so we might have problems if we try to extrapolate said function to full generality.
I have actually solved the problem of the moral value of animals to my satisfaction
What is your solution, if I may ask?
Are you saying you think qualia are ontologically fundamental, or that they aren’t real, or what?
I’m saying that although it isn’t ontologically fundamental, our utility function might still build on it (it “feels real enough”), so we might have problems if we try to extrapolate said function to full generality.
If something is not ontologically fundamental and doesn’t reduce to anything which is, then that thing isn’t real.