There still isn’t any Scientifically Accepted Unique Solution (SAUS) for the moral value of animals.
There isn’t any SAUS for the problem of free will either. Nonetheless, it is a solved problem. Scientists are not in the business of solving that kind of problem; such problems are generally considered philosophical in nature.
The question is whether the solution uniquely follows from your other preferences or is somewhat arbitrary.
It certainly appears to follow uniquely.
See the post “The Scary problem of Qualia”.
That seems easy to answer. Modulo a reduction of computation, of course, but computation seems like a concept that ought to be canonically reducible.
But it most likely isn’t. “X computes Y” is a model in our heads that is useful for predicting what, e.g., computers do, but it breaks down if you zoom in (at exactly what stage of a CPU pipeline do qualia appear?) or stop assuming the computer is perfect (how much rounding error is allowed before the simulation is random noise rather than a person?).
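To make the rounding-error point concrete, here is a toy Python sketch (my own illustration, not from the thread; the logistic map stands in for any error-amplifying computation). A perturbation the size of one floating-point rounding error completely changes the output after a few dozen steps, so whether the hardware is “really” running the nominal computation has no crisp answer:

```python
# Toy illustration (hypothetical example): run the same chaotic computation
# twice, differing only by a rounding-error-sized perturbation. The outputs
# soon disagree completely.

def iterate(x, steps=100):
    """Logistic map at r = 3.9: a simple computation where rounding errors compound."""
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x

print(iterate(0.5))          # nominal run
print(iterate(0.5 + 1e-15))  # same run with one tiny "rounding error" injected
# After ~100 steps the two trajectories bear no resemblance to each other.
```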
(Nevertheless, sure, the SAUS might not always exist… but the above question still doesn’t seem to have any LW Approved Unique Solution (tm) either. :))
Are you saying you think qualia is ontologically fundamental, or that it isn’t real, or what?
I’m saying that although it isn’t ontologically fundamental, our utility function might still build on it (it “feels real enough”), so we might have problems if we try to extrapolate said function to full generality.
If something is not ontologically fundamental and doesn’t reduce to anything which is, then that thing isn’t real.