To quote step 2 of the original algorithm:
For every possible action a, find some utility value u such that S proves that A()=a ⇒ U()=u. If such a proof cannot be found for some a, break down and cry because the universe is unfair.
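To make the shape of that step concrete, here is a minimal toy sketch in Python. The proof search is stubbed out with a hypothetical lookup table (proof_system_proves and the example actions and utilities are all made-up placeholders); a real agent would instead enumerate proofs in the formal system S up to some length bound.

```python
# Toy sketch of step 2: for each action a, look for a utility u such that
# S proves A()=a => U()=u.  The "proof system" here is a hypothetical stub.

def proof_system_proves(implication):
    # Hypothetical stand-in for proof search in S: pretend S can prove
    # exactly these (action, utility) implications.
    provable = {("a1", 10), ("a2", 5)}
    return implication in provable

def step_two(actions, utilities):
    """Map each action a to a utility u with S |- A()=a => U()=u, if one exists."""
    found = {}
    for a in actions:
        for u in utilities:
            if proof_system_proves((a, u)):
                found[a] = u
                break
        else:
            # No proof found for this action: the "break down and cry" case.
            raise RuntimeError(f"no provable utility for action {a!r}; the universe is unfair")
    return found

print(step_two(["a1", "a2"], [0, 5, 10]))  # {'a1': 10, 'a2': 5}
```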
If something is not ontologically fundamental and doesn’t reduce to anything which is, then that thing isn’t real.
Given that we are able to come to agreement about certain moral matters, and given the existence of moral progress, I do think the evidence favors the existence of a well-behaved idealized reasoning system that we are approximating when we do moral reasoning.
Can you give a detailed example of this?
This for a start.
Are you saying that you think qualia are ontologically fundamental, or that they aren’t real, or what?
there still isn’t any Scientifically Accepted Unique Solution for the moral value of animals
There isn’t any SAUS for the problem of free will either. Nonetheless, it is a solved problem. Scientists are not in the business of solving that kind of problem; such problems are generally considered philosophical in nature.
the question is whether the solution uniquely follows from your other preferences, or is somewhat arbitrary?
It certainly appears to follow uniquely.
see the post “The “Scary problem of Qualia”.
That seems easy to answer. Modulo a reduction of “computation”, of course, but computation seems like a concept which ought to be canonically reducible.
Did the person come into existence:
Ve came into existence whenever a computation isomorphic to verself was performed.
For Harry had only loaned his Cloak, not given it
That seems like it answers your question: his invisible copies aren’t borrowing the cloak from him because they are him.
You seem to be taking a position that’s different from Eliezer’s, since AFAIK he has consistently defended this approach that you call “wrong” (for example in the thread following my first linked comment).
Well, Eliezer_2009 does seem to underestimate the difficulty of the extrapolation problem.
If you have some idea of how to “derive the computation that we approximate when we talk about morality by examining the dynamics of how people react to moral arguments” that doesn’t involve just “looking at where people’s moralities concentrate as you present moral arguments to them”, then I’d be interested to know what it is.
Have I solved that problem? No. But humans do seem to be able to infer, from the way pebblesorters reason, that they are referring to primality. Similarly, it seems that by looking at mathematicians reasoning about Goldbach’s Conjecture we should be able to infer what they are referring to when they speak about Goldbach’s Conjecture, and we don’t do that by looking at what position they end up concentrating on when we present them with all possible arguments. So the problem ought to be solvable.
Assuming “when we do moral reasoning we are approximating some computation”, what reasons do we have for thinking that the “some computation” will allow us to fully reduce “pain” to a set of truth conditions? What are some properties of this computation that you can cite as being known, that leads you to think this?
Are you trying to imply that it could be the case that the computation we approximate is only defined over a fixed ontology which doesn’t correspond to the correct ontology, and simply returns a domain error when one tries to apply it to the real world? That doesn’t seem to be the case: we are able to do moral reasoning at a more fine-grained level than the naive ontology, and a lot of the concepts that seem relevant to moral reasoning, qualia for example, look like they ought to have canonical reductions. I detail the way in which I see us doing that kind of elucidation in this comment.
Quidditch, the lack of adequate protection on Time-Turners before Harry gave them the idea to put protective shells on them… Seriously, just reread the fic.
Eliezer seems to think that moral arguments are meaningful, but their meanings are derived only from how humans happen to respond to them (or more specifically to whatever coherence humans may show in their responses to moral arguments).
No. What he actually says is that when we do moral reasoning we are approximating some computation, in the same way that the pebblesorters are approximating primality. What makes moral arguments valid or invalid is whether they actually establish what they were trying to establish in the context of the actual concept of rightness being approximated, just as a pebblesorter’s argument that 6 is an incorrect heap because 6 = 2 × 3 is judged valid because it establishes that 6 is not a prime number. Of course, to determine what computation we actually are trying to approximate, or to establish that we actually are approximating something, looking at the coherence humans show in their responses to moral arguments is an excellent idea, but it is not how you define morality.
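As a toy illustration of that standard of validity (my own sketch, not anything from the sequence): the pebblesorter’s argument is judged against the actual concept of primality, not against what any particular pebblesorter happens to accept.

```python
# The pebblesorters' implicit standard: a heap of n pebbles is "correct" iff n is prime.
def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# The argument "6 is an incorrect heap because 6 = 2 * 3" is valid because
# exhibiting a nontrivial factorization really does establish non-primality.
assert 2 * 3 == 6 and not is_prime(6)
```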
Looking at the first link you provided, I think that looking at where people’s moralities concentrate as you present moral arguments to them is totally the wrong way to look at this problem. Consider, for example, Goldbach’s Conjecture. If you were to take people and present random arguments about the conjecture to them, it doesn’t seem to necessarily be the case, depending on what distribution you use over possible arguments, that people’s opinions will concentrate on the correct conclusion concerning the validity of the conjecture. That doesn’t mean that people can’t talk meaningfully about the validity of Goldbach’s Conjecture. Should we be able to derive the computation that we approximate when we talk about morality by examining the dynamics of how people react to moral arguments? The answer is yes, but it isn’t a trivial problem.
As for the second link you provided, you argue that the way we react to moral arguments could be totally random or could depend on trivial details. That doesn’t appear to be the case, and it seems to contradict the fact that we do manage to agree about a lot of things concerning morality and that moral progress does seem to be a thing.
huge, unsolved debates over morality
One shouldn’t confuse there being a huge debate over something with the problem being unsolved, much less unsolvable (look at the debate over free will, or worse, p-zombies). I have actually solved the problem of the moral value of animals to my satisfaction (my solution could be wrong, of course). As for the problem of dealing with people having multiple copies, that really seems like the problem of reducing “magical reality fluid”, which, while hard, seems like it should be possible.
also in math, it might be possible to generalize things, but not necessarily, and not always uniquely
Well, yes. But in general, if you’re trying to elucidate some concept in your moral reasoning, you should ask yourself the specific reason why you care about that specific concept, until you reach concepts that look like they should have canonical reductions; then you reduce them. If in doing so you end up with multiple possible reductions, that probably means you didn’t go deep enough, and you should keep asking why you care about that specific concept until you can pinpoint the reduction you are actually interested in. If after all that you’re still left with multiple possible reductions for a concept that you appear to value terminally, and not for any other reason, then you should still be able to judge between the possible reductions using the other things you care about: elegance, tractability, etc. (though if you end up in this situation it probably means you made an error somewhere...)
generalize n-th differentials over real numbers
I’m not sure what you’re referring to here...
Also, looking at the possibilities you enumerate again, 3 appears incoherent. Contradictions are for logical systems: if you have a component of your utility function which is monotone increasing in the quantity of blue in the universe and another component which is monotone decreasing in the quantity of blue in the universe, they partially or totally cancel one another, but that doesn’t result in a contradiction.
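A minimal sketch of that point (the weights and the “quantity of blue” variable are made-up placeholders): two opposed monotone components simply sum to a perfectly well-defined utility; nothing inconsistent happens.

```python
# Two components of a utility function, one increasing and one decreasing
# in the amount of blue b; the weights 3.0 and 1.0 are arbitrary placeholders.
def likes_blue(b):
    return 3.0 * b       # monotone increasing in b

def dislikes_blue(b):
    return -1.0 * b      # monotone decreasing in b

def utility(b):
    # The components partially cancel, but U is still a well-defined function:
    # here it reduces to 2.0 * b.  No contradiction arises.
    return likes_blue(b) + dislikes_blue(b)

print([utility(b) for b in (0.0, 1.0, 2.0)])  # [0.0, 2.0, 4.0]
```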
Why exactly do you call 1 unlikely? The whole metaethics sequence argues in favor of 1 (if I understand what you mean by 1 correctly), so what part of that argument do you think is wrong, specifically?
There is an entire sequence dedicated to how to define concepts, and the specific problem of categories as they matter for your utility function is studied in this post, where it is argued that those problems should be solved by moral arguments; and the whole metaethics sequence argues that moral arguments are meaningful.
Now if you’re asking me whether we have a complete reduction of some concept relevant to our utility function all the way down to fundamental physics, then the answer is no. That doesn’t mean that partial reductions of some concepts potentially relevant to our utility function have never been accomplished, or that complete reduction is not possible.
So are you arguing that the metaethics sequence is wrong and that moral arguments are meaningless, or are you arguing that the GRT is wrong and that reduction of the concepts which appear in your utility function is impossible despite their being real, or what? I still have no idea what it is that you’re arguing for exactly.
Actually, dealing with a component of your ontology not being real seems like a far harder problem than the problem of such a component not being fundamental.
According to the Great Reductionist Thesis everything real can be reduced to a mix of physical reference and logical reference. In which case, if every component of your ontology is real, you can obtain a formulation of your utility function in terms of fundamental things.
The case where some components of your ontology can’t be reduced because they’re not real, and where your utility function refers explicitly to such an entity, seems considerably harder. But that is exactly the problem that someone who realizes God doesn’t actually exist is confronted with, and we do manage that kind of ontology crisis.
So are you saying that the GRT is wrong, or that none of the things we value are actually real, or that we can’t program a computer to perform reductions (which seems absurd given that we have managed to perform some reductions already), or what? Because I don’t see what you’re trying to get at here.
Also I totally think there was a respectable hard problem
So you do have a solution to the problem?
Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w’ in W such that p is true in the possible world w and false in the possible world w’.
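A minimal sketch of that definition (the toy worlds and propositions are made-up placeholders), modelling a proposition as a predicate over worlds:

```python
# Meaningfulness relative to a collection of possible worlds W:
# p is meaningful iff there exist w, w' in W with p(w) true and p(w') false.
def is_meaningful(p, worlds):
    return any(p(w) for w in worlds) and any(not p(w) for w in worlds)

# Toy example: worlds are just integers, and the proposition "the world is even".
worlds = [1, 2, 3, 4]
print(is_meaningful(lambda w: w % 2 == 0, worlds))  # True
print(is_meaningful(lambda w: w > 0, worlds))       # False: true in every world of W
```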
Then the question becomes: in order to be able to reason in all generality, what collection of possible worlds should one use?
That’s a very hard question.
It’s explained in detail in chapter 25 that the genes that make a person a wizard do not do so by building some complex machinery which allows you to become a wizard; the genes that make you a wizard constitute a marker which indicates to the source of magic that you should be allowed to cast spells.
My intention was not to appear confrontational. It actually seemed obvious, when I began thinking about this problem, that the order in which we check actions in step 1 shouldn’t matter, but that turned out to be completely wrong. That was what I was trying to convey, though I admit I might have done so in a clumsy manner.