Great! Now I can see several points where I disagree or would like more information.
1) Is X really asserting that Y shares his ultimate moral framework (i.e. that they would converge given enough time, arguments, etc.)?
If Y is a psychopathic murderer who will simply never accept that he shouldn’t kill, can I still judge that Y should refrain from killing? On the current formulation, doing so would involve asserting that we share a framework, but even people who know this to be false can judge that he shouldn’t kill, can’t they?
2) I don’t know what it means for an act to be the solution to a problem. You say:
‘I should Z’ means that Z answers the question, “What will save my people? How can we all have more fun? How can we get more control over our own lives? What’s the funniest jokes we can tell? …”
Suppose Z is the act of saying “no”. How does this answer the question (or ‘solve the problem’)? Suppose it leads you to have a bit less fun and others to have a bit more fun, and generally has positive effects on some parts of the question and negative effects on others. How are these integrated? As you have phrased it, this is clearly not a single unified question, and I don’t know what makes one act rather than another an answer to a list of questions (when presumably no act satisfies every item on the list). Is there some complex and not consciously known weighting of the terms? I thought you denied that earlier in the series. This part seems very non-algorithmic at the moment.
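To make the worry concrete, here is a toy sketch of the kind of integration rule that seems to be needed for ‘Z answers the question’ to pick out one act over another. Everything in it (the criteria, the scores, and especially the weights) is invented by me for illustration, not anything you have asserted:

```python
# Toy sketch of the unstated integration step: scoring one act against
# several sub-questions at once. All criteria, scores, and weights here
# are hypothetical placeholders.

def desirability(act_scores, weights):
    """Combine per-criterion scores into a single desirability value.

    act_scores: criterion -> how well the act does on that criterion
                (negative means it makes things worse there).
    weights:    criterion -> how much that criterion counts.
    """
    return sum(weights[c] * s for c, s in act_scores.items())

# The act of saying "no": a bit less fun for me, a bit more for others,
# neutral on the rest (numbers invented for illustration).
saying_no = {
    "save my people": 0.0,
    "my fun": -0.1,
    "others' fun": 0.2,
    "control over our lives": 0.0,
}

# Without some weighting like this, "answers the list of questions" is
# underdetermined whenever an act helps on some items and hurts on others.
weights = {
    "save my people": 10.0,
    "my fun": 1.0,
    "others' fun": 1.0,
    "control over our lives": 2.0,
}

print(desirability(saying_no, weights))  # 0.1, but who fixed the weights?
```

A weighted sum is only one possible rule, of course, but *some* rule of this shape seems required, and that is exactly the sort of weighting you appeared to deny earlier.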
3) The interpretation says that the framework is ‘implicitly defined by the machinery … which they both use to make desirability judgments’.
What if there is no such machinery that they both use? I thought only X’s machinery counted here, since X is the judger.
4) You will have to say more about ‘implicitly defined by the machinery … use[d] to make desirability judgments’. This is really vague. I know you have said more on this, but never in very precise terms, just by analogy.
5) Is the problem W meant to be the endpoint of thought (i.e. the problem that would be arrived at), or is it meant to be the current idea, which involves requests for self-modification (e.g. ‘Save a lot of lives, promote happiness, and factor in whatever things I have not thought of but could be convinced of.’)? It is not clear from the current statement (or indeed from your previous posts), but it would be made clear by an answer to (4).
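If it helps pin down the two readings, here is a toy sketch of the difference between W as the current, self-modification-requesting specification and W as the endpoint that reflection would arrive at. The goals, and especially the reflection step, are entirely my own stand-ins, since the real reflection process is exactly what is left unspecified:

```python
# Two readings of "the problem W". Everything here is a hypothetical
# stand-in: the goals, and especially the reflection step.

OPEN_CLAUSE = "whatever I have not thought of but could be convinced of"

# Reading B: W is the *current* idea, which contains a request for
# self-modification as one of its own terms.
current_W = ["save a lot of lives", "promote happiness", OPEN_CLAUSE]

def reflect(spec):
    """One toy step of idealized reflection: discharge the open clause
    into a concrete goal the agent would (we stipulate) be convinced of."""
    if OPEN_CLAUSE in spec:
        spec = [g for g in spec if g != OPEN_CLAUSE] + ["reduce suffering"]
    return sorted(set(spec))

def endpoint(spec):
    """Reading A: W is the fixed point that reflection would arrive at."""
    while True:
        new = reflect(spec)
        if new == spec:
            return spec
        spec = new

print(endpoint(current_W))
# ['promote happiness', 'reduce suffering', 'save a lot of lives']
```

On Reading B the open clause is itself part of W; on Reading A it has been discharged and only the fixed point counts. Which of these do you intend?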