Saying it’s encoding human irrationality amounts to taking the viewpoint that the human reaction to the fat-man trolley problem is an error of reasoning: the particular machinery humans use to decide what to do gives an answer that does not maximize human values.
It makes some sense to say that a human is a holistic entity that can’t be divided into “values” and “algorithms”. I argued that point in “Only humans can have human values”. But taking that view, together with the view that you should cling to human values, means you can’t be a transhumanist. You can’t talk about improving humans, because implementing human values comes down to being human. Any “improvement” to human reasoning means giving different answers, which means getting “wrong” answers. And you can’t have a site like LessWrong, that talks about how to avoid errors that humans systematically make—because, like in the trolley problem case, you must claim they aren’t errors, they’re value judgements.
You can still have a LessWrong, because one can clearly demonstrate that people avoidably draw wrong conclusions from unreliable screening tests, commit conjunction fallacies, and so on. There are agreed ways of getting at the truth on these matters, and people are capable of understanding the errors they are making and of avoiding them.
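To make the screening-test case concrete, here is a minimal sketch of the standard base-rate calculation; the prevalence and error rates are made-up numbers chosen for illustration, not anything from the discussion above:

```python
# Hypothetical screening test illustrating the base-rate error.
# All numbers below are assumptions chosen for the example.
prevalence = 0.01        # 1% of people have the condition
sensitivity = 0.90       # P(positive | condition)
false_positive = 0.09    # P(positive | no condition)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_condition = prevalence * sensitivity / p_positive
print(f"P(condition | positive test) = {p_condition:.1%}")
# -> about 9.2%, far below the ~90% many people intuitively expect
```

The arithmetic here is not in dispute, which is what makes this the kind of error people can be shown and can stop making.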
Values are a harder problem. Our only source of moral knowledge (assuming there is such a thing, but those who believe there is not must dismiss this entire conversation as moonshine) is what people generally do and say. If contradictions are found, where does one go for evidence to resolve them?
You’re right: there is a class of problems for which we can know what the right answer is, like the Monty Hall problem. (Although I notice that the Sleeping Beauty problem is a math problem on which we were not able to agree on the right answer, because people had linguistic disagreements about how to interpret the meaning of the problem.)
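Monty Hall is a case where even someone who distrusts the derivation can check the answer directly. A minimal simulation sketch, assuming the standard rules (the host always opens a non-winning door the player did not pick):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the standard game; True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # ~0.333 vs. ~0.667
```

The simulation converges on 2/3 for switching, which is why this problem belongs to the class where the right answer is knowable.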
And you can’t have a site like LessWrong, that talks about how to avoid errors that humans systematically make—because, like in the trolley problem case, you must claim they aren’t errors, they’re value judgements.
Even if one holds the view that human values can’t be improved, rationality techniques are still useful, because human values conflict with each other and have to be prioritized or weighted.
If I value knowing the truth, and I also, in the holistic sense, “value” committing the conjunction fallacy, then LessWrong is still helpful to me provided I value the first more than the second, or if the weighting is such that the net value score increases even though the individual conjunction-fallacy value decreases.
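A toy sketch of that weighting argument, with made-up weights and satisfaction levels (none of these numbers come from the discussion; they only show how the net score can rise while one component falls):

```python
# Hypothetical weights on two conflicting "values".
W_TRUTH = 5.0     # weight on knowing the truth
W_FALLACY = 1.0   # weight on the holistic "value" of the fallacy

# Satisfaction of each value before and after rationality training
# (0 = unsatisfied, 1 = fully satisfied); illustrative numbers only.
before = W_TRUTH * 0.6 + W_FALLACY * 1.0   # = 4.0
after  = W_TRUTH * 0.9 + W_FALLACY * 0.0   # = 4.5

print(before, after)  # net score rises even though one value loses out
```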