You have succeeded in mixing an unfounded personal accusation with a difficult epistemic problem. The complexity of the problem makes it hard to pinpoint exactly what is inappropriate about the offense… but it is clearly there; readers see it and downvote accordingly.
The epistemic problem is basically this: feeling good is an important part of everyone's utility function. If a belief X makes one happy, shouldn't it be rational (as in: expected-utility-increasing) to hold it, even if it is false? Especially if the belief is unfalsifiable, so the happiness it causes will never be offset by the sadness of falsification.
And then you pick Luke as an example, accusing him of doing exactly this (a kind of psychological wireheading). Since what Luke is doing is a group value here, you have added a generous dose of mindkilling to a question that is difficult enough without it. And even setting that aside, it is unnecessarily personally offensive.
The correct answer runs along these lines: if Luke also has something else in his utility function, holding a false belief may prevent him from getting it. (He might wait for the Singularity to provide that thing, which would never happen; without the belief, he might have pursued the goal directly and achieved it.) If the expected utility of achieving those other goals exceeds the expected utility of feeling good by thinking false thoughts, then the false belief is a net loss, and it even prevents him from realizing this and fixing it. Of course this explanation can be countered with further epistemic problems, and so on.
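To make that comparison concrete, here is a toy expected-utility calculation in Python. All the probabilities and utility values are invented for illustration; nothing here measures anyone's actual preferences.

```python
# Toy model of the trade-off described above. Every number below is
# an illustrative assumption, not a measurement.

def expected_utility(p_goal_achieved: float, goal_value: float,
                     comfort_value: float) -> float:
    """Expected utility = chance of achieving the concrete goal
    times its value, plus the comfort the belief itself provides."""
    return p_goal_achieved * goal_value + comfort_value

# Holding the comforting false belief: a comfort bonus, but the goal
# is pursued only passively (waiting), so low odds of success.
eu_false_belief = expected_utility(p_goal_achieved=0.05,
                                   goal_value=100.0,
                                   comfort_value=10.0)

# Dropping the belief: no comfort bonus, but the goal is pursued
# directly, so much higher odds of success.
eu_true_belief = expected_utility(p_goal_achieved=0.60,
                                  goal_value=100.0,
                                  comfort_value=0.0)

print(eu_false_belief)  # 15.0
print(eu_true_belief)   # 60.0
```

Under these assumed numbers the false belief is a net loss despite the happiness it provides; flip the numbers (a small goal, a large comfort term) and the comparison reverses, which is why the question stays genuinely difficult.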
For now, let me just state openly that I would prefer to discuss difficult epistemic problems in a thread without this kind of contribution. Maybe even on a website without this kind of contribution.