Did you even read the post? Luke doesn’t even mention the Singularity, much less claim that it is near, or that working on it is automatically rational and winning.
Huh? I could have used any other example to highlight that consistency of beliefs and actions cannot be a sufficient definition of rationality to care about. I just thought that, since he is the president of the SIAI, it would be an appropriate example.
You didn’t phrase it as though it were an example; you phrased it as a summary. Your comment states that Luke’s point is about the Singularity, which was not mentioned in the post.
Phew, I certainly didn’t expect that. I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn’t possibly be about the Singularity either.
Let’s analyze my comment:
1a) Your post is basically saying that if you believe that a negative Singularity is likely and that a positive Singularity has lots of expected utility,...
Since his original post did not talk about the Singularity, it is instantly obvious that the above sentence can be read as:
1b) Your post is basically saying that if you hold belief X and that belief X is the right thing to do,...
2a) …then if you work to achieve a positive Singularity you are rational (consistency) and therefore winning.
The end of that sentence makes it clear that I was actually talking about the original post by referring to the consistency of acting according to your beliefs. It could be read as:
2b) …then if you act according to belief X you are rational (consistency) and therefore winning.
3a) And since nobody can disprove your claim that the Singularity is near, until the very end of the universe, you will be winning winning winning....without actually achieving anything ever.
That sentence shows how anyone could choose any belief about the future, frame it as an unprovable prediction, act accordingly, and yet fit the definition of rationality outlined in the original post. It could be read as:
3b) And since nobody can disprove belief X, you will be winning winning winning....without actually achieving anything ever.
I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn’t possibly be about the Singularity either.
You have succeeded in mixing an unfounded personal accusation together with a difficult epistemic problem. The complexity of the problem makes it hard to point out exactly why the offense is inappropriate… but it is there, readers see it, and they downvote accordingly.
The epistemic problem is basically this: feeling good is an important part of everyone’s utility function. If a belief X makes one happy, shouldn’t it be rational (as in: increasing expected utility) to believe it, even if it is false? Especially if the belief is unfalsifiable, so the happiness it brings will never be countered by the sadness of seeing it falsified.
And then you pick Luke as an example, accusing him of doing exactly this (a kind of psychological self-wireheading). Since what Luke is doing is a group value here, you have added a generous dose of mindkilling to a question that is difficult enough without it. And even apart from that, it is needlessly and personally offensive.
The correct answer is along these lines: if Luke also has something else in his utility function, holding a false belief may prevent him from getting it. (He might wait for the Singularity to provide that thing, which would never happen; without the belief, he might have pursued the goal directly and achieved it.) If the expected utility of achieving those other goals is greater than the expected utility of feeling good by thinking false thoughts, then the false belief is a net loss, and it even prevents one from realizing and fixing that. But this explanation can be countered by further epistemic problems, and so on.
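To make that trade-off concrete (a minimal sketch; the symbols are placeholders I am introducing here, not anything from the post): write E[U_feel] for the expected utility of the good feeling the belief provides, and E[U_goals] for the expected utility of the other goals one forgoes by acting on the false belief. The false belief is then a net loss exactly when

E[U_feel] < E[U_goals],

with the extra cost, noted above, that the belief also blocks the feedback that would let one notice and correct the loss.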
For now, let me just state openly that I would prefer to discuss difficult epistemic problems in a thread without this kind of contribution. Maybe even on a website without this kind of contribution.
I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn’t possibly be about the Singularity either.
The problem is that you have a history of bringing Singularity issues into posts that are not about the Singularity. (Or at least, have a history of making comments that look like that.) Two examples that spring readily to mind are using a post about Leverage Research to critique SIAI and bringing in post-Singularity scenarios when commenting on a post about current-day issues. With such a history, it’s not obvious that your comment couldn’t have been about the Singularity.
I would say the karmic reaction disagrees.
You could have used “working for the second coming of Jesus” as just as good an example and just as personal a one.
Incidentally, I am 95% sure that I know why he made this post, and that it has to do with the Singularity. This will become clear in a few days.