So, let me explain why Actual Winning isn’t a strong measure of rationality.
Your post is basically saying that if you believe that a negative Singularity is likely and that a positive Singularity has lots of expected utility, then if you work to achieve a positive Singularity you are rational (consistency) and therefore winning. And since nobody can disprove your claim that the Singularity is near, until the very end of the universe, you will be winning, winning, winning... without actually achieving anything ever.
I hope your next post is going to explain why I should care.
Did you even read the post? Luke doesn’t even mention the Singularity, much less claim that it is near, or that working on it is automatically rational and winning.
Huh? I could have used any other example to highlight that consistency of beliefs and actions cannot be a sufficient definition of rationality worth caring about. I just thought that since he is the president of the SIAI, it would be an appropriate example.
You didn’t phrase it as though it were an example, you phrased it as a summary. Your comment states that Luke’s point is about the Singularity, which was not mentioned in the post.
Phew, I certainly didn’t expect that. I thought it was completely obvious to everyone that the post does not talk about the Singularity and that therefore my comment couldn’t possibly be about the Singularity either.
Let’s analyze my comment:
1a) Your post is basically saying that if you believe that a negative Singularity is likely and that a positive Singularity has lots of expected utility,...
Since his original post did not talk about the Singularity, it is instantly obvious that the above sentence can be read as:
1b) Your post is basically saying that if you hold belief X and believe that acting on it is the right thing to do,...
2a) …then if you work to achieve a positive Singularity you are rational (consistency) and therefore winning.
The end of that sentence makes it clear that I was actually talking about the original post by referring to the consistency of acting according to your beliefs. It could be read as:
2b) …then if you act according to belief X you are rational (consistency) and therefore winning.
3a) And since nobody can disprove your claim that the Singularity is near, until the very end of the universe, you will be winning, winning, winning... without actually achieving anything ever.
That sentence shows how anyone could choose any belief about the future, frame it as an unprovable prediction, act accordingly, and yet fit the definition of rationality outlined in the original post. It could be read as:
3b) And since nobody can disprove belief X, you will be winning, winning, winning... without actually achieving anything ever.
The problem is that you have a history of bringing Singularity issues into posts that are not about the Singularity. (Or at least, have a history of making comments that look like that.) Two examples that spring readily to mind are using a post about Leverage Research to critique SIAI and bringing in post-Singularity scenarios when commenting on a post about current-day issues. With such a history, it’s not obvious that your comment couldn’t have been about the Singularity.
You have succeeded in mixing together an unfounded personal accusation with a difficult epistemic problem. The complexity of the problem makes it difficult to point out exactly what is inappropriate about the offense… but obviously it is there; readers see it and downvote accordingly.
The epistemic problem is basically this: feeling good is an important part of everyone’s utility function. If belief X makes one happy, shouldn’t it be rational (as in: increasing expected utility) to believe it, even if it’s false? Especially if the belief is unfalsifiable, so the happiness caused by the belief will never be countered by the sadness of falsification.
And then you pick Luke as an example, accusing him of doing exactly this (a kind of psychological wireheading). Since what Luke is doing is a group value here, you have added a generous dose of mindkilling to a question that is difficult enough without it. And even apart from that, it’s unnecessarily personally offensive.
The correct answer is along the lines that if Luke also has something else in his utility function, holding a false belief may prevent him from getting it. (He might wait for the Singularity to provide it, which would never happen; without the belief, he might have pursued that goal directly and achieved it.) If the expected utility of achieving those other goals is greater than the expected utility of feeling good by thinking false thoughts, then the false belief is a net loss, and it even prevents one from realizing and fixing this. But this explanation can be countered by more epistemic problems, etc.
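To make that comparison concrete, here is a toy calculation; every number in it (the utility of feeling good, the utility of the other goal, the chance of achieving that goal by pursuing it directly) is invented purely for illustration:

```python
# Toy expected-utility comparison; all numbers are invented for illustration.
u_feel_good = 10        # utility from the comfort the (false) belief provides
u_other_goal = 100      # utility of the other thing in the utility function
p_direct = 0.5          # assumed chance of getting it by pursuing it directly

eu_keep_false_belief = u_feel_good              # the goal never arrives; only the comfort remains
eu_drop_false_belief = p_direct * u_other_goal  # no comfort, but a real chance at the goal

print(eu_keep_false_belief, eu_drop_false_belief)  # 10 vs. 50.0: here the false belief is a net loss
```

With these made-up numbers the false belief is a net loss; flip the numbers and it would not be, which is why the paragraph above frames it as a comparison of expected utilities rather than a general rule.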
For now, let me just state openly that I would prefer to discuss difficult epistemic problems in a thread without this kind of contribution. Maybe even on a website without this kind of contribution.
I would say the karmic reaction disagrees.
You could have used “working for the second coming of Jesus” as just as good an example and just as personal a one.
Incidentally, I am 95% sure I know why he made this post, and it has to do with the Singularity; this will become clear in a few days.
This site does have an unhealthy preoccupation with UFAI, but that doesn’t mean that literally every post is a coded way of talking about it. Limit your mockery to the proper venues, please.
A more straightforward example:
If I believe that buying a lottery ticket for $1 has a 90% chance of winning $1 million, the rational thing to do [1] is to buy the ticket, even if the actual chance of winning [2] is 1 in a billion.
And if I was correct about the probability of winning but still lost, that would not change the fact that buying was the rational thing to do.
[1]: under ordinary assumptions about preferences
[2]: the probability I would have assigned if I had much better information
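As a sanity check on the arithmetic, here is a minimal sketch of the expected-value calculation behind this example, assuming (my simplification, not something footnote [1] spells out) risk-neutral preferences over money:

```python
# Expected value of buying the $1 ticket for a $1,000,000 prize,
# assuming (for simplicity) risk-neutral preferences over money.
price, prize = 1.0, 1_000_000.0

def expected_value_of_buying(p_win):
    return p_win * prize - price

believed = expected_value_of_buying(0.9)   # ~ +899,999: buying looks clearly right
actual = expected_value_of_buying(1e-9)    # ~ -0.999: buying is actually a (small) expected loss

print(believed, actual)
```

Under the believed probability the purchase looks obviously right; under the actual probability it is a small expected loss, which is exactly the gap the example points at.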
What if you have option C: think it through and figure out that the actual chance is 1 in a billion? This completely summarizes the issue. Suppose there are three populations of agents:
A: agents who grossly overestimate the chance of winning but somehow don’t buy the ticket (perhaps the reasoning behind the estimate, due to its sloppiness, is not given enough weight against the ‘too good to be true’ heuristic),
B: agents who grossly overestimate the chance of winning, and buy the ticket,
C: agents who correctly estimate the chance of winning and don’t buy the ticket.
C does the best, A does the second best, and B loses. B may also think itself a rationalist, but it is behaving irrationally by not accounting for its own cognitive constraints. Perhaps agents from A who read about cognitive biases and decide that they don’t have them become agents in B, while to become an agent in C you have to have some natural aptitude and also train yourself.
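For concreteness, here is a rough Monte Carlo sketch of the three populations, reusing the numbers from the lottery example above ($1 ticket, $1,000,000 prize, actual win chance of 1 in a billion); the population size and number of draws are arbitrary choices of mine. It only tracks money, so it captures B’s loss relative to A and C; the further claim that C beats A depends on correct estimation paying off in other decisions, which this toy model does not try to capture.

```python
import random

# Rough Monte Carlo sketch; population size and number of draws are arbitrary choices.
PRICE, PRIZE = 1.0, 1_000_000.0
P_ACTUAL = 1e-9            # actual chance of winning, from the example above
N_AGENTS, N_DRAWS = 10_000, 100

def average_payoff(buys_ticket: bool) -> float:
    """Average money change per agent after N_DRAWS lottery offers."""
    total = 0.0
    for _ in range(N_AGENTS):
        for _ in range(N_DRAWS):
            if buys_ticket:
                total -= PRICE
                if random.random() < P_ACTUAL:
                    total += PRIZE
    return total / N_AGENTS

# A: overestimates the chance but doesn't buy; B: overestimates and buys;
# C: estimates correctly and doesn't buy.
print("A:", average_payoff(False))   # 0.0
print("B:", average_payoff(True))    # about -100: loses the ticket price on essentially every draw
print("C:", average_payoff(False))   # 0.0
```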