How many times do you think he has changed his expected time to disaster to 25% of what it was?
It matches his pattern of behavior: he freaks out about AI every time there is an advance. I’m essentially accusing him of being susceptible to confirmation bias, perhaps the most common human failing, even in people trying to be rational.
He claims to think AI is bound to destroy us, and literally wrote about how everyone should just give up. (Which I originally took for an April Fool’s joke, but it turned out not to be.) Someone in that position can’t be expected to scrutinize the evidence carefully enough to give it only the weight it deserves, or even necessarily the right sign. Run the same test in reverse on a hardened skeptic who saw no point in even caring for the next fifty years: he wouldn’t need to have quadrupled his timeline before for you to be unimpressed when he did it again the next time AI failed to live up to the claims made about it.
He doesn’t want to give up, but he doesn’t expect to succeed either. The remaining option is “Death with Dignity”: fighting for survival in the face of approaching doom.
My point was that (0.25)^n for large n is very small, so no, it would not be easy.
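To spell out the arithmetic (the 40-year starting figure below is purely illustrative, not a number from this thread):

$$40\ \text{years} \times (0.25)^4 \approx 0.16\ \text{years} \approx 8\ \text{weeks}.$$

Four such updates in a row would leave almost nothing of the original estimate, which is why “many” such updates is implausible on its face.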
You’re assuming that the updates are mathematical and unbiased, which is the opposite of how people actually work. If your updates are highly biased, it is very easy to make large updates in that direction whenever new evidence shows up. As you get more sure of yourself, these updates get larger and larger, when they should be getting smaller.
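For contrast, the constraint an unbiased updater actually satisfies (a standard Bayesian fact, stated here with $T_t$ as shorthand for the current posterior-mean time to disaster): the sequence of estimates forms a martingale,

$$\mathbb{E}\left[\,T_{t+1} \mid T_t\,\right] = T_t,$$

so the direction of the next update should be unpredictable in advance. If you can already predict that the next AI advance will produce another large cut to the timeline, that predictability is itself the bias.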
I’m hardly missing the point. There is nothing impressive about the cut being exactly 75%, no more and no less, so the fact that it can’t always be exactly that is irrelevant. His point isn’t that the particular number matters; it’s that the estimate eventually becomes very small. But an estimate that is already far smaller than it should be can still be cut by the same ratio again, so his point is meaningless. Reliably fulfilling an obvious bias toward updating in one direction is not impressive.
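A toy illustration of that last step (numbers chosen only for illustration): suppose the calibrated estimate is 40 years while the biased one is already down to 4. Nothing stops the biased estimate from being cut again and again,

$$4 \to 1 \to 0.25\ \text{years},$$

each step a perfectly valid ×0.25 update even though the starting point was already an order of magnitude too low. The shrinking demonstrates the direction of the bias, not the quality of the inference.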