I agree about the issue of unresolved arguments. Was agreement reached, and is that why the debate stopped? There's no way to tell.
In particular, the epic AI-foom debate between Robin and Eliezer on OB, over whether AI or brain simulations were more likely to dominate the next century, was never clearly resolved with updated probability estimates from the two participants. In fact, probability estimates were rare in general. Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
BTW, sorry to see that linkrot continues to be a problem in the future.
I took the liberty of creating a wiki page about the AI-foom debate, with links to all of the posts collected in one place, in case anyone wants to refer to it in the future.
Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
I find myself reluctant to support this idea. I think the main reason is that it seems very hard to translate my degrees of belief into probability numbers. So I’m afraid that I’ll update my beliefs correctly in response to other people’s arguments, but state the wrong numbers. Is this a skill that we can learn to perform better?
Right now I just try to indicate my degrees of belief using English words, like "I'm sure", "I think it's likely", "perhaps", etc. This has the disadvantage of not being very precise, but the advantage of requiring little mental effort (which I can redirect into, for example, thinking about whether an argument is correct).
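To make the trade-off concrete, here is a rough sketch of what the translation might look like. The numeric anchors are entirely my own invention, not any established standard:

```python
# A provisional mapping (invented anchors, not a standard) from the verbal
# hedges above to numeric ranges -- one cheap way to start practising the
# translation, and to check it against outcomes later.

VERBAL_TO_PROB = {
    "I'm sure":            (0.95, 0.99),
    "I think it's likely": (0.60, 0.90),
    "perhaps":             (0.30, 0.60),
    "I doubt it":          (0.05, 0.30),
}

for phrase, (lo, hi) in VERBAL_TO_PROB.items():
    print(f"{phrase!r}: roughly {lo:.2f} to {hi:.2f}")
```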
ETA: It does seem that there are situations where the extra mental effort required to state probability estimates would be useful, like in the AI-Foom debate, where there is persistent disagreement after an extensive discussion. The disputants can perhaps use probability estimates to track down which individual beliefs (e.g., conditional probabilities) are causing their overall disagreement.
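To illustrate how that tracking might work, here is a minimal sketch, with the factored belief and all numbers hypothetical rather than anyone's actual positions, of factoring an overall probability through the law of total probability so the locus of disagreement becomes visible:

```python
# Law of total probability: P(foom) = P(foom|I)*P(I) + P(foom|~I)*(1 - P(I)),
# where "I" is some shared ingredient belief, e.g. "local recursive
# self-improvement is feasible" (a hypothetical example).

def p_overall(p_i, p_foom_given_i, p_foom_given_not_i):
    return p_foom_given_i * p_i + p_foom_given_not_i * (1 - p_i)

# Two hypothetical disputants who agree on the conditionals:
alice = dict(p_i=0.7, p_foom_given_i=0.9, p_foom_given_not_i=0.1)
bob   = dict(p_i=0.2, p_foom_given_i=0.9, p_foom_given_not_i=0.1)

print(p_overall(**alice))  # ~0.66
print(p_overall(**bob))    # ~0.26 -- the whole disagreement sits in P(I)
```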
Perhaps a step forward would be for disputants to publicize their probability estimates and update them as the conversation proceeds.
Would that be desirable? I know, for example, that when reading Robin's posts on that topic I often updated away from Robin's position (weak arguments from a strong debater are evidence that there are no stronger arguments). Given this possibility, having public numbers diverging in such a way would be rather dramatic, and would decidedly favour dishonesty.
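For what it's worth, the update I mean can be made explicit with Bayes' theorem; these numbers are invented purely for illustration:

```python
# Invented numbers: observing only weak arguments from a strong debater is
# evidence against the position, because a correct position would more often
# have yielded strong arguments.

prior = 0.5                # P(position is right), before reading
p_weak_if_right = 0.3      # a strong debater usually finds strong args if right
p_weak_if_wrong = 0.8      # weak arguments are the norm if the position is wrong

posterior = (p_weak_if_right * prior) / (
    p_weak_if_right * prior + p_weak_if_wrong * (1 - prior))
print(round(posterior, 3))  # 0.273 -- updated away from the position
```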
In general there are just far too many signalling reasons to avoid making 'probability estimates' public. Very few discussions, even here, are sufficiently rational for those numbers to be beneficial.
When your estimates are tracked (which was the purpose of predictionbook.com [disclaimer: financial interest]), it becomes much harder to signal with them without blowing your publicly visible calibration.
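A minimal sketch of why tracking bites. The Brier score is one standard way to score probability forecasts; I'm not claiming it's what predictionbook.com itself computes, and the numbers here are made up:

```python
# Brier score: mean squared error between stated probabilities and 0/1
# outcomes. Lower is better; systematic overconfidence raises it publicly.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]  # 7 of 10 predicted events occurred

honest        = [0.70] * 10  # states the true base rate
overconfident = [0.95] * 10  # signals certainty it doesn't have

print(brier(honest, outcomes))         # 0.21
print(brier(overconfident, outcomes))  # ~0.27 -- worse, and on the record
```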
It does. Of course, given that I was primed with the ‘AI-foom’ debate I found the thought of worrying what people will think of your calibration a little amusing. :)