Tim, I’ve asked Eliezer about this now and then in various comments, and he has always maintained that he is being sincere and honest in his writings on the singularity. Recently he wrote a series of posts on the importance of honesty, such as Protected from Myself. The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes it is a real possibility, for good or evil. So I don’t see any particular reason to discount his recent estimate of at least a 70% chance that AI could become super-intelligent within 100 years. Likewise with Robin’s initial estimate of less than 1% for this event; I don’t see any reason why he wouldn’t be reporting that honestly.
For the disagreement results to hold, the participants don’t have to be perfect truth-tellers; they just need to honestly report their opinions on the issue in question. Now Robin hints above that he may be revising his estimate, and says he considers it possible that Eliezer’s position is also shifting, so maybe 1% vs. 70% is no longer the state of play. But if they were to offer revised estimates at some point in the future, and to keep doing so, the disagreement theorems would pretty well force them to agree within a few rounds, I think. If that didn’t happen then yes, maybe one of them would be lying; more likely, IMO, each would suspect the other of lying; and most likely of all, I still think, each would suspect the other of simply being unreasonable.
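To make the “few rounds” claim concrete: the underlying result (Geanakoplos and Polemarchakis’s elaboration of Aumann’s agreement theorem) says that when two Bayesians with a common prior take turns announcing their posteriors, each announcement leaks information about the announcer’s evidence, so the two estimates must converge after finitely many exchanges. Here is a minimal sketch of that protocol in Python; the state space, prior, partitions, and disputed event are a standard toy illustration, not anyone’s actual beliefs about AI:

```python
# Sketch of the iterated-announcement protocol behind the disagreement
# theorems: two honest Bayesians with a common prior alternately announce
# posteriors, and each announcement narrows the publicly known set of
# possible states until the estimates agree. Toy numbers, purely illustrative.

from fractions import Fraction

STATES = set(range(1, 10))                   # finite state space
PRIOR = {s: Fraction(1, 9) for s in STATES}  # common prior (uniform)
EVENT = {3, 4}                               # the disputed proposition

# Each agent's private evidence is a partition of the state space:
# an agent learns only which of their cells the true state falls in.
CELLS_A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
CELLS_B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def cell(cells, state):
    """The partition cell containing the given state."""
    return next(c for c in cells if state in c)

def posterior(info):
    """P(EVENT | info) under the common prior."""
    total = sum(PRIOR[s] for s in info)
    return sum(PRIOR[s] for s in info & EVENT) / total

def converse(true_state, max_rounds=20):
    public = set(STATES)  # states consistent with all announcements so far
    for rnd in range(1, max_rounds + 1):
        estimates = []
        for cells in (CELLS_A, CELLS_B):
            q = posterior(cell(cells, true_state) & public)
            estimates.append(q)
            # Everyone learns which states would have produced announcement q.
            public = {s for s in public
                      if posterior(cell(cells, s) & public) == q}
        print(f"round {rnd}: A says {estimates[0]}, B says {estimates[1]}")
        if estimates[0] == estimates[1]:
            return

converse(true_state=3)
# round 1: A says 1/3, B says 1/2
# round 2: A says 1/3, B says 1/3
```

On this example the two agents open at 1/3 vs. 1/2 and agree at 1/3 by the second round. Note that honesty is all the protocol requires: each announcement is informative whether or not the announcer’s reasoning looks sensible to the other side.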