Without taking a poll of anything except my memory, Eliezer+Roko+VladNesov+Alicorn are against, DavidGerard+waitingforgodel+vaniver are for.
I’m for. I believe Tim Tyler is for.
Aumann agreement works in the case of hidden information: given a common prior, all you need is common knowledge of each other's posteriors.
Humans have this unfortunate feature of not being logically omniscient. In cases where people don’t see all the logical implications of an argument, we can treat those implications as hidden information. If this weren’t the case, the censorship would be totally unnecessary, since Roko’s argument didn’t actually include new information. We would all have turned to stone already.
Roko increased his estimate and Eliezer decreased his, and the amounts by which they did so are balanced according to the strength of their private signals.
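For concreteness, here is a toy sketch of the kind of update being claimed (the prior, the signal model, and all the numbers are hypothetical, not anything from the actual exchange): with a common prior and conditionally independent private signals, announcing posteriors and pooling them in log-odds space moves both parties to the same combined estimate, and each party's shift is sized by the strength of the other's private evidence.

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(l):
    return 1 / (1 + math.exp(-l))

# Hypothetical setup: common prior P(H) = 0.5, and each party privately
# observes one coin flip whose bias depends on whether H is true.
prior = 0.5
p_heads_if_h, p_heads_if_not_h = 0.8, 0.4

def posterior_after_flip(heads):
    like_h = p_heads_if_h if heads else 1 - p_heads_if_h
    like_not_h = p_heads_if_not_h if heads else 1 - p_heads_if_not_h
    odds = (prior / (1 - prior)) * (like_h / like_not_h)
    return odds / (1 + odds)

post_a = posterior_after_flip(True)    # one party saw heads  -> ~0.67
post_b = posterior_after_flip(False)   # the other saw tails  -> 0.25

# Once the announced posteriors are common knowledge, each party can back
# out the other's likelihood ratio and fold it into their own estimate.
combined = from_log_odds(log_odds(post_a) + log_odds(post_b) - log_odds(prior))
print(post_a, post_b, combined)        # both land on the same value, 0.4
```

The full theorem doesn't need the independence assumption (iterated announcements of posteriors handle the general case), but the toy version is enough to show why both moves should be balanced rather than one party simply deferring to the other.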
There is no way for you to have accurately assessed this. Roko and Eliezer aren’t idealized Bayesian agents, it is extremely unlikely they performed a perfect Aumann agreement. If one is more persuasive than the other for reasons other than the evidence they share, then their combined support for the proposition may not be worth the same as that of two people who independently came to support it. Besides which, according to you, what information did they share exactly?
I had a private email conversation with Eliezer that did involve a process of logical discourse, and another with Carl.
Also, when I posted the material, I hadn’t thought it through. Once I had thought it through, I realized that I had accidentally said more than I should have done.
David_Gerard, Jack, timtyler, waitingforgodel, and Vaniver do not currently outweigh Eliezer_Yudkowsky, FormallyknownasRoko, Vladimir_Nesov, and Alicorn in my mind.
It does not need to be a perfect Aumann agreement; a merely good one will still reduce the chances of overcounting or undercounting either side’s evidence well below the acceptable limits.
There is no way for you to have accurately assessed this. Roko and Eliezer aren’t idealized Bayesian agents, it is extremely unlikely they performed a perfect Aumann agreement.
They are approximations of Bayesian agents, and it is extremely likely they performed an approximate Aumann agreement.
To settle this particular question, however, I will pay money. I promise to donate 50 dollars to the Singularity Institute for Artificial Intelligence, independent of other plans to donate, if Eliezer confirms that he did revise his estimate down; or if he confirms that he did not revise his estimate down. Payable within two weeks of Eliezer’s comment.
I’m curious: if he confirms instead that the change in his estimate, if there was one, was small enough relative to his estimate that he can’t reliably detect it or detect its absence, although he infers that he updated using more or less the same reasoning you use above, will you donate or not?
I will donate. I would donate even if he said that he revised his estimate upwards.
I would then seriously reconsider my evaluation of him, but as it stands the offer is for him to weigh in at all, not weigh in on my side.
edit: I misparsed your comment. That particular answer would dance very close to ‘no comment’, but unless it seemed constructed that way on purpose, I would still donate.
Yeah, that’s fair. One of the things I was curious about was, in fact, whether you would take that answer as a hedge, but “it depends” is a perfectly legitimate answer to that question.