BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?
I am prepared to pay out anywhere between $20 and $100 to AI ethicists of the DAIR/“Stochastic Parrots” school of thought if they provide their object-level arguments against the idea that preventing AI from killing everyone is a real and important issue. The payout will depend on the claimant's notability within AI ethics, as well as the clarity and persuasiveness of their arguments.
Conditions for the bounty
The bounty must be claimed by an AI ethicist of the DAIR/“Stochastic Parrots” school of thought. Ethicists from other schools of thought (such as the “what if self-driving cars face trolley problems” school of thought) may be given bounties on a case-by-case basis, but probably not. Any member of DAIR or any coauthor of the “Stochastic Parrots” paper qualifies automatically; people outside these specific circles may qualify at my discretion, if I believe their intellectual output is similar to or connected with DAIR or the “Stochastic Parrots” coauthors.
The arguments provided by the claimant must be posted publicly, ideally in the comment section of this thread.
The arguments provided by the claimant must be object-level. This means that they must discuss concrete subjects specific to the issues at hand. This is in contrast to meta-level arguments, which focus on facts about the question (rather than about the issues it addresses), such as difficulties involved in future prediction, the cultural milieu of contemporary AI notkilleveryoneism, the framing of my questions, etc. Note that I have nothing against meta-level arguments; it’s just that I’ve already seen plenty of meta-level arguments by AI ethicists against AI notkilleveryoneism, and I want to see some object-level arguments.
The arguments provided by the claimant must be a good-faith summary of the claimant’s actual object-level arguments against AI notkilleveryoneism. For example, “AI notkilleveryoneism is unimportant because paperclips are shiny” will not count, even if made by a qualifying claimant, even though it is object-level. I do not expect that I will need to invoke this condition, but I may do so at my discretion.
The following AI ethicists will be presumptively considered valid claimants, and will fall into the most notable category (meaning that I will pay each of them the maximum $100 bounty assuming they follow all the terms of the bounty, unless I notice loophole abuse):
Emily Bender
Timnit Gebru
Margaret Mitchell
Melanie Mitchell
Emile Torres
Note that there is no requirement for the arguments to change my mind, or even to be persuasive in the slightest. The only requirements are the ones above. If someone manages to claim the bounty by abusing a loophole, I will pay them the minimum bounty of $20, and then modify the rules for all future claimants to close that loophole.
So far, Emile Torres has already responded to the bounty by pointing to their book as the place where their object-level arguments are laid out. I will evaluate this claim as soon as I can check the book out from a library near me.
Note that I may need to close this bounty if I receive too many claims, since my budget is limited. All the more reason to get your arguments in soon!