Do you realize you failed to specify any of that? I feel I’m being slightly generous by interpreting “and the world doesn’t end” to mean a causal relationship, e.g. the existence of the first AGI has to inspire someone else to create a more dangerous version if the AI doesn’t do so itself. (Though I can’t pay if the world ends for some other reason, and I might die beforehand.) Of course, you might persuade whatever judge we agree on to rule in your favor before I would consider the question settled.
(In case it’s not clear, the comment I just linked comes from 2010 or thereabouts. This is not a worry I made up on the spot.)
Given that the bet is 100 to 1 in my favor, I would be happy to let you judge the result yourself.
Or you could accept whatever result Eliezer agrees with. In the bet with Eliezer, however, the conditions are specified, and “the world doesn’t end” just means that we’re still alive after the artificial intelligence has been running for a week.