If we agreed on that date, what would happen in the event that there was no AI by that time and both of us are still alive? (These conditions are surely very unlikely but there has to be some determinate answer anyway.)
You could either:

1. donate the money to charity, under the view 'and you're both wrong, so there!';
2. say that the prediction is implicitly a big AND ('there will be an AI by 2100 AND said first AI will not have… etc.'), and that the conditions allow 'short-circuiting' as soon as any AI is created; with this change, reaching 2100 is a loss on your part; or
3. like #2, but with the loss on Eliezer's part (the bet changes to 'I think there won't be an AI by 2100, but if there is, it won't be Friendly, etc.').
I prefer #2, since I dislike implicit premises, and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows the Long Bets formula more closely.
Eliezer and I are probably about equally confident that "there will not be an AI by 2100, and both Eliezer and Unknown will still be alive" is incorrect, so it doesn't seem fair to select either #2 or #3. Option #1 seems better.
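To make the short-circuiting concrete: the three options differ only in the no-AI-by-2100 branch. Here is a minimal Python sketch of how each option would resolve; the function and argument names are illustrative, not the actual wording of any bet:

```python
def settle(option: int, ai_by_2100: bool, first_ai_unfriendly: bool) -> str:
    """Illustrative resolution of the bet under options #1-#3.

    Assumes both parties are still alive at resolution time; the
    predicate names are hypothetical, not quoted from the bet terms.
    """
    if ai_by_2100:
        # All three options agree once any AI exists: the conditions
        # 'short-circuit' and the original prediction decides the bet.
        return "predictor wins" if first_ai_unfriendly else "Eliezer wins"
    # No AI by 2100 -- the only case where the three options differ.
    if option == 1:
        return "stakes go to charity"  # 'and you're both wrong, so there!'
    if option == 2:
        return "Eliezer wins"          # the implicit AND comes out false
    if option == 3:
        return "predictor wins"        # Eliezer's side now asserts AI by 2100
    raise ValueError("option must be 1, 2, or 3")
```

For example, `settle(2, ai_by_2100=False, first_ai_unfriendly=False)` returns `"Eliezer wins"`, while the same state under option 3 returns `"predictor wins"`; only option 1 sends the stakes to charity.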