Or, more workably, we should bet on how quickly an obviously Unfriendly AI will emerge and begin the destructive acts that will eventually destroy us all.
I think that even if you believe a UFAI will kill us, a scenario in which there is an obvious UFAI and, at the same time, a world in which a bet like this can still be played out is improbable.
The version that sort of works is “existence futures” (that’s not the standard name, unless I happened to reinvent it). Eli pays someone $1 now, and that person pays Eli $N in M years. N is a combination of the interest rate and the person’s belief that they and Eli will still exist in M years. If I think there’s a 50% chance that the world will end at the start of 2016, and Eli thinks the chance is 0%, both of us would see a deal where he gives me a dollar now and I give him $1.50 on Feb 1st as profitable.
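As a minimal sketch of that arithmetic (the interest rate, one-year horizon, and function names are illustrative assumptions; only the $1 and $1.50 figures come from the example above), each side’s expected value works out as follows:

```python
# Sketch: expected value of an "existence future" to each side.
# Buyer pays $1 now; seller repays $N in M years if the world still exists.
# The rate r, horizon M, and survival probabilities are illustrative
# assumptions, not figures taken from the thread.

def buyer_ev(N, p_survive, r=0.02, M=1):
    # Buyer's expected profit: the repayment, discounted and weighted by
    # the buyer's subjective survival probability, minus the $1 paid now.
    return p_survive * N / (1 + r) ** M - 1.0

def seller_ev(N, p_survive):
    # Seller's expected profit: $1 in hand now, minus the repayment owed
    # only in the worlds the seller expects to survive.
    return 1.0 - p_survive * N

N = 1.50
print(buyer_ev(N, p_survive=1.0))   # Eli, who puts doom at 0%: about +0.47
print(seller_ev(N, p_survive=0.5))  # the doomsayer at 50%: +0.25
```

Both expectations come out positive, which is why each party sees the same deal as profitable.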
The trouble with them is that if I really think the world is likely to end on Jan 1st, I’m probably making a host of similar decisions that make it unlikely I’ll actually be able to repay him on Feb 1st: if I’m selling everything and playing video games until the world ends, then when the world doesn’t end, where is the money going to come from to repay Eli?
The usual answer is escrow.

Right, but the only way this makes sense for Eli is if he’s paying me less money now than I’ll pay him in the future, and the only way it makes sense for me is if I get to spend the money before the world ends. If I have to put $1.50 into escrow in order to get access to $1, then I’m losing money, not gaining it.
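A minimal sketch of why escrow defeats the purpose for the doomsayer, reusing the $1 and $1.50 figures from the example above:

```python
# Sketch: the doomsayer's cash position once escrow is required.
stake_received = 1.00  # paid by Eli up front
escrow_posted = 1.50   # locked away until the bet resolves

# The whole point of the bet was spendable money before the expected end
# of the world; with escrow, liquidity is negative until resolution.
cash_before_doomsday = stake_received - escrow_posted
print(cash_before_doomsday)  # -0.5
```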
You might be able to get it to work with durable assets: if I want to use my car and house up until the world ends, I can ask Eli to pay me for them now, with him taking possession after the date I think the world will end. But it’s not clear this works out any better for the endtimer than taking out a standard 30-year loan that they don’t expect to have to repay.

I see. Yes, you point out a valid problem.
I think that even if you believe a UFAI will kill us, a scenario in which there is an obvious UFAI and, at the same time, a world in which a bet like this can still be played out is improbable.
Why? I don’t think a UFAI can kill us in mere hours or minutes, and at some point during the months or years it will probably take, someone, somewhere, will notice what’s going on. They probably wouldn’t be able to stop it, but there’s every chance it would become known.
And for my apparently abundant downvoters: the whole reason I’m asking to take bets is that some people repeatedly profess to believe precisely that every fresh machine-learning advancement is a dangerous step closer to a very sudden self-improving UFAI that destroys us all. So I’d like to find out their degrees of certainty regarding exactly how far machine learning can go before it crosses a classification boundary and becomes UFAI.
Anyway, I’ve got the first of several ANNABELL training sets being processed on a spare machine in the office.
Why? I don’t think a UFAI can kill us in mere hours or minutes, and at some point during the months or years it will probably take, someone, somewhere, will notice what’s going on.
By the standard of “someone somewhere noticing”, we have proof that aliens are on Earth. Nobody would take that as a way of resolving a bet.
A powerful UFAI could control cyberspace before it kills everyone, and you would need cyberspace to be functioning in order to resolve a bet.
It took a while after WWII before the Italian public recognized that the Mafia was back. A UFAI would have to make a mistake at waging infowar for its identity to become public knowledge, and if you believe that UFAIs would go FOOM, then they are unlikely to make mistakes.
And for my apparently abundant downvoters: the whole reason I’m asking to take bets is that some people repeatedly profess to believe precisely that every fresh machine-learning advancement is a dangerous step closer to a very sudden self-improving UFAI that destroys us all.
You get the downvotes because the average person who holds that belief has no reason to take the other side of the bet: winning money on doomsday is not a valuable outcome.