Care to elaborate on the everyday thing? Aside from literal coins, your cell phone is perfectly capable of generating pseudorandom numbers, and I’m almost never without mine.
I guess whether your point stands depends on whether we are more concerned with abstract theory or practical decision making.
Here are some circumstances where you don’t have access to an unpredictable random number generator:
--You need to make a decision very quickly and so don’t have time to flip a coin.
--Someone is watching you and will behave differently towards you if they see you make the decision via randomness, so consulting a coin isn’t a random choice between options but rather an additional option with its own set of payoffs.
--Someone is logically entangled with you, and if you randomize they will no longer be.
--You happen to be up against someone who is way smarter than you and can predict your coin / RNG / etc. (see the sketch after this list).
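To illustrate that last point: a pseudorandom generator is fully deterministic given its seed, so an adversary who knows or can infer your seed can reproduce every "random" choice you make. A minimal Python sketch (the seed value and the choice labels are just illustrative):

```python
import random

# Two generators seeded identically produce identical streams:
# pseudorandomness is deterministic, hence predictable in principle.
you = random.Random(42)        # your phone's RNG, seed known or inferred
predictor = random.Random(42)  # the smarter adversary's copy of it

your_choices = [you.choice(["cooperate", "defect"]) for _ in range(5)]
predicted = [predictor.choice(["cooperate", "defect"]) for _ in range(5)]

assert your_choices == predicted  # the adversary calls every move
print(your_choices)
```

Of course, inferring someone's seed in practice is hard; the point is only that "pseudorandom" is not "unpredictable" against a sufficiently informed opponent.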
Admittedly, while in some sense these things happen literally every day to all of us, they typically don’t happen for important decisions.
But there are important decisions having to do with acausal trade that fit into this category, which either we or our AI successors will face one day.
And even if that weren’t true, decision theory is decision THEORY. If one theory outperforms another in some class of cases, that’s a point in its favor, even if the class of cases is unusual.
EDIT: See Paul Christiano’s example below; it’s excellent because it condenses Caspar’s paper into a very down-to-earth, probably-has-actually-happened-to-someone-already scenario.