There are really two cases here. In the first case, you predict prior to going on either ride that your preferences are stable, but you’re wrong—having been coerced to ride BTMRR, you discover that you prefer it. I don’t believe this case poses any problems for the normative theory that will follow—preference orderings can change with new information as long as those changes aren’t known in advance.
In the second case, you know in advance that whichever choice you make, you will afterward be glad you made that choice and not the other. Can humans be in this state? Maybe. I'm not sure what to think about this.