More importantly, when the fiction diverges that much from the actual universe, it takes a LOT more work to show that any lessons drawn from it are valid or useful in the real one.
I believe the goal of these thought experiments is not to figure out whether you should, in practice, sit in the waiting room or not (honestly, nobody cares what some rando on the internet would do in some rando waiting room).
Instead, the goal is to provide unit tests for different proposed decision theories, as part of research on developing self-modifying superintelligent AI.
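The "unit test" framing can be made literal. Here's a toy sketch in Python using Newcomb's problem as the test case; all the names and payoffs are my own illustration (the standard $1,000 / $1,000,000 setup with a perfect predictor), not any real library:

```python
# Newcomb's problem as a "unit test" for decision theories.
# Assumes a perfect predictor: the opaque box contains $1M iff
# the predictor foresaw the agent one-boxing.

def newcomb_payoff(one_boxes):
    """Return the agent's payoff given its choice, under a perfect predictor."""
    opaque_box = 1_000_000 if one_boxes else 0   # filled iff agent one-boxes
    clear_box = 0 if one_boxes else 1_000        # taken only when two-boxing
    return opaque_box + clear_box

# Two toy decision theories, reduced to policies for this one scenario:
def cdt_agent():
    # Causal reasoning: the boxes are already filled, so take both.
    return False  # two-box

def edt_agent():
    # Evidential reasoning: choosing one box is evidence the opaque box is full.
    return True   # one-box

def run_test(agent):
    return newcomb_payoff(agent())

print(run_test(cdt_agent))  # 1000
print(run_test(edt_agent))  # 1000000
```

A real research harness would run many such scenarios (Newcomb, Parfit's hitchhiker, the smoking lesion, etc.) against each candidate theory and compare the payoff profiles; the thought experiments are the fixtures, not the point.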
99% for you (see https://wiki.lesswrong.com/wiki/Least_convenient_possible_world )