Why do I believe that its accuracy for other people (probably mostly psych students) applies to my actions?
Because historically, in this fictional world we’re imagining, when psychologists have said that a device’s accuracy was X%, it turned out to be within 1% of X%, 99% of the time.
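That track record is a calibration claim: among devices whose stated accuracy was X%, 99% turned out to have a true accuracy within one percentage point of X%. A minimal simulation sketch of what such a track record would look like (the numbers here are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)

# Hypothetical track record: (stated accuracy, true accuracy) pairs.
# 99% of devices are well calibrated (true accuracy within 1 point of
# stated); the remaining 1% can be off by as much as 10 points.
records = []
for _ in range(10_000):
    stated = 0.90
    if random.random() < 0.99:
        true = stated + random.uniform(-0.01, 0.01)  # well calibrated
    else:
        true = stated + random.uniform(-0.10, 0.10)  # miscalibrated device
    records.append((stated, true))

# Fraction of devices whose true accuracy fell within 1 point of the
# stated value -- this is the "99% of the time" figure in the thread.
within = sum(abs(t - s) <= 0.01 for s, t in records) / len(records)
print(f"{within:.3f}")  # close to 0.99
```

The point of the sketch is only that "accuracy was within 1% of X%, 99% of the time" is a statement about the distribution of past devices, not about any particular subject.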
99% of the time for me, or for other people? I may not be correct in all cases, but I have evidence that I _am_ an outlier on at least some dimensions of behavior and thought. There are numerous topics where I’ll make a different choice than 99% of people.
More importantly, when the fiction diverges by that much from the actual universe, it takes a LOT more work to show that any lessons are valid or useful in the real universe.
I believe the goal of these thought experiments is not to figure out whether you should, in practice, sit in the waiting room or not (honestly, nobody cares what some rando on the internet would do in some rando waiting room).
Instead, the goal is to provide unit tests for different proposed decision theories, as part of research on developing self-modifying superintelligent AI.
> 99% of the time for me, or for other people?
99% for you (see https://wiki.lesswrong.com/wiki/Least_convenient_possible_world )