Instead of measuring “bad events per unit of time measured from the other person’s point of view”, wouldn’t “bad events per unit of subjective time” be a much better metric which doesn’t fall prey to this paradox?
And why are you bothering to distinguish between “there is no true preferred rest frame” and “there is a true rest frame which is perfectly indistinguishable from all the other moving ones”? They both make exactly the same predictions, so why not just fold them into one hypothesis? What does that little epiphenomenal tag hanging off one of them get you? Just because relativity is derivable from some fairly basic starting conditions doesn’t seem to imply that there is an indistinguishable true rest frame, though I may be missing something obvious.
Instead of measuring “bad events per unit of time measured from the other person’s point of view”, wouldn’t “bad events per unit of subjective time” be a much better metric
Indeed it would. Otherwise, any simulated human universe could be significantly ethically improved by adding simple code that makes it run slowly (from our point of view; inside the simulation nothing would change) whenever the humans inside the simulation are happy.
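The point can be made concrete with a toy sketch (everything here is hypothetical, just an illustration of the idea): the host throttles its own wall-clock speed when the simulated agent is “happy”, yet the agent’s state, including its own subjective clock, is bit-for-bit identical either way.

```python
import time

def run_sim(ticks, slowdown_when_happy):
    """Toy simulation. Each subjective tick updates state identically;
    the host optionally idles when the simulated agent is 'happy',
    stretching wall-clock time per tick without changing anything
    the agent could ever observe from inside."""
    state = {"happiness": 0, "subjective_time": 0}
    wall_start = time.monotonic()
    for t in range(ticks):
        state["happiness"] = 1 if t % 2 == 0 else 0  # toy internal dynamics
        state["subjective_time"] += 1                # the agent's own clock
        if slowdown_when_happy and state["happiness"]:
            time.sleep(0.001)  # host-side delay, invisible from inside
    return state, time.monotonic() - wall_start

fast_state, fast_wall = run_sim(100, slowdown_when_happy=False)
slow_state, slow_wall = run_sim(100, slowdown_when_happy=True)
assert fast_state == slow_state  # indistinguishable from inside
assert slow_wall > fast_wall     # very distinguishable from outside
```

If “bad events per unit of outside time” were the right metric, flipping `slowdown_when_happy` to `True` would count as a moral improvement, even though nothing about the simulated experience has changed; measuring per unit of `subjective_time` avoids that.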