Nesov, fixed.
John, that would be an interesting middle ground of sorts—the trouble being that from a social perspective, you probably do want as much wealth generated as possible, if it’s any sort of wealth that can be reinvested.
Reeves, if both players play (C, C) and then divide up the points evenly at the end, isn’t that sort of… well… communism?
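(For concreteness: a minimal sketch of what the pooling-and-splitting scheme does to the payoffs, assuming the standard illustrative values of 3/3 for mutual cooperation, 5/0 and 0/5 for unilateral defection, and 1/1 for mutual defection; these numbers are assumed for illustration and are not taken from the thread.)

```python
# Minimal sketch of the scheme being discussed: both players' points go
# into a common pot that is split evenly at the end. The payoff numbers
# below are illustrative assumptions (the usual 3/3, 5/0, 0/5, 1/1
# textbook values), not figures from this thread.

PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def even_split(outcome):
    """Each player's score when the combined points are divided equally."""
    a, b = PAYOFFS[outcome]
    return (a + b) / 2

for outcome, payoff in PAYOFFS.items():
    print(outcome, payoff, "-> each gets", even_split(outcome))
# (C, C) gives 3.0 each, (C, D)/(D, C) give 2.5 each, (D, D) gives 1.0 each,
# so once the pot is split evenly, unilateral defection no longer pays.
```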
Eliezer, I’m surprised at you! While your personal political inclinations may be libertarian/capitalist, you don’t get to end the discussion by saying that you are a Blue, this is the idea of the hated Greens, and so it must be wrong.
As you’ve said, “Politics is the mind-killer,” and that wasn’t an honest attempt at engaging with the idea; it was a cached thought that let you reject it without even turning on your brain in the first place.
Is this wrong for any reason other than cached thoughts, though? (Probably yes, but you didn’t explain it.)
Because it’s rejecting the premise of a perfectly good experiment, not because it would be a bad idea in real life. Also, there are difficulties with having canonical utility measures across agents, but that’s a separate point.