A decision tree (the entirety of my game theory experience has been a few online videos, so I likely have the terminology wrong), with decision 1 at the top and the end outcomes at the bottom. The sections marked ‘max’ have the decider trying to pick the highest-value end outcome, and the sections marked ‘min’ have the decider trying to pick the lowest-value end outcome. The numbers on every line except the bottom are propagated up from the line below, depending on which option whoever is currently doing the picking will pick, so if Max and Min maximize and minimize properly the tree’s value is 6. I don’t quite remember how the three branches being pruned off work.
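That propagation is just the minimax rule, and the pruning sounds like alpha-beta pruning: once one player already has a value guaranteed elsewhere, any branch that can only come out worse for them can be abandoned early. Here is a minimal Python sketch on a made-up tree, not the one in the image, where the value also happens to work out to 6 and two leaves get pruned:

```python
# Minimal minimax with alpha-beta pruning. The tree below is invented for
# illustration, not copied from the image: interior nodes are lists of
# children, leaves are end-outcome values, and levels alternate between
# the maximizing and minimizing player.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):      # leaf: just return its value
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        value = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:               # the opponent already has a better
            break                       # option elsewhere, so the remaining
                                        # siblings are pruned unexamined
    return best

# Hypothetical tree: Max picks at the top, Min picks one level down.
tree = [[6, 8], [3, 9], [5, 7]]
print(alphabeta(tree, maximizing=True))  # -> 6 (the 9 and the 7 are pruned)
```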
I’m pretty sure we do see everyone doing it. Randomly selecting a few posts, in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments and the Rationally Irrational comments also have more upvoted than not.
It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.
EDIT: Just in case this comes off as disparaging LW’s upvote generosity or average comment quality, it’s not.
“He also notes that the experts who’d made failed predictions and employed strong defenses tended to update their confidence, while the experts who’d made failed predictions but didn’t employ strong defenses did update.”
I assume there’s a ‘not’ missing in one of those.
Given humanity’s complete lack of experience with absolute power, it seems like you can’t even take that cliche as weak evidence. Having glided through the article and comments again, I also don’t see where Eliezer said “rejection of power is less corrupt”. The bit about Eliezer sighing and saying the null-actor did the right thing?
(No, I wasn’t the one who downvoted)
And would newer readers know what “EY” meant?
Given it’s right after an anecdote about someone whose name starts with “E”, I think they could make an educated guess.
That’s one hell of a grant proposal/foundation.
Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn’t a very hive-mindey community, unless you count atheism.
(The singularity, yes: you’re very much in the minority, with even the most skeptical quartile expecting it in 2150)
In other words, why didn’t the story mention its (wealthy, permissive, libertarian) society having other arrangements in such a contentious matter—including, with statistical near-certainty, one of the half-dozen characters on the bridge of the Impossible Possible World?
It was such a contentious issue centuries (if I’m reading properly) ago, when ancients were still numerous enough to hold a lot of political power and the culture was different enough that Akon can’t even wrap his head around the question. That’s plenty of time for cultural drift to pull everyone together, especially if libertarianism remains widespread as the world gets more and more upbeat, and especially if anti-rapers are enough a part of the mainstream culture to “statistically-near-certainly” have a seat on the bridge of the Impossible Possible World.
It’s not framed as an irreconcilable ideological difference (to the extent those exist at all in the setting). The ancients were against it because they remembered it being something basically objectively horrible, and that became more and more outdated as the world became nicer.
On a similar note, what should be 13.9’s solution links to 13.8’s solution.
I’m also finding this really interesting and approachable. Thanks very much.
I recall another article about optimization processes or probability pumps being used to rig elections; I would imagine it’s a lighthearted reference to that, but I can’t turn it up by searching. I’m not even sure if it came before this comment.
(Richard_Hollerith2 hasn’t commented for over 2.5 years, so you’re not likely to get a response from him)
“Take for example an agent that is facing the Prisoner’s dilemma. Such an agent might originally tend to cooperate and only after learning about game theory decide to defect and gain a greater payoff. Was it rational for the agent to learn about game theory, in the sense that it helped the agent to achieve its goal or in the sense that it deleted one of its goals in exchange for an allegedly more “valuable” goal?”
The agent’s goals aren’t changing due to increased rationality, but just because the agent confused him/herself. Even if this is a payment-in-utilons and no-secondary-consequences Dilemma, it can still be rational to cooperate if you expect the other agent will be spending the utilons in much the same way. If this is a more down-to-earth Prisoner’s Dilemma, shooting for cooperate/cooperate to avoid dicking over the other agent is a perfectly rational solution that no amount of game theory can dissuade you from. Knowledge of game theory here can only change your mind if it shows you a better way to get what you already want, or if you confuse yourself reading it and think defecting is the ‘rational’ thing to do without entirely understanding why.
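To make that concrete, here is a toy sketch with made-up payoff numbers (not taken from any particular source) of why cooperating can stay rational when you expect the other agent’s utilons to be spent much the way yours would be:

```python
# payoffs[(my_move, their_move)] = (my_utilons, their_utilons)
# Standard-looking Prisoner's Dilemma numbers, invented for illustration.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def value_to_me(my_move, their_move, weight_on_them):
    """My payoff, plus the other agent's payoff weighted by how much I expect
    their utilons to buy the same things mine would (0 = not at all, 1 = fully)."""
    mine, theirs = payoffs[(my_move, their_move)]
    return mine + weight_on_them * theirs

# An agent who puts no weight on the other's utilons prefers to defect...
assert value_to_me("D", "C", 0) > value_to_me("C", "C", 0)
# ...but one who expects the other's utilons to be spent much as its own would be
# finds mutual cooperation beats both mutual defection and unilateral defection.
assert value_to_me("C", "C", 1) > value_to_me("D", "D", 1)
assert value_to_me("C", "C", 1) > value_to_me("D", "C", 1)
```

Game theory can tell you which move maximizes the first term; it has nothing to say about how much weight belongs on the second.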
You describe a lot of goals as terminal that I would describe as instrumental, even in their limited context. While it’s true that our ideals will be susceptible to culture up until (if ever) we can trace and order every evolutionary desire in an objective way, not many mathematicians would say “I want to determine if a sufficiently-large randomized Conway board would converge to an all-off state so I will have determined if a sufficiently-large randomized Conway board would converge to an all-off state”. Perhaps they find it an interesting puzzle or want status from publishing it, but there’s certainly a higher reason than ‘because they feel it’s the right thing to do’. No fundamental change in priorities need occur between feeding one’s tribe and solving abstract mathematical problems.
I won’t extrapolate my arguments farther than this, since I really don’t have the philosophical background it needs.
Nitpick: LW doesn’t actually have a large proportion of cryonicists, so you’re not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) ‘considering it’, but comparing that to the current proportion makes me skeptical they’ll sign up.