Point taken.
Yes, but that only poses a problem if a large number of agents make large contributions at the same time. If they make individually large contributions at different times or if they spread their contributions out over a period of time, they will see the utility per dollar change and be able to adjust accordingly. Presumably some sort of equilibrium will eventually emerge.
Anyway, this is probably pretty irrelevant to the real world, though I agree that the math is interesting.
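As a toy sketch of the sort of adjustment process I have in mind (everything below is made up for illustration: the diminishing utility-per-dollar curve, the agents’ cutoffs, and the repeated best-response updating; nothing here is rigorous):

```python
# Hypothetical setup: utility-per-dollar falls as total funding grows.
def utility_per_dollar(total):
    return 1.0 / (1.0 + total / 1000.0)

# Each agent wants to give only while utility-per-dollar is above their personal cutoff.
cutoffs = [0.8, 0.5, 0.3, 0.25]      # made-up agents
donations = [0.0] * len(cutoffs)

for _ in range(100):                 # repeated adjustment rounds
    for i, cutoff in enumerate(cutoffs):
        others = sum(donations) - donations[i]
        # Give just enough that utility-per-dollar drops to the cutoff,
        # or nothing if others have already pushed it below the cutoff.
        target_total = max(others, 1000.0 * (1.0 / cutoff - 1.0))
        donations[i] = target_total - others

print(donations, utility_per_dollar(sum(donations)))
```

With these particular numbers the process settles with the least demanding donor funding everything, and utility-per-dollar equal to their cutoff; other numbers give other outcomes, but some equilibrium does emerge.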
In Dirk Gently’s universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won’t.
I made a prediction with sha1sum 0000000000000000000000000000000000000000. It’s the prediction that sha1sum will be broken. I’ll only reveal the exact formulation once I know whether it was true or false.
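For anyone unfamiliar with the trick: the usual way to commit to a prediction is to publish a hash of its text now and reveal the text later. A minimal sketch (the prediction string below is just a placeholder, not my actual prediction):

```python
import hashlib

prediction = "Placeholder prediction text."
commitment = hashlib.sha1(prediction.encode("utf-8")).hexdigest()
print(commitment)  # publish the digest now; reveal `prediction` later for verification
```

The joke, of course, is that nobody knows a preimage of the all-zeroes digest, so a matching prediction text can only be produced if SHA-1’s preimage resistance fails.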
Out of curiosity, which time was Yudkowsky actually telling the truth? When he said those five assertions were lies, or when he said the previous sentence was a lie? I don’t want to make any guesses yet. This post broke my model; I need to get a new one before I come back.
Sorry, my mistake. I misread the OP.
I don’t think it’s quite the same. The underlying mathematics are the same, but this version side-steps the philosophical and game-theoretical issues with the other (namely, acausal behaviour).
Incidentally: if you take both boxes with probability p each time you enter the room, then your expected gain is $1,000·p + $1,000,000·(1 − p). For maximum gain, take p = 0; i.e. always take only box B.
EDIT: Assuming money is proportional to utility.
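Spelling out the arithmetic (trivial, but for completeness):

```python
# Expected gain as a function of p, the probability of taking both boxes,
# treating dollars as utility per the edit above.
def expected_gain(p):
    return 1000 * p + 1_000_000 * (1 - p)   # = 1,000,000 - 999,000 * p

for p in (0.0, 0.5, 1.0):
    print(p, expected_gain(p))               # decreasing in p, so p = 0 is best
```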
The first time you enter the room, the boxes are both empty, so you can’t ever get more than $1,000,000. But you’re otherwise correct.
Er… yes. But I don’t think it undermines my point that we are unlikely to be assimilated by aliens in the near future.
This is a very interesting read. I have, on occasion, been similarly aware of my own subsystems. I didn’t like it much; there was a strong impulse to reassert a single “self”, and I wouldn’t have been able to function normally in that state. Moreover, some parts of my psyche belonged to several subsystems at once, which seemingly made it impossible to avoid bias (at least for the side that wanted to avoid bias).
In case you’re interested, the split took the form of a debate between my atheist leanings, my Christian upbringing, and my rationalist “judge”. In decreasing order of how much they were controlled by emotion.
“we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.”
Let’s be Bayesian about this.
Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.
It is unlikely that there are no (or only a few) UFAIs in our galaxy or universe; but if they do exist, it is also unlikely that they would not already have assimilated us.
I don’t have enough information to give exact probabilities, but it’s a lot more likely than you seem to think that we will survive the next billion years without assimilation from an alien UFAI.
Personally, I think the most likely scenario is either that Earth is somehow special and intelligent life is rarer than we give it credit for; or that alien UFAIs are generally not interested in interstellar/intergalactic travel.
EDIT: More rigorously, let Uf be the event “Alien UFAIs are a threat to us”, and Ap be the event “We exist today” (anthropic principle). The prior probability P(Uf) is large, by your arguments, but P(Ap | Uf) is much smaller than P(Ap | ¬Uf). Since we observe Ap to be true, the posterior probability P(Uf | Ap) is fairly small.
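To make the shape of that update concrete, here is a worked example with entirely made-up numbers (the point is only the direction of the shift, not the specific values):

```python
# Hypothetical prior and likelihoods, purely illustrative.
p_uf = 0.9             # prior: alien UFAIs are a threat to us
p_ap_given_uf = 0.01   # probability we exist unassimilated, given the threat
p_ap_given_not = 0.9   # probability we exist unassimilated, given no threat

# Bayes' theorem: P(Uf | Ap) = P(Ap | Uf) * P(Uf) / P(Ap)
p_ap = p_ap_given_uf * p_uf + p_ap_given_not * (1 - p_uf)
p_uf_given_ap = p_ap_given_uf * p_uf / p_ap
print(round(p_uf_given_ap, 3))  # 0.091: a large prior shrinks to a small posterior
```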
And now I know what those characters might feel like.
A rational intelligence would have acted otherwise, but I’m not one, and it’s good to be reminded of that.
I lost.
I played it with the intention of winning, but I gave in to my emotions on the penultimate day. My wife was already dead, it was “two days until every cell on Earth is dead”, and if I went to work I might never have time for my daughter again. This wasn’t a rational decision, and in retrospect I regret it, but given another chance I wouldn’t change it. I’m pretty sure I would do that in real life too—or I would if I hadn’t already had a Flash game explain the consequences to me.
Apparently I am well-intentioned but too emotional. Potentially useful information.
There was a riot on the third day, but you just drive through. Presumably people after that are either too depressed to riot, too dead, or know better than to distract you while you’re busy saving the world.
Assert a confidence of (1 − 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.
For the record, I assign a probability larger than 1/googolplex to the possibility that one of the mental patients actually is God.
All the others were deleted already :)
True. Still, it’s an incentive not to make posts that will negatively impact your karma.
I don’t see the purpose of such discussion. All the posts which are not criticising Roko’s argument will be downvoted into oblivion. That’s not a discussion, it’s a monologue. The only aspect of this mess worth discussing any more is the censorship itself.
(I think it was uncalled for. We have downvoting for a reason.)
Agreed, I think there is a little bit of a personality-cult effect. I don’t think it’s the main contributor though, and you have to consider that EY has a better incentive than most of us to carefully proofread his comments before submitting them. Plus, he’s been on the site for longer.
And if you reject science, you conclude that scientists are out to get you. The boot fits; upvoted.