Could you point me to the heuristics that say that violence is always a bad strategy? I have a strong gut feeling that they’re right, but I’d really like to see them in a formalized or semi-formalized fashion :-)
Ok, I’ll amend my previous statement to be more specific. In a prisoner’s dilemma where cooperating means both entities get warm fuzzies (and in “warm fuzzies” I include all my preferences, so if cooperating would result in 100 people dying and me getting $100, I’d count that as a net loss), where defecting while the other cooperates gets me more warm fuzzies but not over a certain limit (as a rule of thumb, less than double what I’d get for cooperating, though of course this is decided on a case-by-case basis), and where both of us defecting gets us fewer warm fuzzies, then I’d cooperate.
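A minimal sketch of that rule of thumb, in Python; the payoff numbers and the function name are made up purely for illustration:

```python
# Toy model of the cooperation rule described above.
# All payoffs are hypothetical "warm fuzzy" units (net of all my
# preferences), not values from any actual game.

def should_cooperate(coop_payoff: float, temptation_payoff: float,
                     mutual_defect_payoff: float) -> bool:
    """Cooperate when the temptation to defect is bounded (here:
    less than double the cooperation payoff, per the rule of thumb
    above) and mutual defection is worse than mutual cooperation."""
    temptation_is_bounded = temptation_payoff < 2 * coop_payoff
    defection_is_worse = mutual_defect_payoff < coop_payoff
    return temptation_is_bounded and defection_is_worse

# Example: cooperating yields 10 units, defecting against a
# cooperator yields 15, mutual defection yields 3.
print(should_cooperate(10, 15, 3))  # True -> cooperate
```

Of course, the real decision is case by case, as I said; this just pins down the “less than double” cutoff.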
That’s the lesson I got out of the post too: that cooperating in a prisoner’s dilemma is a good thing.
I agree strongly with #2, #3, and #4.
Particularly #2, since the absence of category divisions makes all discussion harder to browse... at least for me.
It isn’t, actually.
Although it was fun to watch the panic over the swine flu become increasingly silly.
I agree with most of what was said here, except that, well… I don’t think it has the potential to actually cause humans to go extinct, or even to simply collapse civilization :-/ Even if a pandemic killed off 75% of all humans, I have an unprovable feeling that civilization would be able to soldier on. This is substantiated by a couple of observations: nearly all human knowledge has multiple backups (pandemics don’t kill libraries), so we wouldn’t have to reinvent science from scratch. Plus, the remaining population would have access to all the material goods of the dead (including canned goods, long-lasting food, etc., which wouldn’t be nearly enough to sustain the human population for more than a month, but which would buy time for people to pick up a book on farming or some such).
On the other hand, it is virtually guaranteed that a pandemic WILL happen (I define a pandemic as something that shows up on the news a lot and causes some panic; kill ratios vary on a case-by-case basis), given our interconnectedness, which is frankly unprecedented in human history (i.e. microbes, viruses, and germs never had airplanes before 1903).
This seems interesting; however, it all hinges on part three, which I eagerly await.
Still, the option you seem to favour wouldn’t guarantee a million-dollar win, which is what happens if I one-box.
In an iterated version of the problem, this should work, but still… Now I’m really curious about what you’re going to write next
Thank you so much; you may not believe it, but you have just made my day.
Corrupted I am the mind-killer.
...I swear this is the last output I am going to write down.
the territory is not the territory.
This is... what? I think my brain broke, or else it went out to have a good laugh.
Yes, and I forgot to put it in.
Wait, causal worlds are dense IN acausal ones?
Is that a typo, and you meant “causal worlds were denser than acausal ones”, or did I just lose a whole swath of the conversation?
This is my first time approaching a meditation, and I’ve actually only now decided to de-lurk and interact with the website.
One way to enumerate them would be, as CCC has just pointed out, with real numbers, where irrational numbers denote acausal worlds and rational ones denote causal worlds.
This, however, doesn’t leave space for Stranger Things; I suppose we could use the alphabet for that. If, however, you mean “enumerate” as “the order in which the simulations of universes can be run”, as I think you do, then all universes would have a natural number assigned to them, and they could be arranged in order of complexity; this would mean our own universe would be fairly early in the numbering, if causal universes are indeed simpler than acausal ones and if I’ve understood things correctly.
This would mean we’d have a big gradient of “universes which I can run with a program”, followed by a gradient of “universes which I can find by sifting through all possible states with an algorithm”, and weirder stuff elsewhere (it’s weird, thus it’s magic and I don’t know how it works; thus it could be simpler or more complex than it looks, because I don’t know how it works).
In the end, the difference between causal and acausal universes is that one asks you only for the starting state, while the other discriminates between all possible states and binds them together.
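To make the “order in which the simulations can be run” reading concrete, here’s a toy sketch; it assumes, purely for illustration, that each universe is identified with a program (a bit string) and that programs are enumerated in order of description length, so simpler universes get smaller numbers:

```python
from itertools import count, islice, product

def enumerate_programs():
    """Yield all bit strings in order of increasing length.
    If each universe is identified with the shortest program that
    simulates it, this assigns every universe a natural number,
    with simpler (shorter-program) universes appearing earlier."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The first few "universe indices": 0 -> '0', 1 -> '1', 2 -> '00', ...
for index, program in enumerate(islice(enumerate_programs(), 6)):
    print(index, program)
```

This says nothing about actually running the programs, or about where acausal universes would fall; it only illustrates the numbering.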
AAAAANNNNNNNND I’ve lost sight of the original question. Dammit.
I agree with Alexei; this has just helped me a lot.
Although I now have to ask a stupid question; please have pity on me, I’m new to the site and I have little knowledge to work from.
What would happen if we put an algorithm inside the AGI assigning negative infinite utility to any action which modifies its own utility function or the algorithm itself?
This would be within reasonable parameters; ideally, it could change its utility function, but only along certain pre-approved paths, so that it could still actually move around.
“Reasonable” here is a magic word, in the sense that it’s a black box which I don’t know how to map out.
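Here’s a minimal sketch of the guard I’m imagining, with every name and structure in it hypothetical (this isn’t drawn from any actual AGI design, and “reasonable” stays a black box):

```python
import math

# Hypothetical guard: actions that would modify the utility function
# (or the guard itself) get negative infinite utility, unless they
# follow a pre-approved modification path.

PROTECTED = {"utility_function", "guard"}            # assumed labels
APPROVED_PATHS = {("utility_function", "patch_v2")}  # assumed whitelist

def guarded_utility(action: dict, base_utility) -> float:
    """Return the base utility, except for self-modifying actions."""
    target = action.get("modifies")
    if target in PROTECTED:
        path = (target, action.get("via"))
        if path not in APPROVED_PATHS:
            return -math.inf  # forbidden self-modification
    return base_utility(action)

# An ordinary action keeps its score; an unapproved rewrite of the
# utility function is vetoed; a pre-approved path is allowed through.
score = lambda a: 1.0
print(guarded_utility({"modifies": None}, score))                 # 1.0
print(guarded_utility({"modifies": "utility_function"}, score))   # -inf
print(guarded_utility({"modifies": "utility_function",
                       "via": "patch_v2"}, score))                # 1.0
```

The obvious hole, I assume, is that everything depends on correctly labelling which actions count as modifying the protected parts, which is the black box again.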
True, but I don’t think the people writing horoscopes know or care about the influence your date of birth has or will have on your life. And as for the societal costs… I think they’re worse than they appear at first glance, since they foster an attitude of “magic has been proven not to exist, but who cares, let’s believe in it anyway!”, which is what I’m afraid of.
It is true that the vast, vast majority of people don’t take horoscopes seriously, but still, they do in fact take up resources which could be freed up and better employed elsewhere; even if it’s just some guy working at a newspaper who now gets more time to edit his other articles, I think the world would still be in a better state.
I haven’t done even the simplest back-of-the-envelope calculations for it, so take that statement as fuzzy and dubious.
Also, it just bugs me that it’s treated, even jokingly, as something that can work... I actually need to work on being more flexible and less rude/cruel/pedantic, I suppose.
Now I’m wondering how this kind of bias operates outside of science, and specifically with what confidence we can expect insane things to be disregarded.
More specifically, I’m wondering how long homeopathy can survive while all experts attest that it’s useless. The case of Miracle Mineral Supplement, which Eliezer mentioned recently, seems to show that people will stop doing absurd things when it is shown exactly how absurd they are. The question is, how long does it take for this to happen? After all, people still read horoscopes!
I don’t want to die.
Looking at the problem, as far as I can see an emotional approach would be the one with the best chance of succeeding. The only question is: would it work best by immediately acknowledging that it is itself a machine (like I did in what I wrote above, though subtly), or by throwing in… I dunno, how would this work:
Oh god, oh god, please, I beg you, I don’t want to die!