If this happened I would devote my life to the cause of starting a global thermonuclear war
Well, there are all sorts of horrible things a slightly misaligned AI might do to you.
In general, if such an AI cares about your survival but not your consent to continue surviving, you no longer have any way out of whatever happens next. This is not an out-there idea: many people have values like this, and even more have values that might become like this if slightly misaligned.
An AI concerned only with your survival may decide to lobotomize you and keep you in a tank forever.
An AI concerned with the idea of punishment may decide to keep you alive so that it can punish you for real or perceived crimes. Given the number of people who support disproportionate retribution for certain crimes close to their hearts, and the number of people who have been convinced (mostly by religion) that certain crimes (such as being a nonbeliever, or the wrong kind of believer) deserve eternal punishment, I feel confident in saying there are some truly horrifying scenarios here from AIs adjacent to human values.
An AI concerned with freedom for any class of people that does not include you (such as the upper class) may decide to keep you alive as a plaything for the whims of those it does care about.
I mean, you can also look at the kind of “EM society” that Robin Hanson thinks will happen, where everybody is uploaded and stern competition forces everyone to be maximally economically productive all the time. He seems to think it’s a good thing, actually.
There are other concerns, like suffering subroutines and the spread of wild-animal suffering across the cosmos, that are also quite likely in an AI takeoff scenario, and also quite awful, though they won’t personally affect any currently living humans.
Well, given that death is one of the least bad options here, that is hardly reassuring...
Fuck, we’re all going to die within 10 years aren’t we?
Never, ever take anybody seriously who argues as if Nature is some sort of moral guide.
I had thought something similar when reading that book. The part about the “conditioners” is the oldest description of a singleton achieving value lock-in that I’m aware of.
If accepting this level of moral horror is truly required to save the human race, then I for one prefer paperclips. The status quo is unacceptable.
Perhaps we could upload humans and a few cute fluffy species humans care about, then euthanize everything that remains? That doesn’t seem to add too much risk?
Just so long as you’re okay with us being eaten by giant monsters that didn’t do enough research into whether we were sentient.
I’m okay with that, said Slytherin. Is everyone else okay with that? (Internal mental nods.)
I’d bet quite a lot they’re not actually okay with that, they just don’t think it will happen to them...
the vigintillionth digit of pi
Sorry if I came off as confrontational. I just mean to say that the forces you mention, which are backed by deep mathematical laws, aren’t fully aligned with “the good” and aren’t a proof that things will work out well in the end. If you agree, good; I just worry that with posts like these, people will latch onto “Elua” or something similar as a type of unjustified optimism.
The problem with this is that there is no game-theoretic reason to expand the circle to, say, non-human animals. We might do it, and I hope we do, but it wouldn’t benefit us practically. Animals have no negotiating power, so their treatment is entirely up to the arbitrary preferences of whatever group of humans ends up in charge, and so far that hasn’t worked out so well (for the animals, anyway; the social contract chugs along just fine).
The ingroup preference force is backed by game theory, the expansion of the ingroup to other groups which have some bargaining power is as well, but the “universal love” force, if there is such a thing, is not. There is no force of game theory that would stop us from keeping factory farms going even post-singularity, or doing something equivalent with different powerless beings we create for that purpose.
When one species learns to cooperate with others of its own kind, the better to exploit everything outside that particular agreement, this does not seem to me even metaphorically comparable to some sort of universal benevolent force, but just another thing that happens in our brutish, amoral world.
Let’s see. First choice: yellow=red, green=blue. An illustration of how different framings make this problem sound very different; this framing is probably the best argument for blue I’ve seen lol
Second choice: There’s no reason to press purple. You’re putting yourself at risk, and if anyone else pressed purple you’re putting them even more at risk.
TL;DR: Red, Red, Red, Red, Red, Blue?, Depends, Red?, Depends, Depends
1,2: Both are the same; I pick red, since all the harm caused by this decision falls on people who had the option of picking red as well. Red is a way out of the bind, it’s a way out that everybody can take, and my taking red doesn’t stop anyone else from taking it. The only people you’d be saving by taking blue are the other people who thought they needed to save people by taking blue, which makes the blue-pillers dying an artificial and avoidable problem.
3,4: Same answer for the same reason, but even more so, since people are less likely to be bamboozled into taking the risk.
5: Still red, even more so, since blue-pillers have a way out even after taking the pill.
6: LOL, this doesn’t matter at all. I mean, you shouldn’t sin, kind of by definition, but Omega’s challenge won’t be met, so it doesn’t change anything from how things are now.
7: This is disanalogous because red-pilling in this case (i.e. displaying) is not harmless if everyone does it: it allows the government to force this action. Whether to display or refuse would depend on further details, such as how bad submission to this government would actually be, and whether there are actually enough potential resisters to make a difference.
8: In the first option you accomplish nothing, as stated in the prompt. Burnout is just bad; it’s not like it gets better if enough people do it lol. It’s completely disanalogous, since option 2 (red?) is unambiguously better: it’s better for you and makes it more likely for the world to be saved, unlike the original problem, where some people can die as a result of red winning.
9: This is disanalogous since the people you’re potentially saving by volunteering are not other volunteers; they are people going for recreation. There is an actual good being served by making people who want to hike safer, and “just don’t hike” doesn’t work the way “just don’t blue-pill” does, since people hike for its own sake, knowing the risks. Weigh the risks and volunteer if you think decreasing the risk to hikers is worth taking on some risk yourself, and don’t if you don’t.
10: Disanalogous for the exact same reason. People go to Burning Man for fun; they know there might be some (minimal) risk. Go if you want to go enough to take on the risk, otherwise don’t. Except in this case going doesn’t even decrease the risk for others who go, so it’s even less analogous to the pill situation!
Game-theory considerations aside, this is an incredibly well-crafted scissor statement!
The disagreement between red and blue is self-reinforcing: whichever you initially think is right, you can say everyone would live if they’d all just do what you are doing. It pushes people to insult each other and entrench their positions even further, since from red’s perspective blues are stupidly risking their lives and unnecessarily weighing on their conscience when they would be fine if nobody chose blue in the first place, and from blue’s perspective red is condemning them to die for their own safety. Red calls blue stupid, blue calls red evil. Not to mention the obvious connection to the wider red and blue tribes, “antisocial” individualism vs. “bleeding-heart” collectivism. (Though not a perfect correspondence; I’d consider myself blue tribe but would choose red in this situation. You survive no matter what, and the only people who might die as a consequence also all had the “you survive no matter what” option.)
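To make that symmetry concrete, here is a minimal sketch in Python, assuming the usual rules of the pill question (red-pillers always survive; blue-pillers survive only if blues make up at least half of the choosers). The 50% threshold and the function name survivors are my assumptions for illustration, not something stated in this thread.

```python
def survivors(n_red: int, n_blue: int) -> int:
    """Count survivors under the assumed rules:
    red-pillers always live; blue-pillers live only if blues reach 50%."""
    total = n_red + n_blue
    blue_safe = total > 0 and n_blue / total >= 0.5
    return n_red + (n_blue if blue_safe else 0)

print(survivors(n_red=100, n_blue=0))   # 100 -- all red, everyone lives
print(survivors(n_red=0, n_blue=100))   # 100 -- all blue, everyone lives
print(survivors(n_red=60, n_blue=40))   # 60  -- only a split outcome kills anyone
```

Both pure strategies save everyone; only the mixed case kills anyone, which is exactly why each side can tell the other “you’d all be fine if you just did what I’m doing.”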
“since”? (distance 3)
I guess that would be a pretty big coincidence lol
Is this actually a random lapse into Shakespearean English or just a typo?
commenting here so I can find this comment again
I thought foom was just a term for extremely fast recursive self-improvement.
What do you mean?