Gwern seems to think this would be used as a way to get rid of corrupt oligarchs, but… wouldn’t it just immediately be co-opted by those oligarchs to solidify their power by legally paying for the assassinations of their opponents? Markets aren’t democratic, because a small percentage of the people hold most of the money.
To be fair, my position is less described by that Quirrell quote and more by Harry’s quote when he’s talking to Hermione about moral peer pressure:
“The way people are built, Hermione, the way people are built to feel inside, is that they hurt when they see their friends hurting. Someone inside their circle of concern, a member of their own tribe. That feeling has an off-switch, an off-switch labelled ‘enemy’ or ‘foreigner’ or sometimes just ‘stranger’. That’s how people are, if they don’t learn otherwise.”
Unlike Quirrell, I give people credit for actually caring about people, rather than merely pretending to care. I just don’t think that caring extends to very many people, for most people.
Fun fact for those reading this in the far future, when Eliezer said “effective altruist” in this piece, he most likely was using the literal meaning, not referring to the EA movement, as that name hadn’t been coined yet.
Wildbow (the author of Worm) is currently writing a story with a quite similar premise.
In fact I think it’s safe to say that we’d collectively allocate much more than 1/millionth of our resources towards protecting the preferences of whatever weak agents happen to exist in the world (obviously the cows get only a small fraction of that).
Sure, but extrapolating this to unaligned AI is NOT an encouraging sign. We may allocate more than 1/million of our resources to animal rights, but we allocate far more than that to goals that run diametrically against the preferences of those animals, such as eating meat, cheese, and eggs; we allocate MUCH more to “animal wrongs” than to animal rights, so to speak.
So to show an AI will be “nice” to humans at all, it is not enough to suppose that it might have some 1/million “nice to humans” term. You have to show that that term won’t be handily outweighed by the rest of its utility function.
100%. The social contract gives no consideration to the powerless, and that fact is the source of many of the horrible opinions in the world.
No idea whether I’d really sacrifice all 10 of my fingers to improve the world by that much, especially if we add the stipulation that I can’t use any of the $10,000,000,000,000 to pay someone to do all of the things I use my fingers for ( ͡° ͜ʖ ͡°). I am quite evenly divided on it, and it’s an example of a pretty clean, crisp distinction between selfish and selfless values. If I kept my fingers, I would feel guilty, because I would be giving up altruism I genuinely value (not just because people tell me to), and the emotion resulting from that loss of value would be guilt, even though I self-consistently value my fingers more. Conversely, if I did give up my fingers for the $10,000,000,000,000, I would feel terrible for different reasons ( ͡° ͜ʖ ͡°), even though I valued the altruism more.
Of course, given this decision I would not keep all of my fingers in any case, as long as I could choose which ones to lose. $100,000,000 is well worth the five fingers on my right (nondominant) hand. My life would be better purely selfishly, given that I would never have to work again, and could still write, type, and ( ͡° ͜ʖ ͡°).
So, travelling 1 Tm on the railway, you have a 63% chance of dying, according to the math in the post.
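To spell out where that 63% comes from (a sketch; the per-km fatality rate below is my illustrative assumption, picked so that the expected number of deaths over 1 Tm comes out to exactly one):

```latex
% Chance of dying over n independent segments, each fatal with probability p:
P(\text{die}) = 1 - (1 - p)^n \approx 1 - e^{-np}
% Assuming (illustratively) p = 10^{-9} per km over n = 10^9 km (= 1 Tm), so np = 1:
P(\text{die}) \approx 1 - e^{-1} \approx 0.632
```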
Furthermore, the tries must be independent of each other, otherwise the reasoning breaks down completely. If I draw cards from a deck, each one has (a priori) a 1⁄52 chance of being the ace of spades, yet if I draw all 52 I will draw the ace of spades 100% of the time. This is because successive failures increase the posterior probability of drawing a success.
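A quick simulation makes the contrast vivid (a hypothetical sketch, not from the original comment). Note that the with-replacement hit rate is the same ~63% (1 − 1/e-ish) figure as the railway case above; that’s the signature of independent tries:

```python
import random

N_TRIALS = 10_000
deck = list(range(52))  # let card 0 stand for the ace of spades

# Without replacement: sampling the whole deck always contains the ace.
hits_without = sum(0 in random.sample(deck, 52) for _ in range(N_TRIALS))

# With replacement: 52 independent draws, each with a 1/52 chance,
# succeed at least once with probability 1 - (51/52)**52, about 63.6%.
hits_with = sum(
    any(random.choice(deck) == 0 for _ in range(52))
    for _ in range(N_TRIALS)
)

print(hits_without / N_TRIALS)  # always 1.0
print(hits_with / N_TRIALS)     # about 0.636
```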
Curriculum of Ascension
This but unironically.
Another important one: height/altitude is authority. Your boss is “above” you; the king, president, or CEO is “at the top”; you “climb the corporate ladder”.
For a significant fee, of course.
Yes to both, easy, but that’s because I can afford to risk $100. A lot of people can’t nowadays. The stipulation “plus rejecting the first bet even if your total wealth was somewhat different” is doing a lot of heavy lifting here.
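To make the wealth-dependence concrete (a hypothetical sketch: log utility and a stand-in lose-$100/gain-$110 coin flip, not necessarily the post’s exact bet):

```python
import math

def accepts_flip(wealth: float, lose: float = 100.0, gain: float = 110.0) -> bool:
    """Under log utility, take a fair coin flip iff it raises expected utility.

    The lose/gain amounts are illustrative stand-ins, not the post's numbers.
    """
    eu_flip = 0.5 * math.log(wealth - lose) + 0.5 * math.log(wealth + gain)
    return eu_flip > math.log(wealth)

for wealth in (150, 1_000, 10_000, 100_000):
    print(f"${wealth:>7,}: {'accept' if accepts_flip(wealth) else 'reject'}")
```

Under this toy model the same agent rejects the flip at $150 and $1,000 of total wealth but accepts it at $10,000 and $100,000, which is exactly why stipulating rejection “even if your total wealth was somewhat different” matters so much.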
Honestly man, as a lowercase-i incel this failed utopia doesn’t sound very failed to me...
What do you mean?
If this happened I would devote my life to the cause of starting a global thermonuclear war
Well, there are all sorts of horrible things a slightly misaligned AI might do to you.
In general, if such an AI cares about your survival but not about your consent to continue surviving, you no longer have any way out of whatever happens next. This is not an out-there idea: many people have values like this, and even more have values that would become like this if slightly misaligned.
An AI concerned only with your survival may decide to lobotomize you and keep you in a tank forever.
An AI concerned with the idea of punishment may decide to keep you alive so that it can punish you for real or perceived crimes. Given the number of people who support disproportionate retribution for certain crimes close to their hearts, and the number who have been convinced (mostly by religion) that certain “crimes” (such as being a nonbeliever, or the wrong kind of believer) deserve eternal punishment, I feel confident in saying that there are some truly horrifying scenarios here, from AIs adjacent to human values.
An AI concerned with freedom for any class of people that does not include you (such as the upper class) may decide to keep you alive as a plaything for the whims of those it does care about.
I mean, you can also look at the kind of “em society” that Robin Hanson thinks will happen, where everybody is uploaded and stern competition forces everyone to be maximally economically productive all the time. He seems to think it’s a good thing, actually.
There are other concerns, like suffering subroutines and the spread of wild-animal suffering across the cosmos, that are also quite likely in an AI takeoff scenario, and also quite awful, though they won’t personally affect any currently living humans.
Well, given that death is one of the least bad options here, that is hardly reassuring...
Now you can!