So, travelling 1 Tm by railway, you would have a 63% chance of dying, according to the math in the post
andrew sauer
Furthermore, the tries must be independent of each other, otherwise the reasoning breaks down completely. If I draw cards from a deck, each one has (a priori) 1⁄52 chance of being the ace of spades, yet if I draw all 52 I will draw the ace of spades 100% of the time. This is because successive failures increase the posterior probability of drawing a success.
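To make the contrast concrete, here is a minimal sketch of the arithmetic (the function name is just illustrative). With 52 independent tries at probability 1/52 each, the chance of at least one success is 1 − (51/52)^52 ≈ 63.6% (close to 1 − 1/e, which is where the 63% above comes from); drawing through a shuffled deck without replacement guarantees a success by draw 52.

```python
import random

# 52 independent trials, each with success probability 1/52:
# P(at least one success) = 1 - (51/52)^52 ~= 0.636, not 1.
p_independent = 1 - (51 / 52) ** 52

def draws_until_ace(deck_size=52):
    """Draws WITHOUT replacement are not independent: exhausting the
    deck finds the 'ace of spades' (card 0) with probability 1."""
    deck = list(range(deck_size))
    random.shuffle(deck)
    return deck.index(0) + 1  # always between 1 and deck_size

print(round(p_independent, 3))  # ~0.636
```

Each failed draw from the deck raises the posterior chance that the next card is the ace, which is exactly the dependence that breaks the 63% figure.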
Curriculum of Ascension
This but unironically.
Another important one: height/altitude is authority. Your boss is “above” you; the king, president, or CEO is “at the top”; you “climb the corporate ladder”.
For a significant fee, of course
Yes to both, easy, but that’s because I can afford to risk $100. A lot of people can’t nowadays. “plus rejecting the first bet even if your total wealth was somewhat different” is doing a lot of heavy lifting here.
Honestly man, as a lowercase-i incel this failed utopia doesn’t sound very failed to me...
What do you mean?
If this happened I would devote my life to the cause of starting a global thermonuclear war
Well there are all sorts of horrible things a slightly misaligned AI might do to you.
In general, if such an AI cares about your survival but not your consent to continue surviving, you no longer have any way out of whatever happens next. This is not an out-there idea: many people hold values like this, and even more hold values that could become like this if slightly misaligned.
An AI concerned only with your survival may decide to lobotomize you and keep you in a tank forever.
An AI concerned with the idea of punishment may decide to keep you alive so that it can punish you for real or perceived crimes. Given the number of people who support disproportionate retribution for certain types of crimes close to their heart, and the number of people who have been convinced (mostly by religion) that certain crimes (such as being a nonbeliever/the wrong kind of believer) deserve eternal punishment, I feel confident in saying that there are some truly horrifying scenarios here from AIs adjacent to human values.
An AI concerned with freedom for any class of people that does not include you (such as the upper class), may decide to keep you alive as a plaything for whatever whims those it cares about have.
I mean, you can also look at the kind of “EM society” that Robin Hanson thinks will happen, where everybody is uploaded and stern competition forces everyone to be maximally economically productive all the time. He seems to think it’s a good thing, actually.
There are other concerns, like suffering subroutines and the spreading of wild-animal suffering across the cosmos, that are also quite likely in an AI takeoff scenario, and also quite awful, though they won’t personally affect any currently living humans.
Well, given that death is one of the least bad options here, that is hardly reassuring...
Fuck, we’re all going to die within 10 years aren’t we?
Never, ever take anybody seriously who argues as if Nature is some sort of moral guide.
I had thought something similar when reading that book. The part about the “conditioners” is the oldest description of a singleton achieving value lock-in that I’m aware of.
If accepting this level of moral horror is truly required to save the human race, then I for one prefer paperclips. The status quo is unacceptable.
Perhaps we could upload humans and a few cute fluffy species humans care about, then euthanize everything that remains? That doesn’t seem to add too much risk?
Just so long as you’re okay with us being eaten by giant monsters that didn’t do enough research into whether we were sentient.
I’m okay with that, said Slytherin. Is everyone else okay with that? (Internal mental nods.)
I’d bet quite a lot they’re not actually okay with that, they just don’t think it will happen to them...
the vigintillionth digit of pi
Sorry if I came off confrontational, I just mean to say that the forces you mention which are backed by deep mathematical laws, aren’t fully aligned with “the good”, and aren’t a proof that things will work out well in the end. If you agree, good, I just worry with posts like these that people will latch onto “Elua” or something similar as a type of unjustified optimism.
No idea whether I’d really sacrifice all 10 of my fingers to improve the world by that much, especially if we add the stipulation that I can’t use any of the $10,000,000,000,000 to pay someone to do all of the things I use my fingers for( ͡° ͜ʖ ͡°). I am genuinely divided on it, and it is an example of a pretty clean, crisp distinction between selfish and selfless values. If I kept my fingers, I would feel guilty, because I would be giving up the altruism I value a lot (not just because people tell me to), and the emotion resulting from that loss of value would be guilt, even though I self-consistently value my fingers more. Conversely, if I did give up my fingers for the $10,000,000,000,000, I would feel terrible for different reasons( ͡° ͜ʖ ͡°), even though I valued the altruism more.
Of course, given this decision I would not keep all of my fingers in any case, as long as I could choose which ones to lose. $100,000,000 is well worth the five fingers on my right (nondominant) hand. My life would be better purely selfishly, given that I would never have to work again, and could still write, type, and ( ͡° ͜ʖ ͡°).