This actually isn’t true: nuclear power was already becoming cheaper than coal, and further improvements were available. The problem is actually regulatory: starting around 1970, changes in the law made even the same technology MUCH more expensive to build. This was avoidable, and some other countries, like France, managed to keep it getting cheaper than alternative sources. This article discusses it in detail. Here’s a graph from the article:
I’d love to do this, but I would have a hard time paying out because, for reasons beyond my control and caused by other people’s irrationality, I’m on SSI (although that might change in a few years). In the US, people on SSI can’t hold more than $2,000 in liquid assets without losing their benefits, so I can’t stake much. I probably couldn’t pay out anyway, because every transaction must be justified to the government; small purchases for entertainment would go through, but I’d have a hard time defending a $1,000 (or whatever) payment on a bet. Also, I’ve tried to work around this with crypto and lost everything I put in to a scam.
I was thinking about just lying about what I could pay back, but being alienated from what seems to be the only sane and good community on the planet would be a much bigger cost. (Other people try to be sane and good, but the lesson I’ve learned is that “ethics” is what people talk about when they are about to make things worse for everyone; the rationalist community is the exception.)
Yes! Finally someone gets it. And this isn’t just from things that people consider bad, but from what they consider good as well. For most of my life, “good” meant what people talk about when they are about to make things worse for everyone, and it’s only recently that I’ve had enough hope to even consider cryonics; before that, I thought that anyone having power over me would reliably cause a situation worse than death, regardless of how good their intentions were.
Eliezer is trying to code in a system of ethics that would remain valid even if the programmers are wrong about important things, and is therefore one of very few people with even a chance of successfully designing a good AI; almost everyone else is just telling the AI what to do. That’s why I oppose the halt in AI research he wants.
Actually I posted a comment below the article, quoting an Alcor representative’s clarification:
“Most Members submit a Statement of Revival Preferences document to state your expectations upon revival.
Alcor cannot guarantee that it will be followed since it will be many years into the future before you are revived.
I have attached the document for your review.” (and the document was very detailed)
So Alcor says that they actually are willing to do this and are trying, although of course they can’t guarantee that society won’t decide in the future to forcibly revive people against their will anyway.
New update: I can’t do this anyway, because I’m getting partial disability (Supplemental Security Income) and Rudi Hoffman said insurance companies won’t insure people who get any disability payments, even if they have a job. I can’t even save up for it slowly, because in the US people on SSI are forbidden from saving more than $2,000 in funds (reason: bureaucratic stupidity). I can save by putting money into an ABLE account (which has its own bureaucratic complications), but the limit is $100,000, which might not be enough if prices adjust for inflation before I have enough saved. :(
Cryptocurrency won’t fix this: I’ve tried crypto before and got scammed, so it can’t be trusted even if the government doesn’t catch me trying to evade the law.
Something really frustrating is that the reason I’m even on disability in the first place is because of society’s insanity.
An Alcor representative clarified the point:
“Most Members submit a Statement of Revival Preferences document to state your expectations upon revival.
Alcor cannot guarantee that it will be followed since it will be many years into the future before you are revived.
I have attached the document for your review.”
So I guess this is already being done.
Actually, I think you did understand my post. What I’m confused about is this: I wanted to have the option to specify “I don’t want to be brought back unless X and Y”; I asked them and they said they wouldn’t allow me to do this, but you said that they did allow you to do it. I asked a few years ago and got a similar answer.
Could someone else who signed up for Alcor reply to this and say if they got something like that?
But I asked Alcor specifically whether something like this would be possible, and they said that it wouldn’t be. (CI said the same.)
Not me. However, it reminded me of the part in Dr. Seuss where someone watches a bee to make it more productive, someone watches that watcher to make him more productive, someone watches him, and so on.
Social media could be a factor, but a much bigger one is that kids are so ludicrously overcontrolled all day every day that they often get no opportunity for good experiences.
My childhood was much closer to Camazotz from A Wrinkle in Time than to a healthy upbringing.
Yeah, portions are way too big now. I’m 6 feet, 4 inches tall, and two meals per day are quite enough for me; I only order one thing when I go to restaurants, and I’m always too full to eat dessert. If I were a normal height and tried eating three meals per day, I would definitely be too fat.
(To be clear, I’m in the US. Its extreme portion sizes get commentary from visiting Europeans.)
Not quite what you asked, but there’s a post, “The Best Textbooks on Every Subject”, that seems like it could help.
There are three big problems with this idea.
First, we don’t know how to program an AI to value morality in the first place. You said “An AI that was programmed to be moral would...” but programming the AI to do even that much is the hard part. Deciding which morals to program in would be easy by comparison.
Second, this wouldn’t be a friendly AI. We want an AI that doesn’t think it is good to smash Babylonian babies against rocks or torture humans in Hell for all of eternity, as Western religions say, or to torture humans in Naraka for 10^21 years, as the Buddhists say.
Third, you seem to be misunderstanding the probabilities here. Someone once said to consider what the world would be like if Pascal’s wager worked, and someone else asked whether they should then also consider the contradictory and falsified parts of Catholicism to be true. I don’t think you will get much support for this kind of thing from a group whose leader posted this.
Yes it did; it’s clear that my prediction was wrong.
This is true, although I don’t think you’ll get much interest about this because it’s so obvious.
This isn’t from Christianity, but actually goes back to hunter-gatherers and had a useful function. See this description of “insulting the meat”: https://www.psychologytoday.com/us/blog/freedom-learn/201105/how-hunter-gatherers-maintained-their-egalitarian-ways
(to be clear, I’m not sure whether this still has a useful function or not)
https://waitbutwhy.com/2019/08/giants.html has a pretty convincing (to me) explanation of this. Basically, the way human psychology works is that people have conflicts at the highest available level of struggle, and when no outside enemies are a threat, the conflict turns inward. For a nice graphical illustration, skip to ”Me against my brothers; my brothers and me against my cousins; my cousins, my brothers, and me against strangers.”
huh?
It would help. However, Twitter makes money from engagement, and no emotion drives engagement better than rage, so they don’t want to fix it.
It’s like the situation with phone companies. According to my dad, who works at a telecom company, there actually are effective ways to prevent spoofed phone numbers. However, since scammers and telemarketers are by far their biggest customers, phone companies won’t make the changes needed to do this.
If that were the case, we would be doomed far worse than if alignment were extremely hard. It’s only because of all the writing that people like Eliezer have done about how hard it is and how we are not on track, plus the many examples of total alignment failures already observed in existing AIs (like these or these), that I have any hope for the future at all.
Remember, the majority of humans draw their morality from a religion that says most people will be tortured in hell for all eternity (or, in Eastern religions, tortured in a Naraka for a time vastly longer than the current age of the universe, which is basically the same thing). Even atheists who think these religions are false often still believe they have good moral teachings: for example, the writer of the popular webcomic Freefall is an atheist transhumanist libertarian, and his seriously proposed AI alignment method is to teach AIs to support the values taught in human religions.
Even if you avoid this extremely common failure mode, planned societies run for the good of everyone are still absolutely horrible. Almost all utopias in fiction suck even when things go the way the author says they would. In the real world, when the plans hit real human psychology, economics, and so on, the result is invariably disaster. Imagine living in an average kindergarten all day every day, and that’s one of the better options. The life I had was more like Camazotz from A Wrinkle in Time, and it didn’t end when school let out.
We also wouldn’t be allowed to leave. Already, for the supposed good of the beneficiaries, runaways are generally forcibly returned to their homes and terminally ill people in constant agony are forced to stay alive. The implication, if your idea were true, would be that you should kill yourself now while you still have the chance.
The good news is that, instead, only the tiny minority of people who are able to notice problems right in front of them (even when they aren’t suffering from them personally) have any chance of successful alignment.