Brilliant!
I really like this quote, but not because of its sexual connotations :) The ultimate slippery thing you must grasp firmly until you penetrate is your mind.
Paraphrasing my previous comment in another way: determinism is no excuse for being sloppy, lazy, or inattentive in your decision process.
I feel the obligation to post this warning again: don’t think “Ohhh, the decision I’ll make is already determined, so I might as well relax and not worry too much.” Remember that you will face the consequences of whatever you decide, so make the best possible choice and maximize utility!
After I wrote my comment I continued to think about it and I guess I might be wrong.
I no longer think that Eliezer fell into the correspondence bias trap. In fact, Lenin’s actions seem to show his basic disposition. Another person in his situation would probably act differently.
What I still don’t like is the idea of moral responsibility. Who is gonna be the judge of that? That’s why I prefer to think in terms of the consequences of actions. Although I guess that morality is just another way of evaluating those, so in the end it might be the same.
Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk? Yes, that’s exactly what I’m suggesting.
Eliezer, sorry but you fell into the correspondence bias trap.
I agree with your post if you substitute “moral responsibility” with “consequences”. We all make decisions and we will have to face the consequences. Lenin enslaved millions, and now people call him a jerk. But I don’t think he is a worse person than any other human.
Consider that brain tumors can cause aggressive behaviors.
Two thoughts:
- Randomness: if the future is not determined, it is completely unpredictable; in other words, it is random. Wouldn’t the universe be a strange place if your decisions had a strong component of randomness? Perhaps in the next hour you would decide to take all the paper in your office and fold it into tiny boxes...
- Danger: “since the future is already determined, I might as well sit back, relax, and not worry.” Don’t fall into this trap.
But, by and large, it all adds up to normality. If your understanding of many-worlds is the tiniest bit shaky, and you are contemplating whether to believe some strange proposition, or feel some strange emotion, or plan some strange strategy, then I can give you very simple advice: Don’t.
Good to know.
Your decision theory should (almost always) be the same,…
Where is the exception?
In fact, I feel the need to write a bit more.
This blog is the best on the internet and I have never read the principles of rationality explained so effectively. I have the impression (please correct me if I’m wrong) that some people here are a bit envious of Eliezer. Why? Because he didn’t go through traditional academia and is nevertheless doing a great job. I guess that for many who spent (or wasted) years getting a traditional academic diploma, it must be very annoying to see someone overtake them on an intellectual level without having to jump through all the academic hoops.
Furthermore, I really think Eliezer should get all the support he needs, because he is doing an important job (maybe the most important of all) in trying to solve the FGAI problem. And I guess that must be a tremendous burden for him, both intellectually and emotionally. I know, there are others working on it who also deserve credit.
When Eliezer makes a mistake, point it out, but try to be polite.
I think that maybe, and only maybe, Eliezer could be the man to shape the future of the universe, or at least someone who will make a SIGNIFICANT contribution. So in writing positive comments I’m trying to be supportive (when I’m better off financially I will also consider donating money). And those trying to bring him down are doing us all a disservice.
I know, I know, this comment of mine is 80% emotional and only 20% rational (oops, bias detected). Corrections and criticisms are welcome!
PS: Eliezer, don’t get a big head, ok? ;)
Another great post. Eliezer, I really don’t trust you 100%, but I try to read and understand everything you write with great interest. I agree with you that a lot of the negative commenters here seem to underestimate the mental work you have put into all this.
Eliezer,
ok, thanks for the link.
Where can I sign up for cryonics if I live outside the United States and Europe?
Then, if you open the envelope and find that amount of money, roll a die and switch the envelope with that probability.
The probability of the die coming up would be f(amount of dollars), with f being the probability function?
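A minimal simulation sketch of that idea, assuming a simple decreasing switching probability such as f(x) = e^(-x/100); the actual form of f isn’t specified here, so this is just an illustrative choice:

```python
import math
import random

def switch_probability(amount):
    # Hypothetical decreasing function of the amount found; the comment
    # only says the switching probability depends on the amount, so the
    # exact form here is an illustrative assumption.
    return math.exp(-amount / 100.0)

def play(randomized_switch):
    # One envelope holds x dollars, the other 2x; you open one at random.
    x = random.uniform(1, 100)
    opened, other = random.sample([x, 2 * x], 2)
    if randomized_switch and random.random() < switch_probability(opened):
        return other  # the "die roll" came up: switch
    return opened     # keep the envelope you opened

trials = 200_000
with_switching = sum(play(True) for _ in range(trials)) / trials
never_switching = sum(play(False) for _ in range(trials)) / trials
print(with_switching, never_switching)
```

Because the switching probability decreases with the amount found, you switch away from the smaller amount more often than from the larger one, so the randomized strategy averages slightly more than never switching.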
Eliezer: Good article. I always feel angry when I read some futurist’s claim that has no real scientific or Bayesian basis, yet because the futurist has some status it is put forth as gospel. It’s a social game: people posing as authorities when they didn’t do their homework. Maybe we should apply the saying: “Shut up and calculate!”
PS: The same can be said for all kinds of gurus in other fields as well.
burger flipper, ok let’s play the AI box experiment:
However, before you read on, answer a simple question: if Eliezer tomorrow announces that he has finally solved the FGAI problem and just needs $1,000,000 to build one, would you be willing to donate cash for it? . . . . . . . . . . . . .
If you answered yes to the question above, you just let the AI out of the box. How do you know you can trust Eliezer? How do you know he doesn’t have evil intentions, or that he didn’t make a mistake in his math? The only way to be 100% sure is to know enough about the specific GAI he is building.
So what do we do now? Should we oppose the singularity? Is the singularity a good idea after all? Who shall we trust with the future of the universe?
Yes, I know, I know strictly speaking this isn’t the AI-box experiment, but still...
Addendum to my previous post:
The worst thing is that the argument is so compelling that even I’m not sure what I would do.
Regarding the AI-Box experiment:
I’ve been very fascinated by this since I first read about it months ago. I even emailed Eliezer, but he refused to give me any details. So I have thought about it on and off and eventually had a staggering insight… well, if you want, I will convince you to let the AI out of the box… after reading just a couple of lines of text. Any takers? Caveat: after the experiment you have to publicly declare whether you let it out or not.
One hint: Eliezer will be immune to this argument.
@Vladimir Nesov:
Why do you say that the problem disappears when you have probabilities?
I guess you still have the same basic problem, which is: what are your priors? You cannot bootstrap from nothing, and I think that is what the tortoise was hinting at: there are hidden assumptions in our reasoning that we are not aware of, and you can’t think without using those hidden assumptions.
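To put the prior-dependence point in concrete terms (the numbers below are my own and purely illustrative): the same piece of evidence supports very different conclusions depending on the prior you start from, and nothing inside the update rule itself tells you which prior to use.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# The same evidence, ten times more likely under H than under not-H...
p_e_h, p_e_not_h = 0.9, 0.09

# ...yields very different posteriors depending on the starting prior.
for prior in (0.001, 0.1, 0.5):
    print(prior, round(posterior(prior, p_e_h, p_e_not_h), 3))
# prior 0.001 -> posterior ~0.010
# prior 0.1   -> posterior ~0.526
# prior 0.5   -> posterior ~0.909
```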
One suggestion for the flaw:
Conclusions from this article: a) you are never safe; b) you must understand a) on an emotional basis; c) the only way to achieve b) is through an experience of failure after following the rules you trusted.
The flaw is that the article actually does the opposite of what it wants to accomplish: by giving the warning (a), it makes people feel safer. In order to convey the necessary emotion of “not feeling safe” (b), Eliezer had to add the PS regarding the flaw.
In a certain sense this also negates c). I think Eliezer doesn’t really want us to fail (c) in order to recognize a); the whole point of overcomingbias.com is to prevent humans from failing. So if Eliezer did a good job of conveying the necessary insecurity through his PS, then hopefully c) won’t happen to you.
Roland
There was a mathematician who developed a method for solving hard problems. Instead of attacking the problem frontally (trying to crack the nut), he would build a framework around it, creating all kinds of useful mathematical abstractions and gradually dissolving the question (i.e., the nut) until the solution became evident.
This ties in with: “And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known.”
I guess that Bayesian networks, or at least Bayesian thinking, were invented before this application to AI. After that it was just one inferential step to apply them to AI. What if Bayesianism hadn’t been invented yet? Would it make sense to bang your head against the problem, hoping to find a solution? In the same vein, I have the suspicion that many of the remaining problems in AI might be too many inferential steps away to solve directly. In that case there will be a need to improve the surrounding knowledge first.