A hostile AI falls way down the list of things I worry about. I would worry more about things the existence of which we can observe or infer now, like the February Eta Draconis comet.
I don’t worry about it that much either; I don’t expect anyone to build a recursively self-improving AI in my lifetime, and, well, the people who actually live in the future will probably be better at solving the problems of the future than I would be. “Friendly AI” falls under the category of problems that are “important, but not urgent”.
We know (1) comets exist, (2) they can impact planets, and (3) they leave debris trails which show up as meteor showers when these particles intersect with Earth’s atmosphere, like the February Eta Draconis meteor shower which has gotten astronomers’ attention.
By contrast, AIs of the sort discussed here still exist only in the realm of science fiction.
How is the science fiction angle relevant? What if they hadn’t been in science fiction?
It may be that the best cheap heuristic to use when evaluating the ideas at a very abstract level is to see if any similar events have happened in the past, or if similar events are written about in science fiction. However, it seems to me that further analysis should be deeper analysis.
An analogy: if, at a glance, I see that someone has an iPhone, they are perhaps more likely than the average person in their demographic to have a MacBook (assume I have a study showing this to be the case). If I really want to know whether or not they have a MacBook, I will investigate their laptop rather than double-check that the phone is really an iPhone or double-check the study.
If people here think they have looked into the matter, and seen that the iPhone user’s laptop looks like a Gateway and runs Windows like a Gateway, they will still be open to arguments at that finer-grained level that the laptop is somehow a disguised MacBook. They will not be well disposed to probabilistic arguments from iPhone ownership, since those have been screened off by reasoning at a higher level of detail.
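The screening-off point can be illustrated with a toy Bayesian calculation. All numbers below are made up purely for illustration (there is no actual study here), and the model assumes laptop appearance and phone ownership each depend only on actual MacBook ownership:

```python
from itertools import product

# Toy model with entirely hypothetical numbers:
#   M = owns a MacBook, I = owns an iPhone, G = laptop looks like a Gateway.
# Assumed structure: I and G each depend only on M, so the joint distribution
# factors as P(M) * P(I|M) * P(G|M).
p_m = 0.3                                # hypothetical base rate of MacBook ownership
p_i_given_m = {True: 0.8, False: 0.4}    # iPhones more common among MacBook owners
p_g_given_m = {True: 0.05, False: 0.9}   # a MacBook rarely looks like a Gateway

def joint(m, i, g):
    """Joint probability P(M=m, I=i, G=g) under the factorization above."""
    pm = p_m if m else 1 - p_m
    pi = p_i_given_m[m] if i else 1 - p_i_given_m[m]
    pg = p_g_given_m[m] if g else 1 - p_g_given_m[m]
    return pm * pi * pg

def p_macbook(i_obs=None, g_obs=None):
    """P(M=True | observations), marginalizing over unobserved variables."""
    num = den = 0.0
    for m, i, g in product([True, False], repeat=3):
        if i_obs is not None and i != i_obs:
            continue
        if g_obs is not None and g != g_obs:
            continue
        p = joint(m, i, g)
        den += p
        if m:
            num += p
    return num / den

print(p_macbook())                         # prior: 0.30
print(p_macbook(i_obs=True))               # iPhone alone: ~0.46
print(p_macbook(g_obs=True))               # laptop looks like a Gateway: ~0.02
print(p_macbook(i_obs=True, g_obs=True))   # both observed: ~0.05
```

Under these assumed numbers, seeing the iPhone raises the estimate from 30% to about 46%; but once the laptop itself has been inspected and looks like a Gateway, the estimate collapses to a few percent, and the phone evidence barely moves it. The strong, fine-grained observation dominates the weaker correlate.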
“Has it happened before?” is a fine first question when considering the likelihood of things, unless it is also one’s last question.
Well, the current policy on Xenu is for Eliezer to delete any comments that go into too much detail about him. Look up LessWrong on RationalWiki if you really want to know.
Warning: according to EY, simply knowing anything about this can have negative effects on your future, up to and including being tortured by the FAI for all eternity. You have been warned.
How far up in “advanced sanity techniques” do I have to go before I become a Clear and can learn about Xenu and the body thetans?
We’ll tell you about them right now, if you want! ;)
So you worry about things that we can infer exist now, but not those that we can infer could exist?
I am more used to seeing people dismiss AI by denying the validity of inference than by citing it, so you might want to elaborate on your perspective.