Eliezer, that’s false reasoning. I’m not religious, so don’t take this as the opening to a religious tirade, but it’s a pet peeve of mine that intelligent people will assert that every belief within a religion is wrong if only one piece of it is wrong.
There are a billion and one reasons why a body of knowledge that is mostly correct (not saying I think religions are) could have one flaw. This particular flaw doesn’t prove God doesn’t exist; it would only prove that God-given souls aren’t necessary for an intelligent life form to survive, or (perhaps, to a religious person) that God isn’t the only entity that can make them.
It’s easy to get lazy when one’s opponent isn’t challenging enough (I’ve occasionally said things like that myself), but I think it’s best not to. Such arguments aren’t convincing to the opposition, and we aren’t challenging ourselves to improve.
I think that Yudkowsky, hubris nonetheless, has made a few mistakes in his own reasoning.
A: “I don’t believe Artificial Intelligence is possible because only God can make a soul.”
B: “You mean if I can make an Artificial Intelligence, it proves your religion is false?”
I don’t see at all how this follows. At best, this would only show that A’s belief about what only God can do is mistaken. Concluding that this entails their religion is false is purely fallacious reasoning. Imagine the following situation:
A: “I don’t believe entanglement is possible because quantum mechanics shows non-locality is impossible.”
B: “You mean if I can show entanglement is possible, it proves quantum mechanics is false?”
This is not a case of quantum mechanics being false, but rather a case of A being mistaken about what quantum mechanics does and does not show.
What you believe or don’t believe about quantum mechanics or God is irrelevant to this point. The point is that the conclusion Yudkowsky drew was, at best, hastily and incorrectly arrived at. Of course, saying “if your religion predicts that I can’t possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false” is valid reasoning and a simple example of modus tollens. But that is not, as far as I can see, what A said.
A: “I didn’t mean that you couldn’t make an intelligence, just that it couldn’t be emotional in the same way we are.”
B: “So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.”
Again, there seems to be invalid reasoning at work here. Whether or not an AI entity can ‘start talking’ about an emotional life that sounds like ours has nothing to do with the comment made by A, which was about whether or not such AI entities could actually be emotional in the same way organic beings are.
Consider rewording that in such a manner that you can fit the ‘hubris’ label in while leaving the sentence coherent.
Well, if person A’s religion strictly implies the claim that only God can make a soul and this precludes AI, then the falsehood of that claim also implies the falsehood of A’s religion. (A → B ⇒ ¬B → ¬A)
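To make that contrapositive step fully explicit (just my rendering; R and S are labels I am introducing here, with R for “person A’s religion is true in full” and S for “only God can make a soul, so AI is impossible”):

$$(R \rightarrow S) \;\Rightarrow\; (\neg S \rightarrow \neg R)$$

A demonstrated AI gives us ¬S, and modus tollens then yields ¬R, but only on the assumption that the religion really does strictly imply S, which is what the “stealing a base” reply below pushes back on.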
But sure, you’re of course correct that if person A is mistaken about what person A’s religion claims, then no amount of demonstrated falsehoods in person A’s statements necessarily demonstrates falsehood in person A’s religion.
That said… if we don’t expect person A to say “my religion claims X” given that person A’s religion claims X, and we don’t expect person A to say “my religion doesn’t claim X” given that person A’s religion doesn’t claim X, then what experiences should we expect given the inclusion or exclusion of particular claims in person A’s religion?
Because if there aren’t any such experiences, then it seems that this line of reasoning ultimately leads to the conclusion that not only the objects religions assert to exist, but the religions themselves, are epiphenomenal.
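One way to cash this out (my own rendering in probability terms, not anything from the thread): if

$$P(\text{A says his religion claims } X \mid \text{it does}) \;\approx\; P(\text{A says his religion claims } X \mid \text{it does not}),$$

then the likelihood ratio is roughly 1, so A’s report is essentially no evidence about what the religion actually claims; and if no other observation discriminates either, the religion’s content makes no difference to anticipated experience, which is what “epiphenomenal” is pointing at here.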
I think the “strictly implies” may be stealing a base.
Yes, being convinced of the existence of the AI would make the man rethink the aspects of his religion that he believes render an AI impossible, but he could update those and keep the rest. From his perspective, he’d have the same religion, but updated to account for the existence of AIs.
It looks like false logic to me too, but I’m very aware that that is how many Christians “prove” their religion to be true: ‘The Bible says this historical/Godly event happened, and this archeological evidence supports the account in the Bible; therefore the Bible must be true about everything, so God exists and I’m going to Heaven.’ That sounds very similar to ‘This is a part of what you say about your religion, and it may be proved false one day, so your religion might be too.’
Is it okay to slip into the streams of thought that the other considers logic in order to beat them at it and potentially shake their beliefs?
Basically, the question here is whether you can use the Dark Arts with purely Light intentions. In the ideal case, I have to say “of course you can”: assuming you know a method which you believe is more likely to cause your partner to gain true beliefs rather than false ones, you can use that method even if it involves techniques that are frowned upon in rationalist circles.

However, in the real world, doing so is incredibly dangerous. First, you have to consider the knock-on effects of being seen to use such lines of reasoning: it could damage your reputation, or that of rationalists in general, in the eyes of those who hear you; it could cause people to become more firmly attached to a false epistemology, which makes them more likely to just adopt another false belief; and so on. You also have to consider that you run on hostile hardware; you could damage your own rationality if you aren’t very careful about handling the cognitive dissonance.

There are a lot of failure modes you open yourself up to when you engage in that sort of anti-reasoning, and while it’s certainly possible to navigate through it unscathed (I suspect Eliezer has done so in his AI box experiments), I don’t think it is a good idea to expose yourself to the risk without a good reason.
A separate but also relevant point: everything is permissible, but not all things are good. Asking “is it okay to...” is the wrong question, and is likely to expose you to some of the failure modes of Traditional Rationality. You don’t automatically fail by phrasing it like that, but once again it’s an issue of unnecessarily risking mental contamination. The better question is “is it a good idea to...” or “what are the dangers of...” or something similar that voices what you really want answered, which should probably not be “will LWers look down on me for doing …” (After all, if something is a good idea but we look down on it, then we want to be told so, so that we can stop doing silly things like that.)
The framing of the first sentence gives me a desperately unfair expectation for the discussion inside HPMOR; I’m excited.