I think that Yudkowsky, hubris nonetheless, has made a few mistakes in his own reasoning.
A: “I don’t believe Artificial Intelligence is possible because only God can make a soul.”
B: “You mean if I can make an Artificial Intelligence, it proves your religion is false?”
I don’t see at all how this follows. At best, this would only show that A’s belief about what only God can or cannot do is mistaken. Concluding that this entails A’s religion is false is purely fallacious reasoning. Imagine the following situation:
A: “I don’t believe entanglement is possible because quantum mechanics shows non-locality is impossible.”
B: “You mean if I can show entanglement is possible, it proves quantum mechanics is false?”
This is not a case of quantum mechanics being false, but rather a case of A’s understanding of what quantum mechanics does and does not show being mistaken.
What you believe or don’t believe about quantum mechanics or God is irrelevant to this point. The point is that the conclusion Yudkowsky drew was, at best, arrived at hastily and incorrectly. Of course, saying that “if your religion predicts that I can’t possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false” is valid reasoning and a simple example of modus tollens. But that is not, as far as I can see, what A said.
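To spell out the valid form (a rough sketch; R and M are labels I’m introducing purely for illustration, not anything from the original dialogue): write R for “A’s religion is true” and M for “a human makes an Artificial Intelligence”. The modus tollens B needs is

\[ (R \rightarrow \neg M),\; M \;\vdash\; \neg R \]

and it refutes the religion only if the premise R → ¬M is genuinely a claim of the religion itself, rather than A’s own belief about what the religion entails.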
A: “I didn’t mean that you couldn’t make an intelligence, just that it couldn’t be emotional in the same way we are.”
B: “So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.”
There again seems to be invalid reasoning at work here. Whether or not an AI entity can ‘start talking’ about an emotional life that sounds like ours has nothing to do with the comment made by A, which was about whether or not such AI entities could actually be emotional in the same way organic beings are.
Consider rewording that in such a manner that you can fit the ‘hubris’ label in while leaving the sentence coherent.
Well, if person A’s religion strictly implies the claim that only God can make a soul and this precludes AI, then the falsehood of that claim also implies the falsehood of A’s religion. (A → B ⇒ ¬B → ¬A)
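As a sketch of that chain (again, R, C and M are labels of my own, chosen for illustration): let R be “A’s religion is true”, C be “only God can make a soul”, and M be “a human makes an AI”. Then

\[ (R \rightarrow C),\; (C \rightarrow \neg M) \;\vdash\; (M \rightarrow \neg R) \]

so an actual AI would refute R, provided the religion really does strictly imply C and C really does preclude M.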
But sure, you’re of course correct that if person A is mistaken about what person A’s religion claims, then no amount of demonstrated falsehoods in person A’s statements necessarily demonstrates falsehood in person A’s religion.
That said… if we don’t expect person A to say “my religion claims X” given that person A’s religion claims X, and we don’t expect person A to say “my religion doesn’t claim X” given that person A’s religion doesn’t claim X, then what experiences should we expect given the inclusion or exclusion of particular claims in person A’s religion?
Because if there aren’t any such experiences, then it seems that this line of reasoning ultimately leads to the conclusion that not only the objects whose existence religions assert, but the religions themselves, are epiphenomenal.
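To put the evidential point in likelihood terms (a sketch; E and C_X are labels I’m introducing for illustration): let E be the observation “person A says ‘my religion claims X’” and C_X the proposition “person A’s religion actually claims X”. If

\[ \frac{P(E \mid C_X)}{P(E \mid \neg C_X)} = 1 \]

then, by Bayes’ theorem, P(C_X | E) = P(C_X): A’s testimony shifts our credence about the religion’s actual content not at all, and if no other observation does any better, the religion’s content is doing no experiential work, which is what calling the religion itself epiphenomenal amounts to.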
I think the “strictly implies” may be stealing a base.
Yes, being convinced of the existence of the AI would make the man rethink the aspects of his religion that he believes render an AI impossible, but he could update that and keep the rest. From his perspective, he’d have the same religion, but updated to account for the belief in AIs.