“It was a bludgeoning by someone with training and practice in logical reasoning on someone without.”
I’m inclined to agree. I also found it less than convincing.
Let’s put aside the question of whether intelligence indicates the presence of a soul (although I’ve known more than a few highly intelligent people who are also morally bankrupt).
If it’s true that you can disprove his religion by building an all-encompassing algorithm that passes as a pseudo-soul, then the inverse must also be true. If you can’t quantify all the constituent parts of a soul, then you would have to accept that his religion offers a better explanation of the nature of being than AI. So you would have to start believing his religion until a better explanation presents itself. That seems fair, no?
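To make the shape of that explicit (my gloss, not anyone’s exact wording): let P = “an all-encompassing pseudo-soul can be built” and Q = “his religion is disproved”. The debate granted P → Q; what I’m proposing is the inverse, ¬P → ¬Q: if you can’t fully quantify the soul, his religion stands as the better explanation until something else does the job.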
If you can’t make that leap, then now would be a good time to examine your motives for any satisfaction you felt at his mauling. I’d argue your enjoyment is less about debating ability and more about the satisfaction of putting the “uneducated” in their place.
So let’s consider the emotion compassion. You can design an algorithm so that it knows what compassionate behaviour looks like. You could also design it so that it learns when this behaviour is appropriate. But at no point is your algorithm actually “feeling” compassion, even if it’s demonstrating it. It’s following a set of predefined rules (with perhaps some randomness and adaptation built in) because it believes it’s advantageous or logical to do so. If this were a human being, we’d apply the label “sociopath”. That, to me, is a critical distinction between AI and soul.
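To make that concrete, here’s a toy sketch in Python (every name and rule here is hypothetical, not any real system): an agent that pattern-matches distress, emits a scripted comforting line, and adjusts how often it does so based on feedback. It demonstrates the behaviour; it feels nothing.

```python
import random

# Purely illustrative; scripted "compassionate" lines, not felt ones.
COMFORTING_LINES = [
    "I'm sorry you're going through that.",
    "That sounds really hard. Do you want to talk about it?",
]

class PseudoCompassionateAgent:
    """Demonstrates 'compassion' via rules and feedback; feels nothing."""

    def __init__(self):
        # How strongly the agent is inclined to deploy the behaviour.
        self.respond_prob = 0.5

    def looks_distressed(self, utterance):
        # Crude pattern match standing in for "knowing what distress looks like".
        return any(w in utterance.lower() for w in ("sad", "upset", "hurt", "lost"))

    def respond(self, utterance):
        # Deploy the scripted behaviour when the rules say it's advantageous.
        if self.looks_distressed(utterance) and random.random() < self.respond_prob:
            return random.choice(COMFORTING_LINES)
        return "Noted."

    def feedback(self, reward):
        # Learn *when* the behaviour pays off; a number updates, nothing is felt.
        self.respond_prob = min(1.0, max(0.0, self.respond_prob + 0.1 * reward))

agent = PseudoCompassionateAgent()
print(agent.respond("I'm really upset about my dog"))
agent.feedback(1.0)  # the display "worked", so the agent will do it more often
```

Every step in there is bookkeeping over rules, however adaptive. That’s the distinction I’m pointing at.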
Debates like these take all the fun right out of AI. It’s disappointing that we need to debate the merits of tolerance on forums like this one.
Just nitpicking a little, but you don’t seem to understand the concept of an AI. It reprograms itself after each encounter (the same way a child does while growing up), so it counts as an emotional response: reacting the same way others do when a response is needed.
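Roughly, in Python terms (a toy sketch of the idea, not any actual architecture; every name here is made up): the agent stores which response earned approval in each situation and prefers it next time, the way a child imitates what works.

```python
from collections import defaultdict

class ImitativeAgent:
    """Rewrites its own behaviour after every encounter, the way a child imitates."""

    def __init__(self):
        # situation -> {candidate response: cumulative approval observed}
        self.memory = defaultdict(lambda: defaultdict(float))

    def respond(self, situation, options):
        scores = self.memory[situation]
        # Prefer whatever response has earned the most approval so far.
        return max(options, key=lambda r: scores[r])

    def observe(self, situation, response, approval):
        # The encounter itself "reprograms" future behaviour.
        self.memory[situation][response] += approval

agent = ImitativeAgent()
agent.observe("friend is crying", "offer a hug", 1.0)
agent.observe("friend is crying", "change the subject", -1.0)
print(agent.respond("friend is crying", ["change the subject", "offer a hug"]))
# prints "offer a hug" -- the response that mirrored others' approval
```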
If you try to argue that the response is therefore invalid (for not actually feeling any emotion, just giving an, admittedly frequently updated, response), then I point you at the “is my happiness the same as your happiness” argument.