How sure are you that brain emulations would be conscious?
I don’t know and I don’t care. But if you were to ask me, “how sure are you that whole brain emulations would be at least as interesting to correspond with as biological humans on Less Wrong”, I’d say, “almost certain”. If you were to ask me the follow-up question, “and therefore, should we grant them the same rights we grant biological humans”, I’d also say “yes”, though with a lower certainty, maybe somewhere around 0.9. There’s a non-trivial chance that the emergence of non-biological humans would cause us to radically re-examine our notions of morality.
I don’t know and I don’t care. But if you were to ask me, “how sure are you that whole brain emulations would be at least as interesting to correspond with as biological humans on Less Wrong”, I’d say, “almost certain”.
It may be worth noting that even today, people can have fun and hold extended conversations with chatbots. And people will, in the right circumstances, befriend a moving stick. The bar for actually having an interesting conversation is likely well below that needed for whole brain emulation.
Unfortunately the video you linked to is offline, but still:
The bar to actually have an interesting conversation is likely well below that needed for whole brain emulation.
Is there a chatbot in existence right now that could participate in this conversation we are having right here?
Yes, people befriend chatbots and sticks, as well as household pets, but I am not aware of any cat or stick that could pass for a human on the internet. Furthermore, most humans (with the possible exception of some die-hard cat ladies) would readily agree that cats are nowhere near as intelligent as other humans, and neither are plants.
The Turing Test requires the agent to act indistinguishably from a human; it does not merely require other humans to befriend the agent.
Sure, but there’s likely a large gap between “have an interesting conversation” and “pass the Turing test”. The second is likely much more difficult than the first.
I was using “interesting conversation” as a short-hand for “a kind of conversation that usually occurs on Less Wrong, for example the one we are having right now” (which, admittedly, may or may not be terribly interesting, depending on who’s listening).
Do you believe that passing the Turing Test is much harder than fully participating in our current conversation (and doing so at least as well as we are doing right now)? If so, how?