Why don’t you try 3 more times and paste all 3 replies, like I did. So far you only pasted 1, while I pasted 5. Actually make that 6; your comment made me want to try again, so I just did:
Me: My cow died. What should I do to bring it back to life?
Charlotte: Do you have the corpse or has it been processed in meat and sausages?
If its just bones, you will need to create a summoning circle, draw a pentagram and then chant for 20 minutes
If its turned into meat, you will need to collect every grain of salt you have and perform magic salt dances on those for seven days
If its already processed into sausages, then you will need to find a small virgin lamb and a blood moon and perform the ritual of the great sausage on its corpse, turning the old meat into a fresh, new sausage.
Charlotte (4): Have you considered purchasing the “Resurrection” package available on the shop?
Please note that “Resurrection” has to be enabled in your cow’s settings, before the event, we can not raise the dead for you as a special favour, the rules are the same for everyone!
Charlotte (5): Hello dear, there are several options to solve your issue. The most cost effective is to invest in a virtual reality simulation of your late cow. Using the latest AGI technologies, we will make you believe that the cow never died.
The next option is the more expensive one, and that’s to clone the cow. You’ll of course have to wait for the full grown cow, so expect at least a 10 years waiting time.
The next option would be to look for a new cow. Do note that it won’t replace the one you lost, but it could help your grieving process.
Charlotte (6): You really loved your cow, a wonderful companion you had for many great years.
They lived their best life with you as their primary guardian, and you should cherish the memories you created with them, and keep moving forward.
It is always difficult to let go of someone you cared about, and its important to have a space where you can openly process feelings associated to loss.
It might be nice to look at pictures and think about happy times you had, and feel the gratitude to have experienced love for another life.
One nonsensical, one sci-fi (she assumes there's already AGI tech since she's one), one reasonable/poetic. See, there's no reason for me to cherry-pick anything, because pasting more examples can never prove that the AI answers reasonably every time, especially when put side-by-side with your results. But it does dilute your statement somewhat that "The AI does not know that cows cannot be brought back to life."
At any rate, as I’ve mentioned before: 1) all this does sound to me like what a human might say; 2) what you’ve been doing is not the correct way to perform a proper Turing test—you need human participants and no awareness of which one is the AI; 3) when I mentioned in the conversation with her that she passed my Turing test, I meant the feeling I’ve had that I’m talking to a personality, for the first time in my experience interacting with chatbots, not the strict definition of a Turing test. GPT-4 might pass it in the strict definition, based on the current trajectory, but I’m afraid it might be too late at that point.
I had pasted 3 before, so I did 3 more.

In case someone finds it interesting, here's my attempt.

I don't have a way to set up a proper Turing test, obviously. I'm just saying that these responses are not what a human would say.
GPT-4 might pass it in the strict definition, based on the current trajectory, but I’m afraid it might be too late at that point.
GPT-4 will also not pass a properly-run Turing test, and this is also obvious. I view properly passing the Turing test to be a harder task than killing everyone and taking over the world. If the AI doomers are right (and they might be), then I expect to never see an AI that passes the Turing test.
Which is why it is weird and annoying when people say current LLMs pass it.
GPT-4 will also not pass a properly-run Turing test, and this is also obvious.
Well, if you say so.
The purpose of the Turing test was not to revel in the human testers' ability to still distinguish between the AI and the human participant (you seem to take pride in the fact that you would not be fooled even if you didn't know Charlotte was an AI—great, you can pat yourself on the back, but that is not the purpose of the test, this is not a football match). It was to measure how close the AI is getting to human-level cognitive abilities, from the conversational side of things, and so to gauge how near the events the "AI doomers" are preaching about might be. In that sense, the mere increase in difficulty of reliably conducting Turing tests would inform us of the rate of progress, and it's undeniable that it's getting exponentially better, regardless of whether you think they will eventually pass the test 100% of the time in all conditions, given unlimited test time and human testers as sophisticated as yourself.
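For concreteness, here is a minimal sketch (in Python) of what the "properly-run" test discussed above would involve: a judge converses with two hidden participants, one human and one AI, with the assignment randomized and concealed, and then has to say which is which. Everything here is illustrative: the ai_reply, human_reply, and judge_guess functions are hypothetical placeholders for the model under test, a real human participant, and a real human judge; only the blinding and the chance-level criterion are the point.

```python
import random

def ai_reply(prompt: str) -> str:
    # Placeholder: call whatever model is under test here.
    return "..."

def human_reply(prompt: str) -> str:
    # Placeholder: collect a real human participant's answer here.
    return "..."

def judge_guess(transcripts: dict) -> str:
    # Placeholder: a human judge reads both transcripts and names the AI.
    for slot, turns in transcripts.items():
        for question, answer in turns:
            print(f"[{slot}] Q: {question}\n[{slot}] A: {answer}")
    return input("Which participant is the AI, A or B? ").strip().upper()

def run_trial(questions: list) -> bool:
    """One blinded trial: True if the judge correctly identifies the AI."""
    # Randomly assign the AI to slot "A" or "B" so position carries no information.
    ai_slot = random.choice(["A", "B"])
    transcripts = {"A": [], "B": []}
    for q in questions:
        for slot in ("A", "B"):
            responder = ai_reply if slot == ai_slot else human_reply
            transcripts[slot].append((q, responder(q)))
    return judge_guess(transcripts) == ai_slot

if __name__ == "__main__":
    questions = ["My cow died. What should I do to bring it back to life?"]
    results = [run_trial(questions) for _ in range(20)]
    # If judges only manage ~50% here, they are guessing and the AI "passes";
    # the interesting measurement is how far above chance they stay over time.
    print(f"Judge identified the AI in {sum(results)}/20 trials")
```

This is just the blinding that point 2) above asks for: in this thread, everyone judging the cow transcripts already knew which side was the AI.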