The amount of detail an AI would need to simulate realistic NPCs for you could be influenced substantially by a number of factors: whether you are an introvert or an extrovert, what your job is, how many people you interact with over the course of a day and in what depth, how many of those people you have very deep conversations with and how frequently, and whether an acceptable AI answer to your remarking to someone, ‘Everyone and everything seems so depressingly bland and repetitive,’ is a doctor replying, ‘Have you tried taking medication? I’ll write you a prescription for an antidepressant.’
Yes, this is the sort of consideration I had in mind. I’m glad the discussion is heading in this direction. Do you think the answer to my question hinges on those details though? I doubt it.
Perhaps if I were extraordinarily unsuspicious, chatbots not much more sophisticated than modern-day ones could convince me. But I think it is pretty clear that we will need more sophisticated chatbots to convince most people.
My question is, how much more sophisticated would they need to be? Specifically, would they need to be so much more sophisticated that they would be conscious on a level comparable to me, and/or would they require processing power comparable to simply simulating another person? For example, I’ve interacted a ton with my friends and family, and built up detailed mental models of their minds. Could they be chatbots/NPCs, with minds that are nothing like the models I’ve made?
(Another idea: What if they are exactly like the models I’ve made? What if the chatbot works by detecting what I expect someone to say, and then having them say that, with a bit of random variation thrown in?)
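To make that idea concrete, here is a minimal, purely hypothetical sketch of an expectation-mirroring NPC. Everything in it is illustrative: `predict_expected_reply` is a stand-in for whatever model the simulator has of *your* expectations, and the canned deviations are just placeholders for “a bit of random variation.”

```python
import random

def predict_expected_reply(listener_model, utterance):
    # Placeholder for the simulator's model of what YOU expect
    # this person to say in response to `utterance`.
    return listener_model.get(utterance, "Hm, interesting.")

def npc_reply(listener_model, utterance, variation=0.2):
    """Say what the listener expects, with a bit of random variation mixed in."""
    expected = predict_expected_reply(listener_model, utterance)
    if random.random() < variation:
        # Occasionally deviate so the mirroring isn't obvious.
        return random.choice(["Actually, I disagree.", "Funny you say that.", expected])
    return expected

# Toy "model of my friend": what I expect them to say to a given prompt.
model = {"Nice weather today": "Yeah, finally some sun."}
print(npc_reply(model, "Nice weather today", variation=0.0))
# -> Yeah, finally some sun.
```

With `variation=0.0` the NPC is a pure mirror of your expectations; the whole question is how much better than random the deviations would need to be.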
The thing is, if you get suspicious you don’t immediately leap to the conclusion of chatbots. Nobody glances around, realizes everyone is bland and stupid, and thinks, “I’ve been fooled! An AI has taken over the world and replaced human beings with chatbots!” unless they suffer from paranoia.
Your question, “How much more sophisticated would they need to be?” is answered by “it depends.” If you live as a hermit in a cave up in the Himalayas, living off water from a mountain stream and eating nothing but what you hunt or gather with your bare hands, the AI will not need to use chatbots at all. If you’re a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle (let’s be frank: if we were introduced to a chatbot imitating Eliezer Yudkowsky, the difference would be fairly obvious).
If you’ve interacted a lot with your friends and family, and have never once been suspicious that they are chatbots, then with our current level of AI technology it is unlikely (but not impossible) that they actually are chatbots.
[Please note that if everyone says what you expect them to say that would make it fairly obvious something is up, unless you happen to be a very, very good predictor of human behavior.]
Thanks for the response. Yes, it depends on how much interaction I have with human beings and on the kind of people I interact with. I’m mostly interested in my own case, of course, and I interact with a fair number of fairly diverse, fairly intelligent human beings on a regular basis.
If you’re a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle
Ah, but would it? I’m not so sure; that’s why I made this post.
Yes, if everyone always said what I predicted, things would be obvious, but recall I specified that random variation would be added. This appears to be how dream characters work: You can carry on sophisticated conversations with them, but (probably) they are governed by algorithms that feed off your own expectations. That being said, I now realize that the variation would have to be better than random in order to account for how e.g. EY consistently says things that are on-point and insightful despite being surprising to me.
Another trick it could use is relying on chatbots most of the time, but swapping in real people only for the moments when you are actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year you have a 10-hour intense discussion with Eliezer. That’s not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.
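A rough back-of-the-envelope check on that budget (all the hour counts are the illustrative figures from above, not real data):

```python
# Hours per year that would need a "real" (fully simulated) conversation
# partner, versus cheap chatbot hours. Figures are illustrative assumptions.
deep_family_hours_per_week = 3      # deep emotional talks with family
deep_expert_hours_per_year = 10     # e.g. one intense annual discussion
weeks_per_year = 52
total_hours_per_year = 24 * 365

deep_hours = deep_family_hours_per_week * weeks_per_year + deep_expert_hours_per_year
fraction_expensive = deep_hours / total_hours_per_year
print(f"{fraction_expensive:.1%} of hours need full simulation")
# -> 1.9% of hours need full simulation
```

Under these toy numbers, under two percent of your hours would require expensive simulation, which is consistent with the claim that most of the compute still goes to simulating your own brain.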
Edit: another trick; the chatbots might have some glaring failure modes if you say the wrong thing, being unable to handle edge cases, but whenever you encounter one, the sim is restored from a backup ten minutes earlier and the specific bug is manually patched. If this went on long enough, the chatbots would become real people (and also bloated and slow), but it hasn’t happened yet. Or maybe the patches that don’t come up for long enough get commented out.