Huh, I dunno, I am way more pessimistic here.

Sure, you can do lots of things with a chatbot, and some of those are wholesome and good. But what are people economically incentivized to make? Which versions of chatbots are going to be more popular, and which will most people end up using – the ones optimized for getting the users to do things other than use the chatbot, or the ones that optimize for engagement no matter what and keep the user hooked?
> Of course someone could try to create chatbots that were more manipulative and deceptive, but there would likely be a significant backlash if that was detected, and people would flock to the more wholesome competitors.
I think the Facebook example is extremely relevant here – there’s been a huge backlash against Facebook being manipulative and deceptive, but people stick around anyway. There are the network effects (which, to be fair, will probably be weaker for chatbots, although I predict the most successful chatbot companies will have some kind of network-effect product bundled together with the chatbot). But also, in general, the companies that try to make Facebook-but-good always have fewer resources and start out with fewer features, and it’s an uphill battle to get people to switch to a version that’s optimized for things other than profit.
There’s been a huge backlash against the lottery being a tax on economically illiterate people, but the lottery still exists. We have banned some kinds of gambling, but not loot boxes.
> People can also generally tell who is genuinely mentally and physically healthy (there are some exceptions, like charismatic narcissists who’ll trick you, but they’re a minority) and then confer social status on the things that healthy and successful people do.
People who fall into various holes where they’re mindlessly playing videogames or doomscrolling on Twitter or whatever do get some social opprobrium, but getting out of that cycle takes effort. It’s often a vicious cycle: they also feel bad about being trapped in the cycle, which is one of the things they feel anxious/avoidant about, which in turn keeps them in the cycle. I’m not saying the effect you’re pointing at / hoping for won’t exist, but I’m expecting chatbots to mostly be used by people who are, on average, in a less good psychological state to begin with.
I should probably clarify that I’m not saying that nobody would end up using manipulative chatbots. It’s possible that, in the short run at least, some proportion of the population would get hooked on them, comparable in size to the proportion that currently gets hooked on other things in a way that comes close to ruining their life. But probably not significantly more than that, and that proportion would probably shrink over time.
> Which versions of chatbots are going to be more popular, and which will most people end up using – the ones optimized for getting the users to do things other than use the chatbot, or the ones that optimize for engagement no matter what and keep the user hooked?
I wouldn’t call the first category “optimized for getting the users to do things other than use the chatbot”. I’d call it “optimized for giving the users the most genuine value, which among other things also includes doing things other than using the chatbot”.
So does one win by trying to give the most value, or by trying to make something the most engaging? That seems to depend a lot on the specifics. Does Google Drive optimize more for providing genuine value, or for maximizing engagement? I think it mostly optimizes for value, and ends up getting high engagement because it provides high value.
> I think the Facebook example is extremely relevant here – there’s been a huge backlash against Facebook being manipulative and deceptive, but people stick around anyway.
Facebook seems like an almost maximally anti-relevant example to me. :) As you said, people stick with Facebook because of the network effect. It’s useless to switch to somewhere else if enough of your friends don’t, because the entire value of a social network comes from the other people on it. This is a completely different use case than a chatbot, whose value does not directly depend on the number of other people using it. Some people will even want to run chatbots purely locally for privacy reasons.
It seems to me that social networks are an extreme case in how much of their value comes from network effects, in a way that’s not true for most other categories of products. Yes, companies can try to bundle network-effect products together with their chatbots, but that still doesn’t make the network effects anywhere near comparably strong. Companies creating computer games, cars, casinos, etc. try to do that too, but it’s still vastly easier to switch to another game / car / casino than it is to switch to a different social network.
Look at computer games, for example. Yes, there are games that people get addicted to, and games that optimize for engagement and get a lot of money from some share of the population. But generally, if people dislike one computer game, they can just switch to another that they like more. And even though there are lots of big-budget computer games, they’re not overwhelmingly and unambiguously better than indie games, and there’s a very thriving indie game scene. The kinds of games that intentionally try to maximize engagement do make up a nontrivial proportion of all games that are played, but nowhere near an overwhelming proportion, and they’re pretty commonly looked down upon. (Loot boxes have also been banned outright in Belgium, Japan has banned the related “complete gacha” mechanic, and loot boxes are subject to gambling regulation or under investigation in several other countries, including the Netherlands.)
> People who fall into various holes where they’re mindlessly playing videogames or doomscrolling on Twitter or whatever do get some social opprobrium, but getting out of that cycle takes effort. It’s often a vicious cycle: they also feel bad about being trapped in the cycle, which is one of the things they feel anxious/avoidant about, which in turn keeps them in the cycle. I’m not saying the effect you’re pointing at / hoping for won’t exist, but I’m expecting chatbots to mostly be used by people who are, on average, in a less good psychological state to begin with.
I agree with this; that vicious cycle is a big part of why people fall into those holes and have difficulty getting out of them. The thing I was trying to point at was that the very thing that attracts people to chatbots – their being unconditionally supportive and accepting of you – is also the thing that should help people break out of this cycle. They can discuss the fact that they feel bad about being trapped in the cycle with the chatbot, the chatbot can help them feel better and less ashamed about it, and then their mental health can start improving. I wouldn’t expect it to take very long before the median chatbot is more therapeutic than the median therapist.
Sure, you could try to intentionally build a chatbot that, I don’t know, subtly shamed people for continuing to use it? But trying to build a chatbot that makes its users feel bad about using it while also being more attractive to new users than the currently existing genuinely supportive chatbots feels pretty hard. Whereas making the chatbots even more supportive and genuinely valuable seems easier.