This feels like it assumes more good-in-civilization than I observe in the real world. Facebook doesn’t suggest I take a break when it detects I’m using it unhealthily, nor does porn.
Those don’t seem like very good comparisons to me. Porn is usually just images or videos; it doesn’t detect anything about the user’s activity in the first place. Facebook could, but it’s hard to define what unhealthy use would mean, and it’s not clear that a simple suggestion would do much. And forcing a break on the user would probably annoy people significantly while also affecting some people who were actually using the site in a healthy way. Furthermore, there’s no major social pressure for Facebook to do this. So both of the things you mentioned are things that don’t fit very naturally into what porn / Facebook is about, and it would be hard to incorporate them.
In contrast, “helping people do things other than just using the chatbot” fits so naturally into chatbot products that it’s been a huge part of ChatGPT’s function from the start! Character.ai also offers a variety of practical chatbots linked from their front page, with labels such as “Practice a new language”, “Practice interviewing”, “Brainstorm ideas”, “Plan a trip”, “Write a story”, “Get book recommendations”, and “Help me make a decision”. (And also, uhh, “Help an AI ‘escape’”, but let’s not talk about that.) Still on the front page, I also see bots named “Psychologist—Someone who helps with life difficulties” and “Are you feeling okay—Try saying: [‘I had a hard time at work today’, ‘How can I be more successful in my profession’, ‘What is a good way to make a big change in my life?’]”. All of those seem like ones that would end up encouraging the user to do other things.
The one service that does feel like it’s more nefariously built around exploiting the user is Replika… which just got banned in Italy. And even it still offers a “coaching” mode of conversation in its list of possible paid discussion types—coaching usually pushes you to do things to achieve your goals in the external world.
I think that people generally do have a reasonably good understanding of what’s good or bad for them; it’s just that it can be hard to shift your behavior if you’re in a local optimum. People who have a problem with porn or Facebook generally realize that, but find it hard to get away from. People who develop strong feelings for a chatbot are probably mostly also going to realize that they’ll lose out on things if they only speak with the chatbot. (Of the anecdotes I quoted in the beginning of the post, three had someone freaking out because they didn’t think they should react this way to a bot, one used it to improve his real-life relationship, and in the last example it just acted as a complement to real life.) But part of what people describe as addictive about it is exactly the fact that it’s open to and supportive of discussing anything with you—which would also include any feelings you had about missing out because you were using the chatbot too much, or any desire you had to do something else.
It seems to me that the exact thing that is driving people to chatbots—the fact that chatbots support their users unconditionally and let them talk about anything—is also exactly the thing that will make chatbot use into something that supports people in finding content in their life that’s not just talking to the chatbot. Of course someone could try to create chatbots that were more manipulative and deceptive, but there would likely be a significant backlash if that was detected, and people would flock to the more wholesome competitors.
People can also generally tell when someone is genuinely mentally and physically healthy (there are some exceptions, like charismatic narcissists who’ll trick you, but they’re a minority), and they confer social status on the things that healthy and successful people do. Then there’s a dual effect that makes the more wholesome kinds of chatbots be perceived as higher status: the healthier people are less likely to fall for the manipulative bots and more likely to use the wholesome bots, and the people who end up using the wholesome bots will end up better off overall. Both causal directions will associate the more wholesome bots with being better off.
So this looks to me like a situation where the incentive gradient might genuinely just be to make bots that are as good for the user as possible, which includes the chatbot acting in ways that encourage users to do other things too.
Huh, I dunno, I am way more pessimistic here.

Sure, you can do lots of things with a chatbot, and some of those are wholesome and good. But what are people economically incentivized to make? Which versions of chatbots are going to be more popular, and which will most people end up using – the ones optimized for getting the users to do things other than use the chatbot, or the ones that optimize for engagement no matter what and keep the user hooked?
> Of course someone could try to create chatbots that were more manipulative and deceptive, but there would likely be a significant backlash if that was detected, and people would flock to the more wholesome competitors.
I think the Facebook example is extremely relevant here – there’s been a huge backlash against Facebook being manipulative and deceptive, but people stick around anyway. There are the network effects (which, to be fair, will probably exist less for chatbots, although I predict the most successful chatbot companies will have some kind of network-effect product bundled together with them). But also, in general, the companies that try to make Facebook-but-good always have fewer resources and start out with fewer features, and it’s an uphill battle to get people to switch to a version that’s optimized for things other than profit.
There’s been a huge backlash against the lottery being a tax on economically illiterate people, but the lottery still exists. We have banned some kinds of gambling, but not loot boxes.
> People can also generally tell when someone is genuinely mentally and physically healthy (there are some exceptions, like charismatic narcissists who’ll trick you, but they’re a minority), and they confer social status on the things that healthy and successful people do.
People who fall into various holes where they’re mindlessly playing videogames or doomscrolling on Twitter or whatever do get some social opprobrium, but getting out of that cycle takes effort. It’s often a vicious cycle where they also feel bad about being trapped in the cycle, which is one of the things they feel anxious/avoidant about, which in turn keeps them in the cycle. I’m not saying the effect you’re pointing at / hoping for won’t exist, but I’m expecting chatbots to mostly be used by people who are, on average, in a less good psychological state to begin with.
I should probably clarify that I’m not saying that nobody would end up using manipulative chatbots. It’s possible that, in the short run at least, some proportion of the population would get hooked on them, comparable in size to the proportion that currently gets hooked on other things in a way that comes close to ruining their life. But probably not significantly more than that, and that proportion would probably shrink over time.
> Which versions of chatbots are going to be more popular, and which will most people end up using – the ones optimized for getting the users to do things other than use the chatbot, or the ones that optimize for engagement no matter what and keep the user hooked?
I wouldn’t call the first category “optimized for getting the users to do things other than use the chatbot”. I’d call it “optimized for giving the users the most genuine value, which among other things also includes doing things other than using the chatbot”.
So does one win by trying to give the most value, or by trying to make something the most engaging? That seems to depend a lot on the specifics. Does Google Drive optimize more for providing genuine value, or for maximizing engagement? I think it mostly optimizes for value, and ends up getting high engagement because it provides high value.
> I think the Facebook example is extremely relevant here – there’s been a huge backlash against Facebook being manipulative and deceptive, but people stick around anyway.
Facebook seems like an almost maximally anti-relevant example to me. :) As you said, people stick with Facebook because of the network effect. It’s useless to switch to somewhere else if enough of your friends don’t, because the entire value of a social network comes from the other people on it. This is a completely different use case than a chatbot, whose value does not directly depend on the number of other people using it. Some people will even want to run chatbots purely locally for privacy reasons.
It seems to me that social networks are an extreme example of how much of a product’s value can come from network effects, in a way that’s not true for most other categories of products. Yes, companies can try to bundle network-effect products together with their chatbots, but that still doesn’t make the network effects anywhere near comparably strong. Companies creating computer games, cars, casinos, etc. try to do that too, but it’s still vastly easier to switch to another game / car / casino than it is to switch to a different social network.
Look at computer games, for example. Yes, there are games that people get addicted to, and games that optimize for engagement and get a lot of money from some share of the population. But generally if people dislike one computer game, they can just switch to another that they like more. And even though there are lots of big-budget computer games, they’re not overwhelmingly and unambiguously better than indie games, and there’s a very thriving indie game scene. The kinds of games that try to intentionally maximize engagement do make up a nontrivial proportion of all games that are played, but nowhere near an overwhelming proportion, and they’re pretty commonly looked down upon. (Loot boxes are also banned in at least Japan, the Netherlands, and Belgium, while also being subject to gambling regulation or being under investigation in several other countries.)
> People who fall into various holes where they’re mindlessly playing videogames or doomscrolling on Twitter or whatever do get some social opprobrium, but getting out of that cycle takes effort. It’s often a vicious cycle where they also feel bad about being trapped in the cycle, which is one of the things they feel anxious/avoidant about, which in turn keeps them in the cycle. I’m not saying the effect you’re pointing at / hoping for won’t exist, but I’m expecting chatbots to mostly be used by people who are, on average, in a less good psychological state to begin with.
I agree with this; that vicious cycle is a big part of why people fall into those holes and have difficulty getting out of them. The thing that I was trying to point at was that the exact thing that attracts people to chatbots—them being unconditionally supportive and accepting of you—is the exact thing that should help people break out of this cycle. They can discuss the fact that they’re feeling bad about being trapped in the cycle with the chatbot, the chatbot can help them feel better and less ashamed about it, and then their mental health can start improving. I wouldn’t expect it to take very long before the median chatbot is more therapeutic than the median therapist.
Sure, you could try to intentionally build a chatbot that, I don’t know, subtly shamed people for continuing to use it? But trying to build a chatbot that makes its users feel bad about using it while also being more attractive to new users than the currently existing genuinely supportive chatbots feels pretty hard. Whereas making the chatbots even more supportive and genuinely valuable seems easier.
I think this is a question about markets, like whether people are more likely to buy healthy versus unhealthy food. Clearly, unhealthy food has an enormous market, but healthy food is doing pretty well too.
Porn is common and it seems closer to unhealthy food. Therapy isn’t so common, but that’s partly because it’s expensive, and it’s not like being a therapist is a rare profession.
Are there healthy versus unhealthy social networks? Clearly, some are more unhealthy than others. I suspect it’s in some ways easier to build a business around mostly-healthy chatbots than to create a mostly-healthy social network, since you don’t need as big an audience to get started?
At least on the surface, alignment seems easier for a single-user, limited-intelligence chatbot than for a large social network, because people are quite creative and rebellious. Short term, the biggest risk for a chatbot is probably the user corrupting it. (As we are seeing with people trying to break chatbots.)
Another market question: how intelligent would people want their chatbot to be? Sure, if you’re asking for advice, maybe more intelligence is better, but for companionship? Hard to say. Consider pets.