I’d like to hear a clearer problem statement. The vast majority of my online consumption is for entertainment or information, and I don’t much care whether it was created or compiled by a human individual, a committee or corporation, an AI, or some combination of them.
I do often care about the quality, or expected quality, but that’s based on the source’s history and reputation, and it applies regardless of how the content was generated.
One possibility, surely just one possibility among many, is that a forum could be destroyed by being flooded with bots, using fictitious identities to talk pointlessly to each other about their fictitious opinions and their fictitious lives.
There is such a thing as online community, and advanced chatbots can be the death of it, in various ways.
edit:
I don’t much care whether it was created or compiled by a human individual, a committee or corporation, or an AI, or some combination of them.
Are there any circumstances in which you do care whether the people you are talking with are real, or are fictitious and AI-generated?
is that a forum could be destroyed by being flooded with bots, using fictitious identities to talk pointlessly to each other about their fictitious opinions and their fictitious lives.
The word “fictitious” in that description feels weird to me. MOST random talk on forums is flooded with unknown people, using online identities to talk pointlessly to each other about their poorly-supported beliefs and heavily-filtered lives. At least a bot will use good grammar when I ignore it.
If you’re describing a specific forum, say LessWrong, this still applies: there’s lots of crap I don’t care about and glide past. I recognize some posters, and spend the time on some topics to decide whether an unknown poster is worth reading. Adding bot accounts will create more chaff, and I’ll probably need better filtering mechanisms, but it won’t reduce the value of the many great posts and comments.
Are there any circumstances in which you do care whether the people you are talking with are real
I’m not actually sure whether I’d care or not, if the ’bot were good enough (none yet are). In most short online interactions, like on this site, if I can’t tell, I don’t care. In long-form interactions, where there is repeated context and shared discovery of each other (including in-person friendships and relationships, and most work relationships, even if we’ve never met in person), I don’t directly care, but AI and chatbots just aren’t good enough yet.
Note that I DO really despise when a human uses AI to generate some text, but then doesn’t actually verify that it matches the human’s beliefs, and takes no responsibility for posting verbose crap. I despise posting of crap by biological intelligences as well, so that’s not directly a problem with ’bots.