is that a forum could be destroyed by being flooded with bots, using fictitious identities to talk pointlessly to each other about their fictitious opinions and their fictitious lives.
The word “fictitious” in that description feels weird to me. MOST forums are already flooded with unknown people, using online identities to talk pointlessly to each other about their poorly-supported beliefs and heavily-filtered lives. At least a bot will use good grammar when I ignore it.
If you’re describing a specific forum, say LessWrong, this still applies—there’s lots of crap I don’t care about, and I glide on past. I recognize some posters, and spend time on some topics to decide whether an unknown poster is worth reading. Adding bot accounts will create more chaff, and I will probably need better filtering mechanisms, but it won’t reduce the value of the many great posts and comments.
Are there any circumstances in which you do care whether the people you are talking with are real?
I’m not actually sure whether I’d care or not, if the ’bot were good enough (none yet are). In most short online interactions, like for this site, if I can’t tell, I don’t care. In long-form interactions, where there is repeated context and shared discovery of each other (including in-person friendships and relationships, and most work relationships, even if we’ve not met in person), I don’t directly care, but AI and chatbots just aren’t good enough yet.
Note that I DO really despise when a human uses AI to generate some text, but then doesn’t actually verify that it matches the human’s beliefs, and takes no responsibility for posting verbose crap. I despise posting of crap by biological intelligences as well, so that’s not directly a problem with ’bots.