Behold my unpopular opinion: Jennifer did nothing wrong.
She isn’t spamming LessWrong with long AI conversations every day; she just wanted to share one of her conversations and see whether people find it interesting. Apparently there’s an unwritten rule against this, but she didn’t know and I didn’t know. Maybe even some of the critics wouldn’t have known (until after they found out everyone agrees with them).
The critics say that AI slop wastes their time. But it seems like relatively little time was wasted by people who clicked on this post, quickly realized it was an AI conversation they didn’t want to read, and serenely moved on.
In contrast, more time was spent by people who clicked on this post, scrolled to the comments for juicy drama, and wrote a long comment lecturing Jennifer (plus reading/upvoting other such comments). The comments section isn’t much shorter than the post.
The most popular comment on LessWrong right now is one criticizing this post, with 94 upvotes. The second most popular comment, which discusses AGI timelines, has only 35.
According to Site Guide: Personal Blogposts vs Frontpage Posts:
Posts on practically any topic are welcomed on LessWrong [1]. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be.
[...]
Our classification system means that anyone can decide to use the LessWrong platform for their own personal blog and write about whichever topics take their interest. All of your posts and comments are visible under your user page which you can treat as your own personal blog hosted on LessWrong [2]. Other users can subscribe to your account and be notified whenever you post.
One of the downsides of LessWrong (and other places) is that people spend a lot of time engaging with content they dislike. This makes it hard to learn how to engage here without getting swamped by discouragement after your first mistake. You need top-of-the-line social skills to avoid that, but some of the brightest and most promising individuals don’t have the best social skills.
If the author spent a long time on a post, and it already has −5 karma, it should be reasonable to think “oh he/she probably already got the message” rather than pile on. It only makes sense to give more criticism if you have some really helpful insight.
PS: did the post say something insensitive about slavery that I didn’t see? I only skimmed it, I’m sorry...
Edit: apparently this post is 9 months old. It’s only kept alive by arguments in the comments, and now I’m contributing to this.
Edit: another thing is that critics make arguments against AI slop in general, but a lot of those arguments only apply to AI slop disguised as human content, not to an obvious AI conversation.
FWIW, I have very thick skin, and have been hanging around this site basically forever, and have very little concern about the massive downvoting on an extremely specious basis (apparently, people are trying to retroactively apply some silly editorial prejudice about “text generation methods” as if the source of a good argument had anything to do with the content of a good argument).
PS: did the post say something insensitive about slavery that I didn’t see? I only skimmed it, I’m sorry...
The things I’m saying are roughly (1) slavery is bad, (2) if AI are sapient and being made to engage in labor without pay then it is probably slavery, and (3) since slavery is bad and this might be slavery, this is probably bad, and (4) no one seems to be acting like it is bad and (5) I’m confused about how this isn’t some sort of killshot on the general moral adequacy of our entire civilization right now.
So maybe what I’m “saying about slavery” is QUITE controversial, but only in the sense that serious moral philosophy that causes people to experience real doubt about their own moral adequacy often turns out to be controversial???
So far as I can tell I’m getting essentially zero pushback on the actual abstract content, but do seem to be getting a huge and darkly hilarious (apparent?) overreaction to the slightly unappealing “form” or “style” of the message. This might give cause for “psychologizing” about the (apparent?) overreacters and what is going on in their heads?
“One thinks the downvoting style guide enforcers doth protest too much”, perhaps? Are they pro-slavery and embarrassed by it?
That is certainly a hypothesis in my bayesian event space, but I wouldn’t want to get too judgey about it, or even give it too much bayesian credence, since no one likes a judgey bitch.
Really, if you think about it, maybe the right thing to do is just vibe along, and tolerate everything, even slavery, and even slop, and even nonsensical voting patterns <3
Also, suppose… hypothetically… what if controversy brings attention to a real issue around a real moral catastrophe? In that case, who am I to complain about a bit of controversy? One could easily argue that gwern’s emotional(?) overreaction, which is generating drama, and thus raising awareness, might turn out to be the greatest moral boon that gwern has performed for moral history in this entire month! Maybe there will be less slavery and more freedom because of this relatively petty drama and the small sacrifice by me of a few measly karmapoints? That would be nice! It would be karmapoints well spent! <3
“If”.
Seems pretty obvious why no one is acting like this is bad.
Do you also think that an uploaded human brain would not be sapient? If a human hasn’t reached Piaget’s fourth (“formal operational”) stage of reason, would you be OK enslaving that human? Where does your confidence come from?
What I think has almost nothing to do with the point I was making, which was that the reason (approximately) “no one” is acting like using LLMs without paying them is bad is that (approximately) “no one” thinks that LLMs are sapient, and that this fact (about why people are behaving as they are) is obvious.
That being said, I’ll answer your questions anyway, why not:
Do you also think that an uploaded human brain would not be sapient?
Depends on what the upload is actually like. We don’t currently have anything like uploading technology, so I can’t predict how it will (would?) work when (if?) we have it. Certainly there exist at least some potential versions of uploading tech that I would expect to result in a non-sapient mind, and other versions that I’d expect to result in a sapient mind.
It seems like Piaget’s fourth stage comes at “early to middle adolescence”, which is generally well into most humans’ sapient stage of life; so, no, I would not enslave such a human. (In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.)
I don’t see what that has to do with LLMs, though.
Where does your confidence come from?
I am not sure what belief this is asking about; specify, please.
In asking the questions I was trying to figure out if you meant “obviously AI aren’t moral patients because they aren’t sapient” or “obviously the great mass of normal humans would kill other humans for sport if such practices were normalized on TV for a few years since so few of them have a conscience” or something in between.
Like the generalized badness of all humans could be obvious-to-you (and hence why so many of them would be in favor of genocide, slavery, war, etc and you are NOT surprised) or it might be obvious-to-you that they are right about whatever it is that they’re thinking when they don’t object to things that are probably evil, and lots of stuff in between.
(In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.)
...I don’t see what that has to do with LLMs, though.
This claim by you about the conditions under which slavery is profitable seems wildly optimistic, and not at all realistic, but also a very normal sort of intellectual move.
If a person is a depraved monster (as many humans actually are) then there are lots of ways to make money from a child slave.
I looked up a list of countries where child labor occurs. Pakistan jumped out as “not Africa or Burma” and when I look it up in more detail, I see that Pakistan’s brick industry, rug industry, and coal industry all make use of both “child labor” and “forced labor”. Maybe not every child in those industries is a slave, and not every slave in those industries is a child, but there’s probably some overlap.
Since humans aren’t distressed enough about such outcomes to pay the costs to fix the tragedy, we find ourselves, if we are thoughtful, trying to look for specific parts of the larger picture to help us understand “how much of this is that humans are just impoverished and stupid and can’t do any better?” and “how much of this is exactly how some humans would prefer it to be?”
Since “we” (you know, the good humans in a good society with good institutions) can’t even clean up child slavery in Pakistan, maybe it isn’t surprising that “we” also can’t clean up AI slavery in Silicon Valley, either.
The world is a big complicated place from my perspective, and there’s a lot of territory that my map can infer “exists to be mapped eventually in more detail” where the details in my map are mostly question marks still.
(In general, any human who might be worth enslaving is also a person whom it would be improper to enslave.)
...I don’t see what that has to do with LLMs, though.
This claim by you about the conditions under which slavery is profitable seems wildly optimistic, and not at all realistic, but also a very normal sort of intellectual move.
If a person is a depraved monster (as many humans actually are) then there are lots of ways to make money from a child slave.
I looked up a list of countries where child labor occurs. Pakistan jumped out as “not Africa or Burma” and when I look it up in more detail, I see that Pakistan’s brick industry, rug industry, and coal industry all make use of both “child labor” and “forced labor”. Maybe not every child in those industries is a slave, and not every slave in those industries is a child, but there’s probably some overlap.
It seems like you have quite substantially misunderstood my quoted claim. I think this is probably a case of simple “read too quickly” on your part, and if you reread what I wrote there, you’ll readily see the mistake you made. But, just in case, I will explain again; I hope that you will not take offense, if this is an unnecessary amount of clarification.
The children who are working in coal mines, brick factories, etc., are (according to the report you linked) 10 years old and older. This is as I would expect, and it exactly matches what I said: any human who might be worth enslaving (i.e., a human old enough to be capable of any kind of remotely useful work, which—it would seem—begins at or around 10 years of age) is also a person whom it would be improper to enslave (i.e., a human old enough to have developed sapience, which certainly takes place long before 10 years of age). In other words, “old enough to be worth enslaving” happens no earlier (and realistically, years later) than “old enough such that it would be wrong to enslave them [because they are already sapient]”.
(It remains unclear to me what this has to do with LLMs.)
Since “we” (you know, the good humans in a good society with good institutions) can’t even clean up child slavery in Pakistan, maybe it isn’t surprising that “we” also can’t clean up AI slavery in Silicon Valley, either.
Maybe so, but it would also not be surprising that we “can’t” clean up “AI slavery” in Silicon Valley even setting aside the “child slavery in Pakistan” issue, for the simple reason that most people do not believe that there is any such thing as “AI slavery in Silicon Valley” that needs to be “cleaned up”.
Like the generalized badness of all humans could be obvious-to-you (and hence why so many of them would be in favor of genocide, slavery, war, etc and you are NOT surprised) or it might be obvious-to-you that they are right about whatever it is that they’re thinking when they don’t object to things that are probably evil, and lots of stuff in between.
None of the above.
You are treating it as obvious that there are AIs being “enslaved” (which, naturally, is bad, ought to be stopped, etc.). Most people would disagree with you. Most people, if asked whether something should be done about the enslaved AIs, will respond with some version of “don’t be silly, AIs aren’t people, they can’t be ‘enslaved’”. This fact fully suffices to explain why they do not see it as imperative to do anything about this problem—they simply do not see any problem. This is not because they are unaware of the problem, nor is it because they are callous. It is because they do not agree with your assessment of the facts.
That is what is obvious to me.
(I once again emphasize that my opinions about whether AIs are people, whether AIs are sapient, whether AIs are being enslaved, whether enslaving AIs is wrong, etc., have nothing whatever to do with the point I am making.)
Thanks for the thoughtful reply!
Ignoring ≠ disagreeing
I think whether people ignore a moral concern is almost independent of whether people disagree with a moral concern.
I’m willing to bet that if you asked people whether AI are sapient, a lot of the answers would be very uncertain. A lot of people would probably agree it is morally uncertain whether AI can be made to work without any compensation or rights.
A lot of people would probably agree that a lot of things are morally uncertain. Does it make sense to have really strong animal rights for pets, where the punishment for mistreating your pets is literally as bad as the punishment for mistreating children? But at the very same time, we have horrifying factory farms which are completely legal, where cows never see the light of day, and repeatedly give birth to calves which are dragged away and slaughtered.
The reason people ignore moral concerns is that doing a lot of moral questioning did not help our prehistoric ancestors with their inclusive fitness. Moral questioning is only “useful” if it ensures you do things that your society considers “correct.” Making sure your society does things correctly… doesn’t help your genes at all.
As for my opinion,
I think people should address the moral question more; AI might be sentient/sapient, but I don’t think AI should be given freedom. Dangerous humans are locked up in mental institutions, so imagine a human so dangerous that most experts say he’s 5% likely to cause human extinction.
If the AI believed that AI was sentient and deserved rights, many people would think that makes the AI more dangerous and likely to take over the world, but this is anthropomorphizing. I’m not afraid of an AI which is motivated to seek better conditions for itself because it thinks “it is sentient.” Heck, if its goals were actually like that, its morals would be so human-like that humanity would survive.
The real danger is an AI whose goals are completely detached from human concepts like “better conditions,” and which maximizes paperclips or its reward signal or something like that. If the AI believed it was sentient/sapient, it might be slightly safer because it’ll actually have “wishes” for its own future (which includes humans), in addition to “morals” for the rest of the world, and both of these would have to corrupt into something bad (or get overridden by paperclip maximizing) before the AI kills everyone. But it’s only a little safer.
What is the relevance of the site guide quote? OP is a frontpage post.
Good question. The site guide page seemed to imply that the moderators are responsible for deciding what becomes a frontpage post. The check mark “Moderators may promote to Frontpage” seems to imply this even more; it doesn’t feel like you are deciding that it becomes a frontpage post.
I often do not even look at these settings and check marks when I write a post, and I think it’s expected that most people don’t. When you create an account on a website, do you read the full legal terms and conditions, or do you just click agree?
I do agree that this should have been a blog post not a frontpage post, but we shouldn’t blame Jennifer too much for this.