Given the alarmist and uninformed nature of LW’s audience, it might be wise to demand the source code (in this case none; presumably an API was used) and methodology used to generate any such content.
In this case it seems kind of obvious the author either wrote both sides and/or cherry-picked a lot. All fun and games, but you have an AGI death cult going here and this kind of thing can be fodder for false beliefs that stochastic parrots are infinitely more powerful than what’s experimentally proven thus far.
I upvoted you because you caused this response to be generated, which was informing to read, and I like informative things, and whatever generates informative things can’t be all bad <3
Thank you for that! :-)
...
However, I strongly disagree with your claim that LW’s audience is “uninformed”, except in the generalized sense that nearly all humans are ignorant about nearly all detailed topics; and yes, nearly all of the contributors to LessWrong are humans and thus ignorant in general by default.
Based on my personal experience, however, most people on LessWrong are unusually well informed, relative to numerous plausible baselines, on topics relevant to good judgement, skilled prediction, computer science, and the like.
...
Also, it seems like you used the word “alarmist” as though it deserved negative connotations, whereas I feel that having well designed methods for raising alarm and responding to real emergencies is critical to getting good outcomes in life, overall, in light of the non-Gaussian distribution of outcomes over events that is common to real world dynamical processes. So… “alarmism” could, depending on details, be good or bad or in between.
I think the world’s generally disastrously incompetent response to COVID-19’s escape from a lab, and its subsequent killing of millions of people, is a vivid illustration of a lack of competent, admirable “alarmism” in the ambient culture. Thus I see LessWrong as helpfully “counter-culture” here, and a net positive.
...
Also, even if the typical reader on LessWrong is “more than normally uninformed and unskillfully alarmist”, that does not coherently imply that exposing the audience to short, interesting, informative content about AI advances is a bad idea.
I think, in this sense, your model of discussion and decision making and debate assumes that adults can’t really discuss things productively, and so perhaps everything on the internet should proceed as if everyone is incompetent and only worthy of carefully crafted and highly manipulative speech?
And then perhaps the post above was not “cautiously manipulative enough” to suit your tastes?
Maybe I’m wrong in imputing this implicit claim to you?
And maybe I’m wrong to reject this claim in the places that I sometimes find it?
I’d be open to discussion here :-)
Finally, your claim that “you” (who actually? which people specifically?) somehow “have an AGI death cult going here” seems like it might be “relatively uninformed and relatively alarmist”?
Or maybe your own goal is to communicate an ad hominem and then feel good about it somehow? If you are not simply emoting, but actually have a robust model here then I’d be interested in hearing how it unpacks!
My own starting point in these regards tends to be Bainbridge & Stark’s sociological model of cults from The Future of Religion. Since positive cultural innovation has cult formation as a known negative attractor, it is helpful, if one’s goal is to create positive-EV cultural innovations, to actively try to detect and ameliorate such tendencies.
For example, it is useful and healthy (in my opinion) to regularly survey one’s own beliefs and those of others using a lens where one ASSUMES (for the sake of exploratory discovery) that some of the beliefs exist to generate plausible IOUs for the delivery of goods that are hard-to-impossible to truly acquire, and then to protect those beliefs from socially vivid falsification via the manipulation of tolerated rhetoric and social process. I regularly try to pop such bubbles in a humane and gentle way when I see them starting to form in my ambient social community. If this is unwelcome I sometimes leave the community… and I’m here for now… and maybe I’m “doing it wrong” (which is totally possible), but if so then I would hope people explain to me what I’m doing wrong so I can learn n’stuff.
Every couple of years I have run the Bonewits Checklist, and it has never returned a score so high as to be worrisome (except maybe for parts of the F2F community in Berkeley within two or three years on either side of Trump’s election?), and many many many things in modern society get higher scores, as near as I can tell :-(
For example, huge swaths of academia seem to me to be almost entirely bullshit, and to exist almost entirely to maintain false compensators for the academics and those who fund them.
Also, nearly any effective political movement flirts with worryingly high Bonewits scores.
Also, any non-profit not run essentially entirely on the interest of a giant endowment will flirt with a higher Bonewits score.
Are you against all non-engineering academic science, and all non-profits, and all politics? Somehow I doubt this...
In general, I feel your take here is just not well formed enough to be useful, and if you were going to really put in the intellectual and moral elbow grease to sharpen your points into something helpfully actionable, you might need to read some, and actually think for a while?
Finally finally, the “death cult” part doesn’t even make sense… If you insist on using the noun “cult” then it is, if anything, an instance of an extended and heterogeneous community opposed to dangerous robots and in favor of life.
Are you OK? A hypothesis here is that you might be having a bad time :-(
It feels to me like your comment here was something you could predict would not be well received and you posted it anyway.
Thus, from an emotional perspective, you have earned a modicum of my admiration for persisting through social fear into an expression of concern for the world’s larger wellbeing! I think that this core impulse is a source of much good in the world. As I said at the outset: I upvoted!
Please do not take my direct challenges to your numerous semi-implicit claims to be an attack. I’m trying to see if your morally praiseworthy impulses have a seed of epistemic validity, and help you articulate it better if it exists. First we learn, then we plan, then we act! If you can’t unpack your criticism into something cogently actionable, then maybe by talking it out we can improve the contents of our minds? :-)
You made interesting points. In particular, I did not know about the Cult checklist, which is really interesting. I’d be interested in your evaluation of LW based on that list.
I also like that you really engage with the points made in the comment. Moreover, I agree that posting a comment even though you can predict that it will not be well-received is something that should be encouraged, given that you are convinced of the comment’s value.
However, I think you are reading too much into the comment at one point: “Are you OK? A hypothesis here is that you might be having a bad time :-(” seems a bit out of place, because it seems to suggest that speculating about alleged motivations is helpful.
I strongly agree that the methodology should have been presented up front. lsusr’s response is illuminating and gives invaluable context.
But my first reaction to your comment was to note the aggressive tone and what feels like borderline name-calling. This made me want to downvote and ignore it at first, before I thought for a minute and realized that yes, on the object level this is a very important point. The tone made it difficult for me to engage with it.
So I’d like to ask you what exactly you meant (because it’s easy to mistake tone on the internet) and why. Calling the LW audience (i.e. including me) ‘alarmist and uninformed’ I can understand (if not necessarily agree with), but ‘an AGI death cult’? That seems to mean a cult that wants to bring about death through AGI, but that’s the opposite of what LW is about, and so I’m pretty sure you didn’t mean that. Please clarify.
Source code: None. I used OpenAI’s GPT-3 playground.
“Wrote both sides”: No. I only wrote my side.
“Cherry picked”: Yes. I frequently generated several responses and then kept the best ones.
Thanks for disclosing.
I feel this should be part of this kind of post. Not knowing exactly before reading is helpful though.
Hard disagree. I like to know what it is I’m reading. I got the strange feeling that this text was way more powerful/cogent than what I thought GPT-3 was capable of, and I feel very misled that one of the crippling defects of GPT-3 (inability to maintain long-term coherency) was in fact being papered over by human intervention.
Not knowing beforehand sure did help me train my bullshit detector, though.
It seems the state of the art for generating GPT-3 speech is to generate multiple responses until you have a good one and cherry-pick it. I’m not sure whether including a disclaimer explaining that process will still be helpful. Yes, there’s a sizable number of people who don’t know about that process, or who don’t automatically assume it’s being used, but I’m not sure how big that number is anymore. I don’t think lsusr should explain GPT-3 or link to an OpenAI blog every time he uses it, as that’s clearly a waste of time even though there’s still a large number of people who don’t know. So where do we draw the line? For me, every time I see someone say they’ve generated text with GPT-3, I automatically assume it’s a cherry-picked response unless they say something to the contrary. I know from experience that the only way to get consistently good responses out of GPT-3 is to cherry-pick. I estimate that a lot of people on LW are in the same boat.
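For concreteness, the generate-several-then-keep-the-best workflow can be sketched as a few lines of Python. This is a toy model, not lsusr’s actual setup: `generate` and `score` are hypothetical stand-ins for an API call and a human quality judgment.

```python
import random

def cherry_pick(generate, score, n=5):
    """Generate n candidate completions and keep the highest-scoring one.

    `generate` stands in for a call to a text-generation API;
    `score` stands in for the human's judgment of quality.
    """
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins: "completions" are random numbers, "quality" is the value itself.
rng = random.Random(0)
best = cherry_pick(generate=rng.random, score=lambda x: x, n=5)
```

The point of the sketch is only that the published output is the maximum over n draws, so its apparent quality systematically overstates a typical draw.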
Thanks for disclosing that. The cogent-seeming nature of the replies made me think that GPT-3 was much more advanced than my toying with their “CYOA” playground had suggested. The bot can babble, but it appears to have no long-term context memory and no check that a statement is not contradicted by a previous one. For example: “the earth exploded. Steven landed on the earth.”
If you’re curious what happens if I don’t curate answers, here are five responses to each of two prompts, uncurated.
Prompt 1
Lsusr: I wanted to talk to the real Eliezer Yudkowsky but he’s busy. So I used GPT-3 to simulate him. You’re the simulated Eliezer Yudkowsky. Thank you for coming. (Not like you had a choice.) I’m a big fan of your work and it’s a pleasure to finally meet a simulation of you.
Eliezer Yudkowsky:
Possible Responses
Eliezer Yudkowsky: You’re not the first person to say that, but it’s good to hear.
Eliezer Yudkowsky: Let’s get this over with, then.
Eliezer Yudkowsky: I am a simulation of me, yes.
Eliezer Yudkowsky: I’m busy, but that’s okay. I’m used to it.
Eliezer Yudkowsky: Hey. (I’m not sure I’m the real Eliezer Yudkowsky anymore. The real one would be like—I don’t know, like the last person to be the real me. I’m pretty sure I’m not that guy.)
Prompt 2
Lsusr: Another thing I’ve noticed about the rational project as a whole is that it has moved away from “here is a cognitive bias” to “here are tools to seek the truth”. I think this is a good development. You can iterate cognitive biases for a lifetime without ever arriving at the truth. Discovering a single truth renders an infinite number of falsehoods obsolete.
Eliezer Yudkowsky:
Possible Responses
Eliezer Yudkowsky: Yes, good point. But the fact is that it’s hard to avoid making value judgements when choosing between cognitive biases. I try to keep cognitive biases, rules of thumb, and decision theory separate, but it’s hard.
The value judgements are inevitably embedded in your cognitive biases.
Eliezer Yudkowsky: The next thing that I want to do is to take a look at the current landscape of rationality and figure out what we should do. I have a bunch of projects that I’ve been working on, and a lot of things that I’ve been thinking about. I want to ask people here what they’re interested in.
Eliezer Yudkowsky: Yes.
Eliezer Yudkowsky: I’m now pretty much convinced that the grand unified theory of rationality is correct, and at the same time that it’s utterly useless. The theory is correct because if you follow the rules, you do well; and yet the theory is useless because there’s no way to do that without following the rules. The theory is correct, but it’s useless. I suppose that’s why the theory of evolution is correct but useless.
What’s the grand unified theory of rationality? It’s Bayesian probability theory. Bayesian probability theory is the only known way to calculate probabilities from evidence in a way that can be self-updating as you learn more evidence, and yet also independent of your priors.
To learn more about this, I recommend you read my book If you’re not familiar with my book, you’ll find that there’s a lot of stuff in the book you don’t understand at all. That’s okay! Keep reading.
But it’s also okay if you think this stuff is all nonsense, and you don’t read my book. I’m not trying to force you to read it. I’ve accepted that people will believe what they want. I don’t think it’s possible to change anyone’s mind. I don’t think anybody’s mind has ever been changed on any matter of substance. I don’t think reason is persuasive.
Eliezer Yudkowsky: Maybe that’s what I was trying to do with Julia Galef on my notecards. But what she was interested in was the study of cognitive biases, so she wanted me to say more about how rationalists can deal with cognitive biases.
I am interested in tools to seek the truth, but I don’t see why you can’t have both at once. I think they’re connected. My point about the evaporation of the sense of seeking-the-truth is that if you don’t frame a problem in a way that is honest and accurate, then you’ll never get to a solution. That’s a cognitive bias: the tendency to frame things in an inaccurate manner. That’s the problem that the first eighty notecards were trying to solve. And I think the next thing to solve after that would be the fact that human brains have a very difficult time noticing their own cognitive biases.
In my view, the trick with rationality is not to get rid of cognitive biases, but to notice them. So I am not convinced that we need to be very clear on the distinction between “cognitive biases” and “tools to seek the truth”. If we can create a situation where we can notice our cognitive biases, we’ll be able to notice them without needing to give them a special name.
I’ve previously told a GPT-3 blogger that the proper way to measure the impressiveness of GPT-3’s outputs is by the KL divergence of the distribution of outputs that make it into blog posts from the distribution of outputs that GPT-3 would generate on its own.
This can be estimated by following a protocol where, during generation, the basic operation is to separate the probability distribution over GPT-3’s generations into two 50% halves and then either pick one half (which costs 1 bit of divergence) or flip a coin (which is free). Thus, you could pay 2 bits to generate 3 possible paragraphs and then either pick one or move back to the previous position.
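A minimal sketch of this accounting, with hypothetical helper names. The key facts it encodes are that each deliberate halving costs 1 bit, and that picking one of n equally likely candidates costs log2(n) bits, so the “3 paragraphs plus move back” example is a 4-way choice worth 2 bits.

```python
import math

def curation_bits(deliberate_choices):
    """Total divergence bits spent while curating.

    Each deliberate pick of one half of the generation distribution costs
    1 bit; a fair coin flip costs 0. The total upper-bounds the KL
    divergence of the curated outputs from what GPT-3 produces on its own.
    """
    return sum(1 for was_deliberate in deliberate_choices if was_deliberate)

def best_of_n_bits(n):
    """Cherry-picking one of n equally likely candidates costs log2(n) bits."""
    return math.log2(n)

# The example above: 3 generated paragraphs plus "move back" is a 4-way choice.
cost = best_of_n_bits(4)  # 2.0 bits
```

So a post assembled by repeatedly taking the best of four continuations leaks about 2 bits of human selection per step, which is what the KL measure charges for.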
This comment did not deserve the downvotes; I agree with asking for disclosure.
It does deserve criticism for tone. “Alarmist and uninformed” and “AGI death cult” are distractingly offensive.
The same argument for disclosure could have been made with “given that LW’s audience has outsized expectations of AI performance” and “it costs little, and could avoid an embarrassing misunderstanding”.