It is unreasonable to expect an output from biological evolution to do something reasonable in a situation vastly different from the situations it evolved in. In this case, you aren’t going to learn much from the state of mind of a human who has been tortured more than his ancestors could have been tortured.
Trying to extract guidance from that story seems like an example of generalizing from fiction.
I’m not sure how the rest of the article is connected to the fiction at the beginning. Given that people tend to generalize from fiction and get to false conclusions, I’m uncomfortable with taking it seriously.
Certainly seems a reasonable worry.
My angle here is "this obviously doesn't tell you anything about what humans can or should do when they are being maximally tortured. But it is inspirational in the way stories often are: less about conveying facts than about making something feel like a visceral possibility that didn't previously feel like one."
And then, the concrete details that follow are true (well, the metastrategy one is "true" in the sense of "this is why I'm doing it this way"; it doesn't really get into "but how well does it actually work?").
The thing I would encourage you to do is at least consider, in various difficult circumstances, whether you can actually just shut up and do the impossible, and imagine what it’d look like to succeed. And then concretely visualize the impossible-seeming plan and whatever your best alternative is, and decide between them as best you can.
This may be an example, but I don’t think it’s an especially central one, for a few reasons:
1. The linked essay discusses, quite narrowly, the act of making predictions about artificial intelligence/the Actual Future based on the contents of science fiction stories that make (more-or-less) concrete predictions on those topics, thus smuggling in a series of warrants that poison the reasoning process from that point onward. This post, by contrast, is about feelings.
2. The process for reasoning about one’s, say, existential disposition, is independent of the process for reasoning regarding the technical details of AI doom. The respective solution-spaces for the questions “How do I deal with this present-tense emotional experience?” and “How do I deal with this future-tense socio-technical possibility?” are quite different. While they may feed into each other (in the case, for instance, of someone who’s decided they must self-soothe and level out before addressing the technical problem that’s staring them down or, conversely, someone who’s decided the most effective anxiety treatment is direct material action regarding the object of anxiety), they’re otherwise quite independent. It’s useful to use a somewhat different (part of your)self to read the Star Wars Extended Universe than you would use to read, e.g., recent AI Safety papers.
3. One principal use of fiction is to open a window into aspects of experience that the reader might not otherwise access. Most directly, fiction can help you empathize with people who are very different from you, or help you come to grips with the fact that other people in fact exist at all. It can also show you things you might not otherwise see, and impart tools for seeing in new and exciting ways. I think reading The Logical Fallacy of Generalization from Fictional Evidence as totally invalidating insights from fiction is a mistake, particularly because the work itself closes with a quote from a work of fiction (which I take as pretty strong evidence the author would not endorse using the work in this way). If you don’t think your implied reading of Yudkowsky here would actually preclude deriving any insight whatsoever from fiction, I’d like to hear what insights from fiction it would permit, since it seems to me like Ray’s committing the most innocent class of this sin, were it a sin. It’s possible you just don’t think fiction is useful at all, and in that case I just wouldn’t try to convince you further.
4. I read Ray’s inclusion of the story as immaterial to his point (this essay is, not-so-secretly, about his own emotional development, with some speculation about its broader utility for others in the community undergoing similar processes). It’s common practice in personal essay writing to open with a bit of fiction, or a poem, or something else that illustrates a point before getting into the meat of it. Ray happens to have a cute/nerdy memory from his childhood that he connects to a class of thinking that in fact has a rich tradition (or, rather, multiple rich traditions, with parallel schools and approaches in ~every major religious lineage).
[there’s a joke here, too, and I hope you’ll read my tone generously, because I do mean it lightheartedly, about “The Logical Fallacy of Generalization from The Logical Fallacy of Generalization from Fictional Evidence”]